Previously, if both EmailAuthBackend and LDAPAuthBackend were enabled,
LDAP users could set a password using EmailAuthBackend and continue to
use that password, even if their LDAP account was later deactivated.
That configuration wasn't supported at all before, so this doesn't fix
a pre-existing security issue, but now that we're making that a valid
configuration, we need to cover this case.
This should make it easier for us to iterate on a less-dense Zulip.
We create two classes on body, less_dense_mode and more_dense_mode, so
that as we refactor, it's easy to separate these two concepts from
things like colors that are independent.
API users, particularly bots, can now send a field
called "widget_content" that will be turned into
a submessage for the web app to look at. (Other
clients can still rely on "content" to be there,
although it's up to the bot author to make the
experience good for those clients as well.)
Right now widget_content will be a JSON string that
encodes a "zform" widget with "choices." Our first
example will be a trivia bot, where users will see
something like this:
Which fruit is orange in color?
[A] orange
[B] blackberry
[C] strawberry
The letters will be turned into buttons on the webapp
and have canned replies.
This commit has a few parts:
- receive widget_content in the request (simply
validating that it's a string)
- parse the JSON in check_message and deeply
validate its structure
- turn it into a submessage in widget.py
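For concreteness, a sketch of the widget_content a trivia bot might
send for the example above (field names are assumptions; the exact
schema is whatever the deep validation accepts):

```
import json

# Illustrative zform payload for the trivia example; the field names
# are assumptions about the validated schema, not a reference.
widget_content = json.dumps({
    "widget_type": "zform",
    "extra_data": {
        "type": "choices",
        "heading": "Which fruit is orange in color?",
        "choices": [
            {"type": "multiple_choice", "short_name": "A",
             "long_name": "orange", "reply": "A"},
            {"type": "multiple_choice", "short_name": "B",
             "long_name": "blackberry", "reply": "B"},
            {"type": "multiple_choice", "short_name": "C",
             "long_name": "strawberry", "reply": "C"},
        ],
    },
})
```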
This commit adds a view which will be used to process login requests,
adds an AuthenticationTokenForm so that we can use a TextField widget
for tokens, and activates the two-factor authentication code path
whenever a user tries to log in.
This should significantly improve the user experience for creating
additional accounts on zulipchat.com.
Currently, disabled in production pending some work on visual styling.
This query was incorrectly not checking whether a user was deactivated
before managing their subscriptions.
This isn't an important bug, but should prevent some weird corner
cases (like trying to send a notification PM to a deactivated user,
which fails).
We send add events on upload, update events when sending a message
referencing an attachment, and delete events on removal.
This should make it possible to do real-time sync for the attachments
UI.
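Roughly, the event shapes look like this (the exact field set of the
serialized attachment, and "remove" as the delete op, are assumptions):

```
# Sketch only: the attachment dicts carry the serialized Attachment
# fields; the field set shown here is illustrative.
attachment_dict = {"id": 1, "name": "report.pdf", "messages": []}

add_event = {"type": "attachment", "op": "add", "attachment": attachment_dict}
update_event = {"type": "attachment", "op": "update", "attachment": attachment_dict}
delete_event = {"type": "attachment", "op": "remove", "attachment": {"id": 1}}
```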
Based in part on work by Aastha Gupta.
These decorators will be part of the process for disabling access to
various features for guest users.
Adding this decorator to the subscribe endpoint breaks the guest users
test we'd just added for the subscribe code path; we address this by
adding a more base-level test on filter_stream_authorization.
This removes a check on invite_only, that should have been a check on
history_public_to_subscribers. In addition to fixing a bug for zephyr
realms, it also makes "more topics" work correctly for realms using
the new settings for stream history being public to subscribers.
We haven't seen significant traffic from the legacy desktop app in
over a year, and users using it have gotten a warning to upgrade since
last summer, so it's probably OK to stop providing special fonts for it.
This commit adds a new field history_public_to_subscribers to the
Stream model, which serves a similar function to the old
settings.PRIVATE_STREAM_HISTORY_FOR_SUBSCRIBERS; we still use that
setting as the default value for new streams to avoid breaking
backwards-compatibility for those users before we are ready with an
actual UI for users to choose directly.
This also comes with a migration to set the value of the new field for
existing streams with an algorithm matching that used at runtime.
With significant changes by Tim Abbott.
This is an initial part of our efforts on #9232.
The handlebars error message is just for the manual development
environment; this prevents the state of compiling handlebars templates
from run-dev.py from potentially causing the unit tests to fail.
Add a realm setting to set the time limit for message deleting.
Set the default value of message_content_delete_limit_seconds
to 600 seconds (10 minutes).
Thanks to Shubham Dhama for rebasing and reworking this. Some final
edits also done by Tim Abbott.
Fixes #7344.
This should make it easier to find the templates that are actually
part of the core webapp, instead of having them all mixed together
with the portico pages.
The main change here is to send a proper confirmation link to the
frontend in the `confirm_continue_registration` code path even if the
user didn't request signup, so that we don't need to re-authenticate
the user's control over their email address in that flow.
This also lets us delete some now-unnecessary code: The
`invalid_email` case is now handled by HomepageForm.is_valid(), which
has nice error handling, so we no longer need logic in the context
computation or template for `confirm_continue_registration` for the
corner case where the user somehow has an invalid email address
authenticated.
We split one GitHub auth backend test to now cover both corner cases
(invalid email for realm, and valid email for realm), and rewrite the
Google auth test for this code path as well.
Fixes #5895.
By moving all of the logic related to the is_signup flag into
maybe_send_to_registration, we make the login_or_register_remote_user
function quite clean and readable.
The next step is to make maybe_send_to_registration less of a
disaster.
The code in maybe_send_to_registration incorrectly used the
`get_realm_from_request` function to fetch the subdomain. This usage
was incorrect in a way that should have been irrelevant, because that
function only differs if there's a logged-in user, and in this code
path, a user is never logged in (it's the code path for logged-out
users trying to sign up).
This bug could confuse unit tests that might run with a logged-in
client session. This made it possible for several of our GitHub auth
tests to have a totally invalid subdomain value (the root domain).
Fixing that bug in the tests, in turn, let us delete a code path in
the GitHub auth backend logic in `backends.py` that is impossible in
production, and had just been left around for these broken tests.
This code path has actually been dead for a while (since
`invalid_subdomain` gets set to True only when `user_profile` is
`None`). We might want to re-introduce it later, but for now, we
eliminate it and the artificial test that provided it with test
coverage.
This is done mainly because this backend has the simplest code path
for calling login_or_register_remote_user, more than because we expect
this case to come up. It'll make it easier to write unit tests for
the `invalid_subdomain` corner case.
This is a mobile-specific endpoint used for logging into a dev server.
On mobile, without this realm_uri it's impossible to send a login
request to the corresponding realm on the dev server and proceed
further; we can only guess, which doesn't work when using multiple realms.
Also rename the endpoint to reflect the additional data.
Testing Plan:
Sent a request to the endpoint, and inspected the result.
[greg: renamed function to match, squashed renames with data change,
and adjusted commit message.]
After some thinking, I don't think there's any actual value to doing
the ../ style relative links here, whereas there is actual harm from
the links being slightly broken in the current model. We fix this by
just using /#settings as the URL.
Fixes #8978.
For certain queries where both include_history and
use_first_unread_anchor are set to True, we were excluding
historical rows. Now we only use the use_first_unread_anchor
flag to filter rows that we use to find the anchor, without
having it filter the actual search results.
The bug went unreported for a long time, because it only
affected mobile users who had newly subscribed to streams.
Note that we make a small change to the test called
test_use_first_unread_anchor_with_muted_topics, which has
a very scary comment about being "arcane" and "be
absolutely sure you know what you're doing." I think it's
fine.
Also, the new test code would fail before this fix, so it
should help prevent future regressions.
Fixes #8958.
This is a bit more than a pure refactor, because we duplicate a
chunk of code to calculate a query inside of
find_first_unread_anchor(), so we're doing a bit more work
than before.
We need this refactoring to start decoupling find_first_unread_anchor
from get_messages_backend for the case where include_history is
True. This will happen in a subsequent commit.
The only test that changes here is a direct test on
find_first_unread_anchor(). All other tests pass without
modification, and we have decent coverage on get_messages_backend.
We use an array now to build up the list of search operands and
then consolidate the special search handling after the loop (which
means setting the flag, putting two more columns in the query, and
using ' '.join to build the string).
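A sketch of the consolidated handling (function name and exact shapes
are illustrative):

```
from typing import Any, Dict, List, Tuple

def consolidate_search_terms(narrow: List[Dict[str, Any]]) -> Tuple[bool, str]:
    # Collect all "search" operands while looping over narrow terms...
    search_operands = [
        term["operand"] for term in narrow if term["operator"] == "search"
    ]
    # ...then do the special handling once: set the flag and build a
    # single space-joined search string.  (The real code also adds the
    # two highlight columns to the query at this point.)
    is_search = bool(search_operands)
    return is_search, " ".join(search_operands)

# e.g. consolidate_search_terms([
#     {"operator": "stream", "operand": "design"},
#     {"operator": "search", "operand": "orange"},
#     {"operator": "search", "operand": "fruit"},
# ]) == (True, "orange fruit")
```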
We have a debugging statement for some obscure errors we get
when narrows have search terms. We now show all the narrow
operators. This isn't really to improve debugging; it's more
to make it easier in the next commit to extract a function
that would make search_term have to be passed back in a tuple.
But it shouldn't hurt debugging either.
Refactoring in this file had resulted in the logic for
html_settings_link being duplicated and extra logic being needed to
ensure these variables were set where they were needed.
This fixes subscriptions_html not being rendered properly in the /help
and /api pages, in addition to removing duplicate code.
This is a pure refactoring and just pulls the function out
to the top level of the module. (The prior commit extracted
it inside a larger function to make a nicer diff.)
The new name can_access_stream_history_by_name gets to the point of
what this function actually does. And passing in a user object lets
us define what this does based on whether the user is subscribed.
This fixes a bug where the endpoint for editing bot users would allow
an organization administrator to edit the full name of a non-bot user.
A combination of this and another recently fixed bug made it possible
for this process to set a `bot_owner` for a non-bot user; so we also
include a migration to fix that for any users that might have had our
model invariants corrupted in that way.
This bot was basically a duplicate of NOTIFICATION_BOT for some
specific corner cases, and didn't add much value. It's better to just
eliminate it, which also removes some ugly corner cases around what
happens if the user account doesn't exist.
Add a function in user_groups.py for getting member IDs
for a group.
Update the view to enforce checks for modifying user groups:
only admins and user group members can modify user groups.
Applies the logic to allow community members to edit topics
of others' messages if this setting is True. Otherwise,
only administrators can update the topic of others' messages.
This logic includes a 24-hour time limit for community topic editing.
This commit asserts that parse_user_agent never returns None. The
RegEx will match any string, so that `match` is never None. This
brings test coverage of lib/user_agent.py to 100%. Changes were also
made in test/test_decorators.py and views/compatibility.py to reflect
that parse_user_agent cannot return None.
Improves: #7089.
Fixes: #8779.
It's possible that this won't work with some versions of the
third-party backend, but tabbott has tested carefully that it does
work correctly with the Apache basic auth backend in our test
environment.
In this commit we start to support redirects to urls supplied as a
'next' param for the following two backends:
* GoogleOAuth2 based backend.
* GitHubAuthBackend.
This commit migrates realm emoji to be addressed by their `id` rather
than their name. This fixes a long-standing issue which was causing
an error on uploading an emoji with the same name as a deactivated realm
emoji.
Fixes: #6977.
We now consistently set our query limits so that we get at
least `num_after` rows such that id > anchor. (Obviously, the
caveat is that if there aren't enough rows that fulfill the
query, we'll return the full set of rows, but that may be less
than `num_after`.) Likewise for `num_before`.
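A simplified sketch of the new limit rule:

```
def after_query_limit(num_after: int) -> int:
    # Fetch one extra row so that, even when the anchor row itself
    # matches the query, we still end up with at least num_after rows
    # whose id > anchor.  (Likewise for the before query/num_before.)
    return num_after + 1
```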
Before this change, we would sometimes return one too few rows
for narrow queries.
Now, we're still a bit broken, but in a more consistent way. If
we have a query that does not match the anchor row (which could
be true even for a non-narrow query), but which does match lots
of rows after the anchor, we'll return `num_after + 1` rows
on the right hand side, whether or not the query has narrow
parameters.
The off-by-one semantics here have probably been moot all along,
since our windows are approximate to begin with. If we set
num_after to 100, it's just a rough performance optimization to
begin with, so it doesn't matter whether we return 99 or 101 rows,
as long as we set the anchor correctly on the subsequent query.
We will make the results more rigorous in a follow up commit.
We start to force downloads for attachment files. We do this
for all files except images and PDFs; we would like images and PDFs
to open in the browser itself.
Tweaked by tabbott for comment clarity and correctness.
This will allow realm admins to remove others from a private stream to
which the realm administrator is not subscribed; this is important for
managing those streams, because previously nobody could remove users
from private streams that didn't have any realm administrators
subscribed.
This will allow realm admins to access subscribers of an unsubscribed
private stream. This is a preparatory commit for letting realm admins
remove those users.
This will allow realm admins to update the names and descriptions of
private streams even if they are not subscribed, which fixes the buggy
behavior that previously nobody could(!).
This generic function isolates the before/after logic that really
is independent of Message and doesn't need to clutter up
`get_messages_backend`. Also, introducing a new namespace
reduces some shadowing/mutation with variables like `query`.
It's a pure code move, with some very minor renaming (e.g.
inner_msg_id_col -> id_col).
If anchor is 0, there is no sense doing a before_query.
Likewise, if anchor is `LARGER_THAN_MAX_MESSAGE_ID`, there is
no sense doing an after_query.
We introduce variables called `need_before_query` and
`need_after_query` to enforce those conditions.
This also adds some comments explaining the fallthrough case
where neither query makes sense.
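A simplified sketch of those conditions (the sentinel value here is
illustrative):

```
LARGER_THAN_MAX_MESSAGE_ID = 10 ** 15  # sentinel; actual value illustrative

def which_queries(anchor: int):
    # No rows can precede an anchor of 0, and none can follow the
    # "positive infinity" sentinel, so we skip the useless query.
    need_before_query = anchor > 0
    need_after_query = anchor < LARGER_THAN_MAX_MESSAGE_ID
    return need_before_query, need_after_query
```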
If use_first_unread_anchor is set and we don't have any unread
messages, then our anchor is effectively "positive infinity" and
we can streamline queries.
In the past we'd have clauses like `message_id <= 999999999999999`
in the query that were harmless but crufty.
We want to say `if num_after > 0` when we expect num_after to be
a positive integer. We don't want any confusion that we will
execute the blocks for values of -7 or None.
Apparently, we did essentially all the work to support showing full
topic history to newly subscribed users from a data flow perspective,
but didn't actually enable this feature by having the topic history
endpoint grant access to historical topics. This fixes that gap.
I'm not altogether happy with how the code and tests read for this
feature; the code itself has more duplication than I'd like, and the
tests do too, but it works.
"incorrect" here means rejected by a bot's validate_config() method.
A common scenario for this is validating API keys before the bot is
created. If validate_config() fails, the bot will not be created.
Add `translate_emoticons` to `prop_types` and `expected_keys`.
Furthermore, create an emoji-translating Markdown inline pattern.
Also use a JavaScript version of `translate_emoticons` and then use
this function during Markdown previews and as a preprocessor. This
is only needed for previews, because usually emoticon translation
happens on the backend after sending.
Add tests for emoticon translation, a settings UI, and a /help/ page
as well.
Tweaked by tabbott to fix various test failures as well as how this
handles whitespace, requiring emoticons to not have adjacent
characters.
Fixes #1768.
This is necessary for mobile apps to do the right thing when only
RemoteUserBackend is enabled, namely, directly redirect to the
third-party SSO auth site as soon as the user enters the server URL
(no need to display a login form, since it'll be useless).
This got broken at some point when we moved around the context
processing logic for integrations/webhooks. Thankfully, the
context value for external_uri_scheme was only used in a couple of
our less popular integration docs. It should render perfectly now.
Previously, we used to raise an exception if the direct dev login code
path was attempted when:
* we were running under a production environment.
* dev. login was not enabled.
Now we redirect to an error page and give an explanatory message to the
user.
Fixes #8249.
This uses an actual query to the backend to check if the subdomain is
available, using the same logic we would use to check when the
subdomain is in fact created.
This adds a button under "Organization profile" settings, which
deactivates the organization, sends an "event" to all the
active users, and logs them out.
Fixes: #8212.
This may be helpful for some API clients, since it avoids them needing
to do somewhat messy post-processing on the results (the data was
always available via scanning for the first unread message in the result).
Fixes #6244.
From here on we start to authenticate uploaded file requests before
serving these files in production. This involves allowing NGINX to
pass these file requests on to Django for authentication and then
serving the files via internal redirect requests using the
x-accel-redirect field. The redirection of requests and loading
of the x-accel-redirect param is handled by django-sendfile.
NOTE: This commit starts to authenticate these requests for Zulip
servers running Ubuntu Xenial (16.04) or above.
Fixes: #320 and #291 partially.
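For reference, a rough sketch of the Django side of this flow,
assuming django-sendfile's nginx backend (the path and view signature
are illustrative):

```
from sendfile import sendfile  # django-sendfile

def serve_file_backend(request, realm_id_str, filename):
    # (Sketch: names and paths are illustrative.)
    # ...first verify that the requesting user may access this file...
    local_path = "/home/zulip/uploads/files/%s/%s" % (realm_id_str, filename)
    # sendfile() responds with an X-Accel-Redirect header, so nginx
    # serves the actual bytes only after Django has authorized the request.
    return sendfile(request, local_path)
```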
It catches the `UserProfile.DoesNotExist` exception and
hence prevents an internal server error.
Also remove the option to select an empty bot owner.
Fixes: #8334.
When the answer is False, this will allow the mobile app to show a
warning that push notifications will not work and the server admin
should set them up.
Based partly on Kunal's PR #7810. Provides the necessary backend API
for zulip/zulip-mobile#1507.
Users having an account in only one realm will not be distracted by
the realm name in the subject line of every email. Users who have
accounts in multiple realms can turn this setting on and receive the
corresponding realm name in each email's subject.
Tweaked by tabbott to rebase and address a few small issues.
Fixes #5489.
This will let us defer configuring outbound email to the end of the
install procedure, so we can greatly simplify it by consolidating
several scripted steps.
The new flow could be simplified further by giving the user the full
form in the first place, rather than first a form for just their
email address and then a form with the other details. We'll leave
that improvement for a separate change.
Now, there's just one spot at the beginning of the function where we
inspect the string key the user gave us; and after that point, we not
only have validated that string but in fact are working from our own
record that it pointed to, not the string itself.
This simplifies the code a bit, e.g. by not repeatedly searching the
database for the key (and hoping everything agrees so that we keep
getting the same row), and it will simplify adding logic to inspect
row attributes like `presume_email_valid`.
There's no use case for presenting a key that's invalid; if we haven't
given the user a valid key, we needn't send them to a URL that
presents an invalid one. And the code is simpler to think about if
the only keys that can exist (after the validation at the top of the
function) are valid ones.
Apart from the case where creation_key is not None, but invalid, and
settings.OPEN_REALM_CREATION is True so that we'd previously let the
invalid key slide, this is a pure refactor.
We'll replace this primarily with per-realm quotas (plus the simple
per-file limit of settings.MAX_FILE_UPLOAD_SIZE, 25 MiB by default).
We do want per-user quotas too, but they'll need some more management
apparatus around them so an admin has a practical way to set them
differently for different users. And the error handling in this
existing code is rather confused. Just clear this feature out
entirely for now; then we'll build the per-realm version more cleanly,
and then we can later add back per-realm quotas modelled after that.
The migration to actually remove the field is in a subsequent commit.
Based in part on work by Vishnu Ks (hackerkid).
This is a little cleaner in that the try/except blocks for
SMTPException are a lot narrower; and it'll facilitate an upcoming
change to sometimes skip sending mail.
This is the first step for allowing users
to edit a bot's service entries, namely the
outgoing webhook configuration entries. The
chosen data structures allow for a future
with multiple services per bot; right now,
only one service per bot is supported.
This is responsible for:
1.) Handling all the incoming requests at the
messages endpoint which have the defer param set. This is similar to
send_message_backend, apart from the fact that instead of really
sending a message it schedules one to be sent later on.
2.) Does some preliminary checks, such as validating the timestamp for
scheduling a message, preventing scheduling a message in the past, and
ensuring the correct format of the message to be scheduled.
3.) Extracts the time of scheduled delivery from the message.
4.) Add tests for the newly introduced function.
5.) timezone: Add get_timezone() to obtain a tz object from a string
(see the sketch after this list).
This helps in obtaining a timezone (tz) object from a timezone
specified as a string. This string needs to be a timezone string as
defined by the pytz library, which we use to specify the local
timezones of users.
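A minimal sketch of the helper in (5), assuming it's a thin wrapper
over pytz:

```
from datetime import tzinfo
import pytz

def get_timezone(tz_name: str) -> tzinfo:
    # Raises pytz.UnknownTimeZoneError for strings pytz doesn't know.
    return pytz.timezone(tz_name)

# e.g. get_timezone("America/New_York")
```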
This adds UI fields in the bot settings for specifying
configuration values like API keys for a bot. The names
and placeholder values for each bot's config fields are
fetched from the bot's <bot>.conf template file in the
zulip_bots package. This also adds giphy and followup
as embedded bots.
This commit adds a setting to limit creation of generic bots
to admins for realms that want that restriction. (Generic
bots, apart from being considered spammy on some realms,
have less locked down permissions than webhook bots).
Fixes #7066.
We no longer have a special UI setting and model
field ("emoji_alt_code") for saying users want text-only
emojis. We now instead make "text" be a fifth choice
for "emojiset".
Fixes #7406.
There might be a case where NOTIFICATION_BOT is None, so before sending the
stream announce notification, first check that settings.NOTIFICATION_BOT
is not None.
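A minimal sketch of the guard (the surrounding function here is
illustrative):

```
from django.conf import settings

def send_stream_announce_notification(stream):
    # (Sketch: the surrounding function is illustrative.)
    if settings.NOTIFICATION_BOT is None:
        return  # no notification bot configured; skip the announcement
    # ...otherwise build and send the announce message as before...
```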
This reverts commit 620b2cd6e.
Contributors setting up a new development environment were getting
errors like this:
```
++ dirname tools/do-destroy-rebuild-database
[...]
+ ./manage.py purge_queue --all
Traceback (most recent call last):
[...]
File "/home/zulipdev/zulip/zproject/legacy_urls.py", line 3, in <module>
import zerver.views.streams
File "/home/zulipdev/zulip/zerver/views/streams.py", line 187, in <module>
method_kwarg_pairs: List[FuncKwargPair]) -> HttpResponse:
File "/usr/lib/python3.5/typing.py", line 1025, in __getitem__
tvars = _type_vars(params)
[...]
File "/usr/lib/python3.5/typing.py", line 277, in _get_type_vars
for t in types:
TypeError: 'ellipsis' object is not iterable
```
The issue appears to be that we're using the `typing` module from the
3.5 stdlib, rather than the `typing=3.6.2` in our requirements files,
and that doesn't understand the `Callable[..., HttpResponse]` that
appears in the definition of `FuncKwargPair`.
Revert for now to get provision working again; at least one person
reports that reverting this sufficed. We'll need to do more testing
before putting this change back in.
Eventually this check for the realm will be done in get_object_from_key
itself. Rewriting this to fit the pattern in get_object_from_key.
No change to behavior.
Commit d4ee3023 and its parent have the history behind this code.
Since d4ee3023^, all new PreregistrationUser objects, except those for
realm creation, have a non-None `realm`. Since d4ee3023, any legacy
PreregistrationUsers, with a `realm` of None despite not being for
realm creation, are treated as expired. Now, we ignore them
completely, and remove any that exist from the database.
The user-visible effect is to change the error message for
registration (or invitation) links created before d4ee3023^ to be
"link does not exist", rather than "link expired".
This change will at most affect users upgrading straight from 1.7 or
earlier to 1.8 (rather than from 1.7.1), but I think that's not much
of a concern (such installations are probably long-running
installations, without many live registration or invitation links).
[greg: tweaked commit message]
For example, this means that if a user already has an account on one
realm and they try to make an account on another by hitting "Sign in
with Google" (rather than following the little "Register" link to a
"Sign up with Google" button instead), they'll get to make an account
instead of getting an error.
Until very recently, if the user existed on another realm, any attempt
to register with that email address had to fail in the end, so this
logic gave the user a useful error message early. We introduced it in
c23aaa178 "GitHub: Show error on login page for wrong subdomain"
back in 2016-10 for that purpose. No longer! We now support reusing
an email on multiple realms, so we let the user proceed instead.
This function's interface is kind of confusing, but I believe when its
callers use it properly, `invalid_subdomain` should only ever be true
when `user_profile` is None -- in which case the revised
`invalid_subdomain` condition in this commit can never actually fire,
and the `invalid_subdomain` parameter no longer has any effect. (At
least some unit tests call this function improperly in that respect.)
I've kept this commit to a minimal change, but it would be a good
followup to go through the call sites, verify that, eliminate the use
of `invalid_subdomain`, then remove it from the function entirely.
[Modified by greg to (1) keep `USERNAME_FIELD = 'email'`,
(2) silence the corresponding system check, and (3) ban
reusing a system bot's email address, just like we do in
realm creation.]
As we migrate to allow reuse of the same email with multiple realms,
we need to replace the old "no email reuse" validators. Because
stealing the email for a system bot would be problematic, we still ban
doing so.
This commit only affects the realm creation logic, not registering an
account in an existing realm.
The one thing this bit of logic is used for is to decide whether
there's an existing user which is a mirror dummy that we should
activate. This change causes us to ignore such an existing user if
it's on some other realm, and go straight into `do_create_user`.
This completes the last commit's work to fix CVE-2017-0910, applying
to any invite links already created before the fix was deployed. With
this change, all new-user registrations must match an explicit realm
in the PreregistrationUser row, except when creating a new realm.
[greg: rewrote commit message]
We would allow a user with a valid invitation for one realm to use it
on a different realm instead. On a server with multiple realms, an
authorized user of one realm could use this (by sending invites to
other email addresses they control) to create accounts on other
realms. (CVE-2017-0910)
With this commit, when sending an invitation, we record the inviting
user's realm on the PreregistrationUser row; and when registering a
user, we check that the PreregistrationUser realm matches the realm the
user is trying to register on. This resolves CVE-2017-0910 for
newly-sent invitations; the next commit completes the fix.
[greg: rewrote commit message]
Previously, this was a ValidationError, but that doesn't really make
sense, since this condition reflects an actual bug in the code.
Because this happened to be our only test coverage for the
ValidationError catch on line 84 of registration.py, we add nocoverage
there for now.
An Integration object doesn't need access to the context dict used
to render its doc.md, since the context dict is just passed directly to
render_markdown_path.
Previously, when rendering a single integration, we tacked on the
following information to the context dict that was redundant:
* An OrderedDict containing all of the Integration objects for
all integrations.
* An OrderedDict containing all of the integration categories.
The context dict for rendering a particular integration doc would
contain 4 OrderedDicts, 2 for categories, 2 for Integration objects
because of how many times add_integrations_context had been called.
This was very wasteful, since an Integration object doesn't need
to access any other Integration object (or itself for that matter)
to render its documentation. This commit adds a function that
allows us to only pass in the context values that are necessary.
This often can cause minor caching problems.
Obviously, it'd be better if we had access to the AST and thus could
do this rule for UserProfile objects in general.
Instead of populating the context dict with integration-specific
information in render_markdown_path, we now do that in
zerver.views.integrations.integration_doc instead.
Fixes#7401.
Tweaked by tabbott to use cast to handle the typing issues here.
This endpoint will allow us to add/delete emoji reactions whose emoji
got renamed during various emoji infra changes. This was also a
required change for realm emoji migration.
This commit was tweaked significantly by tabbott for greater clarity
(with no changes to the actual logic).
In templates/zerver/api/main.html, since the current context isn't
passed to render_markdown_path when rendering an article,
render_markdown_path doesn't have the context to render values such
as api_url. This commit makes sure that it does by passing a dict
called api_uri_context to render_markdown_path when rendering an
article.
This commit puts the guts of parse_usermessage_flags into
UserMessage.flags_list_for_flags, since it was slightly faster
than the old implementation and produced the same results.
(Both algorithms were super fast, actually.)
And then all callers use the model method now.
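The conversion itself is a simple bitmask walk; a sketch with a
truncated, illustrative flag list:

```
from typing import List

# The real flag list lives on the UserMessage model; this one is
# truncated for illustration.
ALL_FLAGS = ["read", "starred", "collapsed", "mentioned", "wildcard_mentioned"]

def flags_list_for_flags(val: int) -> List[str]:
    return [flag for i, flag in enumerate(ALL_FLAGS) if val & (1 << i)]

# e.g. flags_list_for_flags(0b101) == ["read", "collapsed"]
```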
The logic to set search_fields was essentially the same for both
sides of the include_history conditional.
Now we have just one code block that sets search_fields, and we
can quickly short-circuit the loop when is_search is False.
Seems like the more logical check. Also, the previous code makes it feel
like there is a potential vulnerability where one could get an email change
object in a realm where email changes are disabled, and then open that link
while logged in to a different realm.
While we're at it, remove the unnecessary check that the user is
logged in when clicking the confirmation link; that creates
unnecessary trouble for users who use multiple browsers.
Removes an assert, which at this point is there just for readability, since
the second argument to
get_object_from_key(confirmation_key, Confirmation.EMAIL_CHANGE)
ensures that the returned object is of the correct type.
This commit allows clients to register client_gravatar=True, and
then we recognize that flag for message events. If the flag is
True, we will not calculate gravatar URLs and let the clients do
it themselves. (Clients can calculate gravatar URLs based on
emails with just a little bit of code.)
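For reference, that "little bit of code" is essentially the standard
Gravatar scheme (the query parameter here is illustrative):

```
import hashlib

def gravatar_url(email: str) -> str:
    # Gravatar identifies users by the MD5 of the lowercased,
    # whitespace-stripped email address.
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return "https://secure.gravatar.com/avatar/%s?d=identicon" % (digest,)
```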
This gets used when we call `process_client`, which we generally do at
some kind of login; and in particular, we do in the shared auth
codepath `login_or_register_remote_user`. Add a decorator to make it
easy, and use it on the various views that wind up there.
In particular, this ensures that the `query` is some reasonable
constant corresponding to the view, as intended. When not set, we
fall back in `update_user_activity` on the URL path, but in particular
for `log_into_subdomain` that can now contain a bunch of
request-specific data, which makes it (a) not aggregate properly, and
(b) not even fit in the `CHARACTER VARYING(50)` database field we've
allotted it.
I remember being really confused by this function in the past, and I finally
figured it out. It should be removed, and the dev_url added by
00-realm-creation should call a function that just gets the confirmation_key
from outbox like all of the backend tests, but until then this comment
should help.
This change:
* Prevents weird potential attacks like taking a valid confirmation link
(say an unsubscribe link), and putting it into the URL of a multiuse
invite link. I don't know of any such attacks one could do right now, but
reasoning about it is complicated.
* Makes the code easier to read, and in the case of confirmation/views.py,
exposes something that needed refactoring anyway (USER_REGISTRATION and
INVITATION should have different endpoints, and both of those endpoints
should be in zerver/views/registration, not this file).
tsearch_extras returns search offsets in bytes but our highlight
function treated them as character offsets. Added a check to subtract
extra bytes if the tsearch search backend is being used.
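The correction amounts to translating byte offsets into character
offsets; a sketch assuming UTF-8 content and that tsearch offsets land
on character boundaries:

```
def bytes_to_char_offset(text: str, byte_offset: int) -> int:
    # Decode just the prefix; its length in characters is the
    # character offset corresponding to byte_offset.
    return len(text.encode("utf-8")[:byte_offset].decode("utf-8"))

# e.g. in "héllo", "llo" starts at byte offset 3 but character
# offset 2: bytes_to_char_offset("héllo", 3) == 2
```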
Fixes #4084.
Fixes #7021.
The "subdomain" label is redundant, to the extent it's even
accurate -- this is really just the URL we want to display,
which may or may not involve a subdomain. Similarly "external".
The former `external_api_path_subdomain` was never a path -- it's a
host, followed by a path, which together form a scheme-relative URL.
I'm not quite convinced that value is actually the right thing in
2 of the 3 places we use it, but fixing that can start by giving an
accurate name to the thing we have.
This setting isn't documented at all, and I believe nobody has used it
since the end of api.zulip.com in 2016. So we get to complete the
cleanup of this logic.
The HelpView class will render a directory as markdown with an index HTML
page. This, however, can also be used generically and applied to
the API pages as well, so change the class to a generic class and
specify the path templates and names.
Tweaked by tabbott and Eeshan Garg.
Do you call get_recipient(Recipient.STREAM, stream_id) or
get_recipient(stream_id, Recipient.STREAM)? I could never
remember, and it was not very type safe, since both parameters
are integers.
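One way to remove that ambiguity is thin wrappers whose names encode
the recipient type (wrapper names here are illustrative):

```
# Wrappers whose names encode the recipient type, so the two integer
# arguments can no longer be swapped silently.
from zerver.models import Recipient, get_recipient

def get_stream_recipient(stream_id: int) -> Recipient:
    return get_recipient(Recipient.STREAM, stream_id)

def get_personal_recipient(user_profile_id: int) -> Recipient:
    return get_recipient(Recipient.PERSONAL, user_profile_id)
```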
Almost all callers to do_create_user were trying to
create active users, except for one test. The
active=False codepath was kind of broken (things
like sending welcome messages had sort of undefined
behavior there), so instead of trying to maintain it,
we just update the one test (`test_people`) to flip the
`is_active` flag manually.
Fixes #7197.
The cookie mechanism only works when passing the login token to a
subdomain. URLs work across domains, which is why they're the
standard transport for SSO on the web. Switch to URLs.
Tweaked by tabbott to add a test for an expired token.
Most of these have more to do with authentication in general than with
registering a new account. `create_preregistration_user` could go
either way; we move it to `auth` so we can make the imports go only in
one direction.
Lets administrators view a list of open (unconfirmed) invitations and
resend or revoke a chosen invitation.
There are a few changes that we can expect for the future:
* It is currently possible to invite an email that you have already
invited, it might make sense to change this behavior.
* Resend currently sends an invite reminder instead of resending the
original invite, this is because 'custom_body' was not stored when
the first invite was sent.
Tweaked in various minor ways, primarily in the backend, by tabbott,
mostly for style consistency with the rest of the codebase.
Fixes: #1180.
Tweaked by tabbott to have the field before the invitation is
completed be called invite_as_admins, not invited_as_admins, for
readability.
Fixes #6834.
Before this change, we populated two cache entries for each
message that we sent. The entries were largely redundant,
with the only difference being whether we sent the content
as raw markdown or as the rendered HTML.
This commit makes it so we only have one cache entry per
message, and it includes both content and rendered_content.
One legacy source of confusion here is that `content`
changes meaning when you're on the front end. Here is the
situation going forward:
database:
    content = raw
    rendered_content = rendered
cache entry:
    content = raw
    rendered_content = rendered
payload for the frontend:
    content = raw (for apply_markdown=False)
    content = rendered (for apply_markdown=True)
Wherever possible, we always want to move checking for error
conditions to the views code, so that we don't need to worry about
handling failures with (in this case) a user that's half-created
because a DefaultStreamGroup doesn't exist.
This effectively implements the feature of default stream groups,
except for a UI, nice styling, etc.
Note that we're careful to not have this do anything in an
organization that doesn't have any default stream groups.
These are just instances that jumped out at me while working on the
subdomains code, mostly while grepping for get_subdomain call sites.
I haven't attempted a comprehensive search, and there are likely
still others left.
Adds support to add "Embedded bot" Service objects. This service
handles every embedded bot.
Extracted from "Embedded bots: Add support to add embedded bots from
UI" by Robert Honig.
Tweaked by tabbott to be disabled by default.
While it's totally fine to put a leading '.' before the cookie domain
for normal hostnames and browsers will just strip them, if you're
using an IP address, it doesn't work, because .127.0.0.1 (for example)
is just invalid, and the cookie won't be set.
This fixes an issue where after installing with an IP address, realm
creation would end with being stuck at a blank page for
/accounts/login/subdomain/.
If an organization doesn't have the EmailAuthBackend (which allows
password auth) enabled, then our password reset form doesn't do
anything, so we should hide it in the UI.
While our recent change to hide /register means we don't need a nice
pretty error message here, eventually we'll want to clean up the error
message.
Fixes#7047.
This new function extracts the bit of logic we use after creating a
new user account to log them in and send them to the home page,
without emailing the user about their new login.
This fixes a problem we've seen where LDAP users were not getting this
part of the onboarding process, and a similar problem for human users
created via the API.
Ideally, we would have put these fixes in process_new_human_user, but
that would cause import loop problems.
Clients fetching messages can now specify that they are able
to compute their avatar, and if they set client_gravatar to
True in the request (w/our normal encoding scheme), then the
backend will not compute it, and the payload will be smaller.
The fix starts with get_messages_backend. The flag gets
passed down through these functions:
* MessageDict.post_process_dicts.
* MessageDict.set_sender_avatar.
We also fix up the callers for post_process_dicts to explicitly
pass in the client_gravatar path, but for now they all just hard
code the value to False.
In the UI we use the locale as the code for the language. Django expects
a language code. For Simplified Chinese, 'zh_Hans' is the locale, which
maps to a directory under static/locale, and 'zh-hans' is the language
code, which is used in the settings.LANGUAGES setting in Django.
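The mapping itself is mechanical; a sketch of the conversion (this
mirrors what Django's to_language helper does):

```
def locale_to_language_code(locale: str) -> str:
    # 'zh_Hans' (directory under static/locale) -> 'zh-hans'
    # (language code as found in settings.LANGUAGES).
    return locale.replace("_", "-").lower()
```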
Message.get_raw_db_rows is moved to MessageDict, since its
implementation details are highly coupled to other methods
in MessageDict.
And then sew_messages_and_reactions comes along for the
ride.
We eventually want to move Reaction.get_raw_db_rows to there
as well.
Introducing MessageDict.post_process_dicts() will allow us
to do the following:
* use less memory in the cache for repeated data
* prevent cache invalidation
* format data according to different client needs
The first use of this function is pretty inconsequential, but
it sets us up for more consequential changes.
In this commit we defer the MessageDict.hydrate_recipient_info
step until after we pull data out of the cache. This impacts
cache size as follows:
* streams - negligibly bigger
* PMs/huddles - slimmer due to not needing to repeat
sender data like email/full_name
Again, the main point of this change is to start setting up
the infrastructure to do post-processing.
We now use a `.values` query to get just the fields we need
in order to fulfill '/json/users' requests.
The main benefit is that we don't do O(N) queries for bot
owners, but we also have less data on UserProfile to process.
On receiving a request for deleting a reaction, just check if such
a reaction exists or not. If it exists, then just delete the reaction;
otherwise, send an error message that such a reaction doesn't exist.
It doesn't make sense to check whether an emoji name is valid or not.
Since subscribed_to_stream is only doing an id lookup
on the Stream model to find out if a user is subscribed to
a stream, there's no reason to require a full Stream object.
It's currently the case that all callers do have full Stream
objects handy to pass in to this function, but it's still a
good practice to have functions only ask for objects that they
need.
The original "quality score" was invented purely for populating
our password-strength progress bar, and isn't expressed in terms
that are particularly meaningful. For configuration and the core
accept/reject logic, it's better to use units that are readily
understood. Switch to those.
I considered using "bits of entropy", defined loosely as the log
of this number, but both the zxcvbn paper and the linked CACM
article (which I recommend!) are written in terms of the number
of guesses. And reading (most of) those two papers made me
less happy about referring to "entropy" in our terminology.
I already knew that notion was a little fuzzy if looked at
too closely, and I gained a better appreciation of how it's
contributed to confusion in discussing password policies and
to adoption of perverse policies that favor "Password1!" over
"derived unusual ravioli raft". So, "guesses" it is.
And although the log is handy for some analysis purposes
(certainly for a graph like those in the zxcvbn paper), it adds
a layer of abstraction, and I think makes it harder to think
clearly about attacks, especially in the online setting. So
just use the actual number, and if someone wants to set a
gigantic value, they will have the pleasure of seeing just
how many digits are involved.
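For intuition about the scale, the log relationship mentioned above (a
quick illustration, not code from this commit):

```
import math

def guesses_to_bits(guesses: float) -> float:
    # "Bits of entropy" is just log2 of the guess count, which is why
    # the two units carry the same information.
    return math.log2(guesses)

# e.g. guesses_to_bits(10 ** 6) ~= 19.93 bits
```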
(Thanks to @YJDave for a prototype that the code changes in this
commit are based on.)
Since the REALMS_HAVE_SUBDOMAINS migration in development, we've had
scattered reports of users who found trying to open 127.0.0.1:9991
resulting in a redirect loop between zulipdev.com:9991,
zulipdev.com:9991/devlogin, and zulipdev.com:9991/devlogin/, and back
to zulipdev.com:9991.
We fix this temporarily through a small cleanup, which is to have that
last step in the loop send the user to the subdomain where they're
actually logged in, zulip.zulipdev.com:9991.
There's more to be done before this system will make sense, though.
This endpoint is part of the old tutorial, which we've removed, and
has some security downsides as well.
This includes a minor refactoring of the tests.
We now do push notifications and missed message emails
for offline users who are subscribed to the stream for
a message that has been edited, but we short circuit
the offline-notification logic for any user who presumably
would have already received a notification on the original
message.
This effectively boils down to sending notifications to newly
mentioned users. The motivating use case here is that you
forget to mention somebody in a message, and then you edit
the message to mention the person. If they are offline, they
will now get pushed notifications and missed message emails,
with some minor caveats.
We try to mostly use the same techniques here as the
send-message code path, and we share common code with the
send-message path once we get to the Tornado layer and call
maybe_enqueue_notifications.
The major places where we differ are in a function called
maybe_enqueue_notifications_for_message_update, and the top
of that function short circuits a bunch of cases where we
can mostly assume that the original message had an offline
notification.
We can expect a couple changes in the future:
* Requirements may change here, and it might make sense
to send offline notifications on the update side even
in circumstances where the original message had a
notification.
* We may track more notifications in a DB model, which
may simplify our short-circuit logic.
In the view/action layer, we already had two separate codepaths
for send-message and update-message, but this mostly echoes
what the send-message path does in terms of collecting data
about recipients.
We want to convert stream names to stream ids as close
to the "edges" of our system as possible, so we let our
caller do the work of finding the stream id for a stream
narrow.
There is no reason for either render_incoming_message() or
render_markdown() to require full UserProfile objects just to
triage alert words.
By only asking for user_ids, we save extra queries in two
callpaths and we make it easier to start using user_ids in
do_send_messages().
This commit completely switches us over to using a
dedicated model called MutedTopic to track which topics
a user has muted.
This includes the necessary migrations to create the
table and populate it from legacy data in UserProfile.
A subsequent commit will actually remove the old field
in UserProfile.
Use this new variable to determine if the user already exists during
registration. When doing login through GitHub, if we press
*Go back to login*, we pass the email using the email variable. As a
result, the login page starts showing the "User already exists" error
if we don't change the variable.
Previously, Zulip's server logs would not show which user or client
was involved in login or user registration actions, which made
debugging more annoying than it needed to be.
This is mostly pure code extraction.
It also removes some dead code in update_muted_topic, where we
were updating muted_topics spuriously before calling
do_update_muted_topic.
Unlike creating a stream, there's really no reason one would want to
call the function to create a realm while uncertain whether that realm
already existed.
For filters like has:link, where the web app doesn't necessarily
want to guess whether incoming messages meet the criteria of the
filter, the server is asked to query rows that match the query.
Usually these queries are search queries, which have fields for
content_matches and subject_matches. Our logic was handling those
correctly.
Non-search queries were throwing an exception related to tuple
unpacking. Now we recognize when those fields are absent and
do the proper thing.
There are probably situations where the web app should stop hitting
this endpoint and just use its own filters. We are making the most
defensive fix first.
Fixes #6118.
This causes `upgrade-zulip-from-git`, as well as a no-option run of
`tools/build-release-tarball`, to produce a Zulip install running
Python 3, rather than Python 2. In particular this means that the
virtualenv we create, in which all application code runs, is Python 3.
One shebang line, on `zulip-ec2-configure-interfaces`, explicitly
keeps Python 2, and at least one external ops script, `wal-e`, also
still runs on Python 2. See discussion on the respective previous
commits that made those explicit. There may also be some other
third-party scripts we use, outside of this source tree and running
outside our virtualenv, that still run on Python 2.
Before this change, server searches for both
`is:mentioned` and `is:alerted` would return all messages
where the user is specifically mentioned (but not
@-all mentions).
Now we follow the JS semantics:
is:mentioned -- all mentions, including wildcards
is:alerted -- has an alert word
Here is one relevant JS snippet:
    } else if (operand === 'mentioned') {
        return message.mentioned;
    } else if (operand === 'alerted') {
        return message.alerted;
And here you see that `mentioned` is OR'ed over both mention flags:
    message.mentioned = convert_flag('mentioned') || convert_flag('wildcard_mentioned');
The `alerted` flag on the JS side is a simple mapping:
    message.alerted = convert_flag('has_alert_word');
Fixes #5020.
This adds the authors of the Zulip repository on GitHub to
/authors/, along with re-styling the page to fit the same
aesthetic as /for/open-source/ and other product pages.
The new endpoints are:
/json/mark_stream_as_read: takes stream name
/json/mark_topic_as_read: takes stream name, topic name
The /json/flags endpoint no longer allows streams or topics
to be passed in as parameters.
This function optimizes marking streams and topics as read,
by using UserMessage.where_unread(), which uses a partial
index on the "read" flag.
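A hedged sketch of the kind of partial index involved, assuming "read"
is the low bit of the UserMessage.flags bitfield (the migration details
are illustrative):

```
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [("zerver", "0001_initial")]  # placeholder dependency
    operations = [
        migrations.RunSQL(
            # Postgres only has to scan unread rows with this index.
            "CREATE INDEX zerver_usermessage_unread_message_id "
            "ON zerver_usermessage (user_profile_id, message_id) "
            "WHERE (flags & 1) = 0;"
        ),
    ]
```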
This also simplifies the code path for ordinary message
flag updates.
In order to keep 100% line coverage, I simplified the
logging in update_message_flags, so now all requests
will show the "actually" format.
This is an interim step toward creating dedicated endpoints
for marking streams/topics as read, so we do error checking
with asserts for flag/operation, so we don't introduce a
temporary translation string.
This is mostly a pure code extraction, except that we now
disregard the `messages` option for stream/topic updates,
since the web app always passes in an empty list (and this
commit is really just an incremental step toward creating
new endpoints.)