This fixes some subtle JavaScript exceptions we've been getting on
zulipchat.com, caused by the system bot realm there not being "zulip"
and how that interacts with get_cross_realm_users.
This should help protect us from future issues with the way that
`bulk_get_users` does caching.
It's likely that we'll want to further restructure `bulk_get_users` to
not have this base_query code path altogether (since it's kinda
buggy), but I'm going to defer that for a time when we have another
user.
The previous implementation had a subtle caching bug: because it was
sharing its cache with the `get_user_profile_by_email` cache, if a
user happened to have an email in that cache, we'd return it, even
though that user didn't match `base_query`.
This causes `get_cross_realm_users` to no longer have a problematic
caching bug.
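A minimal sketch of the shape of the fix, assuming an in-memory stand-in for memcached and a hypothetical `fetch_from_db` callable that applies the `base_query` filter; the real `bulk_get_users` is more involved:

```python
# Hypothetical sketch: give the bulk fetch its own cache namespace, so a hit
# in the per-email user cache can never bypass the base_query filtering.
from typing import Any, Callable, Dict, List

CACHE: Dict[str, Any] = {}  # stand-in for memcached

def bulk_users_cache_key(email: str) -> str:
    # Distinct prefix, so entries never collide with get_user_profile_by_email's cache.
    return "bulk_get_users:" + email.lower()

def bulk_get_users(emails: List[str],
                   fetch_from_db: Callable[[List[str]], List[Any]]) -> Dict[str, Any]:
    result: Dict[str, Any] = {}
    misses: List[str] = []
    for email in emails:
        key = bulk_users_cache_key(email)
        if key in CACHE:
            result[email] = CACHE[key]
        else:
            misses.append(email)
    # fetch_from_db is assumed to apply the base_query filter itself.
    for user in fetch_from_db(misses):
        CACHE[bulk_users_cache_key(user.email)] = user
        result[user.email] = user
    return result
```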
Hides URL if the message content == image url so that sending gifs or
images feels less cluttered. Uses the url_to_a() function to generate
the expected url string for matching.
Fixes #7324.
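A rough sketch of the comparison described here, with a simplified stand-in for bugdown's `url_to_a()`; the actual implementation differs:

```python
# Sketch only: hide the bare link when the whole message is just the URL of
# the image being previewed.
def url_to_a(url: str) -> str:
    # Simplified stand-in for bugdown's url_to_a().
    return '<a href="%s" target="_blank" title="%s">%s</a>' % (url, url, url)

def should_hide_link(rendered_content: str, image_url: str) -> bool:
    # If rendering the message produces nothing but the anchor for the image's
    # URL, showing both the link and the preview would be clutter.
    return rendered_content.strip() == "<p>%s</p>" % (url_to_a(image_url),)
```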
Appends "Test: " text to some test messages to prepare for changes to
the image preview rendering. In the future, if the message is only a
link to an image, the link will be hidden.
We include ERROR_BOT in this set, even though it's not technically
cross-realm (it just lives in the admin realm).
This code path does not correctly handle emails that correspond to
multiple accounts (because `get_system_bot` does not). Since it's
intended to only be used by system bots, we add an appropriate
assertion to ensure it is only used for system bots.
This was causing problems, because internal_send_message assumes that
there is a unique user (across all realms) with the given email
address (which is sorta required to support cross-realm bot messages
the way it does).
With this change, it now, in practice, only sends cross-realm bot
messages.
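A minimal sketch of the assertion on the `get_system_bot` code path described above, assuming the set of system bot emails is available as a constant; the real helper looks the bot up in the database:

```python
# Hypothetical constant; the real list lives in the server's settings/config.
SYSTEM_BOT_EMAILS = {"notification-bot@zulip.com", "welcome-bot@zulip.com",
                     "emailgateway@zulip.com"}

def assert_system_bot_email(email: str) -> None:
    # This code path can't disambiguate an email shared by several accounts,
    # so insist that it is only ever used for the well-known system bots.
    assert email in SYSTEM_BOT_EMAILS, "expected a system bot email: %s" % (email,)
```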
We now ignore payloads where payload['push']['changes'] is empty,
because an empty push doesn't really convey any useful information.
I couldn't find a way to replicate the action that would generate
such a payload, so I took one of our existing payloads and edited
out payload['push']['changes'] myself, so this payload is not
authentic.
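A small sketch of the guard this adds, under the assumption that the handler can simply return early; the real webhook view's plumbing is omitted:

```python
def extract_push_changes(payload: dict) -> list:
    # An empty payload['push']['changes'] conveys no useful information,
    # so callers can skip posting a message entirely.
    return payload.get("push", {}).get("changes", [])

# Usage sketch:
payload = {"push": {"changes": []}}
if not extract_push_changes(payload):
    pass  # ignore the event rather than sending an empty notification
```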
Previously, this was a ValidationError, but that doesn't really make
sense, since this condition reflects an actual bug in the code.
Because this happened to be our only test coverage for the ValidationError
catch on line 84 of registration.py, we add nocoverage there for now.
This buggy logic from e1686f427c had
broken do-destroy-rebuild-test-database.
Now that we're not just trying to add the Recipient objects for every
user on the system here to profiles_by_id, we also shouldn't be
processing every Recipient object on the system. The fix is simple:
because of the patch we got merged into Django upstream,
recipients_to_create actually has the object IDs added to the
Recipient objects passed into Recipient.objects.bulk_create.
This was missed in manual testing, since it only broke `populate_db
--test-suite`.
An Integration object doesn't need access to the context dict used
to render its doc.md, since the context dict is just passed directly to
render_markdown_path.
Previously, when rendering a single integration, we tacked on the
following information to the context dict that was redundant:
* An OrderedDict containing all of the Integration objects for
all integrations.
* An OrderedDict containing all of the integration categories.
The context dict for rendering a particular integration doc would
contain 4 OrderedDicts, 2 for categories and 2 for Integration objects,
because of how many times add_integrations_context had been called.
This was very wasteful, since an Integration object doesn't need
to access any other Integration object (or itself for that matter)
to render its documentation. This commit adds a function that
allows us to only pass in the context values that are necessary.
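A sketch of the kind of helper this describes, with hypothetical field names; the point is that a single integration's doc only needs its own values, not OrderedDicts of every Integration:

```python
def get_doc_context_for_integration(integration) -> dict:
    # Only what doc.md actually needs to render; nothing about other
    # integrations or the category OrderedDicts.
    return {
        "integration_name": integration.name,
        "integration_display_name": integration.display_name,
        "recommended_stream_name": integration.stream_name,
    }
```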
This is checked for in the caller of OurAuthenticationForm, which
means this code was never run. But it is worth having an assertion
here to catch any possible regressions.
Structurally, the main change here is replacing the `clean_username`
function, which would get called when one accessed
self.cleaned_data['username'] with code in the main `clean` function.
This is important because only in `clean` do we have access to the
`realm` object.
Since I recently added full test coverage on this form, we know each
of the major cases has a test; the error messages are unchanged.
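A simplified sketch of the structural change, assuming Django is available and that the view attaches a `realm` attribute to the form; `user_exists_in_realm` is a hypothetical stand-in for the real per-realm lookup:

```python
from django import forms

def user_exists_in_realm(email: str, realm) -> bool:
    # Hypothetical stand-in for the real per-realm account lookup.
    return True

class ExampleAuthenticationForm(forms.Form):
    username = forms.EmailField()
    password = forms.CharField(widget=forms.PasswordInput)

    def clean(self) -> dict:
        # Unlike clean_username(), clean() runs after all per-field cleaning
        # and is where we have access to self.realm, so the email/realm
        # combination can be validated in one place.
        cleaned_data = super().clean()
        email = cleaned_data.get("username")
        realm = getattr(self, "realm", None)
        if email is not None and realm is not None and not user_exists_in_realm(email, realm):
            raise forms.ValidationError("Please enter a correct email and password.")
        return cleaned_data
```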
This deletes the old mock-covered test for this, which was mostly
useless. We have a much less messy test, which we extend to provide
the same test coverage the old one did.
While the result was the same before, this makes it more obvious.
This fixes a bug where, when a user is unsubscribed from a stream,
they might have unread messages on that stream leak. While it might
seem to be a minor problem, it can cause significant problems for
computing the `unread_msgs` data structures, since it means we need to
add an extra filter for whether the user is still subscribed, either
in the backend or in the UI.
Fixes #7095.
This code path was only required because we had remote_user set as a
positional argument here, and thus we'd be running this auth backend's
code when actually using another auth backend (due to how Django auth
backends are selected based on argument signature).
In order to provide more explicit error messages, I have merged the
`emoji_code_is_valid()` and `emoji_name_is_valid()` functions into
`check_emoji_code_consistency()` and `check_emoji_name_consistency()`
respectively.
The installation admin is not the right person to get support requests from
deactivated users, regardless of the situation.
Also updates the wording to be a bit more concise.
This often can cause minor caching problems.
Obviously, it'd be better if we had access to the AST and thus could
do this rule for UserProfile objects in general.
This was basically rewritten by tabbott, because the code is a lot
cleaner after just rewriting the ZulipPasswordResetForm code to no
longer copy the model of the original Django version.
Fixes #4733.
Payloads that don't have a payload['object_attributes']['action']
attribute are generated when GitLab sends a test payload to verify
if the webhook was set up successfully. In this case, we should
send a message notifying that the webhook was configured
successfully.
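A sketch of the branch this adds, with hypothetical message text; the real handler uses the standard webhook helpers to send the message:

```python
def get_event_message(payload: dict) -> str:
    action = payload.get("object_attributes", {}).get("action")
    if action is None:
        # GitLab sends a payload with no action when testing a newly
        # configured webhook; acknowledge it instead of failing.
        return "Webhook for this project was configured successfully."
    return "Received a %s event." % (action,)
```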
Instead of populating the context dict with integration-specific
information in render_markdown_path, we now do that in
zerver.views.integrations.integration_doc.
Fixes #7401.
Tweaked by tabbott to use cast to handle the typing issues here.
This adds tests for a few more cases. Some were already covered
elsewhere in the codebase, but it feels best for LoginTest to fully
cover OurAuthenticationForm.
The character ">" now only starts a blockquote if the resulting
blockquote would be non-empty. Thus, by itself, ">" is now
interpreted literally by bugdown, fixing #687. The message
with contents consisting of ">>>" is now parsed as a doubly
(not triply) nested blockquote with contents ">". Properly
formed blockquotes have identical behavior as before, but now
bugdown can no longer produce empty blockquotes as output.
Fixes #2886, #687.
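A simplified sketch of the new rule; real bugdown does this by overriding python-markdown's BlockQuoteProcessor, but the core check is just that the quoted content is non-empty:

```python
import re

QUOTE_RE = re.compile(r"^ {0,3}>\s?(.*)$")

def starts_blockquote(line: str) -> bool:
    match = QUOTE_RE.match(line)
    if match is None:
        return False
    # ">" with nothing after it would produce an empty blockquote, so it is
    # treated as literal text instead.
    return match.group(1).strip() != ""

# ">" alone is literal; ">>>" opens a blockquote whose contents are ">>",
# which nests once more, matching the behavior described above.
assert not starts_blockquote(">")
assert starts_blockquote(">>>")
```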
Storage limitations are only set on the value of
a config entry, since this is the only user-accessible
part of the schema. Keys are statically set by each
embedded bot.
We don't have our linter checking test files due to ultra-long strings
that are often present in test output that we verify. But it's worth
at least cleaning out all the ultra-long def lines.
This endpoint will allow us to add/delete emoji reactions whose emoji
got renamed during various emoji infra changes. This was also a
required change for realm emoji migration.
This commit was tweaked significantly by tabbott for greater clarity
(with no changes to the actual logic).
When the RabbitMQ server disappears, we log errors like these:
```
Traceback (most recent call last):
  File "./zerver/lib/queue.py", line 114, in json_publish
    self.publish(queue_name, ujson.dumps(body))
  File "./zerver/lib/queue.py", line 108, in publish
    self.ensure_queue(queue_name, do_publish)
  File "./zerver/lib/queue.py", line 88, in ensure_queue
    if not self.connection.is_open:
AttributeError: 'NoneType' object has no attribute 'is_open'

During handling of the above exception, another exception occurred:

[... traceback of connection failure inside the retried self.publish()]
```
That's a type error -- a programming error, not an exceptional
condition from outside the program. Fix the programming error.
Also move the retry out of the `except:` block, so that if it also
fails we don't get the exceptions stacked on each other. This is a
new feature of Python 3 which is sometimes indispensable for
debugging, and which surfaced this nit in the logs (on Python 2 we'd
never see the AttributeError part), but in some cases it can cause a
lot of spew if care isn't taken.
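A rough sketch of both changes, with a heavily simplified client class; the real code in zerver/lib/queue.py wraps pika and carries more state:

```python
class QueueClientSketch:
    # Hypothetical simplification of the queue client's publish path.
    def __init__(self) -> None:
        self.connection = None  # becomes a pika connection once opened

    def _reconnect(self) -> None:
        ...  # (re)open the connection

    def _publish_once(self, queue_name: str, body: str) -> None:
        ...  # a single publish attempt over the current connection

    def ensure_queue(self, queue_name: str) -> None:
        # Explicitly handle a never-opened connection, instead of tripping
        # over "'NoneType' object has no attribute 'is_open'".
        if self.connection is None or not self.connection.is_open:
            self._reconnect()

    def publish(self, queue_name: str, body: str) -> None:
        try:
            self._publish_once(queue_name, body)
            return
        except Exception:
            self._reconnect()
        # Retrying outside the except: block keeps a second failure from
        # being chained onto the first exception in the logs.
        self._publish_once(queue_name, body)
```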
This commit helps reduce clutter on the navigation sidebar.
Creates new directories and moves relevant files into them.
Modifies index.rst, symlinks, and image paths accordingly.
This commit also enables expandable/collapsible navigation items,
renames files in docs/development and docs/production, and modifies
/tools/test-documentation so that it overrides a theme setting. It
also updates links to other docs, file paths in the codebase that point
to developer documents, and files that should be excluded from lint tests.
Note that this commit does not update direct links to
zulip.readthedocs.io in the codebase; those will be resolved in an
upcoming follow-up commit (it'll be easier to verify all the links
once this is merged and ReadTheDocs is updated).
Fixes #5265.
While fixing an issue related to email gateway messages not getting
rendered properly, I unknowingly introduced a bug in the markdown
engine update code. This commit fixes it. The issue was that for a
realm with the email gateway set up, updating realm filters would
update only one of the markdown engines, not both.
In remove_members_from_group_backend, we were passing the user group
to remove_members_from_user_group, which expects a user_group_id.
This fixes a regression in ae5ba7f4fd,
where Zulip would 500 if the newly added system bots didn't exist on
the server.
This also fixes a moderate size performance problem where we'd fetch 5
users from memcached or the database in a loop.
This fixes a regression in 25c669df52.
We were draining the queue in both the superclass and the subclass,
so by the time the subclass started processing events, there were
no events to process. Now the subclass properly uses the events
passed in from the superclass.
These are new:
new-user-bot
emailgateway
Our cross-realm bots are hard coded to have email addresses
in the `zulip.com` domain, and they're not part of ordinary
realms.
These have always been cross-realm, but new enforcement in the
frontend code of all messages having been sent by a known user means
that it's important to add these properly.
This restyles and rewords some of the emoji style section to look
better and fit it more with the current style guide.
Tweaked by tabbott to modify the historical migration rather than
adding a new one. This is OK because the emojiset choices text change
doesn't touch the database; it's just a Django Python code thing.
Also removed translation tags, since we don't need them for a set of
brand names.
The intended use of $$ is for inline expressions, not for multiline
ones; ```math is an acceptable alternative for the latter. Hence,
the $$-syntax for inline TeX no longer permits newlines within it.
This was also necessary for the next change to be sensible; namely
allowing for spaces around both $$ when crafting inline TeX instead of
forcing everything to be crammed together, e.g. $$x=7$$. In order to
avoid unintentionally creating inline expressions, the opening and
closing $$'s of an inline expression must now both exactly consist of
two dollar signs, no more and no less.
Fixes: #6488.
The os.mkdir call is straightforward and doesn't need testing.
Workers relying on LoopQueueProcessingWorker are tested via its
consume method, which exists solely for this purpose.
Previously, these push notification events were being generated, but
then ignored in handle_push_notification because there was no
user_message object.
This should help make it easier to add test coverage for these queue
processors, since they now use a system more similar to the other
queue processors.
In Python 3, base64.b64decode() can take an ASCII string, and any
legitimate data will be ASCII. If you pass in non-ASCII data, the
function will properly throw a ValueError (verified in a Python 3 shell):
```
>>> s = '안녕하세요'
>>> import base64
>>> base64.b64decode(s)
Traceback (most recent call last):
  File "/srv/zulip-py3-venv/lib/python3.4/base64.py", line 37, in _bytes_from_decode_data
    return s.encode('ascii')
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-4: ordinal not in range(128)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/srv/zulip-py3-venv/lib/python3.4/base64.py", line 83, in b64decode
    s = _bytes_from_decode_data(s)
  File "/srv/zulip-py3-venv/lib/python3.4/base64.py", line 39, in _bytes_from_decode_data
    raise ValueError('string argument should contain only ASCII characters')
ValueError: string argument should contain only ASCII characters
```
Generally, emails are not written with markdown in mind and hence
sometimes render in strange ways. This commit fixes a particular
issue that was causing whitespace before paragraphs to be treated
as a code block, which made the email content render in a box with
a long horizontal scrollbar.
Fixes: #7045.
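One plausible shape of the fix, sketched with textwrap.dedent; the actual email gateway code may normalize the body differently:

```python
import textwrap

def normalize_email_body(body: str) -> str:
    # Strip the indentation common to every line; four leading spaces would
    # otherwise make markdown treat the whole paragraph as a code block.
    return textwrap.dedent(body)
```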
In templates/zerver/api/main.html, since the current context isn't
passed to render_markdown_path when rendering an article,
render_markdown_path doesn't have the context to render values such
as api_url. This commit makes sure that it does by passing a dict
called api_uri_context to render_markdown_path when rendering an
article.
This commit puts the guts of parse_usermessage_flags into
UserMessage.flags_list_for_flags, since it was slightly faster
than the old implementation and produced the same results.
(Both algorithms were super fast, actually.)
And then all callers use the model method now.
The logic to set search_fields was essentially the same for both
sides of the include_history conditional.
Now we have just one code block that sets search_fields, and we
can quickly short-circuit the loop when is_search is False.
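A sketch of the consolidated shape, with hypothetical helper names; the real code builds highlighted subject/content fields per message row:

```python
def build_search_fields(is_search, rows, compute_fields):
    # One code block for both sides of the include_history conditional.
    search_fields = {}
    if not is_search:
        return search_fields  # short-circuit: nothing to highlight
    for row in rows:
        message_id = row[0]
        search_fields[message_id] = compute_fields(row)
    return search_fields
```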
This change affects realm_users and realm_non_active_users.
Note that we still send full avatar urls in realm_user/add
events, so apply_events has to do something mildly hacky to
turn the avatar_url to None in that case.
Fixing the event is probably not worth the trouble, as single
urls are not bandwidth hogs; we only need this optimization
for bulk data.
This change affects these values:
* page_params.avatar_url
* page_params.avatar_url_medium
It requires passing the client_gravatar flag through this
codepath:
* home_real
* do_events_register
* fetch_initial_state_data
* avatar_url
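A sketch of the effect of the flag on these values, using a plain dict with hypothetical keys rather than the real UserProfile/avatar helpers:

```python
def avatar_field(user: dict, client_gravatar: bool):
    if client_gravatar and user.get("avatar_source") == "G":
        # With client_gravatar, gravatar-backed users get None and the
        # client derives the URL from the email itself.
        return None
    return user.get("avatar_url")
```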
Seems like the more logical check. Also, the previous code makes it feel
like there is a potential vulnerability where one could get an email change
object in a realm where email changes are disabled, and then open that link
while logged in to a different realm.
While we're at it, remove the unnecessary check that the user is
logged in when clicking the confirmation link; that creates
unnecessary trouble for users who use multiple browsers.
Removes an assert, which at this point is there just for readability, since
the second argument to
get_object_from_key(confirmation_key, Confirmation.EMAIL_CHANGE)
ensures that the returned object is of the correct type.
This commit allows clients to register client_gravatar=True, and
then we recognize that flag for message events. If the flag is
True, we will not calculate gravatar URLs and let the clients do
it themselves. (Clients can calculate gravatar URLs based on
emails with just a little bit of code.)
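The "little bit of code" a client needs is roughly this (a sketch; the query parameters are up to the client):

```python
import hashlib

def gravatar_url_for_email(email: str) -> str:
    # Gravatar URLs are derived from the md5 of the lowercased email.
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return "https://secure.gravatar.com/avatar/%s?d=identicon" % (digest,)
```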
This refactoring doesn't change behavior, but it sets us up
to more easily handle a register setting for `client_gravatar`,
which will allow clients to tell us they're going to compute
their own gravatar URLs.
The `client_gravatar` flag already exists in our code, but it
is only used for Django views (users/messages) but not for
Zulip events.
The main change is to move the call to `set_sender_avatar` into
`finalize_payload`, which adds the boolean `client_gravatar`
parameter to that function. And then we update various callers
to supply that flag.
One small performance benefit of this change is that we now
lazily compute the client message payloads in
`event_queue.process_message_event`, so this will improve
performance if all interested clients have the same value of
`apply_markdown`. But the change here is really preparing us
for the additional boolean parameter, which will cause us to
have four variations of the payload.
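A sketch of the resulting shape of `finalize_payload`, with hypothetical dict keys; the real function works on the wide message dict used by the events system:

```python
def finalize_payload(wide_dict: dict, apply_markdown: bool, client_gravatar: bool) -> dict:
    payload = dict(wide_dict)
    if client_gravatar and payload.get("avatar_source") == "G":
        payload["avatar_url"] = None  # the client derives it from the email
    if apply_markdown:
        # Send the rendered HTML instead of the raw markdown source.
        payload["content"] = payload.get("rendered_content", payload.get("content"))
    return payload
```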