Most of the failures were due to parameters that are not intended to
be used by third-party code, so the correct fix for those was to set
`intentionally_undocumented=True`.
Fixes #12969.
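For illustration, a hedged sketch of such a fix, assuming the flag is
consumed by Zulip's `REQ` parameter helper (the view and parameter
below are invented for the example):

    from django.http import HttpRequest, HttpResponse
    from zerver.lib.request import REQ, has_request_variables
    from zerver.models import UserProfile

    @has_request_variables
    def example_view(
        request: HttpRequest,
        user_profile: UserProfile,
        # Hypothetical internal-only parameter: not part of the public
        # API, so the OpenAPI consistency check is told not to demand
        # documentation for it.
        internal_flag: int = REQ(default=0, intentionally_undocumented=True),
    ) -> HttpResponse:
        ...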
Historically, Zulip's implementation of wildcard mentions never
triggered either email or push notifications, instead being limited to
desktop notifications and the "mentions" counter.
We fix this just by plumbing the "wildcard_mentioned" flag through our
system.
Implements much of
https://github.com/zulip/zulip/issues/6040#issuecomment-510157264.
We're also now ready to seriously work on #3750.
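A minimal sketch of the plumbing, with invented names (the real change
threads the flag through the message-send and notification codepaths):

    from typing import Set

    def should_trigger_notification(flags: Set[str], idle: bool) -> bool:
        # Previously only "mentioned" reached the email/push layer;
        # "wildcard_mentioned" stopped at desktop notifications and the
        # mentions counter.  Now both flags ride the same path.
        mentioned = "mentioned" in flags
        wildcard_mentioned = "wildcard_mentioned" in flags
        return idle and (mentioned or wildcard_mentioned)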
Apparently, our edit-message events did not guarantee that the outer
wrapper dictionary, which is intended to be unique for each client,
was unique for every client (instead only ensuring it was unique for
each user).
This led to clients unexpectedly getting last_event_id validation
errors in this code path when a user had multiple connected clients,
because the linear ordering of event IDs within a given queue was
corrupted.
In fd2a63b049, we accidentally fixed
this issue with a different set of userdata events, without fixing the
edit-message event bug. This commit fixes the remaining issue.
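Schematically, the bug and its fix look like this (names invented; the
point is that the wrapper must be a fresh object per client queue):

    from typing import Any, Dict, Iterable, List

    def deliver(event: Dict[str, Any],
                client_queues: Iterable[List[Dict[str, Any]]]) -> None:
        for queue in client_queues:
            # Each client queue stamps its own sequential "id" onto the
            # wrapper; sharing one dict across a user's clients lets a
            # later queue overwrite an earlier queue's id, corrupting
            # the linear ordering and triggering last_event_id errors.
            wrapper = dict(event)  # fresh shallow copy per client
            wrapper["id"] = len(queue)
            queue.append(wrapper)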
Previous cleanups (mostly the removals of Python __future__ imports)
were done in a way that introduced leading newlines. Delete leading
newlines from all files, except static/assets/zulip-emoji/NOTICE,
which is a verbatim copy of the Apache 2.0 license.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
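The cleanup is mechanical; something along these lines (illustrative,
not necessarily the script actually used):

    import glob

    KEEP = {"static/assets/zulip-emoji/NOTICE"}  # verbatim Apache-2.0 copy

    for path in glob.glob("**/*", recursive=True):
        if path in KEEP:
            continue
        try:
            with open(path, "r+") as f:
                text = f.read()
                if text.startswith("\n"):
                    f.seek(0)
                    f.write(text.lstrip("\n"))
                    f.truncate()
        except (IsADirectoryError, UnicodeDecodeError):
            continue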
Apparently, our edit-message events did not guarantee that the outer
wrapper dictionary, which is intended to be unique for each client,
was unique for every client (instead only ensuring it was unique for
each user).
This led to clients unexpectedly getting last_event_id validation
errors in this code path when a user had multiple connected clients,
because the linear ordering of event IDs within a given queue was
corrupted.
This gives us access to typing_extensions.Deque, which was not added
to typing until 3.5.4.
(PROVISION_VERSION is not bumped because the transitive dependency set
in dev.txt hasn’t changed.)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
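For reference, the backported name is a drop-in replacement for the
stdlib one:

    from collections import deque
    from typing_extensions import Deque

    # typing.Deque only exists on Python >= 3.5.4, so annotations use
    # the typing_extensions backport instead.
    event_ids: Deque[int] = deque(maxlen=100)
    event_ids.append(42)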
This verifies that the client passed a last_event_id that actually
came from the queue instead of making up an ID from the future. It
turns out one of our tests was making up such an ID, but legitimate
clients are expected not to do so.
The previous version of this commit (commit
e00d4be6d5, #12888) had to be reverted
(commit b86c5cc490) because it was
missing the `to_dict`/`from_dict` migration code.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
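A self-contained sketch of the check (the real code lives in Zulip's
event queue; the names here are approximate, not exact):

    from collections import deque
    from typing import Any, Deque, Dict

    class EventQueue:
        def __init__(self) -> None:
            self.queue: Deque[Dict[str, Any]] = deque()
            self.newest_pruned_id = -1  # highest id ever acknowledged

        def validate_last_event_id(self, last_event_id: int) -> None:
            newest_id = (self.queue[-1]["id"] if self.queue
                         else self.newest_pruned_id)
            # A legitimate client can only echo back an id the server
            # actually issued; anything newer is "an ID from the
            # future" and is rejected rather than silently accepted.
            if last_event_id > newest_id:
                raise ValueError("last_event_id newer than any event sent")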
This fixes two issues:
* The syntax check logic we had for zerver.tornado.autoreload would
end up clearing _reload_hooks if one of the files that had changed
was zerver.tornado.autoreload itself (because we would have
re-imported the current module), which could be incredibly confusing
when trying to test the autoreload logic. It seems better to just
not run the syntax check for this file.
Similarly, because reloading event_queue.py would destroy the state
in the queues, we avoid that as well.
* We make sure to flush stdout after running any reload hooks, to make
sure their output reaches the user (see the sketch below).
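A condensed sketch of both fixes (module names are from this commit's
description; the surrounding autoreload loop is elided):

    import sys
    from typing import Callable, List

    # Reloading these would clear _reload_hooks (autoreload itself) or
    # destroy in-memory event queue state, so they are exempt from the
    # import-based syntax check.
    SKIP_SYNTAX_CHECK = {
        "zerver.tornado.autoreload",
        "zerver.tornado.event_queue",
    }

    def fire_reload_hooks(hooks: List[Callable[[], None]]) -> None:
        for hook in hooks:
            hook()
        # Flush explicitly so hook output isn't lost when the process
        # re-execs itself to finish the reload.
        sys.stdout.flush()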
We were apparently not running our own forked Tornado autoreload
library when adding reload hooks, which meant that our autoreload
hooks didn't run at all.
This fixes an issue that made dump_event_queues never run, which made
the local development environment difficult to use for testing event
queues.
This verifies that the client passed a last_event_id that actually
came from the queue instead of making up an ID from the future. It
turns out one of our tests was making up such an ID, but legitimate
clients are expected not to do so.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This makes it more convenient for developers to set very short values
for this (e.g. 1 minute) for the purposes of testing/debugging; there
aren't obvious problems with letting users set short values for this.
This is a preparatory refactor for supporting hosting different Tornado
processes on different servers; to look up which Tornado server we
should be sending the event to, we'll need the realm object.
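The eventual shape, sketched with invented helper names:

    from typing import Any, Dict

    def get_tornado_port(realm: Any) -> int:
        # Single-server today: every realm maps to the one Tornado
        # process.  With sharding, this would consult a realm -> port
        # (or realm -> server) mapping instead.
        return 9993

    def send_event(realm: Any, event: Dict[str, Any]) -> None:
        port = get_tornado_port(realm)  # why the caller needs `realm`
        ...  # forward the event to the Tornado process on `port`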
This should make it possible for there to safely be multiple Tornado
processes running on different ports on the same system.
It may also fix a rare race bug in development, where previously, it
was possible for the Tornado processes for Casper and the main
development server to interfere; I haven't investigated whether this
was a real bug or not, but now those two services will use independent
Tornado files.
We still need to add something to direct traffic between the different
Tornado processes.
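One natural way to keep per-process files separate is to derive their
paths from the port (the filename pattern here is illustrative):

    def event_queue_dump_path(port: int) -> str:
        # Each Tornado process persists its queues to its own file, so
        # e.g. the Casper and main development Tornado instances can no
        # longer clobber each other's dumps.
        return f"/var/tmp/zulip-event-queues.{port}.json"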
Previously, these timer accounting functions could be easily mistaken
for referring to starting/stopping the request. By adding timer to
the name, we make the code easier for the casual observer to read and
understand.
Historically, queue_json_publish had a special third argument that was
basically its default mock behavior in the test suite. We've been
migrating away from that model, because it was confusing and resulted
in poor test coverage of our queue worker code paths; this was one of
the last holdouts.
As it turns out, we don't exercise this code path in a way that
impacts tests much; the main downside of this change is a likely small
penalty to performance of the full test suite when sending private
messages.
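Schematically, the old signature and the migration (the queue name and
handler are invented for the example):

    from typing import Any, Callable, Dict, Optional

    def queue_json_publish(
        queue_name: str,
        event: Dict[str, Any],
        # The legacy third argument: under test, this was invoked
        # directly instead of publishing, so real worker code never ran.
        processor: Optional[Callable[[Dict[str, Any]], None]] = None,
    ) -> None:
        ...

    # Old call site:  queue_json_publish("pm_notify", event, fake_handler)
    # New call site:  queue_json_publish("pm_notify", event)
    # so tests now exercise the same consumer path as production.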
As part of our effort to change the data model away from each user
having a single API key, we're eliminating the couple requests that
were made from Django to Tornado (as part of a /register or home
request) where we used the user's API key grabbed from the database
for authentication.
Instead, we use the (already existing) internal_notify_view
authentication mechanism, which uses the SHARED_SECRET setting for
security, for these requests, and just fetch the user object using
get_user_profile_by_id directly.
Tweaked by Yago to include the new /api/v1/events/internal endpoint in
the exempt_patterns list in test_helpers, since it's an endpoint we call
through Tornado. Also added a couple missing return type annotations.
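A rough sketch of the shared-secret check and the direct user lookup
(approximate; the real logic lives in internal_notify_view):

    import hmac

    from django.conf import settings
    from django.http import HttpRequest
    from zerver.models import UserProfile, get_user_profile_by_id

    def is_internal_request(request: HttpRequest) -> bool:
        # Django -> Tornado requests authenticate with the deployment's
        # SHARED_SECRET rather than a per-user API key; compare in
        # constant time to avoid timing side channels.
        secret = request.POST.get("secret", "")
        return hmac.compare_digest(secret, settings.SHARED_SECRET)

    def lookup_user(user_profile_id: int) -> UserProfile:
        # With no API key in the request, fetch the user directly.
        return get_user_profile_by_id(user_profile_id)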
This adds support to the event queue system for triggering
missed-message notifications (whether push or email) to support the
stream push notifications feature.
Previously, maybe_enqueue_notifications had this very subtle logic,
where it set the notice variable only inside the block for push
notifications, but then also used it inside the block for email
notifications.
This "worked", because previously the conditions for push
notifications were always true if the conditions for email
notifications were, but the code was unnecessarily confusing. The
only good reason to write it this way is if build_offline_notification
was expensive; in fact, the most expensive thing it does is calling
time.time(), so that reason does not apply here.
This was further confusing in that, in the original logic, we relied
on the fact that the push notification code path edited the "notice"
dictionary for further processing.
Instead, we just call it separately and set up the data separately in
each code path.
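After the refactor, the structure is roughly as follows (the condition
and enqueue helpers are simplified stand-ins):

    import time
    from typing import Any, Dict

    def build_offline_notification(user_id: int,
                                   message_id: int) -> Dict[str, Any]:
        # Cheap to build: the costliest thing it does is time.time(),
        # which is why calling it once per branch is harmless.
        return {"user_profile_id": user_id, "message_id": message_id,
                "timestamp": time.time()}

    def enqueue_push(notice: Dict[str, Any]) -> None: ...   # stub
    def enqueue_email(notice: Dict[str, Any]) -> None: ...  # stub

    def maybe_enqueue(user_id: int, message_id: int,
                      push_wanted: bool, email_wanted: bool) -> None:
        # Each branch builds its own payload, instead of the email
        # branch reusing (and depending on mutations to) a `notice`
        # dict created only inside the push branch.
        if push_wanted:
            enqueue_push(build_offline_notification(user_id, message_id))
        if email_wanted:
            enqueue_email(build_offline_notification(user_id, message_id))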
This data will be required for correctly implementing the upcoming
stream_push_notify feature; it also helps support cleaning up the code
for the existing stream mentions logic.
We've for a long time been plagued by run-dev.py needing to be
restarted every time one does a rebase that has merge conflicts,
because the Tornado process restarts itself into a syntax error and
crashes.
This fixes the Tornado autoreload process to check explicitly for
whether files actually syntax-check before trying to actually reload
the Tornado process to run that code.
There are a few things that are a bit janky:
* Ideally, this would go into Tornado upstream.
* We removed the `_watched_files` feature, which we weren't using.
* Ideally, we'd use something other than `importlib.reload` that just
does the syntax-check without adjusting the state within our current
process.
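For that last point, a state-free alternative could parse the file
without importing it (a sketch, not what this commit ships):

    import ast

    def compiles_cleanly(path: str) -> bool:
        # Returns False on e.g. leftover merge-conflict markers,
        # letting the autoreloader keep the old code running instead of
        # exec'ing into a process that dies on a SyntaxError.
        try:
            with open(path) as f:
                ast.parse(f.read(), filename=path)
            return True
        except SyntaxError:
            return False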
Fixes #4351.
We're never going to add tests for this block, which is fundamentally
well-tested code from Django with a single line changed, which is hard
to screw up (long-polling will not work at all without it). The hope
is to remove it entirely and replace it with a cleaner monkey-patch,
but until then, unit tests for it would be redundant.