Previous cleanups (mostly the removals of Python __future__ imports)
were done in a way that introduced leading newlines. Delete leading
newlines from all files, except static/assets/zulip-emoji/NOTICE,
which is a verbatim copy of the Apache 2.0 license.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Apparently, our edit-message events did not guarantee that the outer
wrapper dictionary, which is intended to be unique for each client,
was unique for every client (instead only ensuring it was unique for
each user).
This led to clients unexpectedly getting last_event_id validation
errors in this code path when a user had multiple connected clients,
because the linear ordering of event IDs within a given queue was
corrupted.
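Schematically, the fix is to hand each client queue its own copy of
the wrapper dictionary; a minimal self-contained sketch (names are
illustrative, not the real implementation):

    from typing import Any, Dict, List

    class ClientDescriptor:
        def __init__(self) -> None:
            self.events: List[Dict[str, Any]] = []
            self.next_event_id = 0

        def add_event(self, event: Dict[str, Any]) -> None:
            # Each queue assigns its own linear sequence of event ids.
            event["id"] = self.next_event_id
            self.next_event_id += 1
            self.events.append(event)

    def deliver(event: Dict[str, Any], clients: List[ClientDescriptor]) -> None:
        for client in clients:
            # A fresh copy per queue; sharing one wrapper dict across a
            # user's queues lets one queue's id clobber another's.
            client.add_event(dict(event))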
This gives us access to typing_extensions.Deque, which was not added
to typing until 3.5.4.
(PROVISION_VERSION is not bumped because the transitive dependency set
in dev.txt hasn’t changed.)
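For illustration, the annotation this enables looks like this (the
variable name is hypothetical):

    from collections import deque

    from typing_extensions import Deque  # typing.Deque needs Python 3.5.4+

    pending_events: Deque[dict] = deque()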
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This verifies that the client passed a last_event_id that actually
came from the queue instead of making up an ID from the future. It
turns out one of our tests was making up such an ID, but legitimate
clients are expected not to do so.
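A sketch of the shape of the check (illustrative names, not the actual
implementation):

    class JsonableError(Exception):
        """Stand-in for Zulip's JsonableError, for illustration."""

    def check_last_event_id(last_event_id: int, newest_id_issued: int) -> None:
        # A client may only acknowledge ids this queue actually handed out.
        if last_event_id > newest_id_issued:
            raise JsonableError("last_event_id is from the future")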
The previous version of this commit (commit
e00d4be6d5, #12888) had to be reverted
(commit b86c5cc490) because it was
missing the `to_dict`/`from_dict` migration code.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This fixes two issues:
* The syntax check logic we had for zerver.tornado.autoreload would
end up clearing _reload_hooks if one of the files that had changed
was zerver.tornado.autoreload itself (because we had re-imported
the current module), which could be incredibly confusing when trying
to test the autoreload logic. It seems better to just not run the
syntax check for this file.
Similarly, because reloading event_queue.py would destroy the state
in the queues, we avoid that as well.
* We make sure to flush stdout after running any reload hooks, so that
their output reaches the user (see the sketch below).
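A sketch of both fixes (module names from the text; the surrounding
reload loop is elided):

    import sys
    from typing import Callable, List

    # Reloading these would clobber live state (the reload machinery
    # itself, and the in-memory event queues), so skip the syntax check
    # for changes to them.
    SKIP_SYNTAX_CHECK = {"zerver.tornado.autoreload",
                         "zerver.tornado.event_queue"}

    def run_reload_hooks(hooks: List[Callable[[], None]]) -> None:
        for hook in hooks:
            hook()
        # Make sure any output from the hooks actually reaches the user.
        sys.stdout.flush()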
We were apparently not running our own forked Tornado autoreload
library when adding reload hooks, which meant that our autoreload
hooks didn't run at all.
This fixes an issue that made dump_event_queues never run and thus the
local development environment difficult to use for testing event queues.
This verifies that the client passed a last_event_id that actually
came from the queue instead of making up an ID from the future. It
turns out one of our tests was making up such an ID, but legitimate
clients are expected not to do so.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This makes it more convenient for developers to set very short values
for this (e.g. 1 minute) for the purposes of testing/debugging; there
aren't obvious problems with letting users set short values for this.
This is a preparatory refactor to support hosting different Tornado
processes on different servers; to look up which Tornado server we
should be sending the event to, we'll need the realm object.
This should make it possible for there to safely be multiple Tornado
processes running on different ports on the same system.
It may also fix a rare race bug in development, where previously, it
was possible for the Tornado processes for Casper and the main
development server to interfere; I haven't investigated whether this
was a real bug or not, but now those two services will use independent
Tornado files.
We still need to add something to direct traffic between the different
Tornado processes.
Previously, these timer accounting functions could be easily mistaken
for referring to starting/stopping the request. By adding timer to
the name, we make the code easier for the casual observer to read and
understand.
Historically, queue_json_publish had a special third argument that was
basically its default mock behavior in the test suite. We've been
migrating away from that model, because it was confusing and resulted
in poor test coverage of our queue worker code paths; this was one of
the last holdouts.
As it turns out, we don't exercise this code path in a way that
impacts tests much; the main downside of this change is a likely small
penalty to performance of the full test suite when sending private
messages.
As part of our effort to change the data model away from each user
having a single API key, we're eliminating the couple of requests that
were made from Django to Tornado (as part of a /register or home
request) where we used the user's API key grabbed from the database
for authentication.
Instead, we use the (already existing) internal_notify_view
authentication mechanism, which uses the SHARED_SECRET setting for
security, for these requests, and just fetch the user object using
get_user_profile_by_id directly.
Tweaked by Yago to include the new /api/v1/events/internal endpoint in
the exempt_patterns list in test_helpers, since it's an endpoint we call
through Tornado. Also added a couple missing return type annotations.
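A minimal sketch of shared-secret authentication of this sort (details
are illustrative, not Zulip's exact code):

    import hmac

    SHARED_SECRET = "replace-with-settings.SHARED_SECRET"

    def is_valid_internal_request(presented_secret: str) -> bool:
        # Constant-time comparison against the deployment-wide secret.
        return hmac.compare_digest(presented_secret, SHARED_SECRET)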
This adds support to the event queue system for triggering
missed-message notifications (whether push or email) to support the
stream push notifications feature.
Previously, maybe_enqueue_notifications had this very subtle logic,
where it set the notice variable only inside the block for push
notifications, but then also used it inside the block for email
notifications.
This "worked", because previously the conditions for push
notifications were always true if the conditions for email
notifications were, but the code was unnecessarily confusing. The
only good reason to write it this way is if build_offline_notification
was expensive; in fact, the most expensive thing it does is calling
time.time(), so that reason does not apply here.
This was further confusing, in that, in the original logic, we relied
on the fact that the push notification code path edited the "notice"
dictionary for further processing.
Instead, we just call it separately and set up the data separately in
each code path.
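In sketch form, the after state looks roughly like this (queue names
and helper signatures are assumptions):

    from typing import Any, Dict

    def build_offline_notification(user_id: int, message_id: int) -> Dict[str, Any]:
        # Stand-in for the real helper named above.
        return {"user_profile_id": user_id, "message_id": message_id}

    def queue_json_publish(queue_name: str, event: Dict[str, Any]) -> None:
        print(queue_name, event)  # stand-in for the real publisher

    def maybe_enqueue(user_id: int, message_id: int,
                      wants_push: bool, wants_email: bool) -> None:
        # Each branch builds its own payload; nothing is shared between them.
        if wants_push:
            queue_json_publish("missedmessage_mobile_notifications",
                               build_offline_notification(user_id, message_id))
        if wants_email:
            queue_json_publish("missedmessage_emails",
                               build_offline_notification(user_id, message_id))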
This data will be required for correctly implementing the upcoming
stream_push_notify feature; it also helps support cleaning up the code
for the existing stream mentions logic.
For a long time, we've been plagued by run-dev.py needing to be
restarted every time one does a rebase that has merge conflicts,
because the Tornado process restarts itself into a syntax error and
crashes.
This changes the Tornado autoreload process to check explicitly
whether files actually syntax-check before trying to reload the
Tornado process to run that code (sketched below).
There are a few things that are a bit janky:
* Ideally, this would go into Tornado upstream
* We removed the `_watched_files` feature, which we weren't using.
* Ideally, we'd use something other than `importlib.reload` that just
does the syntax-check without adjusting the state within our current
process.
Fixes #4351.
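A sketch of the guard, using `importlib.reload` as described above:

    import importlib
    import logging
    import types

    def module_syntax_checks(module: types.ModuleType) -> bool:
        # Re-importing the changed module surfaces a SyntaxError without
        # restarting the server process (with the caveats noted above).
        try:
            importlib.reload(module)
        except SyntaxError:
            logging.error("Refusing to reload %s: syntax error",
                          module.__name__)
            return False
        return True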
We're never going to add tests for this block, which is fundamentally
well-tested code from Django with a single line changed, which is hard
to screw up (long-polling will not work at all without it). The hope
is to remove it entirely and replace it with a cleaner monkey-patch,
but until then, unit tests for it would be redundant.
We only use this in the direct management command, and it involves
some autoreload process setup stuff that we probably don't want to do
in our unit tests regardless.
This is basically a simple fix, where we consistently set
`flags` to an empty array when we pass it around. The history
here is that we had kind of a nasty bug from setting it to
`None`, which only showed up in the somewhat obscure circumstance
of somebody subscribing to all stream events in our API.
Fixes #7921
A `None` value is not properly handled in this function, which
indicates some lack of testing or a recent regression we don't
understand. We were getting lots of tracebacks from this line
of code on our test server:
mentioned = 'mentioned' in flags and 'read' not in flags
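The fix is essentially to normalize up front:

    from typing import List, Optional

    def normalize_flags(flags: Optional[List[str]]) -> List[str]:
        # Never pass None around; downstream code indexes flags directly.
        return [] if flags is None else flags

    flags = normalize_flags(None)
    mentioned = 'mentioned' in flags and 'read' not in flags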
This method was new in Tornado 4.0. It saves us from having to get
the time ourselves and do the arithmetic -- which not only makes the
code a bit shorter, but also easier to get right. Tornado docs (see
http://www.tornadoweb.org/en/stable/ioloop.html) say we should have
been getting the time from `ioloop.time()` rather than hardcoding
`time.time()`, because the loop could e.g. be running on the
`time.monotonic()` clock.
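Concretely, the old and new patterns:

    import tornado.ioloop

    ioloop = tornado.ioloop.IOLoop.current()

    def callback() -> None:
        print("fired")

    # Old: do the clock arithmetic ourselves (and the loop may not even
    # be running on the time.time() clock).
    ioloop.add_timeout(ioloop.time() + 5, callback)

    # New (Tornado >= 4.0): let the loop handle the arithmetic.
    ioloop.call_later(5, callback)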
This commit allows clients to register client_gravatar=True, and
then we recognize that flag for message events. If the flag is
True, we will not calculate gravatar URLs and let the clients do
it themselves. (Clients can calculate gravatar URLs based on
emails with just a little bit of code.)
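For reference, computing a gravatar URL from an email is indeed just a
hash (a minimal sketch):

    import hashlib

    def gravatar_url(email: str) -> str:
        # Gravatar keys on the MD5 of the trimmed, lowercased address.
        digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
        return "https://secure.gravatar.com/avatar/%s?d=identicon" % (digest,)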
This refactoring doesn't change behavior, but it sets us up
to more easily handle a register setting for `client_gravatar`,
which will allow clients to tell us they're going to compute
their own gravatar URLs.
The `client_gravatar` flag already exists in our code, but it
is only used for Django views (users/messages) but not for
Zulip events.
The main change is to move the call to `set_sender_avatar` into
`finalize_payload`, which adds the boolean `client_gravatar`
parameter to that function. And then we update various callers
to supply that flag.
One small performance benefit of this change is that we now
lazily compute the client message payloads in
`event_queue.process_message_event`, so this will improve
performance if all interested clients have the same value of
`apply_markdown`. But the change here is really preparing us
for the additional boolean parameter, which will cause us to
have four variations of the payload.
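A sketch of the caching shape this enables (names follow the text;
details are assumptions):

    from typing import Any, Callable, Dict, Tuple

    def make_payload_cache(
        finalize: Callable[[bool, bool], Dict[str, Any]],
    ) -> Callable[[bool, bool], Dict[str, Any]]:
        # At most four variants: (apply_markdown, client_gravatar).
        cache: Dict[Tuple[bool, bool], Dict[str, Any]] = {}

        def get_payload(apply_markdown: bool, client_gravatar: bool) -> Dict[str, Any]:
            key = (apply_markdown, client_gravatar)
            if key not in cache:
                cache[key] = finalize(apply_markdown, client_gravatar)
            return cache[key]

        return get_payload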
This test helper method duplicated a bunch of logic in
`zerver/worker/queue_processors.py` in a specialized fashion for the
tests. Now that we're using `call_consume_in_tests` in this code
path, we don't need it.
In do_send_messages, we only produce one dictionary for
the event queues, instead of different flavors for text
vs. html. This prevents two unnecessary queries to the
database.
It also means we only put one dictionary on the "message"
event queue instead of two, albeit a wider one that has
some values that won't be sent to the actual clients.
This wider dictionary from MessageDict.wide_dict is also
used for the `feedback_messages` queue and service bot
queues. Since the extra fields are possibly useful down
the road, and they'll just be ignored for now, we don't
bother to remove them. Also, those queue processors won't
have access to `content_type`, which they shouldn't need.
Fixes #6947
This removes the utterly unnecessary `triggers` dict (which always was
a dict with exactly one value True) in favor of a single field,
'trigger'.
Inspired by Kunal Gupta's work in #6659.
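Schematically (the trigger name here is illustrative):

    # Before: a dict that always had exactly one True value.
    event = {"triggers": {"private_message": True}}

    # After: a single string field.
    event = {"trigger": "private_message"}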
While the missedmessage_hook logic originally did a reasonably good
job of avoiding double-sending notifications, there was a corner case
it didn't handle, namely a user who had been presence-idle when a
message was sent and became also event-queue-idle as well within the
next 10 minutes. For those users, they got a notification at message
send time, and the missedmessage_hook would deliver it a second time.
We fix this by just checking the conveniently available push_notified
and email_notified variables that indicate whether the message already
had a notification triggered.
Fixes #7031.
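In sketch form, the guard amounts to this (field names follow the
text, but the dict layout is an assumption):

    def already_notified(internal_data: dict) -> bool:
        # Skip the hook if a notification already went out at send time.
        return bool(internal_data.get("push_notified")
                    or internal_data.get("email_notified"))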
This makes tests of queue processors more realistic,
by adding a parameter to `queue_json_publish` that
calls a queue's consumer function if accessed in a test.
Fixes part of #6542.
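A hedged sketch of the publish-side branch (the parameter name comes
from the text; the helpers are stand-ins):

    from typing import Any, Dict

    def publish_to_rabbitmq(queue_name: str, event: Dict[str, Any]) -> None:
        print("published to", queue_name)  # stand-in for the real publisher

    def consume_inline(queue_name: str, event: Dict[str, Any]) -> None:
        print("consumed on", queue_name)  # stand-in for the real consumer

    def queue_json_publish(queue_name: str, event: Dict[str, Any],
                           call_consume_in_tests: bool = False) -> None:
        if call_consume_in_tests:
            # In the test suite, run the real consumer inline instead of
            # mocking the queue away.
            consume_inline(queue_name, event)
        else:
            publish_to_rabbitmq(queue_name, event)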
This field would get overwritten with an improper value when
we looped over multiple clients, due to not making full copies
of the message dictionary. This failure would be somewhat
random depending on how clients were ordered in the loop.
The only consumers of this field were the mobile app and the
apply-events-to-unread-counts logic. Both of these will now
use `flags` instead.
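The shape of the fix, illustratively:

    import copy
    from typing import Any, Dict, List

    def per_client_payloads(shared: Dict[str, Any],
                            client_flags: List[List[str]]) -> List[Dict[str, Any]]:
        payloads = []
        for flags in client_flags:
            # A full copy per client; mutating one shared dict made the
            # bug depend on the order clients appeared in the loop.
            payload = copy.deepcopy(shared)
            payload["flags"] = flags
            payloads.append(payload)
        return payloads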
We now do push notifications and missed message emails
for offline users who are subscribed to the stream for
a message that has been edited, but we short circuit
the offline-notification logic for any user who presumably
would have already received a notification on the original
message.
This effectively boils down to sending notifications to newly
mentioned users. The motivating use case here is that you
forget to mention somebody in a message, and then you edit
the message to mention the person. If they are offline, they
will now get pushed notifications and missed message emails,
with some minor caveats.
We try to mostly use the same techniques here as the
send-message code path, and we share common code with the
send-message path once we get to the Tornado layer and call
maybe_enqueue_notifications.
The major places where we differ are in a function called
maybe_enqueue_notifications_for_message_update, and the top
of that function short circuits a bunch of cases where we
can mostly assume that the original message had an offline
notification.
We can expect a couple changes in the future:
* Requirements may change here, and it might make sense
to send offline notifications on the update side even
in circumstances where the original message had a
notification.
* We may track more notifications in a DB model, which
may simplify our short-circuit logic.
In the view/action layer, we already had two separate codepaths
for send-message and update-message, but this mostly echoes
what the send-message path does in terms of collecting data
about recipients.
This ensures that as we expand the logic for under what circumstances
email and push notifications should be sent, we can be confident about
this code path always doing the right thing.
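The heart of the short circuit reduces to set arithmetic on mention
lists (a hedged sketch, not the exact implementation):

    from typing import Set

    def newly_mentioned_user_ids(prior_mentions: Set[int],
                                 current_mentions: Set[int]) -> Set[int]:
        # Only users mentioned by the edit, and not by the original
        # message, are candidates for offline notifications.
        return current_mentions - prior_mentions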
This fixes a problem introduced in the recent refactoring where
`triggers` would not be set correctly when a push or email
notification was triggered by missedmessage_hook.
Fixes #6612.
Now, the two code paths do the same thing for this check.
It seems like there may be more work to do here, in that
wildcard_mentioned messages seem to not be eligible for sending
email/push notifications. We probably want to add some logic there
for the user doing the mention to control whether or not it does.
This is a nonfunctional refactor, designed primarily to make it
simpler to extend this code path when we later add support for
controlling whether email notifications go out on stream messages.
Previously, due to a logic bug, this feature would also send email
notifications for all messages on the stream, which is definitely not
the intent. The recent refactoring we just did makes the logic more
obvious.
This creates a lot of logging noise, and also causes confusion
for new contributors when something isn't working as they expect
and they aren't sure if this message is normal or an error.
Usually a small minority of users are eligible to receive missed
message emails or mobile notifications.
We now filter users first before hitting UserPresence to find idle
users. We also simply check for the existence of recent activity
rather than borrowing the more complicated data structures that we
use for the buddy list.
Because the Redis client returns exclusively bytes -- even for
hash keys -- even on Python 3, the test `'response' in status`
was always returning false, and the line that tries to decode
as JSON was never running, so we were passing `response`
through as a `bytes` object encoding some JSON.
I'm not sure what the impact of this bug was, and in particular
whether something downstream would have fudged it to make up for
this error.
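The corrected pattern looks roughly like this (the key name is
illustrative; the hash field name follows the text):

    import json

    import redis

    client = redis.StrictRedis()

    status = client.hgetall("status-key")  # keys and values are bytes
    if b"response" in status:
        # Decode before JSON-parsing, instead of passing bytes through.
        response = json.loads(status[b"response"].decode("utf-8"))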
This fixes the original issue that #5598 was the root cause of; when
the user returns to a Zulip browser tab after they've been idle past
the timeout (10 min, per IDLE_EVENT_QUEUE_TIMEOUT_SECS), we now
correctly reload the page even if they're using Zulip in German or
another non-English language where we have a translation for the
relevant error message.
The one purpose this exception was serving was to carry a message
in `msg`. We can do that with `JsonableError`, and as a bonus replace
a repetition of the familiar "'result': 'error', ..." JSON pattern
with a call to a common implementation.
Also wrap the error messages for translation -- we hadn't been doing
that, oops. Our linter notices that issue now that it's the familiar
JsonableError class.
There's one other potential change in behavior here: this
except-clause might now catch a JsonableError raised from some other
code. That seems like a bonus, if so; the handler isn't doing
anything actually specific to this code, and the more exceptions it
successfully turns into proper error responses to the client and lines
in the log, the better.
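After the change, the pattern collapses to something like this (a
sketch; the real class lived elsewhere in zerver, and the message
would be wrapped for translation):

    class JsonableError(Exception):
        """Stand-in for the real class, for illustration."""

    def access_event_queue(queue_id: str, valid_ids: set) -> None:
        if queue_id not in valid_ids:
            # One familiar class; no hand-rolled
            # {'result': 'error', ...} dicts.
            raise JsonableError("Bad event queue id: %s" % (queue_id,))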
This provides the main infrastructure for fixing #5598. From here,
it's a matter of on the one hand upgrading exception handlers -- the
many except-blocks in the codebase that look for JsonableError -- to
look beyond the string `msg` and pass on the machine-readable full
error information to their various downstream recipients, and on the
other hand adjusting places where we raise errors to take advantage
of this mechanism to give the errors structured details.
In an ideal future, I think all exception handlers that look (or
should look) for a JsonableError would use its contents in structured
form, never mentioning `msg`; but the majority of error sites might
continue to just instantiate JsonableError with a string message. The
latter is the simplest thing to do, and probably most error types will
never have code looking for them specifically.
Because the new API refactors the `to_json_error_msg` method which was
designed for subclasses to override, update the 4 subclasses that did
so to take full advantage of the new API instead.
In order to benefit from the modern conveniences of type-checking,
add concrete, non-Any types to the interface for JsonableError.
Relatedly, there's no need at this point to duck-type things at
the places where we receive a JsonableError and try to use it.
Simplify those by using straightforward standard typing.
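One plausible shape for the machine-readable interface (a sketch, not
the exact implementation):

    from typing import Any, Dict, List

    class JsonableError(Exception):
        # Concrete, non-Any interface: subclasses declare which extra
        # fields belong in the JSON error body.
        http_status_code: int = 400
        data_fields: List[str] = []

        def __init__(self, msg: str) -> None:
            super().__init__(msg)
            self._msg = msg

        @property
        def msg(self) -> str:
            return self._msg

        def to_json(self) -> Dict[str, Any]:
            d: Dict[str, Any] = {"result": "error", "msg": self.msg}
            for field in self.data_fields:
                d[field] = getattr(self, field)
            return d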
Tornado reloads the app whenever there is a change in code. When that
happens, a new connection is created to the client, which also results
in a new channel. To avoid creating two channels for the queue in the
RabbitMQ broker, we should close the old channel; otherwise messages
sent to the queue will be distributed between the two channels in a
round-robin scheme, and we will end up losing messages, since one of
the channels doesn't have an active consumer.
This commit closes the connection to the queue whenever Tornado reloads
the application using add_reload_hook().
Fixes #5824.
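A sketch of the hook registration (`add_reload_hook` is real Tornado
API; the queue client is a stand-in):

    from tornado import autoreload

    class QueueClient:
        def close(self) -> None:
            print("closed RabbitMQ connection")  # stand-in for the real client

    queue_client = QueueClient()

    def close_rabbitmq_connection() -> None:
        # Close the old channel so the reloaded process's channel is the
        # queue's only consumer; otherwise round-robin delivery routes
        # messages to a channel with no active consumer.
        queue_client.close()

    autoreload.add_reload_hook(close_rabbitmq_connection)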
I pushed a bunch of commits that attempted to introduce
the concept of `client_message_id` into our server, as
part of cleaning up our codepaths related to messages you
sent (both for the locally echoed case and for the host
case).
When we deployed this, we had some strange failures involving
double-echoed messages and issues advancing the pointer that appeared
related to #5779. We didn't get to the bottom of exactly why the PR
caused havoc, but I decided there was a cleaner approach, anyway.
We are deprecating local_id/local_message_id on the Python server.
Instead of the server knowing about the client's implementation of
local id, with the message id = 9999.01 scheme, we just send the
server an opaque id to send back to us.
This commit changes the name from local_id -> client_message_id,
but it doesn't change the actual values passed yet.
The goal for client_key in future commits will be to:
* Have it for all messages, not just locally rendered messages
* Not have it overlap with server-side message ids.
The history behind local_id having numbers like 9999.01 is that
they are actually interim message ids and the numerical value is
used for rendering the message list when we do client-side rendering.
This makes it possible for Zulip administrators to delete messages.
This is primarily intended for use in deleting early test messages,
but it can solve other problems as well.
Later we'll want to play with the permissions model for this, but for
now, the goal is just to integrate the feature.
Note that it saves the deleted messages for some time using the same
approach as Zulip's message retention policy feature.
Fixes #135.
In the future, the type annotation should use Deque in order to be
compatible with the latest mypy version. See
https://github.com/python/mypy/pull/2845 for more info.
Most of this code was simply moved from activity.js with some
minor renaming of functions like set_presence_info -> set_info.
Some functions were slightly nontrivial extractions:
is_not_offline:
came from activity.huddle_fraction_present
get_status/get_mobile:
simple getters
set_user_status:
partial extraction from activity.set_user_status
last_active_date:
pulled out of admin.js code
We also fixed activity.filter_and_sort to take user_ids.
This fixes an issue where if you saved a Python file (even just
changing whitespace) while casper tests were running, the Tornado
server being used would restart, triggering a confusing error like
this:
ReferenceError: Can't find variable: $
Traceback:
undefined:2
:4
Suite explicitly interrupted without any message given.
Change `from django.utils.timezone import now` to
`from django.utils import timezone`.
This is both because now() is ambiguous (could be datetime.datetime.now),
and more importantly to make it easier to write a lint rule against
datetime.datetime.now().
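That is:

    import django.conf

    django.conf.settings.configure(USE_TZ=True)  # only so this sketch runs standalone

    # Before: ambiguous at the call site, and hard to lint for.
    from django.utils.timezone import now
    timestamp = now()

    # After: unambiguous, and datetime.datetime.now() stands out to a linter.
    from django.utils import timezone
    timestamp = timezone.now()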
This reverts commit 7bf10ec74f.
Apparently, SockJS 1.1.1 is broken with the browser used in our legacy
desktop app, resulting in messages being silently not sent.
- Add a websocket client to create a connection with the SockJS
websocket server. It contains a callback method to run after
connection setup.
- Add a '--websocket' parameter to the 'check_send_receive_time'
script to check the websocket connection.
- Add testing of the websocket connection to the production
installation checker.
- Add a cronjob to launch the websocket connection Nagios test.
This makes it possible for Zulip Nagios monitoring to check for
problems impacting the websockets sending code path, which is what all
web users use.
Expose `is_mentioned` in the `message` dict: a boolean indicating
whether our account is mentioned in the message content.
This is already available via `flags`, but it seems worth making this
data point more explicit, given its importance in writing bots.
Fixes #2667.
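The equivalent computation from `flags` is a one-liner (illustrative):

    from typing import List

    def compute_is_mentioned(flags: List[str]) -> bool:
        # The same information the new message-dict field makes explicit.
        return "mentioned" in flags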
A few functions had arguments removed without having their type
annotations updated accordingly. As a result mypy version 0.4.6
thinks that the first type in the annotation is supposed to be
the type of `self`, a new mypy feature which we are not intending
to use here.
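For example, with the comment-style annotations of that era (an
illustrative reconstruction):

    from typing import Any, Mapping

    class Worker:
        def handle(self, event):
            # type: (Mapping[str, Any], int) -> None
            #
            # A second argument was removed, but the type comment kept
            # both types; mypy 0.4.6 then reads Mapping[str, Any] as the
            # type of `self`, a feature we are not intending to use.
            pass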