This commit adds the "user_settings_object" field to
client_capabilities, which will be used to determine
whether the client needs the 'update_display_settings' and
'update_global_notifications' events.
Return zulip_merge_base alongside zulip_version
in the `/register`, `/event`, and `/server_settings`
endpoints so that the value can be used by other
clients.
Previously, we checked for the `enable_offline_email_notifications` and
`enable_offline_push_notifications` settings (which determine whether the
user will receive notifications for PMs and mentions) just before sending
notifications. This has a few problems:
1. We do not have access to all the user settings in the notification
handlers (`handle_missedmessage_emails` and `handle_push_notifications`),
and therefore, we cannot correctly determine whether the notification should
be sent. Checks like the following, which existed previously, would,
for example, incorrectly skip sending notifications even when stream
email notifications were enabled:
```
if not receives_offline_email_notifications(user_profile):
    return
```
With this commit, we simply do not enqueue notifications if the "offline"
settings are disabled, which fixes that bug.
Additionally, this also fixes a bug with the "online push notifications"
feature: if someone were to:
* turn off notifications for PMs and mentions (`enable_offline_push_notifications`)
* turn on stream push notifications (`enable_stream_push_notifications`)
* turn on "online push" (`enable_online_push_notifications`)
then, they would still receive notifications for PMs when online.
This isn't how the "online push enabled" feature is supposed to work;
it should only act as a wrapper around the other notification settings.
The buggy code was this in `handle_push_notifications`:
```
if not (
    receives_offline_push_notifications(user_profile)
    or receives_online_push_notifications(user_profile)
):
    return
# send notifications
```
This commit removes that code, and extends our `notification_data.py` logic
to cover this case (see the sketch after this list), along with tests.
2. The name for these settings is slightly misleading. They essentially
describe "what to send notifications for" (PMs and mentions), not
"when to send notifications" (offline). This commit improves the situation
by restricting the use of this term to the database field alone, and using
clearer names everywhere else. This distinction will be important for keeping
the code non-confusing when we implement multiple notification options
as a dropdown in the future (never/when offline/when offline or online, etc.).
3. We should ideally re-check all notification settings just before the
notifications are sent. This is especially important for email notifications,
which may be sent long after the message was sent. We will
in the future add code to thoroughly re-check settings before sending
notifications in a clean manner, but temporarily not re-checking isn't
a terrible scenario either.
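As a concrete illustration of the fix in point 1 above, here is a minimal
sketch of how the "online push" setting can act as a wrapper around the
other settings; the class and field names are simplified stand-ins, not
the exact `notification_data.py` API:
```
# Illustrative sketch only; the real field names in notification_data.py differ.
from dataclasses import dataclass

@dataclass
class UserNotificationsData:
    pm_push_notify: bool       # "what": notify for PMs and mentions
    stream_push_notify: bool   # "what": notify for stream messages
    online_push_enabled: bool  # "when": also notify while the user is online
    idle: bool                 # whether the user is away from their client

    def is_push_notifiable(self, is_pm: bool) -> bool:
        # First, "what to notify for": PMs use the PM setting, stream
        # messages use the stream setting.
        wants_notification = self.pm_push_notify if is_pm else self.stream_push_notify
        if not wants_notification:
            return False
        # Then, "when to notify": active users are notified only if they
        # opted into online push. Online push never triggers a PM
        # notification on its own when pm_push_notify is off.
        return self.idle or self.online_push_enabled
```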
This is necessary to break the uncollectable reference cycle created
by our ‘request_notes.saved_response = json_response(…)’, Django’s
‘response._resource_closers.append(request.close)’, and Python’s
https://bugs.python.org/issue44680.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This prevents a memory leak arising from Python’s inability to collect
a reference cycle from a WeakKeyDictionary value to its key
(https://bugs.python.org/issue44680).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This concludes the HttpRequest migration: arbitrary attributes
(except private ones that belong to Django) are no longer attached
to the request object at runtime; they have been migrated to a
separate data structure dedicated to the purpose of adding
information (so-called notes) to an HttpRequest.
This includes the migration of fields that require only trivial changes
to be stored in ZulipRequestNotes.
Specifically _requestor_for_logs, _set_language, _query, error_format,
placeholder_open_graph_description, saved_response, which were all
previously set on the HttpRequest object at some point. This migration
allows them to be typed.
We will no longer use the HttpRequest to store the rate limit data.
Using ZulipRequestNotes, we can access rate_limit and ratelimits_applied
with type hint support. We also avoid the need to initialize
ratelimits_applied explicitly by giving it a default value.
We create a class called ZulipRequestNotes as a new home to all the
additional attributes that we add to the Django HttpRequest object.
This allows mypy to do the typecheck and also enforces type safety.
Most of the attributes are added in the middleware, and thus it is
generally safe to assert that they are not None in a code path that
goes through the middleware. Otherwise, the caller is obligated to do
the type check manually.
This also resolves some cyclic dependencies that zerver.lib.request
has with zerver.lib.rate_limiter and zerver.tornado.handlers.
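A minimal sketch of the pattern (the field set and helper name are
illustrative simplifications, not the exact Zulip API):
```
import weakref
from dataclasses import dataclass, field
from typing import Any, List, Optional

from django.http import HttpRequest

@dataclass
class ZulipRequestNotes:
    # A few illustrative fields; the real class has many more.
    log_data: Optional[dict] = None
    rate_limit: Optional[str] = None
    ratelimits_applied: List[Any] = field(default_factory=list)
    saved_response: Optional[Any] = None

# One notes object per request, stored outside HttpRequest so every
# access is type-checked; the WeakKeyDictionary lets entries be
# collected together with their request.
request_notes_map: "weakref.WeakKeyDictionary[HttpRequest, ZulipRequestNotes]" = (
    weakref.WeakKeyDictionary()
)

def get_request_notes(request: HttpRequest) -> ZulipRequestNotes:
    if request not in request_notes_map:
        request_notes_map[request] = ZulipRequestNotes()
    return request_notes_map[request]
```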
* `stream_name`: This field is actually redundant. The email/push
notifications handlers don't use that field from the dict, and they
query for the message anyway, so we're safe in deleting this field,
even if we end up needing the stream name in the future.
* `timestamp`: This is totally unused by the email/push notification
handlers, and isn't sent to push clients either.
* `type` is used only by the push notifications handler, since only
push notifications can be revoked, so we move that logic to run only there.
We will later use this data to include text like:
`<sender> mentioned @<user_group>` instead of the current
`<sender> mentioned you` when someone mentions a user group
the current user is a part of in email/push notification.
Part of #13080.
JsonableError has two major benefits over json_error:
* It can be raised from anywhere in the codebase, rather than
being a return value, which is much more convenient for refactoring,
as one doesn't potentially need to change error handling style when
extracting a bit of view code to a function.
* It is guaranteed to contain the `code` property, which is helpful
for API consistency.
Various stragglers are not updated because JsonableError requires
subclassing in order to specify custom data or HTTP status codes.
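For contrast, a sketch of the two styles (the views here are invented
for illustration):
```
# Illustrative only; these views are made up.
from zerver.lib.exceptions import JsonableError
from zerver.lib.response import json_error, json_success

# Old style: errors are return values, so extracting helpers from a
# view forces the helper to thread error responses back up.
def old_view(request, stream_name: str):
    if not stream_name:
        return json_error("Invalid stream name")
    return json_success()

# New style: raise from anywhere; middleware turns the exception into
# a JSON error response that always carries a `code` property.
def new_view(request, stream_name: str):
    if not stream_name:
        raise JsonableError("Invalid stream name")
    return json_success()
```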
This removes some complexity from the event_queue module.
To avoid code duplication, we reduce the `is_notifiable` methods to
internally just call the `trigger` methods and check their return value.
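A minimal sketch of the delegation (method signatures approximate the
ones in `notification_data.py`):
```
from typing import Optional

class UserMessageNotificationsData:
    def get_push_notification_trigger(
        self, acting_user_id: int, idle: bool
    ) -> Optional[str]:
        # Returns a trigger string like "private_message" or
        # "mentioned", or None if no setting matches (body elided).
        ...

    def is_push_notifiable(self, acting_user_id: int, idle: bool) -> bool:
        # The notifiability check is just "some trigger applies", so
        # the trigger logic lives in exactly one place.
        return self.get_push_notification_trigger(acting_user_id, idle) is not None
```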
* Modify `maybe_enqueue_notifications` to take in an instance of the
dataclass introduced in 951b49c048.
* The `check_notify` tests tested the "when to notify" logic in a way
which involved `maybe_enqueue_notifications`. To simplify things, we've
earlier extracted this logic in 8182632d7e.
So, we just kill off the `check_notify` test, and keep only those parts
which verify the queueing and return value behavior of that function.
* We retain the missedmessage_hook and message_edit_notifications
tests, since they are more integration-style.
* There's a slightly subtle change with the missedmessage_hook tests.
Before this commit, we short-circuited the hook if the sender was muted
(5a642cea11).
With this commit, we delegate the check to our dataclass methods.
So, `maybe_enqueue_notifications` will be called even if the sender was
muted, and the test needs to be updated.
* In our test helper `get_maybe_enqueue_notifications_parameters` which
generates default values for testing `maybe_enqueue_notifications` calls,
we keep `message_id`, `sender_id`, and `user_id` as required arguments,
so that the tests are super-clear and avoid accidental false positives.
* Because `do_update_embedded_data` also sends `update_message` events,
we deal with that case with some hacky code for now. See the comment
there.
This mostly completes the extraction of the "when to notify" logic into
our new `notification_data` module.
We already have this data in the `flags` for each user, so no need to
send this set/list in the event dictionary.
The `flags` in the event dict represent the after-message-update state,
so we can't avoid sending `prior_mention_user_ids`.
Since `flags` here could be iterated through multiple times
(to check for push/email notifiability), we use `Collection`.
Inspired by 871e73ab8f.
The other change here in the `event_queue` code is prep for using
the `UserMessageNotificationsData` class there.
Before this commit, we used to pre-calculate flags for user data and send
it to Tornado, like so:
```
{
"id": 10,
"flags": ["mentioned"],
"mentioned": true,
"online_push_enabled": false,
"stream_push_notify": false,
"stream_email_notify": false,
"wildcard_mention_notify": false,
"sender_is_muted": false,
}
```
This has the benefit of simplifying the logic in the event_queue code a bit.
However, because we sent such an object for each user receiving the event,
the string keys (like "stream_email_notify") get duplicated in the JSON
blob that is sent to Tornado.
For 1000 users, this data may take up to ~190KB of space, which can
cause performance degradation in large organizations.
Hence, as an alternative, we send just the list of user_ids fitting
each notification criteria, and then calculate the flags in Tornado.
This brings down the space to ~60KB for 1000 users.
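For comparison, the event payload now carries one list per notification
criterion rather than one dict per user, roughly like this (field names
are illustrative):
```
{
    "users": [{"id": 10, "flags": ["mentioned"]}, ...],
    "online_push_user_ids": [10],
    "stream_push_user_ids": [],
    "stream_email_user_ids": [],
    "wildcard_mention_user_ids": [],
    "muted_sender_user_ids": [],
}
```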
This commit reverts parts of the following commits:
- 2179275
- 40cd6b5
We will, in the future, add helpers to create `UserMessageNotificationsData`
objects from these lists, so as to avoid code duplication.
This is separate from the next commit for ease of testing.
To verify that the compatibility code works correctly, all message send
and event_queue tests from our test suite should pass on just this commit.
Unlike `receiver_is_off_zulip`, fetching from a dict is pretty cheap,
so we can calculate `online_push_enabled` along with the other
variables.
This is a prep change to start using the dataclass introduced in the
earlier commit in this code.
This gives us a single place where all user data for the message
send event is calculated, and is a prep change for introducing
a TypedDict or dataclass to keep this data together.
We have already calculated these values, so storing them should not cause
significant performance degradation.
This is a prep change for sending a few more flags through internal_data,
namely `sender_is_muted`.
* In `event_queue.py`, only the sender and recipient users who have muted
the sender will have the "read" flag set.
* We already skip enqueueing notifications for users who've muted the sender
after 58da384da3.
* The queue consumer functions for email and push notifications already
filter out messages which have been read before sending notifications.
* So, the "read" logic in `event_queue.py` is unnecessary, and the
processing power saved from not enqueueing notifications for a single
user should be insignificant, so we remove these checks altogether.
Earlier, the notification-blocking for messages from muted senders
was a side-effect of our never sending notifications for messages
with the "read" flag.
This commit decouples these two things, as a prep for having new
settings which will allow users to **always** receive email
notifications, including when/if they read the message during the
time the notification is in the queue.
We still mark muted-sender messages as "read" when they are sent,
because that's desirable anyways.
The old name `push_notify_user_ids` was misleading, because
it does not contain user ids which should be notified for
the current message, but rather user ids who have the online
push notifications setting enabled.
When the Tornado server is restarted during an upgrade, if the
server has old events with the `push_notify_user_ids` field,
the server will throw an error after this rename. Hence, we need
to explicitly handle such cases while processing the event.
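A sketch of the kind of compatibility shim this implies when consuming
events (the new field name `online_push_user_ids` is assumed here for
illustration):
```
# Events enqueued before the upgrade still carry the old field name;
# fall back to it when the new name is absent.
def get_online_push_user_ids(event: dict) -> set:
    if "online_push_user_ids" in event:
        return set(event["online_push_user_ids"])
    # Compatibility with pre-rename events; remove once direct
    # upgrades from such versions are no longer supported.
    return set(event.get("push_notify_user_ids", []))
```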
This ensures it is present for all requests; while that was already
essentially true via process_client being called from every standard
decorator, this allows middleware and other code to rely on this
having been set.
This helps mobile and terminal clients understand whether a server
restart changed API feature levels or not, which in turn determines
whether they will need to resynchronize their data.
Also add tests and documentation for this previously undocumented
event type.
Fixes: #18205.
This extends the /json/typing endpoint to also accept
stream_id and topic. With this change, the requests
sent to /json/typing should have these:
* `to`: a list set to
- recipients for a PM
- stream_id for a stream message
* `topic`, in case of stream message
along with `op` (start or stop).
On receiving a request with stream_id and topic, we send
typing events to clients with stream_typing_notifications set
to True for all users subscribed to that stream.
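Hedged examples of the two request shapes (auth elided; `/json/`
endpoints are session-authenticated, and the host is a placeholder):
```
import json
import requests

# Stream typing: `to` is a single-element list holding the stream_id,
# and `topic` is required.
requests.post(
    "https://example.zulipchat.com/json/typing",
    data={"op": "start", "to": json.dumps([42]), "topic": "lunch"},
)

# PM typing: `to` lists the recipient user ids; no topic.
requests.post(
    "https://example.zulipchat.com/json/typing",
    data={"op": "stop", "to": json.dumps([11, 12])},
)
```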
Since c3a8a15bae removed the last
instance of code using the dictionary code path, we actually need to
wait until one can no longer upgrade directly from 4.x to master in
order to avoid breakage should we remove this compatibility code,
since only today did we stop generating the old event format.
The bulk deletion codepath was using dicts instead of user ids in the
event, as opposed to the other codepath which was adjusted to pass just
user ids before. We make the bulk codepath consistent with the other
one. Due to the dict-type events happening in 3.*, we move the goal for
deleting the compat code in process_notification to 5.0.
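A sketch of what the compat code in `process_notification` has to do
(helper name invented):
```
from typing import Any, Dict, List, Union

# 3.x servers enqueued dicts; newer servers enqueue plain user ids.
# Slated for removal in 5.0, per the goal above.
def normalize_users(users: List[Union[int, Dict[str, Any]]]) -> List[int]:
    return [user["id"] if isinstance(user, dict) else user for user in users]
```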
django.utils.translation.ugettext is a deprecated alias of
django.utils.translation.gettext as of Django 3.0, and will be removed
in Django 4.0.
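The mechanical change, where the conventional `_` alias is preserved:
```
# Before (deprecated alias, removed in Django 4.0):
from django.utils.translation import ugettext as _

# After:
from django.utils.translation import gettext as _
```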
Signed-off-by: Anders Kaseorg <anders@zulip.com>
A bug in the implementation of the all_public_streams API feature
resulted in guest users being able to receive message traffic to public
streams that should have been only accessible to members of the
organization.
This makes it much more clear that this feature does JSON encoding,
which previously was only indicated in the documentation.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The Session middleware only adds `Vary: cookie` if it sees an access
to the session from inside of it. Because we are effectively, from the
Django session middleware's point of view, returning the static content
of `request.saved_response` and never accessing the session, it does not
set `Vary: cookie` on longpoll requests.
Explicitly mark Tornado requests as varying by cookie.
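A minimal sketch using Django's standard helper (the exact hook point
in the Tornado response path is elided):
```
from django.utils.cache import patch_vary_headers

def finalize_longpoll_response(response):  # illustrative hook point
    # The session middleware won't add this for us on this path, since
    # we never touch the session when serving saved_response.
    patch_vary_headers(response, ("Cookie",))
    return response
```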
A few internal fields used for tracking which types of notifications
have already been sent for a given message, like `handler_id` and the
`push_notified` bundle of fields, were being incorrectly included in
message events delivered to clients.
One could argue these fields might be useful hints to clients, but
because notifications can be triggered later on via
`missedmessage_hook`, they have no useful purpose in the API.
This commit moves these extended event fields onto an `internal_data`
object within the event object, and deletes this field in `contents()`
for call points that would serve data to clients.
Tweaked by tabbott to provide a cleaner interface.
We're not bumping API_FEATURE_LEVEL because these fields have always
been documented as being present only due to a bug, so no clients
should be expecting or relying on them.
Fixes: #15947.
This logging is really only potentially interesting in a development
environment when the numbers are nonzero.
In production, it seems worth logging for consistency reasons.
Probably we'll eventually redo this block by changing the log level, but
this is good enough to despam the development environment startup
output.
The `no_proxy` parameter does not work to remove proxying[1]; in this
case, since all requests with this adapter are to the internal Tornado
process, explicitly pass in an empty set of proxies to disable
proxying.
[1] https://github.com/psf/requests/issues/4600
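A sketch of the approach (adapter name invented); forcing `proxies={}`
in `HTTPAdapter.send` overrides whatever proxy settings the session
derived from the environment:
```
import requests
from requests.adapters import HTTPAdapter

class NoProxyAdapter(HTTPAdapter):  # hypothetical name
    def send(self, request, **kwargs):
        # Everything through this adapter targets the local Tornado
        # process, so never use a proxy.
        kwargs["proxies"] = {}
        return super().send(request, **kwargs)

session = requests.Session()
session.mount("http://localhost:9800/", NoProxyAdapter())
```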
Having both of these is confusing; TORNADO_SERVER is used only when
there is one TORNADO_PORT. Its primary use is actually to be _unset_,
and signal that in-process handling is to be done.
Rename to USING_TORNADO, to parallel the existing USING_RABBITMQ, and
switch the places that used it for its contents to using
TORNADO_PORTS.
In development and test, we keep the Tornado port at 9993 and 9983,
respectively; this allows tests to run while a dev instance is
running.
In production, moving to port 9800 consistently removes an odd edge
case, when just one worker is on an entirely different port than if
two workers are used.
tornado.web.Application does not share any inheritance with Django at
all; it has a similar router interface, but tornado.web.Application is
not an instance of Django anything.
Refold the long lines that follow it.
While urllib3 retries all connection errors, it only retries a subset
of read errors, since not all requests are safe to retry if they are
not idempotent, and the far side may have already processed them once.
By default, the only methods that urllib3 retries read errors on
are GET, TRACE, DELETE, OPTIONS, HEAD, and PUT. However, all of the
requests into Tornado from Django are POST requests, which limits the
effectiveness of bb754e0902.
POST requests to `/api/v1/events/internal` are safe to retry; at worst,
they will result in another event queue, which is low cost and will be
GC'd in short order.
POST requests to `/notify_tornado` are _not_ safe to retry, but this
codepath is only used if USING_RABBITMQ is False, which only occurs
during testing.
Enable retries for read errors during all POSTs to Tornado, to better
handle Tornado restarts without 500's.
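A sketch of the wiring (newer urllib3 releases call the parameter
`allowed_methods`; older ones call it `method_whitelist`):
```
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# allowed_methods=None tells urllib3 to retry read errors on any verb,
# including POST, which is safe for the Tornado endpoints we call.
retry = Retry(total=3, allowed_methods=None)

session = requests.Session()
session.mount("http://localhost:9800/", HTTPAdapter(max_retries=retry))
```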
Without an explicit port number, the `stdout_logfile` values for each
port are identical. Supervisor apparently decides that it will
de-conflict this by appending an arbitrary number to the end:
```
/var/log/zulip/tornado.log
/var/log/zulip/tornado.log.1
/var/log/zulip/tornado.log.10
/var/log/zulip/tornado.log.2
/var/log/zulip/tornado.log.3
/var/log/zulip/tornado.log.7
/var/log/zulip/tornado.log.8
/var/log/zulip/tornado.log.9
```
This is quite confusing, since most other files in `/var/log/zulip/`
use `.1` to mean logrotate was used. Also note that these are not all
sequential -- 4, 5, and 6 are mysteriously missing, though they were
used in previous restarts. This can make it extremely hard to debug
logs from a particular Tornado shard.
Give the logfiles a consistent name, and set them up to logrotate.
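A sketch of what the resulting supervisor stanza might look like (paths
and names illustrative, using the port number as the process number),
with a matching logrotate entry for the new filenames:
```
[program:zulip-tornado]
command=./manage.py runtornado %(process_num)s
process_name=zulip-tornado-port-%(process_num)s
numprocs=2
numprocs_start=9800
stdout_logfile=/var/log/zulip/tornado-%(process_num)s.log
```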
These weren’t wrong since orjson.JSONDecodeError subclasses
json.JSONDecodeError which subclasses ValueError, but the more
specific ones express the intention more clearly.
(ujson raised ValueError directly, as did json in Python 2.)
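For example:
```
import orjson

def parse_event(raw: bytes):
    try:
        return orjson.loads(raw)
    except orjson.JSONDecodeError:
        # Equivalent to catching ValueError, but states the intent.
        return None
```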
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The exception trace only goes from where the exception was thrown up
to where the `logging.exception` call is; any context as to where
_that_ was called from is lost, unless `stack_info` is passed as well.
Having the stack is particularly useful for Sentry exceptions, which
gain the full stack trace.
Add `stack_info=True` on all `logging.exception` calls with a
non-trivial stack; we omit `wsgi.py`. Adjusts tests to match.
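The pattern, for illustration:
```
import logging

logger = logging.getLogger(__name__)

def risky_operation() -> None:  # stand-in for real work
    raise ValueError("boom")

try:
    risky_operation()
except Exception:
    # stack_info=True appends the caller's stack to the record, on top
    # of the exception traceback logger.exception already includes.
    logger.exception("Failed to handle event", stack_info=True)
```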
By default, `requests` has no timeout on requests, which can lead to
waiting indefinitely. Add a half-second timeout on these; this is
applied _inside_ each retry, not overall -- that is, with retries any
of these functions may take a total of 1.5s.
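A sketch of applying the per-attempt timeout through an adapter (name
invented):
```
import requests
from requests.adapters import HTTPAdapter

class TimeoutAdapter(HTTPAdapter):  # hypothetical name
    def send(self, request, **kwargs):
        # The timeout applies inside each retry attempt; with two
        # retries, a request may take up to 1.5s in total.
        if kwargs.get("timeout") is None:
            kwargs["timeout"] = 0.5
        return super().send(request, **kwargs)

session = requests.Session()
session.mount("http://localhost:9800/", TimeoutAdapter(max_retries=2))
```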
Use the `no_proxy` proxy, which explicitly disables proxy usage for
particular hosts. This is a slightly cleaner solution than ignoring
all of the environment, as removing proxies is specifically what we
are attempting to accomplish.
The change in #2764 provided a better error message on one of the
three calls into Tornado, but left the other two with the old error
message. `raise_for_status` was used on two out of three.
Use a custom HTTPAdapter to apply this pattern to all requests from
Django to Tornado.
Apparently, `update_message` events unexpectedly contained what were
intended to be internal data structures about which users were
mentioned in a given message.
The bug has been present and accumulating new data structures for
years.
Fixing this should improve the performance of handling update_message
events as well as cleaning up this API's interface.
This was discovered by our automated API documentation schema checking
tooling detecting these unexpected elements in these event
definitions; that same logic should prevent similar bugs from
being introduced in the future.
There is still some miscellaneous cleanup that
has to happen for things like analytics queries
and dead code in node tests, but this should
remove the main use of pointers in the backend.
(We will also still need to drop the DB field.)