This removes some complexity from the event_queue module.
To avoid code duplication, we reduce the `is_notifiable` methods to
thin wrappers that internally just call the corresponding `trigger`
methods and check their return value.
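A minimal sketch of the resulting pattern (field and trigger names
are illustrative, simplified from the real dataclass):
```
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserMessageNotificationsData:
    user_id: int
    mentioned: bool
    stream_push_notify: bool
    sender_is_muted: bool

    def get_push_notification_trigger(self, acting_user_id: int) -> Optional[str]:
        # Returns the reason to push-notify this user, or None.
        if self.user_id == acting_user_id or self.sender_is_muted:
            return None
        if self.mentioned:
            return "mentioned"
        if self.stream_push_notify:
            return "stream_push_notify"
        return None

    def is_push_notifiable(self, acting_user_id: int) -> bool:
        # Thin wrapper: reuse the trigger logic rather than
        # duplicating its conditions here.
        return self.get_push_notification_trigger(acting_user_id) is not None
```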
* Modify `maybe_enqueue_notifications` to take in an instance of the
dataclass introduced in 951b49c048.
* The `check_notify` tests tested the "when to notify" logic in a way
which involved `maybe_enqueue_notifications`. To simplify things, we
earlier extracted this logic in 8182632d7e.
So, we just kill off the `check_notify` tests, and keep only those
parts which verify the queueing and return value behavior of that
function.
* We retain the missedmessage_hook and message_edit_notifications
tests, since they are more integration-style.
* There's a slightly subtle change with the missedmessage_hook tests.
Before this commit, we short-circuited the hook if the sender was muted
(5a642cea11).
With this commit, we delegate the check to our dataclass methods.
So, `maybe_enqueue_notifications` will be called even if the sender was
muted, and the test needs to be updated.
* In our test helper `get_maybe_enqueue_notifications_parameters` which
generates default values for testing `maybe_enqueue_notifications` calls,
we keep `message_id`, `sender_id`, and `user_id` as required arguments,
so that the tests are super-clear and avoid accidental false positives.
* Because `do_update_embedded_data` also sends `update_message` events,
we deal with that case with some hacky code for now. See the comment
there.
This mostly completes the extraction of the "when to notify" logic into
our new `notification_data` module.
The AUTHENTICATION LINE variable needs to be set after each
line is executed, but in the current code, it wasn't being
set for endpoints whose files were removed in favour of
the pages being generated directly from OpenAPI data.
Moved the block that sets AUTHENTICATION LINE into the loop
which executes each command, which fixes the bug.
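A rough sketch of the fix; the loop structure and the
`run_curl_command` helper here are illustrative, not the actual
script:
```
def run_all_commands(curl_commands, email, api_key):
    for command in curl_commands:
        # Before the fix, this assignment lived outside the loop, so
        # commands for endpoints whose files were removed never got
        # AUTHENTICATION LINE set.
        authentication_line = f"{email}:{api_key}"
        run_curl_command(command, authentication_line)
```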
While importing a realm, the stream dictionaries in data['zerver_stream']
already contain a field named `rendered_description`, which is set to
`""`. This led the code to assume that the stream's rendered_description
was already set, so it never set the rendered_description field for
any stream.
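A minimal sketch of the fix, treating the exported empty string as
unset (`render_stream_description` stands in for whatever helper
renders the markdown):
```
for stream in data["zerver_stream"]:
    # The export writes rendered_description as "", so check for a
    # falsy value rather than for mere presence of the key.
    if not stream.get("rendered_description"):
        stream["rendered_description"] = render_stream_description(
            stream["description"]
        )
```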
This is a prep commit for adding realm-level default for various
user settings. We add the language in which the invite email will
be sent to the event dict added to the queue itself, to avoid making
queries in a loop when sending multiple emails from the queue.
We also handle the case for old events in the queue.
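Roughly, the idea looks like this; the key names and the fallback
helper are illustrative:
```
# Producer: include the language once, when enqueueing the event.
event = {
    "prereg_id": prereg_user.id,
    "referrer_id": referrer.id,
    "email_language": referrer.realm.default_language,
}
queue_json_publish("invites", event)

# Consumer: old events already in the queue may lack the language,
# so fall back to a per-event lookup in that case.
language = event.get("email_language")
if language is None:
    language = lookup_referrer_language(event["referrer_id"])
```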
We removed the use of the email_body field in 47fcb27e39, but it was
still being passed in events from do_resend_user_invite_email and
in tests. So this commit removes the email_body field from
these places.
As a goal of a common template, there
is a need for a tool to auto-generate
a general description for all parameters
directly from OpenAPI data.
This description is to be stored in the
x-parameter-description field, and this
commit adds a markdown extension to process it.
As a goal of a common template, there
is a need for a tool to auto-generate
a general description for all responses
directly from OpenAPI data.
This description is to be stored in the
x-response-description field, and this
commit adds a markdown extension to process it.
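A minimal sketch of such a Python-Markdown extension; the macro
syntax and the OpenAPI lookup stub are illustrative:
```
import re
from typing import List

from markdown import Markdown
from markdown.extensions import Extension
from markdown.preprocessors import Preprocessor

def get_openapi_response_description(endpoint: str) -> str:
    # Stub standing in for a lookup of the x-response-description
    # field in the parsed OpenAPI data.
    raise NotImplementedError

class ResponseDescriptionPreprocessor(Preprocessor):
    REGEXP = re.compile(r"\{generate_response_description\((?P<endpoint>[^)]+)\)\}")

    def run(self, lines: List[str]) -> List[str]:
        output: List[str] = []
        for line in lines:
            match = self.REGEXP.search(line)
            if match is None:
                output.append(line)
            else:
                # Substitute the description stored in the OpenAPI data.
                description = get_openapi_response_description(match.group("endpoint"))
                output.extend(description.splitlines())
        return output

class ResponseDescriptionExtension(Extension):
    def extendMarkdown(self, md: Markdown) -> None:
        md.preprocessors.register(
            ResponseDescriptionPreprocessor(md), "response_description", 25
        )
```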
We already have this data in the `flags` for each user, so no need to
send this set/list in the event dictionary.
The `flags` in the event dict represent the after-message-update state,
so we can't avoid sending `prior_mention_user_ids`.
This is much faster than calling generate_presigned_url each time.
```
In [3]: t = time.time()
...: for i in range(250):
...: x = u.get_public_upload_url("foo")
...: print(time.time()-t)
0.0010945796966552734
```
Fixes #18915
This was very slow, causing performance issues. After investigating,
generate_presigned_url is the cheap part of this; the
session.client() call is expensive, so that's what we should cache.
Before the change:
```
In [4]: t = time.time()
...: for i in range(250):
...: x = u.get_public_upload_url("foo")
...: print(time.time()-t)
6.408717393875122
```
After:
```
In [4]: t = time.time()
...: for i in range(250):
...: x = u.get_public_upload_url("foo")
...: print(time.time()-t)
0.48990607261657715
```
This is not good enough to avoid doing something ugly like replacing
generate_presigned_url with some manual URL manipulation, but it's a
helpful structure that we may find useful with further refactoring.
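A minimal sketch of the approach, assuming an S3 backend (the bucket
name and function names are illustrative):
```
from functools import lru_cache

import boto3
from botocore.client import BaseClient

@lru_cache(maxsize=None)
def get_boto_client() -> BaseClient:
    # session.client() is the expensive call, so create the client
    # once and reuse it for every URL.
    return boto3.session.Session().client("s3")

def get_public_upload_url(key: str) -> str:
    # generate_presigned_url itself is comparatively cheap.
    return get_boto_client().generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": "example-bucket", "Key": key},
        ExpiresIn=3600,
    )
```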
Previously, it was possible for an unusual series of topic-edit
actions to result in Notification Bot reporting that a topic was
marked as resolved when it had already been marked as resolved, etc.
A buggy client might send a message_edit request to change the topic
field, sending the current topic as the new value. Previously, we
would treat that as a normal request to edit the topic; now we act as
though the API request had not requested a topic change. In the
common case that only the topic was in the edit request, this now
results in an error that should help client implementations identify
their bug.
This fixes a bad interaction with the "unresolve topic" logic, which
assumed that upstream logic had verified that the topic was actually
changing.
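A self-contained sketch of the new check (the real code raises an
API-level error rather than ValueError):
```
from typing import Optional

def normalize_topic_edit(
    orig_topic: str,
    topic_name: Optional[str],
    content: Optional[str],
) -> Optional[str]:
    if topic_name is not None and topic_name == orig_topic:
        # A buggy client sent the current topic as the "new" value;
        # act as though no topic change was requested.
        topic_name = None
    if topic_name is None and content is None:
        # Only the unchanged topic was in the request; erroring here
        # helps client implementations identify their bug.
        raise ValueError("Nothing to change")
    return topic_name
```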
We incorrectly include many realm settings in the data section of
the 'realm/update_dict' schema. It should only contain the settings
related to message edit, realm icon, realm logo, and authentication
methods, and not other settings, because all the other settings send
the 'realm/update' event and not the 'realm/update_dict' event.
This commit only removes 'default_twenty_four_hour_time' and
'default_language'; others will be removed separately.
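For contrast, the two event shapes look roughly like this (values
illustrative):
```
# Most realm settings use the per-property event:
{"type": "realm", "op": "update", "property": "default_language", "value": "en"}

# Only the message edit, icon, logo, and authentication settings are
# grouped into an update_dict event:
{
    "type": "realm",
    "op": "update_dict",
    "property": "default",
    "data": {"allow_message_editing": True},
}
```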
Now, the markdown extension of curl_examples generates
all examples of all possible configurations with
their descriptions, and so we need to separate
executable curl commands from the rest of the raw
HTML.
This commit simply changes the indentation of the
block and replaces the command being tested
with each element of the commands array. This
change was split out for easier review.
Now, the markdown extension of curl_examples generates
all examples of all possible configurations with
their descriptions, and so we need to separate
executable curl commands from the rest of the raw
HTML.
This commit adds a commands variable to store
all the curl commands in HTML using regex.
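A rough sketch of that extraction (the exact pattern in the real
code may differ):
```
import re
from typing import List

def extract_curl_commands(html: str) -> List[str]:
    # Keep only the executable curl commands, dropping the
    # surrounding descriptions and other raw HTML.
    return re.findall(r"<code>curl\n(.*?)</code>", html, re.DOTALL)
```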
Now, include and exclude configurations
are fetched from openapi data, and only
one type can be encoded for every example.
This removes the need for the assertion
which tested whether both include and exclude
are present, since only one can be present at a time.
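A sketch of the simplified selection logic under that invariant
(names illustrative):
```
from typing import Dict, List

def select_parameters(
    config: Dict[str, List[str]], all_params: List[str]
) -> List[str]:
    # Exactly one of "include"/"exclude" can appear per example now,
    # so no assertion against both being present is needed.
    if "include" in config:
        return [p for p in all_params if p in config["include"]]
    if "exclude" in config:
        return [p for p in all_params if p not in config["exclude"]]
    return all_params
```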
This commit adds support for using the
x-curl-examples-parameters field in OpenAPI
data to fetch curl examples configuration. This
also contains any descriptions necessary for each
example, and generates all possible
curl examples directly.
A follow-up commit is needed to modify the templates
accordingly.
As a goal of moving towards a common template,
we need to fetch curl examples' configurations
directly from openapi data instead of having them
hardcoded in templates. This commit introduces
x-curl-examples-parameters to store the configs
for the same.
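Purely as a hypothetical illustration (the real schema is defined by
these commits and may differ), an entry could hold something like:
```
# Hypothetical example entries; key names and structure are not from
# the actual OpenAPI data.
curl_examples_parameters = [
    {"include": ["stream_id"], "description": "Fetch one stream:"},
    {"exclude": [], "description": "With all optional parameters:"},
]
```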
* Have the `get_active_presence_idle_user_ids` function look at all the
user data, not just `private_message` and `mentioned`.
* Fix a couple of incorrect `missedmessage_hook` tests, which did not
catch the earlier buggy behaviour.
* Add some comments to the tests for this function for clarity.
* Add a helper to create `UserMessageNotificationsData` objects from the
user ID lists. This will later help us deduplicate code in the event_queue
logic.
This fixes a pre-existing bug: if a user turned on stream
notifications, and received a message in that stream which did not
mention them, they wouldn't be in the `presence_idle_users` list, and
hence would never get notifications for that message.
Note that, after this commit, users might still not get notifications in
the above scenarios in some cases, because the downstream logic in the
notification queue consumers sometimes erroneously skips sending
notifications for stream messages.
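A sketch of the broadened check (setting names illustrative):
```
def is_presence_idle_candidate(user_data: dict) -> bool:
    # Look at all the notification-relevant user data, not just
    # private_message and mentioned, so that e.g. users relying on
    # stream notification settings also land in presence_idle_users.
    return any(
        user_data.get(setting, False)
        for setting in (
            "private_message",
            "mentioned",
            "wildcard_mention_notify",
            "stream_push_notify",
            "stream_email_notify",
        )
    )
```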
Since `flags` here could be iterated through multiple times
(to check for push/email notifiability), we use `Collection`.
Inspired by 871e73ab8f.
The other change here in the `event_queue` code is prep for using
the `UserMessageNotificationsData` class there.
We will later consistently use these functions to check for notifiable
messages in the message send and event_queue code.
We have these functions accept the `sender_id` so that we can avoid the
`private_message = message["type"] == "private" and user_id != sender_id`
wizardry.
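The rough shape of these helpers (simplified; real signatures may
differ):
```
from typing import Collection

def is_email_notifiable(
    user_id: int,
    sender_id: int,
    message_type: str,
    flags: Collection[str],
) -> bool:
    # `Collection` (not `Iterator`) because flags may be iterated
    # more than once across the push/email checks.
    if user_id == sender_id:
        return False
    private_message = message_type == "private"
    return private_message or "mentioned" in flags
```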
Before this commit, we used to pre-calculate flags for user data and send
it to Tornado, like so:
```
{
"id": 10,
"flags": ["mentioned"],
"mentioned": true,
"online_push_enabled": false,
"stream_push_notify": false,
"stream_email_notify": false,
"wildcard_mention_notify": false,
"sender_is_muted": false,
}
```
This has the benefit of simplifying the logic in the event_queue code a bit.
However, because we sent such an object for each user receiving the event,
the string keys (like "stream_email_notify") get duplicated in the JSON
blob that is sent to Tornado.
For 1000 users, this data may take up to ~190KB of space, which can
cause performance degradation in large organisations.
Hence, as an alternative, we send just the list of user_ids matching
each notification criterion, and then calculate the flags in Tornado.
This brings down the space to ~60KB for 1000 users.
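Roughly, instead of one verbose dict per user, the event now carries
compact ID lists like (field names illustrative):
```
{
    "online_push_user_ids": [10, 12],
    "stream_push_user_ids": [12],
    "stream_email_user_ids": [],
    "wildcard_mention_user_ids": [10],
    "muted_sender_user_ids": [],
}
```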
This commit reverts parts of the following commits:
- 2179275
- 40cd6b5
We will, in the future, add helpers to create `UserMessageNotificationsData`
objects from these lists, so as to avoid code duplication.
This is separate from the next commit for ease of testing.
To verify that the compatibility code works correctly, all message send
and event_queue tests from our test suite should pass on just this commit.
We now encode resolved topics with just:
U+2714 HEAVY CHECK MARK, SPACE
Previously, the encoding was unintentionally this:
U+2714 HEAVY CHECK MARK, U+FE0F VARIATION SELECTOR-16, SPACE
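In code form (constant names illustrative):
```
# New encoding: U+2714 HEAVY CHECK MARK followed by a space.
RESOLVED_TOPIC_PREFIX = "\u2714 "
# Old, unintentional encoding included U+FE0F VARIATION SELECTOR-16.
OLD_RESOLVED_TOPIC_PREFIX = "\u2714\ufe0f "
```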
The language_list_dbl_col parameter in page_params
is used only by the web client frontend. The value is
calculated in the backend and then passed as a page_param,
which is unnecessary considering that the whole process
is beneficial to the frontend only. Hence, move the entire
calculation code to the frontend.
Fixes part of #18673.
default_language_name was a part of page_params, which is actually
redundant considering that we already have language_list and
default_language available to the frontend, which can be used to
get the default_language_name. This prevents the backend
from sending an additional parameter.
Fixes part of #18673.