We removed the use of the email_body field in 47fcb27e39, but it was
still passed in events from do_resend_user_invite_email and in tests.
This commit removes the email_body field from those places.
As a goal of a common template, there
is a need for a tool to auto-generate
general description for all parameters
directly from OpenAPI data.
This description is to be stored in
the x-parameter-description field, and this
commit adds a markdown extension to process it.
As a goal of a common template, there
is a need for a tool to auto-generate
general description for all responses
directly from OpenAPI data.
This description is to be stored in
the x-response-description field, and this
commit adds a markdown extension to process it.
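A rough sketch of the kind of lookup the extension needs, assuming the spec has been parsed into a Python dict; the helper name and the `openapi_spec` argument are illustrative, not the actual Zulip code:
```
from typing import Any, Dict

def get_responses_description(openapi_spec: Dict[str, Any], path: str, method: str) -> str:
    # The auto-generated description is stored under the vendor
    # extension key on the operation object.
    operation = openapi_spec["paths"][path][method.lower()]
    return operation.get("x-response-description", "")
```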
We already have this data in the `flags` for each user, so no need to
send this set/list in the event dictionary.
The `flags` in the event dict represent the after-message-update state,
so we can't avoid sending `prior_mention_user_ids`.
This is much faster than calling generate_presigned_url each time.
```
In [3]: t = time.time()
   ...: for i in range(250):
   ...:     x = u.get_public_upload_url("foo")
   ...: print(time.time()-t)
0.0010945796966552734
```
Fixes #18915
This was very slow, causing performance issues. After investigating,
it turns out that generate_presigned_url is the cheap part of this; the
session.client() call is the expensive one, so that's what we should cache.
Before the change:
```
In [4]: t = time.time()
   ...: for i in range(250):
   ...:     x = u.get_public_upload_url("foo")
   ...: print(time.time()-t)
6.408717393875122
```
After:
```
In [4]: t = time.time()
   ...: for i in range(250):
   ...:     x = u.get_public_upload_url("foo")
   ...: print(time.time()-t)
0.48990607261657715
```
This is not good enough to avoid doing something ugly like replacing
generate_presigned_url with some manual URL manipulation, but it's a
helpful structure that we may find useful with further refactoring.
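A minimal sketch of the caching idea, assuming boto3; the class and attribute names are illustrative, not the actual upload backend:
```
import boto3

class UploadBackend:
    def __init__(self, bucket: str) -> None:
        self.bucket = bucket
        # Cache the S3 client: session.client() dominated the runtime,
        # while generate_presigned_url itself is cheap.
        self._s3_client = boto3.session.Session().client("s3")

    def get_public_upload_url(self, key: str) -> str:
        return self._s3_client.generate_presigned_url(
            ClientMethod="get_object",
            Params={"Bucket": self.bucket, "Key": key},
            ExpiresIn=3600,
        )
```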
Previously, it was possible for an unusual series of topic-edit
actions to result in Notification Bot reporting that a topic had been
marked as resolved when it was already resolved, etc.
A buggy client might send a message_edit request to change the topic
field, sending the current topic as the new value. Previously, we
would treat that as a normal request to edit the topic; now we act as
though the API request had not requested a topic change. In the
common case that only the topic was in the edit request, this now
results in an error that should help client implementations identify
their bug.
This fixes a bad interaction with the "unresolve topic" logic, which
assumed that upstream logic had verified that the topic was actually
changing.
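A hedged sketch of the new behavior; the function and field names here are illustrative:
```
from typing import Optional

def check_update_message(
    message: dict, content: Optional[str] = None, topic_name: Optional[str] = None
) -> None:
    if topic_name is not None and topic_name == message["topic"]:
        # A buggy client resent the current topic; act as though no
        # topic change was requested at all.
        topic_name = None
    if content is None and topic_name is None:
        raise ValueError("Nothing to change")
    ...  # the resolve/unresolve logic below can now assume the topic
    # is really changing.
```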
We incorrectly include many realm settings in the data section of
the 'realm/update_dict' schema. It should only contain the settings
related to message editing, the realm icon, the realm logo, and
authentication methods, and not other settings, because all the other
settings send a 'realm/update' event, not a 'realm/update_dict' event.
This commit only removes 'default_twenty_four_hour_time' and
'default_language'; others will be removed separately.
Now, the markdown extension of curl_examples generates
all examples of all possible configurations with
their descriptions, and so we need to separate
executable curl commands from the rest of the raw
HTML.
This commit simply changes the indentation of the
block and replaces the command being tested
with each element of the commands array. This
was split out for easier review.
Now, the markdown extension of curl_examples generates
all examples of all possible configurations with
their descriptions, and so we need to separate
executable curl commands from the rest of the raw
HTML.
This commit adds a commands variable that stores
all the curl commands extracted from the HTML using a regex.
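A hedged sketch of the separation; the regex and names are illustrative, not the actual extension code:
```
import re
from typing import List

# Assume executable snippets are rendered as <code> blocks that start
# with "curl"; everything else stays as raw HTML.
CURL_COMMAND_REGEX = re.compile(r"<code>(curl\b.*?)</code>", re.DOTALL)

def extract_curl_commands(rendered_html: str) -> List[str]:
    return CURL_COMMAND_REGEX.findall(rendered_html)
```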
Now, the include and exclude configurations
are fetched from OpenAPI data, and only
one type can be encoded for each example.
This removes the need for the assertion that
tests whether both include and exclude are present,
since only one can be present at a time.
This commit adds support for using the
x-curl-examples-parameters parameter in OpenAPI
data to fetch the curl examples configuration. This
also contains any descriptions necessary for each
example, and directly generates all possible
curl examples.
A follow-up commit is needed to modify the templates
accordingly.
As a goal of moving towards a common template,
we need to fetch curl examples' configurations
directly from OpenAPI data instead of having them
hardcoded in templates. This commit introduces
x-curl-examples-parameters to store these configs.
* Have the `get_active_presence_idle_user_ids` function look at all the
user data, not just `private_message` and `mentioned`.
* Fix a couple of incorrect `missedmessage_hook` tests, which did not
catch the earlier behaviour.
* Add some comments to the tests for this function for clarity.
* Add a helper to create `UserMessageNotificationsData` objects from the
user ID lists. This will later help us deduplicate code in the event_queue
logic.
This fixes a pre-existing bug: if a user turned on stream
notifications, and received a message in that stream which did not
mention them, they wouldn't be in the `presence_idle_users` list, and
hence would never get notifications for that message.
Note that, after this commit, users might still not get notifications in
the above scenarios in some cases, because the downstream logic in the
notification queue consumers sometimes erroneously skips sending
notifications for stream messages.
Since `flags` here could be iterated through multiple times
(to check for push/email notifiability), we use `Collection`.
Inspired by 871e73ab8f.
The other change here in the `event_queue` code is prep for using
the `UserMessageNotificationsData` class there.
We will later consistently use these functions to check for notifiable
messages in the message send and event_queue code.
We have these functions accept the `sender_id` so that we can avoid the
`private_message = message["type"] == "private" and user_id != sender_id`
wizardry.
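A minimal sketch of the check these helpers take over, with illustrative names:
```
def is_private_message_to_user(message_type: str, user_id: int, sender_id: int) -> bool:
    # The sender should never be treated as a private-message recipient
    # of their own message.
    return message_type == "private" and user_id != sender_id
```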
Before this commit, we used to pre-calculate flags for user data and send
it to Tornado, like so:
```
{
"id": 10,
"flags": ["mentioned"],
"mentioned": true,
"online_push_enabled": false,
"stream_push_notify": false,
"stream_email_notify": false,
"wildcard_mention_notify": false,
"sender_is_muted": false,
}
```
This has the benefit of simplifying the logic in the event_queue code a bit.
However, because we sent such an object for each user receiving the event,
the string keys (like "stream_email_notify") get duplicated in the JSON
blob that is sent to Tornado.
For 1000 users, this data may take up to ~190KB of space, which can
cause performance degradation in large organisations.
Hence, as an alternative, we send just the list of user_ids fitting
each notification criteria, and then calculate the flags in Tornado.
This brings down the space to ~60KB for 1000 users.
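An illustrative sketch of the slimmer payload shape and how the per-user flags can be rebuilt in Tornado; the key names here are assumptions, not the exact event fields:
```
from typing import Dict, List

event_user_data: Dict[str, List[int]] = {
    "online_push_user_ids": [10],
    "stream_push_user_ids": [],
    "stream_email_user_ids": [],
    "wildcard_mention_user_ids": [10],
    "muted_sender_user_ids": [],
}

def notification_flags(user_id: int, data: Dict[str, List[int]]) -> Dict[str, bool]:
    # Rebuild the old per-user booleans from the ID lists on the
    # Tornado side.
    return {
        "online_push_enabled": user_id in data["online_push_user_ids"],
        "stream_push_notify": user_id in data["stream_push_user_ids"],
        "stream_email_notify": user_id in data["stream_email_user_ids"],
        "wildcard_mention_notify": user_id in data["wildcard_mention_user_ids"],
        "sender_is_muted": user_id in data["muted_sender_user_ids"],
    }
```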
This commit reverts parts of the following commits:
- 2179275
- 40cd6b5
We will, in the future, add helpers to create `UserMessageNotificationsData`
objects from these lists, so as to avoid code duplication.
This is separate from the next commit for ease of testing.
To verify that the compatibility code works correctly, all message send
and event_queue tests from our test suite should pass on just this commit.
We now encode resolved topics with just:
U+2714 HEAVY CHECK MARK, SPACE
Previously, the encoding was unintentionally this:
U+2714 HEAVY CHECK MARK, U+FE0F VARIATION SELECTOR-16, SPACE
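A small sketch of the difference; the constant names are illustrative:
```
RESOLVED_TOPIC_PREFIX = "\u2714 "            # HEAVY CHECK MARK, SPACE (new)
OLD_RESOLVED_TOPIC_PREFIX = "\u2714\uFE0F "  # with VARIATION SELECTOR-16 (old, unintended)

def is_topic_resolved(topic_name: str) -> bool:
    return topic_name.startswith(RESOLVED_TOPIC_PREFIX)
```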
The language_list_dbl_col parameter in page_params
is used only by the web client frontend. The value is
calculated in the backend and then passed as a page_param,
which is unnecessary considering that the whole process
is beneficial for the frontend only. Hence, move the entire
calculation code to the frontend.
Fixes part of #18673.
default_language_name was a part of page_params, which is
redundant considering that language_list and default_language
are already available to the frontend and can be used to get
default_language_name; this prevents the backend from sending
an additional parameter.
Fixes part of #18673.
This is a follow-up for 98f8d94b25.
For cases when url_format_string is like https://example.com/%%(foo)s/%(bar)s,
group_match_regex should only detect `bar` as the intended
parameter, not `foo`.
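A hedged sketch of that distinction, with an illustrative regex rather than the exact one in the codebase:
```
import re

# A substitution group is `%(name)s`; a `%` that is itself preceded by
# another `%` is an escaped literal, so `%%(foo)s` is skipped.
GROUP_MATCH_REGEX = re.compile(r"(?<!%)%\((?P<group_name>[A-Za-z0-9_]+)\)s")

url_format_string = "https://example.com/%%(foo)s/%(bar)s"
print([m.group("group_name") for m in GROUP_MATCH_REGEX.finditer(url_format_string)])
# ['bar']
```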
We now validate the linkifier urls and patterns together, and add
the following additional checks:
1. All groups in the pattern must be used in the URL format string.
2. All groups in the URL format string must be declared in the pattern.
The linkifier pattern is now validated inside the `clean` method.
`filter_pattern_validator` is moved from `clean_fields` to the `clean`
method as a safety check. As a result of this, a Puppeteer test case
is updated.
NOTE: The changes here are IN ADDITION to the existing validations.
Fixes #16482.
Co-authored-by: akshatdalton <akshat.dak@students.iiit.ac.in>
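A hedged sketch of the two cross-checks; the function name, regex, and error messages are illustrative, not the actual validators:
```
import re
from typing import Set

def validate_linkifier(pattern: str, url_format_string: str) -> None:
    pattern_groups: Set[str] = set(re.compile(pattern).groupindex)
    url_groups: Set[str] = {
        m.group(1) for m in re.finditer(r"(?<!%)%\((\w+)\)s", url_format_string)
    }
    if pattern_groups - url_groups:
        raise ValueError("Group(s) in the pattern are not used in the URL format string.")
    if url_groups - pattern_groups:
        raise ValueError("Group(s) in the URL format string are not declared in the pattern.")
```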
This is a prep commit for adding members, full members and moderator
options to edit_topic_policy. As we will be adding tests for these
options, we would need to add a login statement repeatedly, and this
helps us avoid that.
This is a prep commit for adding the moderators, full
members, and members roles to edit_topic_policy.
As we add these new options, we will add tests with
users having all these roles, and thus would need to
log in as Iago repeatedly when changing parameters.
So, to avoid this, we instead log in as Iago in
set_message_editing_params itself.
This commit replaces the allow_community_topic_editing boolean with
integer field edit_topic_policy and includes both frontend and
backend changes.
We also update settings_ui.disable_sub_settings_onchange to not
change the color of the label, as we did previously when the setting
was a checkbox. Now that the setting is a dropdown, we keep the
label as it is and don't do anything with it when disabling
dropdowns. Also, this function was used only here, so we can safely
change it.
We do not need the 'topic_name is None' check in this function, as it is
called only when at least one of content and topic_name is not None,
and that condition cannot be true because of the 'content is not None'
check just before it.
Thus, the 'if topic_name is None' condition being true would mean that both
content and topic_name are None, which is not possible, as this function
would not be called in that case. An assert statement is added to check that
topic_name is not None, to make sure this is handled if the function
is called in some other way later.
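A generic sketch of that invariant, with illustrative names rather than the real function:
```
from typing import Optional

def handle_message_edit(content: Optional[str], topic_name: Optional[str]) -> None:
    # The caller invokes this only when at least one of content and
    # topic_name is not None.
    if content is not None:
        ...  # handle the content edit
    else:
        # content is None here, so topic_name cannot also be None.
        assert topic_name is not None
        ...  # handle the topic edit
```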
Previously, the `admin_config` configuration was
hardcoded in the templates, but as a goal of creating
a common template, we need to move all configurations
out of them.
This moves the checking of the functions and languages that need
admin_config out of the templates and into the code, and adds a
boolean `x-admin-config` to store whether the operation
requires an admin config.
Also, the banner for admin access is now auto-generated
if admin_config is present, and the admin_config is fixed
for endpoints that were earlier missing it in the templates.
When an unauthenticated user tries to access the /plans page, we
redirect to /accounts/login/?next=plans (note the missing slash
before "plans"). After the user is authenticated, they are then
redirected to /accounts/login/plans, which is an invalid URL. The
correct URL should be just /plans.
This commit solves this by prefixing the "plans" in the query
parameter with a forward slash, which results in the correct
redirect URL, i.e., /plans.
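An illustrative sketch of the fix (not the actual view code): prefix the next-page value with "/" so the post-login redirect is site-absolute.
```
from urllib.parse import urlencode

def login_url_with_next(next_page: str) -> str:
    # Prefix with "/" so the redirect target after login is /plans,
    # not /accounts/login/plans.
    return "/accounts/login/?" + urlencode({"next": "/" + next_page})

print(login_url_with_next("plans"))  # /accounts/login/?next=%2Fplans
```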
This is a prep change for calling `get_active_presence_idle_user_ids`
after we have collected all user data variables, so that that function
does not erroneously skip some user IDs due to not having the complete
data.
Unlike `receiver_is_off_zulip`, fetching from a dict is pretty cheap,
so we can calculate `online_push_enabled` along with the other
variables.
This is a prep change to start using the dataclass introduced in the
earlier commit in this code.
We will, in later commits, extend this class to contain methods
to determine whether a message is notifiable, but for now
we only turn it into a dict and pass it on.
This gives us a single place where all user data for the message
send event is calculated, and is a prep change for introducing
a TypedDict or dataclass to keep this data together.