Throwing an exception is excessive in the case of this worker, as it's
expected to time out occasionally if the URLs take too long to
process.
With a test added by tabbott.
This allows specific queue workers to override the default behavior and
implement their own response to the timer expiring. We will want to use
this for the embed_links queue at least.
The list of possible values of all settings was re-defined in
do_test_realm_update_api; we can instead use the list defined in
models.py, which is used to validate values in views/realm.py.
The order of values is not a problem, as we always initialize with the
first value of the list.
As a result, this also adds some more values to test for a couple of
settings.
We change test_change_realm_default_language to test only an invalid
value and rename it to test_invalid_realm_default_language, because
we already test whether the value is changed correctly in
do_test_realm_update_api.
This commit removes test_change_bot_creation_policy, which was used
to test changing bot_creation_policy using the 'PATCH /realm' endpoint,
as we already do this in do_test_realm_update_api and invalid values
are also tested in test_invalid_integer_attribute_values.
This commit removes test_change_email_address_visibility, which was used
to test changing email_address_visibility using the 'PATCH /realm'
endpoint, as we already do this in do_test_realm_update_api and invalid
values are also tested in test_invalid_integer_attribute_values.
This commit removes test_change_invite_to_stream_policy, which was used
to test changing invite_to_stream_policy using the 'PATCH /realm'
endpoint, as we already do this in do_test_realm_update_api and invalid
values are also tested in test_invalid_integer_attribute_values.
This commit removes test_change_invite_to_realm_policy, which was used
to test changing invite_to_realm_policy using the 'PATCH /realm'
endpoint, as we already do this in do_test_realm_update_api and invalid
values are also tested in test_invalid_integer_attribute_values.
This commit removes test_change_move_messages_between_streams_policy,
which was used to test changing move_messages_between_streams_policy
using the 'PATCH /realm' endpoint, as we already do this in
do_test_realm_update_api and invalid values are also tested in
test_invalid_integer_attribute_values.
This commit removes test_user_group_edit_policy, which was used
to test changing user_group_edit_policy using the 'PATCH /realm'
endpoint, as we already do this in do_test_realm_update_api and
invalid values are also tested in test_invalid_integer_attribute_values.
This commit removes test_private_message_policy, which was used to
test changing private_message_policy using the 'PATCH /realm' endpoint,
as we already do this in do_test_realm_update_api and invalid
values are also tested in test_invalid_integer_attribute_values.
This commit removes test_change_wildcard_mention_policy, which was
used to test changing wildcard_mention_policy using the 'PATCH /realm'
endpoint, as we already do this in do_test_realm_update_api and
invalid values are also tested in test_invalid_integer_attribute_values.
This commit removes test_change_stream_creation_policy, which was
used to test changing create_stream_policy using the 'PATCH /realm'
endpoint, as we already do this in do_test_realm_update_api and
invalid values are also tested in test_invalid_integer_attribute_values.
Currently, the "Home" link at the top takes one to the doc root,
i.e., /help or /api. This is a little misleading since "Home"
seems to be more synonymous with the Zulip homepage.
This commit adds a proper backlink to the top logo that takes you to
the homepage and renames "Home" to be more specific. The text after
"|" will now take you to the doc root instead (/help or /api). Note
that this allows us to link the /help and /api pages from the
homepage while ensuring that backlinks allow the visitor to get back
to the homepage.
Currently, there is no provision for rendering
additional imports automatically, and those examples
were hardcoded in the templates.
This commit adds an OpenAPI parameter
to store the additional imports required for the endpoint.
Further, it changes the code to replace `import zulip`
with the modified imports.
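A minimal sketch of the substitution step described here; the helper name and the shape of the data passed in are hypothetical illustrations, not the actual implementation:
```python
from typing import List, Optional


def render_python_imports(extra_imports: Optional[List[str]]) -> str:
    # Start from the default `import zulip` line and append any
    # endpoint-specific imports declared in the OpenAPI data.
    lines = ["import zulip"]
    lines.extend(f"import {module}" for module in extra_imports or [])
    return "\n".join(lines)


print(render_python_imports(["json"]))
```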
The reason for this bug is the different stripping
processes in the backend and frontend: the frontend
checks whether the message's `raw_content` has changed to
decide if the `content` of the message should be sent in
the request to the backend or not. In doing so, it removes
the leading newline ('\n') from the message's `raw_content`,
which causes the "Error saving edit:
You don't have permission to edit this message" error.
This commit fixes it by removing the leading newline
when cleaning message content.
The bug was explained by @punchagan and its solution
by @timabbott.
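A minimal sketch of the cleaning step, assuming the fix strips the leading newline on the server side so that both sides agree (the helper name is hypothetical):
```python
def clean_message_content(content: str) -> str:
    # Strip the leading newline so the stored content matches what the
    # frontend compares against when deciding whether content changed.
    return content.lstrip("\n").rstrip()
```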
This change allows check_webhook to raise an error when a message is
sent unexpectedly, and vice versa. This is useful when a payload is not
expected to produce any output messages.
In addition to event filtering, we add support for registering the supported
events for a webhook integration using the webhook_view decorator.
The event types are stored directly on the view function as a function
attribute, and can later be accessed given the module path and the view
function name (which are already specified in integrations.py).
Note that WebhookTestCase doesn't know the name of the view function
and the module of the webhook. WEBHOOK_DIR_NAME needs to be overridden
if we want exceptions to be raised when one of our test functions triggers
an unspecified event, but this practice is not enforced.
all_event_type does not need to be given even if event filters are used
in the webhook. But if a list of event types is given, it will be possible
for us to include it in the documentation while ensuring that all the
tested events are included (but not vice versa at the current stage, as
we do not yet require all the events in the list to be tested).
This guarantees that we can always access the list of all the tested
events of a webhook. This feature will later be plumbed through to macros
to display all event types dynamically in doc.md.
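A rough sketch of how the event types might be attached to the view function as an attribute (the decorator signature and attribute name below are assumptions based on this description):
```python
from functools import wraps
from typing import Any, Callable, List, Optional


def webhook_view(
    webhook_client_name: str, all_event_type: Optional[List[str]] = None
) -> Callable[[Callable[..., Any]], Callable[..., Any]]:
    def decorator(view_func: Callable[..., Any]) -> Callable[..., Any]:
        @wraps(view_func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            return view_func(*args, **kwargs)

        # Store the supported events on the function itself so they can be
        # looked up later from the module path and the view function name.
        wrapper._all_event_types = all_event_type
        return wrapper

    return decorator
```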
This commit migrates the `right_sidebar.html` Django template
to handlebars by creating a new file, `right_sidebar.hbs`,
which is then rendered using the `ui_init` module.
It also removes the corresponding tests in `test_home`, since these
elements aren't rendered on the backend anymore after the template
migration.
We also remove `test_compute_show_invites_and_add_streams*`.
Fixes part of #18792.
This commit migrates the `left_sidebar.html` Django template
to handlebars by creating a new file, `left_sidebar.hbs`,
which is then rendered using the `ui_init` module.
These are the minor changes introduced as part of the template
migration -
- The `compute_show_invites_and_add_streams` function now
only checks invite_to_realm_policy.
- Renamed the `compute_show_invites_and_add_streams` function
to `compute_show_invites` due to the above change.
- Fixed the relevant `test_home.py` tests affected by the above
changes.
Fixes part of #18792.
We will later use this data to include text like:
`<sender> mentioned @<user_group>` instead of the current
`<sender> mentioned you` when someone mentions a user group
the current user is a part of in email/push notifications.
Part of #13080.
We will use this later to display which user group was mentioned
in push and email notifications.
`mentioned_user_group_ids` is kept as a List (not a Set) to ensure proper
test coverage of the function, since it depends on the order of iteration,
and we cannot control the order of iteration of a set (which we'd need
to do for proper testing).
Part of #13080.
Commit 81d7dd1fda broke this nearly
eight years ago, so probably nobody cares except the ever-watchful eye
of mypy.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The absence of __init__.py was preventing mypy from following any of
the zerver.openapi imports. These errors were being silenced by
ignore_missing_imports.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
An organization with at most 5 users that is behind on payments isn't
worth spending time investigating.
For larger organizations, we likely want somewhat different logic that
at least does not void invoices.
get_public_upload_root_url and construct_public_upload_url_base were
both doing basically the same thing. We deduplicate this, making them
share the same code, using the urljoin-based approach from
get_public_upload_root_url.
Using a format string is not a great idea, as it doesn't handle the case
of the URL already having parts that will be interpreted as format
string metacharacters. On the downside, this approach negatively affects
performance:
```
...: s = time.time()
...: for i in range(0, 250):
...:     r = u.get_public_upload_url("foo")
...: print(time.time()-s)
0.020366191864013672
```
up from 0.001 before this change.
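Roughly, the shared urljoin-based construction looks like the sketch below (the base-URL argument is illustrative; the real helper reads it from the upload backend's configuration):
```python
from urllib.parse import urljoin


def get_public_upload_url(public_upload_url_base: str, key: str) -> str:
    # Unlike a format string, urljoin is indifferent to characters in the
    # base URL that would otherwise be read as format-string metacharacters.
    return urljoin(public_upload_url_base, key)


print(get_public_upload_url("https://uploads.example.com/", "avatars/abc.png"))
```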
As we have changed the tab selector above from "Settings" to "Personal
settings", we can simply change "Your bots" to "Bots" as "Bots" is
clear enough given the personal settings context.
We also need to update the API documentation for bots accordingly.
Some response descriptions weren't migrated into
OpenAPI data, as was done for other API pages.
This commit migrates the response descriptions for the
error-handling page.
This commit fixes the documentation of settings, as we have
split the "Your account" section into two new sections -
"Profile" and "Account & privacy".
This commit also fixes a comment in the test for settings
documentation in test_middleware.py.
JsonableError has two major benefits over json_error:
* It can be raised from anywhere in the codebase, rather than
being a return value, which is much more convenient for refactoring,
as one doesn't potentially need to change error handling style when
extracting a bit of view code to a function.
* It is guaranteed to contain the `code` property, which is helpful
for API consistency.
Various stragglers are not updated because JsonableError requires
subclassing in order to specify custom data or HTTP status codes.
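For illustration, the raise-from-anywhere style looks roughly like this sketch (simplified; actual call sites use translated strings and often JsonableError subclasses):
```python
from zerver.lib.exceptions import JsonableError


def check_topic(topic: str) -> None:
    # This can be raised from any helper, however deeply nested; the
    # middleware converts it into a JSON error response with a `code` field.
    if not topic.strip():
        raise JsonableError("Topic can't be empty")
```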
The previous string was bold, potentially confusing, and didn't
explain clearly what's happening. We replace this with a string that's
more or less copied from what we do in email notifications with the
similar setting enabled.
When a user has disabled message content in mobile push notifications,
we send a fixed string (currently "REDACTED") as the content of the
notification. Previously, this string was not tagged for translation;
we fix that here.
Additionally, because mobile push notifications are generated in a
queue worker, they do not have the user's language set by the Django
middleware. Our email notifications solve that problem using
`override_language`; we do the same here.
We choose to do override_language in get_message_payload_apns and
get_message_payload_gcm, rather than the caller, in order to be
consistent with tests.
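The pattern in question looks roughly like this sketch (heavily simplified; the real payload builders take more arguments and the placeholder string here is illustrative):
```python
from django.utils.translation import gettext as _
from django.utils.translation import override as override_language


def get_redacted_content(user_language: str) -> str:
    # Queue workers don't pass through Django's locale middleware, so the
    # language has to be activated explicitly before translating.
    with override_language(user_language):
        return _("New message")
```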
Tested end-to-end by tabbott by setting a translation for "REDACTED"
manually in German.
Fixes #18713.
We incorrectly include many realm settings in the data section of the
'realm/update_dict' schema. It should only contain the settings
related to message editing, realm icon, realm logo, and authentication
methods, and not other settings, because all the other settings send a
'realm/update' event, not a 'realm/update_dict' event.
This commit only removes 'allow_message_deleting'; others will
be removed separately.
This helper will be used to check whether
the user is allowed to edit user groups or
not. Currently it is not used, but it will
be used in the next commit, where we will
refactor user_group_edit_policy to use
COMMON_POLICY_TYPES.
We modify check_send_webhook_message to make it accept three new
parameters: only_events and exclude_events, which are retrieved using REQ,
and complete_event_type, which is passed by the incoming webhook view
and is filtered according to the former two parameters.
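A simplified, self-contained sketch of the filtering logic described above (the helper name is hypothetical; the real check also handles REQ parsing and topic/stream arguments):
```python
from typing import Optional, Sequence


def should_send_webhook_message(
    complete_event_type: Optional[str],
    only_events: Optional[Sequence[str]] = None,
    exclude_events: Optional[Sequence[str]] = None,
) -> bool:
    if complete_event_type is None:
        return True
    if only_events is not None and complete_event_type not in only_events:
        return False
    if exclude_events is not None and complete_event_type in exclude_events:
        return False
    return True
```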
Part of #18525.
Since FIXTURE_DIR_NAME is the name of the folder that contains the view
and tests modules of the webhook, plus another folder called "fixtures" that
stores the fixtures, it is more appropriate to call it WEBHOOK_DIR_NAME,
especially when we want to refer to the view module using this variable.
* Move content on moving topics between streams to a dedicated
article. We advertise it as "move content" to hint that one can move
messages or split topics, and link to it.
* This deletes change-the-topic-of-a-message, because the same content
is already covered in rename-a-topic.
* This commit mostly just moves content between articles. Most of that
content was redundant with the first few paragraphs of the surviving
"rename a topic" article. The former "This is useful for" sentence
was adapted to the remaining article.
* This commit also adds a redirect for the removed article, and
updates related links.
This commit fixes the bug of always showing the
day-mode realm logo when the color scheme display
setting is set to automatic but the OS setting
is dark theme. This happens because we cannot check
the OS setting on the backend, so we need to set
the logo URL accordingly in the frontend only.
So, we remove the logo URL computation from the
backend completely and instead compute it in
the frontend only.
Fixes #18778.
This commit ports the `search_operators.html` file from
./templates to handlebars, essentially creating a new file,
`search_operators.hbs`, within /static/templates, which is
then rendered using info_overlays.js.
As part of this migration, we rewrote the way internationalization was
done, since the previous implementation incorrectly did not support
languages with a different word order than English.
We also now consistently use periods at the end of the descriptions.
Co-authored-by: Tim Abbott <tabbott@zulip.com>
Fixes #18504.
This commit ports the `keyboard_shortcuts.html` file from
using the Django template to handlebars, essentially creating
a new file, `keyboard_shortcuts.hbs`, within /static/templates,
which is then rendered using info_overlays.js.
Fixes part of #18792.
This command was part of the complex migration to introduce the
`unread_msgs` data structure as the source of truth for unreads.
Effectively, it's a migration to remove anomalies that we ran several
times before turning it into the final 0104_fix_unreads.py migration.
Fixes part of #18898.
This command is part of a statsd infrastructure that we stopped
supporting years ago. Its only purpose for some time has been to
provide sample code for how the restart script might trigger a
notification to a graphing system, which doesn't justify maintaining
it.
Fixes part of #18898.
This command was part of early prototyping of the digests feature, and
in particular its purpose is better served via the organization-level
setting to control digest emails for the organization.
Fixes part of #18898.
This command was written to allow generating multiuse invite links
before the "Invite a user" UI supported them. It no longer has a
purpose and can be safely deleted.
Fixes part of #18898.
This command predates there being a normal UI for inviting users to
Zulip. It no longer has a role for which it's a better way to do
things. (Especially with upcoming API documentation for the endpoint).
Fixes part of #18898.
This command was introduced in 2013 via
6d6c3364dc as part of implementing
marking messages as read in a separate process for performance reasons.
We fixed the performance issues and removed that pipeline years ago,
but forgot to delete this.
Fixes part of #18898.
This adds a new class called MessageRenderingResult to contain the
additional properties we added to the Message object (like alert_words)
as well as the rendered content, to ensure type-safe references. No
behavioral change is made except changes in typing.
This is a preparatory change for adding django-stubs to the backend.
Related: #18777
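For a sense of the shape of such a container, here is a hedged sketch; the field set of the real MessageRenderingResult is larger and may differ:
```python
from dataclasses import dataclass, field
from typing import Set


@dataclass
class MessageRenderingResult:
    rendered_content: str
    alert_words: Set[str] = field(default_factory=set)
    mentions_user_ids: Set[int] = field(default_factory=set)
```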
Add a new `verify_signup` helper function, which currently implements
enough functionality to be used by `test_signup_existing_email`.
This is the first step towards #7564.
This is a prep commit in preparation for splitting
create_stream_policy into create_private_stream_policy
and create_public_stream_policy.
This extracts it in a way that makes it possible to easily test
different stream policies in the upcoming stream policy split.
This is a prep commit in preparation for splitting
create_stream_policy into create_private_stream_policy
and create_public_stream_policy.
This extracts it in a way that makes it possible to easily test
different stream policies in the upcoming stream policy split.
This is a prep commit in preparation for splitting
create_stream_policy into create_private_stream_policy
and create_public_stream_policy.
This extracts it in a way that makes it possible to easily test
different stream policies in the upcoming stream policy split.
test_create_stream_policy_setting (in class StreamAdminTest) and
test_user_settings_for_creating_streams (in class SubscriptionAPITest)
test essentially the same thing, so we remove one of them.
Removing test_create_stream_policy_setting makes sense,
since class StreamAdminTest tests things admins can do, whereas
non-admin users can also create streams.
test_invite_to_stream_by_invite_period_threshold (in class StreamAdminTest)
and test_user_settings_for_subscribing_other_users
(in class SubscriptionAPITest) test essentially the same thing,
so we remove one of them.
Removing test_invite_to_stream_by_invite_period_threshold makes sense,
since class StreamAdminTest tests things admins can do, whereas
non-admin users can also invite other users.
This was used to test the can_create_stream property of a guest user.
There are better ways to test it, which are already implemented in
test_can_create_streams.
This PR adds a basic .md template that is followed by a lot of /api
pages. Since we have recently done the migration work to ensure that
our REST API documentation pages for individual endpoints are almost
all identical files following a common pattern, we can now get the
payoff of deleting them all in favor of a shared template.
This removes 2000 lines of somewhat finicky configuration from the
codebase, and thus should save significant effort when documenting new
API endpoints in the future.
The markdown files for endpoints or other pages which deviate from the
standard template remain, and the docs are instead generated from
those files using the existing system.
The return values of the get_path function will be
expanded soon, and defining a dataclass will make
the code cleaner for returning and using the fields.
As a part of the goal of moving towards a common template,
the hardcoded python tabs need to be removed to ensure
that endpoints which don't have python examples can be
covered by the common template as well.
This commit also modifies the markdown extension for python
examples to render an empty string in case the examples don't
exist, which allows it to be called whether or not the endpoint
has python examples.
Currently, the message that no parameters are accepted by
the endpoint is displayed if there are no parameters in
OpenAPI data, but it is possible that information is
encoded in x-parameter-description (for example, in the upload-file
endpoint), and we want to display that information rather
than the message.
An if condition was added to check for this.
We use the "does not accept any parameters" language in the common
template that we'll be migrating to shortly, so we remove this
variance (and adjust its test).
This removes some complexity from the event_queue module.
To avoid code duplication, we reduce the `is_notifiable` methods to
internally just call the `trigger` methods and check their return value.
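A minimal, self-contained sketch of this delegation pattern (the class, field, and method names are assumptions based on this description):
```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class UserNotificationData:
    mentioned: bool = False
    stream_push_notify: bool = False

    def get_push_notification_trigger(self) -> Optional[str]:
        if self.mentioned:
            return "mentioned"
        if self.stream_push_notify:
            return "stream_push_notify"
        return None

    def is_push_notifiable(self) -> bool:
        # Avoid duplicating the trigger logic: notifiable simply means
        # that some trigger applies.
        return self.get_push_notification_trigger() is not None
```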
* Modify `maybe_enqueue_notifications` to take in an instance of the
dataclass introduced in 951b49c048.
* The `check_notify` tests tested the "when to notify" logic in a way
which involved `maybe_enqueue_notifications`. To simplify things, we've
earlier extracted this logic in 8182632d7e.
So, we just kill off the `check_notify` test, and keep only those parts
which verify the queueing and return value behavior of that function.
* We retain the missedmessage_hook and message_edit_notifications tests
since they are more integration-style.
* There's a slightly subtle change with the missedmessage_hook tests.
Before this commit, we short-circuited the hook if the sender was muted
(5a642cea11).
With this commit, we delegate the check to our dataclass methods.
So, `maybe_enqueue_notifications` will be called even if the sender was
muted, and the test needs to be updated.
* In our test helper `get_maybe_enqueue_notifications_parameters` which
generates default values for testing `maybe_enqueue_notifications` calls,
we keep `message_id`, `sender_id`, and `user_id` as required arguments,
so that the tests are super-clear and avoid accidental false positives.
* Because `do_update_embedded_data` also sends `update_message` events,
we deal with that case with some hacky code for now. See the comment
there.
This mostly completes the extraction of the "when to notify" logic into
our new `notification_data` module.
The AUTHENTICATION LINE variable needs to be set after each
line is executed, but in the current code, it wasn't being
set for endpoints whose files were removed in favour of
the pages being generated directly from OpenAPI data.
Moved the block that sets AUTHENTICATION LINE into the loop
which executes each command, which fixes the bug.
While importing a realm, the stream dictionaries in data['zerver_stream']
already contain a field named `rendered_description`, which is set to
`""`. This led the code to assume that the stream's rendered_description
was already set, due to which it was not setting the rendered_description
field for any stream.
This is a prep commit for adding realm-level defaults for various
user settings. We add the language in which the invite email will
be sent to the dict added to the queue itself, to avoid making queries
in a loop when sending multiple emails from the queue.
We also handle the case of old events in the queue.
We removed the use of the email_body field in 47fcb27e39, but it was
still passed in events from do_resend_user_invite_email and
in tests. So this commit removes the email_body field from
these places.
As a goal of a common template, there
is a need for a tool to auto-generate
a general description for all parameters
directly from OpenAPI data.
This description is to be stored in the
x-parameter-description field, and this
commit adds a markdown extension to process it.
As a goal of a common template, there
is a need for a tool to auto-generate
a general description for all responses
directly from OpenAPI data.
This description is to be stored in the
x-response-description field, and this
commit adds a markdown extension to process it.
We already have this data in the `flags` for each user, so no need to
send this set/list in the event dictionary.
The `flags` in the event dict represent the after-message-update state,
so we can't avoid sending `prior_mention_user_ids`.
This is much faster than calling generate_presigned_url each time.
```
In [3]: t = time.time()
...: for i in range(250):
...:     x = u.get_public_upload_url("foo")
...: print(time.time()-t)
0.0010945796966552734
```
Fixes #18915.
This was very slow, causing performance issues. After investigating,
generate_presigned_url is the cheap part of this, but the
session.client() call is expensive - so that's what we should cache.
Before the change:
```
In [4]: t = time.time()
...: for i in range(250):
...:     x = u.get_public_upload_url("foo")
...: print(time.time()-t)
6.408717393875122
```
After:
```
In [4]: t = time.time()
...: for i in range(250):
...:     x = u.get_public_upload_url("foo")
...: print(time.time()-t)
0.48990607261657715
```
This is not good enough to avoid doing something ugly like replacing
generate_presigned_url with some manual URL manipulation, but it's a
helpful structure that we may find useful with further refactoring.
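The caching idea looks roughly like the sketch below (the bucket name and helper names are illustrative; the real code lives in the S3 upload backend and differs in detail):
```python
from functools import lru_cache

import boto3


@lru_cache(maxsize=None)
def get_s3_client():
    # Constructing the client via session.client() is the expensive step,
    # so build it once and reuse it.
    return boto3.session.Session().client("s3")


def get_public_upload_url(key: str) -> str:
    # generate_presigned_url itself is comparatively cheap.
    return get_s3_client().generate_presigned_url(
        "get_object",
        Params={"Bucket": "example-bucket", "Key": key},
        ExpiresIn=3600,
    )
```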
Previously, it was possible for an unusual series of topic-edit
actions to result in Notification Bot reporting that a topic was
marked as resolved that had already been marked as resolved, etc.
A buggy client might send a message_edit request to change the topic
field, sending the current topic as the new value. Previously, we
would treat that as a normal request to edit the topic; now we act as
though the API request had not requested a topic change. In the
common case that only the topic was in the edit request, this now
results in an error that should help client implementations identify
their bug.
This fixes a bad interaction with the "unresolve topic" logic, which
assumed that upstream logic had verified that the topic was actually
changing.
We incorrectly include many realm settings in the data section of the
'realm/update_dict' schema. It should only contain the settings
related to message editing, realm icon, realm logo, and authentication
methods, and not other settings, because all the other settings send a
'realm/update' event, not a 'realm/update_dict' event.
This commit only removes 'default_twenty_four_hour_time' and
'default_language'; others will be removed separately.
Now, the markdown extension of curl_examples generates
all examples of all possible configurations with
their descriptions, and so we need to separate
executable curl commands from the rest of the raw
HTML.
This commit simply changes the indentation of the
block and replaces the command being tested
with each element of the commands array. This
was split for an easier review.
Now, the markdown extension of curl_examples generates
all examples of all possible configurations with
their descriptions, and so we need to separate
executable curl commands from the rest of the raw
HTML.
This commit adds a commands variable to store
all the curl commands in HTML using regex.
Now, the include and exclude configurations
are fetched from OpenAPI data, and only
one type can be encoded for every example.
This removes the need for the assertion
testing whether both include and exclude are present,
since only one can be present at a time.
This commit adds support for using the
x-curl-examples-parameters parameter in OpenAPI
data to fetch curl examples configuration. This
also contains any descriptions necessary for each
example, and generates all possible
curl examples directly.
A follow-up commit is needed to modify the templates
accordingly.
As a goal of moving towards a common template,
we need to fetch curl examples' configurations
directly from openapi data instead of having them
hardcoded in templates. This commit introduces
x-curl-examples-parameters to store the configs
for the same.
* Have the `get_active_presence_idle_user_ids` function look at all the
user data, not just `private_message` and `mentioned`.
* Fix a couple of incorrect `missedmessage_hook` tests, which did not
catch the earlier behaviour.
* Add some comments to the tests for this function for clarity.
* Add a helper to create `UserMessageNotificationsData` objects from the
user ID lists. This will later help us deduplicate code in the event_queue
logic.
This fixes a pre-existing bug: if a user turned on stream
notifications, and received a message in that stream which did not mention
them, they wouldn't be in the `presence_idle_users` list, and hence would
never get notifications for that message.
Note that, after this commit, users might still not get notifications in
the above scenarios in some cases, because the downstream logic in the
notification queue consumers sometimes erroneously skips sending
notifications for stream messages.
Since `flags` here could be iterated through multiple times
(to check for push/email notifiability), we use `Collection`.
Inspired by 871e73ab8f.
The other change here in the `event_queue` code is prep for using
the `UserMessageNotificationsData` class there.
We will later consistently use these functions to check for notifiable
messages in the message send and event_queue code.
We have these functions accept the `sender_id` so that we can avoid the
`private_message = message["type"] == "private" and user_id != sender_id`
wizardry.
Before this commit, we used to pre-calculate flags for user data and send
it to Tornado, like so:
```
{
"id": 10,
"flags": ["mentioned"],
"mentioned": true,
"online_push_enabled": false,
"stream_push_notify": false,
"stream_email_notify": false,
"wildcard_mention_notify": false,
"sender_is_muted": false,
}
```
This has the benefit of simplifying the logic in the event_queue code a bit.
However, because we sent such an object for each user receiving the event,
the string keys (like "stream_email_notify") get duplicated in the JSON
blob that is sent to Tornado.
For 1000 users, this data may take up to ~190KB of space, which can
cause performance degradation in large organisations.
Hence, as an alternative, we send just the list of user_ids fitting
each notification criteria, and then calculate the flags in Tornado.
This brings down the space to ~60KB for 1000 users.
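To make the contrast concrete, here is a hedged sketch of the two wire formats (the "after" field names are assumptions based on this description):
```python
# Before: one pre-computed dict per recipient, duplicating the string keys
# for every user in the JSON blob sent to Tornado.
users_before = [
    {
        "id": 10,
        "flags": ["mentioned"],
        "mentioned": True,
        "online_push_enabled": False,
        "stream_push_notify": False,
        "stream_email_notify": False,
        "wildcard_mention_notify": False,
        "sender_is_muted": False,
    },
]

# After: compact per-criterion lists of user IDs; Tornado recomputes the
# per-user flags from these when processing the event.
event_after = {
    "online_push_user_ids": [10],
    "stream_push_user_ids": [],
    "stream_email_user_ids": [],
    "wildcard_mention_user_ids": [],
    "muted_sender_user_ids": [],
}
```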
This commit reverts parts of the following commits:
- 2179275
- 40cd6b5
We will in the future, add helpers to create `UserMessageNotificationsData`
objects from these lists, so as to avoid code duplication.
This is separate from the next commit for ease of testing.
To verify that the compatibility code works correctly, all message send
and event_queue tests from our test suite should pass on just this commit.
We now encode resolved topics with just:
U+2714 HEAVY CHECK MARK, SPACE
Previously, the encoding was unintentionally this:
U+2714 HEAVY CHECK MARK, U+FE0F VARIATION SELECTOR-16, SPACE
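In code, the intended prefix versus the accidental one looks like this (the constant names are illustrative):
```python
RESOLVED_TOPIC_PREFIX = "\u2714 "    # HEAVY CHECK MARK + space
OLD_BUGGY_PREFIX = "\u2714\ufe0f "   # HEAVY CHECK MARK + VARIATION SELECTOR-16 + space
```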