As we have changed the tab selector above from "Settings" to "Personal
settings", we can simply change "Your bots" to "Bots" as "Bots" is
clear enough given the personal settings context.
We also need to update the API documentation for bots accordingly.
This commit fixes the documentation of settings, as we have
split the "Your account" section into two new sections -
"Profile" and "Account & privacy".
This commit also fixes a comment in the test for settings
documentation in test_middleware.py.
JsonableError has two major benefits over json_error:
* It can be raised from anywhere in the codebase, rather than
being a return value, which is much more convenient for refactoring,
as one doesn't potentially need to change error handling style when
extracting a bit of view code to a function.
* It is guaranteed to contain the `code` property, which is helpful
for API consistency.
Various stragglers are not updated because JsonableError requires
subclassing in order to specify custom data or HTTP status codes.
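A sketch of the contrast (helper names are illustrative, and the import paths are assumptions):

```python
from zerver.lib.exceptions import JsonableError  # import path assumed
from zerver.lib.response import json_error  # old-style helper, path assumed

# Old style: the error is a return value, so extracting this check
# into a helper forces every caller to propagate the response object.
def check_stream_name_old(name: str):
    if name == "":
        return json_error("Stream name can't be empty")

# New style: raise from anywhere; middleware converts the exception
# into a JSON error response that always includes a `code` field.
def check_stream_name(name: str) -> None:
    if name == "":
        raise JsonableError("Stream name can't be empty")
```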
The previous string was bold, potentially confusing, and didn't
explain clearly what was happening. We replace this with a string that's
more or less copied from what we do in email notifications with the
similar setting enabled.
When a user has disabled message content in mobile push notifications,
we send a fixed string (currently "REDACTED") as the content of the
notification. Previously, this string was not tagged for translation;
we fix that here.
Additionally, because mobile push notifications are generated in a
queue worker, they do not have the user's language set by the Django
middleware. Our email notifications solve that problem using
`override_language`; we do the same here.
We choose to do override_language in get_message_payload_apns and
get_message_payload_gcm, rather than the caller, in order to be
consistent with tests.
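For reference, a sketch of the pattern (the call site details and the exact string are assumptions):

```python
from django.utils.translation import gettext as _
from django.utils.translation import override as override_language

def get_message_payload_apns(user_profile, message):
    # The queue worker has no request middleware to set the language,
    # so activate the recipient's language explicitly before marking
    # the redacted-content string for translation.
    with override_language(user_profile.default_language):
        content = _("New message")  # illustrative stand-in string
        ...
```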
Tested end-to-end by tabbott by setting a translation for "REDACTED"
manually in German.
Fixes #18713.
We modify check_send_webhook_message to accept three new parameters:
only_events and exclude_events, which are retrieved using REQ, and
complete_event_type, which is passed by the incoming webhook view and
is filtered according to the former two parameters.
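A sketch of the new signature and the filtering (details such as wildcard matching are elided/assumed):

```python
from typing import List, Optional

def check_send_webhook_message(
    request,
    user_profile,
    topic: str,
    body: str,
    complete_event_type: Optional[str] = None,
    *,
    only_events: Optional[List[str]] = None,
    exclude_events: Optional[List[str]] = None,
) -> None:
    if complete_event_type is not None and (
        (only_events is not None and complete_event_type not in only_events)
        or (exclude_events is not None and complete_event_type in exclude_events)
    ):
        return  # the URL parameters filtered this event out
    ...  # build the topic/body and send the message as before
```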
Part of #18525.
Since FIXTURE_DIR_NAME is the name of the folder that contains the view
and tests modules of the webhook, and another folder called "fixtures" that
stores the fixtures, it is more appropriate to call it WEBHOOK_DIR_NAME,
especially when we want to refer to the view module using this variable.
* Move content on moving topics between streams to a dedicated
article. We advertise it as "move content" to hint that one can move
messages or split topics, and link to it.
* This deletes change-the-topic-of-a-message, because the same content
is already covered in rename-a-topic.
* This commit mostly just moves content between articles. Most of that
content was redundant with the first few paragraphs of the surviving
"rename a topic" article. The former "This is useful for" se ntence
was adapted to the remaining article.
* This commit also adds a redirect for the removed article, and
updates related links.
This adds a new class called MessageRenderingResult to contain the
additional properties we added to the Message object (like alert_words)
as well as the rendered content, to ensure type-safe references. No
behavioral change is made except changes in typing.
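Roughly, the shape of the new class (the field list here is abridged and assumed):

```python
from dataclasses import dataclass, field
from typing import Set

@dataclass
class MessageRenderingResult:
    rendered_content: str
    mentions_user_ids: Set[int] = field(default_factory=set)
    alert_words: Set[str] = field(default_factory=set)
```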
This is a preparatory change for adding django-stubs to the backend.
Related: #18777
This PR adds a basic .md template that is followed by a lot of /api
pages. Since we have recently done the migration work to ensure that
our REST API documentation pages for individual endpoints are almost
all identical files following a common pattern, we can now get the
payoff of deleting them all in favor of a shared template.
This removes 2000 lines of somewhat finicky configuration from the
codebase, and thus should save significant effort when documenting new
API endpoints in the future.
The markdown files for endpoints or other pages which deviate from the
standard template remain, and the docs are instead generated from
those files using the existing system.
Currently, the message that no parameters are accepted by
the endpoint is displayed if there are no parameters in the
OpenAPI data, but it is possible that the information is
encoded in x-parameter-description (as in the upload-file
endpoint), and in that case we want to display that information
rather than the message.
An if condition is added to check for this case.
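A sketch of the added condition (variable and key names assumed):

```python
parameters = operation.get("parameters", [])
if not parameters and "x-parameter-description" not in operation:
    # Only when neither source of parameter documentation exists
    # is the "no parameters" message appropriate.
    lines.append("This endpoint does not accept any parameters.")
else:
    ...  # render the parameters or the x-parameter-description text
```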
This removes some complexity from the event_queue module.
To avoid code duplication, we reduce the `is_notifiable` methods to
internally just call the `trigger` methods and check their return value.
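Concretely, the reduction looks something like this (argument names are assumed):

```python
from dataclasses import dataclass

@dataclass
class UserMessageNotificationsData:
    # fields elided; see the dataclass introduced in 951b49c048

    def is_push_notifiable(self, acting_user_id: int, idle: bool) -> bool:
        # Notifiable iff some trigger applies; the trigger method
        # (defined on this class) returns None when nothing should fire.
        return self.get_push_notification_trigger(acting_user_id, idle) is not None

    def is_email_notifiable(self, acting_user_id: int, idle: bool) -> bool:
        return self.get_email_notification_trigger(acting_user_id, idle) is not None
```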
* Modify `maybe_enqueue_notifications` to take in an instance of the
dataclass introduced in 951b49c048.
* The `check_notify` tests tested the "when to notify" logic in a way
which involved `maybe_enqueue_notifications`. To simplify things, we've
earlier extracted this logic in 8182632d7e.
So, we just kill off the `check_notify` test, and keep only those parts
which verify the queueing and return value behavior of that function.
* We retain the missedmessage_hook and message_edit_notifications
tests, since they are more integration-style.
* There's a slightly subtle change with the missedmessage_hook tests.
Before this commit, we short-circuited the hook if the sender was muted
(5a642cea11).
With this commit, we delegate the check to our dataclass methods.
So, `maybe_enqueue_notifications` will be called even if the sender was
muted, and the test needs to be updated.
* In our test helper `get_maybe_enqueue_notifications_parameters` which
generates default values for testing `maybe_enqueue_notifications` calls,
we keep `message_id`, `sender_id`, and `user_id` as required arguments,
so that the tests are super-clear and avoid accidental false positives.
* Because `do_update_embedded_data` also sends `update_message` events,
we deal with that case with some hacky code for now. See the comment
there.
This mostly completes the extraction of the "when to notify" logic into
our new `notification_data` module.
While importing a realm, the stream dictionaries in data['zerver_stream']
already contain a field named `rendered_description`, which is set to
`""`. This led the code to assume that the stream's rendered_description
was already set, and as a result it never set the rendered_description
field for any stream.
This is a prep commit for adding realm-level defaults for various
user settings. We add the language in which the invite email will
be sent to the dict added to the queue itself, to avoid making
queries in a loop when sending multiple emails from the queue.
We also handle the case of old events in the queue.
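A sketch of both sides (the key and queue names are assumptions):

```python
# Producer: record the language once at enqueue time, instead of
# querying it per email in the worker loop.
event = {
    "prereg_id": prereg_user.id,
    "referrer_id": referrer.id,
    "email_language": realm.default_language,
}
queue_json_publish("invites", event)

# Consumer: old events enqueued before this change lack the key.
email_language = event.get("email_language")
if email_language is None:
    email_language = referrer.realm.default_language
```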
We removed the use of the email_body field in 47fcb27e39, but it was
still being passed in events from do_resend_user_invite_email and
in tests. So this commit removes the email_body field from
these places.
We already have this data in the `flags` for each user, so no need to
send this set/list in the event dictionary.
The `flags` in the event dict represent the after-message-update state,
so we can't avoid sending `prior_mention_user_ids`.
This is much faster than calling generate_presigned_url each time.
```
In [3]: t = time.time()
...: for i in range(250):
...: x = u.get_public_upload_url("foo")
...: print(time.time()-t)
0.0010945796966552734
```
Fixes #18915
This was very slow, causing performance issues. After investigating,
generate_presigned_url is the cheap part of this, but the
session.client() call is expensive - so that's what we should cache.
Before the change:
```
In [4]: t = time.time()
...: for i in range(250):
...: x = u.get_public_upload_url("foo")
...: print(time.time()-t)
6.408717393875122
```
After:
```
In [4]: t = time.time()
...: for i in range(250):
...: x = u.get_public_upload_url("foo")
...: print(time.time()-t)
0.48990607261657715
```
This is not good enough to avoid doing something ugly like replacing
generate_presigned_url with some manual URL manipulation, but it's a
helpful structure that we may find useful with further refactoring.
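The caching itself can be as simple as (function and bucket names assumed, credentials elided):

```python
from functools import lru_cache

import boto3

@lru_cache(maxsize=None)
def get_s3_client():
    # Building the client is the expensive step, so do it once per process.
    session = boto3.session.Session()  # credentials/config elided
    return session.client("s3")

def get_public_upload_url(key: str) -> str:
    # generate_presigned_url itself is cheap once the client exists.
    return get_s3_client().generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": "example-bucket", "Key": key},
        ExpiresIn=3600,
    )
```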
Previously, it was possible for an unusual series of topic-edit
actions to result in Notification Bot reporting that a topic was
marked as resolved when it had already been resolved, etc.
A buggy client might send a message_edit request to change the topic
field, sending the current topic as the new value. Previously, we
would treat that as a normal request to edit the topic; now we act as
though the API request had not requested a topic change. In the
common case that only the topic was in the edit request, this now
results in an error that should help client implementations identify
their bug.
This fixes a bad interaction with the "unresolve topic" logic, which
assumed that upstream logic had verified that the topic was actually
changing.
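A sketch of the new handling (variable names assumed):

```python
if topic_name is not None and topic_name == message.topic_name():
    # The client "changed" the topic to its current value; treat
    # this as no topic change having been requested at all.
    topic_name = None

if topic_name is None and content is None and stream_id is None:
    raise JsonableError(_("Nothing to change"))
```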
* Have the `get_active_presence_idle_user_ids` function look at all the
user data, not just `private_message` and `mentioned`.
* Fix a couple of incorrect `missedmessage_hook` tests, which did not
catch the earlier behaviour.
* Add some comments to the tests for this function for clarity.
* Add a helper to create `UserMessageNotificationsData` objects from the
user ID lists. This will later help us deduplicate code in the event_queue
logic.
This fixes a pre-existing bug: if a user turned on stream
notifications and received a message in that stream which did not
mention them, they wouldn't be in the `presence_idle_users` list, and
hence would never get notifications for that message.
Note that, after this commit, users might still not get notifications in
the above scenarios in some cases, because the downstream logic in the
notification queue consumers sometimes erroneously skips sending
notifications for stream messages.
Since `flags` here could be iterated through multiple times
(to check for push/email notifiability), we use `Collection`.
Inspired by 871e73ab8f.
The other change here in the `event_queue` code is prep for using
the `UserMessageNotificationsData` class there.
We will later consistently use these functions to check for notifiable
messages in the message send and event_queue code.
We have these functions accept the `sender_id` so that we can avoid the
`private_message = message["type"] == "private" and user_id != sender_id`
wizardry.
Before this commit, we used to pre-calculate flags for user data and send
it to Tornado, like so:
```
{
"id": 10,
"flags": ["mentioned"],
"mentioned": true,
"online_push_enabled": false,
"stream_push_notify": false,
"stream_email_notify": false,
"wildcard_mention_notify": false,
"sender_is_muted": false,
}
```
This has the benefit of simplifying the logic in the event_queue code a bit.
However, because we sent such an object for each user receiving the event,
the string keys (like "stream_email_notify") get duplicated in the JSON
blob that is sent to Tornado.
For 1000 users, this data may take up to ~190KB of space, which can
cause performance degradation in large organisations.
Hence, as an alternative, we send just the list of user_ids fitting
each notification criteria, and then calculate the flags in Tornado.
This brings down the space to ~60KB for 1000 users.
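So instead of one flag-dict per user, the event carries one list per criterion, along these lines (field names assumed):

```python
event = dict(
    type="message",
    message=wide_message_dict,
    online_push_user_ids=sorted(online_push_user_ids),
    stream_push_user_ids=sorted(stream_push_user_ids),
    stream_email_user_ids=sorted(stream_email_user_ids),
    wildcard_mention_user_ids=sorted(wildcard_mention_user_ids),
    muted_sender_user_ids=sorted(muted_sender_user_ids),
)
```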
This commit reverts parts of the following commits:
- 2179275
- 40cd6b5
We will, in the future, add helpers to create `UserMessageNotificationsData`
objects from these lists, so as to avoid code duplication.
We now encode resolved topics with just:
U+2714 HEAVY CHECK MARK, SPACE
Previously, the encoding was unintentionally this:
U+2714 HEAVY CHECK MARK, U+FE0F VARIATION SELECTOR-16, SPACE
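In code terms (the constant and helper names are assumed):

```python
# Encoded prefix for resolved topics:
#   before: "\u2714\ufe0f " (check mark, variation selector-16, space)
#   after:  "\u2714 "       (check mark, space)
RESOLVED_TOPIC_PREFIX = "\u2714 "

def topic_is_resolved(topic_name: str) -> bool:
    return topic_name.startswith(RESOLVED_TOPIC_PREFIX)
```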
The language_list_dbl_col parameter in the page_params
is used only by the web client frontend. The value is
calculated in the backend and then passed as a page_param,
which is unnecessary considering that the whole process
is beneficial for the frontend only. Hence, we move the entire
calculation code to the frontend.
Fixes part of #18673.
default_language_name was a part of page_params, which is actually
redundant considering that we already have language_list and
default_language available to the frontend; these can be used to
get the default_language_name, which prevents the backend
from sending an additional parameter.
Fixes part of #18673.
This commit replaces the allow_community_topic_editing boolean with
integer field edit_topic_policy and includes both frontend and
backend changes.
We also update settings_ui.disable_sub_settings_onchange to not
change the color of the label, as we did previously when the setting
was a checkbox. Now that the setting is a dropdown, we keep the
label as it is and don't do anything with the label when disabling
dropdowns. Also, this function was used only here, so we can safely
change this.
We do not need the 'topic_name is None' check in this function, as it is
called only when at least one of content and topic_name is not None,
and this condition cannot be true as there is a 'content is not None'
check just before it.
Thus, the 'if topic_name is None' condition being true would mean that both
content and topic_name are None, which is not possible as this function
itself would not be called in such a case. An assert statement is added to
check that topic_name is not None, to make sure the case is handled if the
function is called in some other way later.
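That is, roughly:

```python
if content is not None:
    ...  # content-edit handling; the 'content is not None' check lives here
else:
    # Given the caller's guarantees, reaching here implies a topic
    # edit; make that assumption explicit for future call sites.
    assert topic_name is not None
```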
This is a prep change for calling `get_active_presence_idle_user_ids`
after we have collected all the user data variables, so that that function
does not erroneously skip some user IDs due to not having the complete
data.
We will, in later commits, extend this class to contain methods
to determine if a message is notifiable or not, but for now
we only turn it into a dict and pass it on.
This gives us a single place where all user data for the message
send event is calculated, and is a prep change for introducing
a TypedDict or dataclass to keep this data together.
For this extraction, we need to move some context
parameters (from home_real in `views/home.py`) to extra
page_params parameters (of
build_page_params_for_home_page_load in
`lib/home.py`) so the handlebars template can access them.
While moving, I confirmed that these parameters are not
used elsewhere. If a parameter is used elsewhere
(like `apps_page_url`), then I didn't remove it from the
context list; I just also added it to the page_params list.
Fixes: #18795.
This is a prep commit to extract the gear menu as a
handlebars template.
We are renaming `enable_marketing_emails_enabled` to
`corporate_enabled` as it will be also used in the
handlebars template of the gear menu.
This results in moving the `zulip_merge_base` parameter to
page_params, so that it's available to JavaScript.
Since this is technically a tiny overlay, it needs to be initialized
before hashchange.js.
The headings for return values were previously hardcoded
in the cases where they occur, but they can be rendered directly
in the markdown extension if the return values exist.
This causes avatars and emoji which are hosted by Zulip in S3 (or
compatible) servers to no longer go through camo. Routing these
requests through camo does not add any privacy benefit (as the request
logs there go to the Zulip admins regardless), and may break emoji
imported from Slack before 1bf385e35f,
which have `application/octet-stream` as their stored Content-Type.
Earlier, the notification-blocking for messages from muted senders
was a side-effect of our never sending notifications for messages
with the "read" flag.
This commit decouples these two things, as prep for having new
settings which will allow users to **always** receive email
notifications, including when/if they read the message during the
time the notification is in the queue.
We still mark muted-sender messages as "read" when they are sent,
because that's desirable anyways.
Fixes #17277.
The main limitation of this implementation is that the sync happens if
the user authing already exists. This means that a new user going
through the sign up flow will not have their custom fields synced upon
finishing it. The fields will get synced on their next log in via
SAML. This can be addressed in the future by moving the
syncing code further down the codepaths to login_or_register_remote_user
and plumbing the data through to the user creation process.
We detail that limitation in the documentation.
The old type in default_settings wasn't right - limit_to_subdomains is a
List[str]. We define a TypedDict for capturing the typing of the settings
dict more correctly and to allow future addition of configurable
attributes of other non-str types.
This is a prep change for importing (and using) `dataclasses.field`
elsewhere in the same file, because pyflakes would throw "Import
module shadowed" errors otherwise.
Rename poll_timeout to event_queue_longpoll_timeout_seconds
and change its value from 90000 ms to 90 sec. Expose its
value in the register API response when realm data is fetched.
Bump API_FEATURE_LEVEL to 74.
Expose the boolean value server_needs_upgrade in the
responses for the register API so that it can be used
by mobile and terminal clients as well.
Highlighted in the API changelog as part of
feature level 74 in commit fb93c96
(next commit).
Shift functions used for compatibility from
zerver.lib.home (is_outdated_server) and
zerver.view.compatibility (pop_numerals,
version_lt, find_mobile_os,
is_outdated_desktop_app, is_unsupported_browser)
to zerver.lib.compatibility module.
This locks the message row while a reaction is being added/removed,
which will handle race conditions caused by deleting the message
at the same time.
We make sure that the event-sending work happens outside the transaction,
so that in case there's some problem with the queue processor, the
locks aren't held for too long.
As a nice side-effect, we also handle race conditions from double
adding reactions, because once the message is locked, a duplicate
request will wait till the earlier transaction commits, and hence
will not throw `IntegrityError`s (rather, it will be handled by our
safety check in the view code itself), which earlier had to be
handled explicitly.
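A sketch of the pattern (the model and helper names follow Zulip conventions but are assumptions here):

```python
from django.db import transaction

def do_add_reaction_sketch(user_profile, message_id, emoji_name):
    with transaction.atomic():
        # Lock the message row until this transaction commits; a
        # concurrent delete (or a duplicate reaction request) must
        # wait on the lock instead of racing us.
        message = Message.objects.select_for_update().get(id=message_id)
        reaction = Reaction.objects.create(
            user_profile=user_profile, message=message, emoji_name=emoji_name
        )
        # Defer event-sending until after the commit, so a stuck
        # queue processor can't keep the lock held.
        transaction.on_commit(
            lambda: notify_reaction_update(user_profile, message, reaction)
        )
```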
This locks the message while creating a submessage, which
will handle race conditions caused by deleting the message
simultaneously.
We make sure that the event-sending work happens outside the transaction,
so that in case there's a problem with the queue processor,
the locks aren't held for too long.
Further commits will start locking the message rows while
adding related fields like reactions or submessages,
to handle races caused by deleting the message itself at the
same time.
The message locking implemented then will create a possibility
of deadlocks, where the related field transaction holds a lock
on the message row, and the message-delete transaction holds a
lock on the database row of the related field (which will also
need to be deleted when the message is deleted), and both
transactions wait for each other.
To prevent such a deadlock, we lock the message itself while
it is being deleted, so that the message-delete transaction
will have to wait till the other transaction (which is about
to delete the related field, and also holds a lock on the
message row) commits.
https://chat.zulip.org/#narrow/near/1185943 has more details.
Further commits will hook `send_event` calls to `on_commit`
in some cases. This change will make it easier to test such
situations.
We don't need to actually capture the callbacks, because the
events sent are already tested via the list in which they are
captured by `tornado_redirected_to_list`.
This commit fixes a bug where moving messages between streams was
not allowed for non-admins when allow_community_topic_editing was
set to false and move_messages_between_streams_policy was set to
Realm.POLICY_MEMBERS_ONLY.
The bug is fixed by calling can_edit_content_or_topic only when
a topic or content edit is requested, and not in the case where the
message is only being moved from one stream to another.
This commit extracts the logic of checking the message edit permissions,
like whether the sender is the same as the user, whether it is a (no topic)
message, or whether community topic editing is allowed, into a separate
function.
This is a prep commit for fixing a bug where permission to move messages
between streams is affected by the permission to edit topics.
Previously, when enforcing the check that disallows editing topics
after a certain time, we were checking whether 'content is None' and
treating content being None as implying a topic edit.
But after adding support for moving messages between streams, it can be
the case that we are neither changing the topic nor the content and are
just moving streams, and the original code raised an error if this was
done after the time limit for editing topics, which is wrong.
This commit fixes this by actually checking 'topic_name is not None'.
This should help with #17425, where messages with lots of LaTeX are
lost, due to the large expansion factor.
This isn't a total fix for this - large messages with lots of LaTeX
can still end up larger than 1MB, and rendering could timeout, but
this fix should help significantly.
1MB is still small enough that I don't expect we'll run into any DOS
problems - my testing didn't show any problems rendering messages that
contain ~1MB of LaTeX.
This will allow users who are self-hosting to adjust
this value. Moreover, this will help to reduce the
overall time taken to test `test_markdown.py` (since
this can be now overridden with `override_settings`
Django decorator).
This is done as a prep commit for #18641.
Checked the email looked OK in `/emails` for both creating realm and
registering within an existing one.
Not sure zerver/tests/test_i18n.py test has been suppressed correctly.
Fixes #17786.
d66cbd2832 added these mentioning
"always_notify" for some reason, but always_notify clearly isn't a real
thing in this context so the comments need to be fixed to eliminate this
potential source of confusion.
Our current logic only allows S3 block storage providers whose
upload URL matches the format used by AWS. This change also allows
other styles, such as the "virtual host" format used by Oracle Cloud.
Fixes #17762.
These checks are more related to the API than the editability
or permissions logic, so it makes sense to handle them first
before further processing the request.
Also split the main test class to separate out the tests for
this logic.
This also simplifies some tests by reducing the data setup
required to reach failure.
Tweaked by tabbott to avoid losing the topic_name.strip().
modified_user=sub_info.user and modified_stream=sub_info.stream, added
by commit 6d1f9de7d3 (#16553), were
always coming from the last entry in the loop above, not from the
enclosing list comprehension.
Found by the Pylint rule undefined-loop-variable.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The old name `push_notify_user_ids` was misleading, because
it does not contain user ids which should be notified for
the current message, but rather user ids who have the online
push notifications setting enabled.
When the Tornado server is restarted during an upgrade, if the
server has old events with the `push_notify_user_ids` field,
it will throw an error after this rename. Hence, we need
to explicitly handle such cases while processing the event.
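A hypothetical compatibility shim while draining events written by an older server (the new key name is an assumption; the old one is the field named above):

```python
user_ids = event.get(
    "online_push_user_ids",
    event.get("push_notify_user_ids", []),
)
```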
We should only show the referrer's name in the subject of invitation emails,
and show only 'Zulip' in the 'From' header. This helps in preventing
the email from being marked as suspicious by detection systems
when they see an employee's name as the sender of an email sent from an
unrelated domain.
The behavior is already the same for reminder invitation emails, where
we show no name and only 'Zulip' in the 'From' header.
Fixes #18256.
?dl=1 causes Dropbox to send Content-Type: application/binary, which
can’t be interpreted by Camo. Use ?raw=1 instead.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Commit 1a7ddd9ea3 “Fix
UserActivityInterval overlap bug” introduced a mathematically
incorrect assertion about how intervals work. There’s a third way two
intervals could overlap: both the start and end of the old interval
could be inside the new interval. This probably can’t happen here
because the old interval should be at least as long as the new
interval. However, a correct overlap test can be formulated in a
simpler way anyway.
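For instance, a sketch of the simpler formulation:

```python
from datetime import datetime

def intervals_overlap(
    old_start: datetime, old_end: datetime, new_start: datetime, new_end: datetime
) -> bool:
    # Two intervals overlap iff each one starts before the other ends;
    # this single condition covers all three containment cases.
    return old_start <= new_end and new_start <= old_end
```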
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This will make it easier to systematically use Django's
`captureOnCommitCallbacks` in tests outside of the main
`test_events` file which involve assertions on events.
Convert this function into one that unconditionally makes a stream
web public. We already have do_change_stream_invite_only to convert
streams to public and private streams.
We also update all the fields that should be set when a stream
is made web public.
Since this is currently only useful to interpret presence data, we
send this only if presence is requested.
I'm not sure that server_timestamp is the right name for this field,
but ultimately it should match the main presence API format.
After re-assignment, mypy will still infer the type of
`widget_content` to be `str`, not `Dict`. So we need to
create a new variable.
This is a prep change for stronger type checking in this
code.
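A sketch of the pattern (variable names and the data source are assumptions):

```python
import orjson
from typing import Any, Dict

def handle_widget_content(widget_content: str) -> None:
    # Re-assigning `widget_content = orjson.loads(widget_content)` would
    # leave mypy believing it is still a str; bind a new name instead.
    widget_content_dict: Dict[str, Any] = orjson.loads(widget_content)
    ...
```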
This parameter has never been used, and causes an unnecessary database
query.
We keep the num_push_devices_for_user function, since we may have uses
for it down the line.
Fixes part of #14166.
The command:
codespell --skip='./locale,*.svg,./docs/translating,postgresql.conf.template.erb,.*fixtures,./yarn.lock,./docs/THIRDPARTY,./tools/setup/emoji/emoji_names.py,./tools/setup/emoji/emoji_map.json,./zerver/management/data/unified_reactions.json' --ignore-words=codespell_ignore_words.txt .
The content of codespell_ignore_words.txt:
```
te
ans
pullrequest
ist
cros
wit
nwe
circularly
ned
ba
ressemble
ser
sur
hel
fpr
alls
nd
ot
```