This is a prep commit for adding the feature of restricting
user access for guests, so that we can keep the code
easy to read and understand when that feature is added.
Earlier, email message notifications included prior messages sent
to the same topic for context. This is more confusing than helpful
when the user has likely already received notifications
for all the prior messages in the conversation (or read them
in the Zulip UI).
Now, we include prior context only when the user is mentioned via a
personal, group, stream, or topic wildcard mention.
Fixes #27479.
The event for stream typing notifications is no longer sent
to the long_term_idle subscribers of the stream.
This helps reduce Tornado's work of parsing super-long
JSON-encoded lists of user IDs in large streams. Now the lists
are shorter.
This will be used in the gear menu to inform admins of their
sponsorship application status.
This includes some additional tweaks for when to show
billing and plans to users.
- Replaces the "Via Markdown" tab with "Via drag-and-drop", and
modifies the instructions to explain that you can drag and drop
anywhere in the app, whether or not the compose box is open.
- Adds "Via paste" tab for the copy-pasting instructions.
Fixes #26894.
Previously, cross-realm bots were not displayed as mention pills.
This is because the data set for validating mentions considers
only the realm ID, which is None in the case of cross-realm bots.
Hence, we OR in a Q object that also checks whether the email is
one of the cross-realm bot emails, for the case where the realm
ID is None.
Fixes #26913.
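A minimal, hypothetical sketch of the described fix: OR-ing a Q object for cross-realm bot emails into the mention-validation filter (field names are assumptions, not Zulip's exact schema):
```python
from typing import Iterable, Optional, Set

from django.db.models import Q

def mention_candidates_filter(
    realm_id: Optional[int],
    mentioned_emails: Iterable[str],
    cross_realm_bot_emails: Set[str],
) -> Q:
    emails = list(mentioned_emails)
    # Users in the current realm, matched by email.
    realm_q = Q(realm_id=realm_id, delivery_email__in=emails)
    # Cross-realm bots, whose realm ID comes back as None, matched by their
    # well-known email addresses instead.
    bot_q = Q(delivery_email__in=[e for e in emails if e in cross_realm_bot_emails])
    return realm_q | bot_q
```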
Earlier, the 'wildcard_mentioned' flag was set for both the
stream and topic wildcard mentions.
Now, the 'topic_wildcard_mentioned' flag is set for topic
wildcard mentions, and the 'wildcard_mentioned' flag is set for
stream wildcard mentions.
We will rename the 'wildcard_mentioned' flag to
'stream_wildcard_mentioned' in a later commit.
This commit renames two unused, historical bits of the
'flags' bitfield of the 'UserMessage' and 'ArchivedUserMessage'
tables.
* 'summarize_in_home' to 'topic_wildcard_mentioned'
* 'summarize_in_stream' to 'group_mentioned'
The 'group_mentioned' flag doesn't affect the feature,
but completing the work here helps to save future migration
and indexing efforts on the UserMessage table, as we plan to
use this flag in the future for group mentions.
The unused bits may have old data; we'll clear that in
a separate commit.
It creates the 'zerver_usermessage_any_mentioned_message_id'
index concurrently.
We now send "realm_user/update" (and "realm_bot/update" for bots)
events with "is_active" field when deactivating and reactivating
users, including bots.
We want to use a "remove" event for a user losing access
to another user for #10970, so it is better to use an "update"
event for deactivation, as we only update the "is_active" field
in the user objects and the clients still have the data for
deactivated users.
Previously, we used to send an "add" event for reactivation along
with the complete user objects, but clients should have the data
for deactivated users as well, so an "update" event is enough,
just like when deactivating users.
This commit adds code to pass configuration objects for group
permission settings in the register response to clients, so that
we do not need to duplicate that data in clients and can avoid
future bugs due to inconsistency.
The "server_supported_permission_settings" field is included
in the response if "realm" is present in "fetch_event_types",
as this is what we do for other server-related fields.
This commit moves constants for system group names to a new
"SystemGroups" class so that we can use these group names
in multiple classes in models.py without worrying about the
order of defining them.
We now pass the complete configuration object for a setting to
access_user_group_for_setting instead of passing the configuration
object's fields as different variables.
This commit renames permissions_configuration variable to
permission_configuration since the object contains config for
a single permission setting and thus permission_configuration
seems like a better name.
Previous behavior:
- Guests did not receive stream creation events for new
web-public streams.
- Guests did not receive peer_add and peer_remove events
for web-public and subscribed public streams.
This commit fixes the behavior to be:
- Guests now receive stream creation events for new
web-public streams.
- Guests now receive peer_add and peer_remove events for
web-public and subscribed public streams.
We use `Realm.default_language` value, which is set by selecting
the 'Organization language', to internationalize the introductory
messages of the initial streams.
Fixes #25729.
In this commit, we add a new dropdown 'Organization language' on
the `/new` and `/realm/register` pages. This dropdown allows setting
the language of the organization during its creation. This allows
messages from Welcome Bot and introductory messages in streams to be
internationalized.
Fixes a part of #25729.
Adds a new onboarding email `onboarding_team_to_zulip` for the user
who created the new Zulip organization.
Co-authored-by: Alya Abbott <alya@zulip.com>
The comment has drifted away from where it should be placed within the
code and also talks about RealmCounts specifically, while we have other
object types that this equally applies to.
The former name is kind of misleading - this function is for the remote
server to send analytics to the push bouncer. Under our usual
terminology, a "remote server" is a self-hosted Zulip server. So data is
sent FROM not TO a remote server.
Documents the procedure to subscribe / unsubscribe a user via their
profile and general stream settings.
Both methods are separated into tabs in the documentation.
Fixes #26902.
Originally, this was how the notification emails worked, but that was changed
in 797a7ef97b, with this old behavior
available as an option.
The footer and from address of emails sent when this setting is
set to True are confusing, especially when more people are
involved in a stream. Since we have changed the way we send
emails, this setting should be removed. It's also not widely used.
Fixes #26609.
This commit renames "default" views to "home" views in the setting
labels, keyboard shortcuts list, help documentation and its urls.
This commit does not change variable and class names, the setting
field in the database, API docs, or the changelog.
Fixes part of #27251.
Add an optional `automatic_new_visibility_policy` enum field
in the success response to indicate the new visibility policy
value due to the `automatically_follow_topics_policy` and
`automatically_unmute_topics_in_muted_streams_policy` user settings
during the send message action.
Only present if there is a change in the visibility policy.
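A minimal, hedged sketch of how a client might consume this optional field; the numeric policy values below are illustrative assumptions, not documented constants:
```python
POLICY_LABELS = {2: "unmuted", 3: "followed"}  # illustrative values

def handle_send_message_response(response, notify):
    policy = response.get("automatic_new_visibility_policy")
    if policy is None:
        return  # field absent: the visibility policy did not change
    notify("This topic is now " + POLICY_LABELS.get(policy, str(policy)))
```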
- Adds desktop/web instructions.
- Adds #inbox relative link for logged-in users.
- Moves Inbox up in the left sidebar just under "Reading strategies".
- Moves Inbox article content to Markdown include.
- Adds "From the Inbox view" section to "Finding a topic to read",
"Getting started with Zulip", and "Reading strategies".
- Documents Inbox as a new option for the default web app view.
- Removes unused Markdown link.
- Tweaks subheading to better match help center patterns.
- Add Inbox option in "Configure default settings for new users".
- Adds new tabbed section and instructions for marking messages as
read and reading topics via the Inbox view.
Fixes #26903.
Co-authored-by: Alya Abbott <alya@zulip.com>
This is a prep commit that extracts the following two methods
from '/actions/scheduled_messages' to reuse in the next commit.
* extract_stream_id
* extract_direct_message_recipient_ids
The 'to' parameter for 'POST /typing' will follow the same pattern
in the next commit as we currently have for the 'to' parameter in
'POST /scheduled_messages', so we can reuse these functions.
This commit replaces the value `private` with `direct` in the
`message_type` field for the `typing` events sent when a user
starts or stops typing a message.
This commit adds two user settings, named
* `automatically_follow_topics_policy`
* `automatically_unmute_topics_in_muted_streams_policy`
The settings control the user's preference on which topics they
will automatically 'follow' or 'unmute in muted streams'.
The policies offer four options:
1. Topics I participate in
2. Topics I send a message to
3. Topics I start
4. Never (default)
There is no support for configuring the settings through the UI yet.
Earlier, when we used 'self.send_message()' in the backend tests,
the sent message was not marked as read for the sender.
Reason: to set the read flag, we have to check
'message.sent_by_human()'. It returns False because the
'sending_client' for tests is "test suite" and the 'sent_by_human'
function doesn't include the "test suite" client name as a human client.
This commit adds "test suite" to that list.
Also fixes a bug, revealed by this change, in when
apply_unread_message_event was called.
Updates the testing for draft event schemas to be fully checked by
`zerver/tests/test_events.py` and `tools/check-schema`.
Also, corrects the type for the timestamp field in Draft objects
in the OpenAPI documentation.
Updates the testing for scheduled message event schemas to be fully
checked by `zerver/tests/test_events.py` and `tools/check-schema`.
Adds the missing 'failed' field to the scheduled message events
in `web/tests/lib/events.js` as well.
We add a `Content-Disposition: inline` header for commonly supported
video MIME types so that when we `Open` them in the lightbox, they
play in a new tab.
This will require a follow-up database migration to apply to
previously uploaded videos.
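A minimal sketch (the helper and allowlist are hypothetical) of choosing the Content-Disposition header based on the uploaded file's MIME type:
```python
INLINE_VIDEO_MIME_TYPES = {"video/mp4", "video/webm"}  # illustrative allowlist

def content_disposition_header(content_type: str, filename: str) -> str:
    # Commonly supported video types are served inline so the browser plays
    # them in a new tab; everything else is served as a download.
    if content_type in INLINE_VIDEO_MIME_TYPES:
        return "inline"
    return 'attachment; filename="{}"'.format(filename)
```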
This excludes the legacy webhooks from the
"realm_incoming_webhook_bots" object, as those do not have the same URL
format as modern webhook integrations.
This fixes a regression introduced in
9954db4b59, where the realm's default
language would be ignored for users created via API/LDAP/SAML,
resulting in all such users having English as their default language.
The API/LDAP/SAML account creation code paths don't have a request,
and thus cannot pull default language from the user's browser.
We have the `realm.default_language` field intended for this use case,
but it was not being passed through the system.
Rather than pass `realm.default_language` through from each caller, we
make the low-level user creation code set this field, as that seems
more robust to the creation of future callers.
Making request a mandatory kwarg avoids confusion about the meaning of
parameters, especially with `request` acquiring the ability to be None
in the next commit.
This reverts b8581e2895. The mobile
client on Android parses this field using:
```kotlin
timeMs = data.require("time").parseLong("time") * 1000
```
This throws an error if the value is not a `long` (i.e., an integer),
resulting in dropped notifications on Android from servers which had
deployed b8581e2895.
Switch back to sending an integer, but keep the behaviour from
fd6091ad17 where we send the timestamp in the payload of both
Android and Apple push notifications.
Rather than fetch all UserMessage rows for all streams, and subtract
those out in Python-space from the list of all Message rows the user
may have received -- do this via a "NOT EXISTS" subquery. This is
much better indexed (performing in fractions of milliseconds rather
than hundreds), and also consumes much less memory.
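A minimal sketch, assuming Zulip's Message and UserMessage models, of expressing "messages the user never received" as a NOT EXISTS subquery instead of subtracting ID sets in Python:
```python
from django.db.models import Exists, OuterRef

from zerver.models import Message, UserMessage  # assumed models

def messages_without_usermessage_rows(user_profile, recipient_ids):
    already_received = UserMessage.objects.filter(
        user_profile=user_profile, message_id=OuterRef("id")
    )
    # NOT EXISTS is satisfied via the usermessage index, instead of pulling
    # every UserMessage row into Python and diffing sets there.
    return Message.objects.filter(recipient_id__in=recipient_ids).filter(
        ~Exists(already_received)
    )
```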
This kind of payload that's loaded from json in the body of the request
is not only used for webhooks, but also in the push bouncer, and may get
used elsewhere too - so a general name is better.
Earlier, 'is_row_muted' returned 'true' if the message was in
a muted stream or a muted topic.
If the message is in an unmuted or followed topic in a muted
stream, such topics should be treated like unmuted topics in an
unmuted stream, i.e., not muted.
This commit fixes the incorrect behavior.
Now, for wildcard mentions, 'unread_msgs.mentions' excludes
the IDs in muted streams only if the message is in a default or
muted topic.
Also, 'unread_msgs.count' takes into account the unreads in unmuted
or followed topics in muted streams too.
Documents that this bug was fixed in the API changelog.
Update 'get_muted_stream_ids' to return a set of IDs
instead of a list.
This will help to avoid linear time search operations later
while using 'if stream_id in muted_streams_ids'.
This prep commit renames the 'build_topic_mute_checker' function
to 'build_get_topic_visibility_policy' and updates it to support
all the visibility policies.
The function prefetches the visibility policies the user has
configured for various topics and prepares a dict named
'topic_to_visibility_policy' to be used later on.
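A minimal sketch of the prefetch-then-look-up shape described above, assuming Zulip's UserTopic model and field names; the case-folding and default value are assumptions:
```python
from zerver.models import UserTopic  # assumed model

def build_get_topic_visibility_policy(user_profile):
    # One query up front; later lookups are dict hits, not extra queries.
    rows = UserTopic.objects.filter(user_profile=user_profile).values(
        "stream_id", "topic_name", "visibility_policy"
    )
    topic_to_visibility_policy = {
        (row["stream_id"], row["topic_name"].lower()): row["visibility_policy"]
        for row in rows
    }

    def get_topic_visibility_policy(stream_id, topic_name, default=0):
        return topic_to_visibility_policy.get((stream_id, topic_name.lower()), default)

    return get_topic_visibility_policy
```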
These queries benefit from the increased specificity of using the
realm / recipient / sender indexes. The argument from 11a1cb9630
does not apply in these cases, since there are only 2 usermessage rows
for each matching message row for DMs, and only a few more than that for
huddles.
This query has two halves: messages sent by the user, and messages
received by the user. The former uses the already-specific
usermessage privatemessage flag index; the latter relies on the
recipient index on messages.
Add the realm_id to the latter half, so that the recipient_id is
paired with the realm_id.
Clarifies that the `all` field in the `op: "add"` event is only
relevant for the `"read"` message flag, and that it will be false
for all other specified flags in these events.
Deprecates the `all` field in the `op: "remove"` event and documents
that it is false for all specified flags.
Updates the deprecated `operation` field description and makes
a few other small revisions to the event text for clarity and
accuracy.
This commit adds a `jitsi_server_url` field to the Realm model, which
will be used to save the URL of the custom Jitsi Meet server. In
the database, `None` will encode the server-level default. We can't
readily use `None` in the API, as it could be confused with "field not
sent". Therefore, we will use the string "default" for this purpose.
We have also introduced `server_jitsi_server_url` in the `/register`
API. This will be used to display the server's default Jitsi server
URL in the settings UI.
The existing `jitsi_server_url` will now be calculated as
`realm_jitsi_server_url || server_jitsi_server_url`.
Fixes a part of #17914.
Co-authored-by: Gaurav Pandey <gauravguitarrocks@gmail.com>
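A minimal, hedged sketch of how a client could resolve the effective Jitsi server URL from the register payload described above (key names assumed to mirror the text):
```python
def effective_jitsi_server_url(register_response):
    realm_url = register_response.get("realm_jitsi_server_url")
    if realm_url in (None, "default"):
        # Fall back to the server-wide default advertised in /register.
        return register_response["server_jitsi_server_url"]
    return realm_url
```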
The unique index on `(user_id, message_id)` on the
`zerver_usermessage` table is rather specific, and even the PostgreSQL
extended statistics are not enough for it to realize there is a
correlation between the `realm_id` in the message table and the
`user_id` in the usermessage table. This means that adding the
`realm_id` limit when there is a join to `zerver_usermessage` flips
the query plan from a nested loop of unique usermessage index-only
scan, with an index scan of the messages pkey -- to a parallel hash
join of the messages limit with an index scan of just the user_id limit
on usermessages. It thinks this is necessary because it thinks that
the `realm_id` limit may remove a large number of messages from the
usermessage set -- which is totally untrue.
Remove the `realm_id` limit if we have a usermessage join.
This endpoint verifies that the services that Zulip needs to function
are running, and Django can talk to them. It is designed to be used
as a readiness probe[^1] for Zulip, either by Kubernetes, or some other
reverse-proxy load-balancer in front of Zulip. Because of this, it
limits access to only localhost and the IP addresses of configured
reverse proxies.
Tests are limited because we cannot stop running services (which would
impact other concurrent tests), and there would be extremely limited
utility in mocking the very specific methods we're calling to raise
the exceptions that we're looking for.
[^1]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
The `expected` flag was incredibly confusing, as you
couldn't tell from the calling code what you were
actually expecting to happen.
I avoid the context manager idiom in order to force
the callers to create simple helper functions, and
I de-duplicate some code in some places.
I also force the caller to explicitly soft-deactivate
the user with one simple line of code, so that the
person reading the test doesn't have to research
the side effects of the helper. (And I make it
very easy for new authors to follow the practice
going forward.)
This is also somewhat of a prep commit to avoid
the obfuscated use of refresh_from_db.
The get_user function is poorly named, but I don't want to
sweep the entire codebase yet.
It's also nice to have a test wrapper for little experiments
like profiling tests or hunting down calls to refresh_from_db.
It's possible that we would also just change the new wrapper
to more directly call Django. The `get_user` function isn't
used in a ton of real-world places, so we might want the test
code to just bypass the cache.
I add a bunch of cute helper methods to make
the test a bit more readable.
And then I make sure to get clean objects,
which precludes the need for our callback
functions to refresh the user objects.
And finally I make sure that our validation
functions don't cause any round trips (assuming
we have fetched objects using a standard
Zulip helper, which example_user ensures.)
In feature levels 153 and 154, a new value of "partially_completed"
for `result` in a success (HTTP status code 200) was added for two
endpoints that process messages in batches: /api/delete-topic and
/api/mark-all-as-read.
Prior to these changes, `result` was either "success" or "error" for
all responses, which was a useful API invariant to have for clients.
So, here we remove "partially_completed" as a potential value for
"result" in a response. And instead, for the two endpoints noted
above, we return a boolean field "complete" to indicate if the
response successfully deleted/marked as read all the targeted
messages (complete: true) or if only some of the targeted messages
were processed (complete: false).
The "code" field for an error string that was also returned as part
of a partially completed response is removed in these changes as
well.
The web app does not currently use the /api/mark-all-as-read
endpoint, but it does use the /api/delete-topic endpoint, so these
changes update that to check the `complete` boolean instead of the
string value for `result`.
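A minimal, hedged sketch of a client consuming the new boolean: keep calling the endpoint until everything is processed, instead of checking for the old "partially_completed" result value. The endpoint path and auth handling here are illustrative:
```python
import requests

def mark_all_as_read(base_url, email, api_key):
    while True:
        response = requests.post(
            f"{base_url}/api/v1/mark_all_as_read", auth=(email, api_key)
        )
        data = response.json()
        assert data["result"] == "success"
        if data.get("complete", True):
            break  # all targeted messages were marked as read
        # complete is False: only some messages were processed; call again.
```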
This adds support for syncing the user role via the newly added "role"
attribute, which can be set to one of
['owner', 'administrator', 'moderator', 'member', 'guest'].
Removes durable=True from the atomic decorator of do_change_user_role,
as django-scim2 runs PATCH operations in an atomic block.
Since the cache is flushed when the cutoff or realm changes, the
maximum size of the cache should cap out at the number of streams in
the realm. Raise the max cache size, now that this will not simply
lead to useless cache space for smaller servers.
There is now no longer any reason to have the scheduled_email
enqueuing wait until all of the users' contexts have been generated.
Switch to returning the contexts as an iterator, and send them as we
compute them.
The query plan for fetching recent messages from the arbitrary set of
streams formed by the intersection of 30 random users can be quite
bad, and can descend into a sequential scan on `zerver_recipient`.
Worse, this work of pulling recent messages out is redone if the
stream appears in the next batch of 30 users.
Instead, pull the recent messages for a stream on a one-by-one basis,
but cache them in an in-memory cache. Since digests are enqueued in
30-user batches but still one-realm-at-a-time, work will be saved both
in terms of faster query plans whose results can also be reused across
batches.
This requires that we pull the stream-id to stream-name mapping for
_all_ streams in the realm at once, but that is well-indexed and
unlikely to cause performance issues -- in fact, it may be faster
than pulling a random subset of the streams in the realm.
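A minimal sketch (class and callable names are hypothetical) of the in-memory cache: each stream's recent messages are fetched once and reused across the realm's 30-user batches:
```python
from typing import Callable, Dict, List

class PerStreamRecentMessagesCache:
    def __init__(self, fetch_for_stream: Callable[[int], List[int]]) -> None:
        # One narrow, well-indexed query per stream, run at most once.
        self._fetch = fetch_for_stream
        self._cache: Dict[int, List[int]] = {}

    def get(self, stream_id: int) -> List[int]:
        if stream_id not in self._cache:
            self._cache[stream_id] = self._fetch(stream_id)
        return self._cache[stream_id]
```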
The type annotation for functools.partial uses unchecked Any for all
the function parameters (both early and late). returns.curry.partial
uses a mypy plugin to check the parameters safely.
https://returns.readthedocs.io/en/latest/pages/curry.html
Signed-off-by: Anders Kaseorg <anders@zulip.com>
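A small usage sketch of the difference; with functools.partial the bound argument is unchecked (Any), while returns.curry.partial is validated by its mypy plugin:
```python
from functools import partial

from returns.curry import partial as checked_partial

def add(x: int, y: int) -> int:
    return x + y

untyped = partial(add, "oops")   # mypy does not flag this; calling it fails at runtime
typed = checked_partial(add, 1)  # mypy checks the bound argument against add's signature
print(typed(2))  # 3
```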
Matching the topic exactly, as opposed to case-insensitively, is not a
common operation, and one that we want to make difficult to do
accidentally. Inline the single use case of it.
This algorithm existed in multiple places, with different queries.
Since we only access properties in the UserMessage table, we
standardize on the much simpler and faster Index Only Scan, rather
than a merge join.
When searching for links inside a topic name, the question mark (?)
was used to split the topic. If a URL had a query after the URL
(e.g., "?foo=bar"), then the query was trimmed from the URL.
Removing the question mark from `basic_link_splitter` is sufficient
to fix this issue. The `get_web_link_regex` function then removes
the trailing punctuation if any, including literal question marks.
Fixes #26368.
When there was no space right after `/todo` but there was content on a
new line, the message would be rendered plainly, not as a todo widget.
This was because we split on only the space character to then check if
the first token was a valid widget.
Now we split on both spaces and newlines to extract the widget name,
irrespective of whether it is followed by a space or a newline. This
results in the message being rendered as a todo widget as expected.
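A minimal sketch of the described splitting change (the widget allowlist and regex are illustrative):
```python
import re
from typing import Optional

def get_widget_type(content: str) -> Optional[str]:
    valid_widget_types = {"todo", "poll"}  # illustrative subset
    if not content.startswith("/"):
        return None
    # Split on the first space OR newline, not just the first space.
    token = re.split(r"[ \n]", content, maxsplit=1)[0][1:]
    return token if token in valid_widget_types else None

print(get_widget_type("/todo\nbuy milk"))  # "todo" (previously missed)
```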
Rename existing shortened references to demo organizations, like
`is_demo_org` or `demo-org-warning`, that have been used in the
codebase so far and replace them to be like the `models.py`
variable: `Realm.demo_organization_scheduled_deletion_date`.
This ReDoS was not exploitable, as its content is only read from
checked-in files; regardless, simplify it to not backtrack. We also
do not actually have any location which uses leading or trailing
whitespace, so remove those optional bits.
This function is used by almost all webhooks.
To support it, we use the "api_ignore_parameter" flag so that positional
arguments like topic and body that are not intended to be parsed from
the request can be ignored.
This demonstrates the use of BaseModel to replace a check_dict_only
validator.
We also add support to referring to $defs in the OpenAPI tests. In the
future, we can descend down each object instead of mapping them to dict
for more accurate checks.
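A minimal sketch (field names are hypothetical) of replacing a check_dict_only-style validator with a Pydantic BaseModel that rejects unknown keys:
```python
from pydantic import BaseModel, ConfigDict

class PlaygroundData(BaseModel):
    # extra="forbid" mirrors check_dict_only: unknown keys are rejected.
    model_config = ConfigDict(extra="forbid")

    name: str
    pygments_language: str
    url_template: str

playground = PlaygroundData.model_validate(
    {
        "name": "Rust playground",
        "pygments_language": "Rust",
        "url_template": "https://play.rust-lang.org/?code={code}",
    }
)
```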
We want to reject ambiguous type annotations that set ApiParamConfig
inside a Union. If a parameter is Optional and has a default of None, we
prefer Annotated[Optional[T], ...] over Optional[Annotated[T, ...]].
This implements a check that detects Optional[Annotated[T, ...]] and
raises an assertion error if ApiParamConfig is in the annotation. It also
checks if the type annotation contains any ApiParamConfig objects that
are ignored, which can happen if the Annotated type is nested inside
another type like List, Union, etc.
Note that because
param: Annotated[Optional[T], ...] = None
and
param: Optional[Annotated[Optional[T], ...]] = None
are equivalent at runtime prior to Python 3.11, there is no way for us
to distinguish the two. So we cannot detect that at runtime.
See also: https://github.com/python/cpython/issues/90353
The goal of typed_endpoint is to replicate most features supported by
has_request_variables, and to improve on top of it. There are some
unresolved issues that we don't plan to work on currently. For example,
typed_endpoint does not support ignored_parameters_unsupported for 400
responses, and it does not run validators on path-only arguments.
Unlike has_request_variables, typed_endpoint supports error handling by
processing validation errors from Pydantic.
Most features supported by has_request_variables are supported by
typed_endpoint in various ways.
To define a view function, use syntax like the following, with Annotated
when there is metadata you want to associate with a parameter. Do note
that parameters that are not keyword-only are ignored when parsing the request:
```
@typed_endpoint
def view(
    request: HttpRequest,
    user_profile: UserProfile,
    *,
    foo: Annotated[int, ApiParamConfig(path_only=True)],
    bar: Json[int],
    other: Annotated[
        Json[int],
        ApiParamConfig(
            whence="lorem",
            documentation_status=INTENTIONALLY_UNDOCUMENTED,
        ),
    ] = 10,
) -> HttpResponse:
    ...
```
There are also some shorthands for the commonly used annotated types,
which are encouraged when applicable for better readability and less
typing:
```
WebhookPayload = Annotated[Json[T], ApiParamConfig(argument_type_is_body=True)]
PathOnly = Annotated[T, ApiParamConfig(path_only=True)]
```
Then the view function above can be rewritten as:
```
@typed_endpoint
def view(
    request: HttpRequest,
    user_profile: UserProfile,
    *,
    foo: PathOnly[int],
    bar: Json[int],
    other: Annotated[
        Json[int],
        ApiParamConfig(
            whence="lorem",
            documentation_status=INTENTIONALLY_UNDOCUMENTED,
        ),
    ] = 10,
) -> HttpResponse:
    ...
```
There are some intentional restrictions:
- A single parameter cannot have more than one ApiParamConfig
- Path-only parameters cannot have default values
- argument_type_is_body is incompatible with whence
- Arguments named "request", "user_profile", "args", and "kwargs", etc.,
are ignored by typed_endpoint.
- Positional-only arguments are not supported by typed_endpoint. Only
keyword-only parameters are expected to be parsed from the request.
- Pydantic's strict mode is always enabled, because we don't want to
coerce input parsed from JSON into other types unnecessarily.
- Using strict mode all the time also means that we should always use
Json[int] instead of int, because it is only possible for the request
to have data of type str, and a type annotation of int will always
reject such data.
typed_endpoint's handling of ignored_parameters_unsupported is mostly
identical to that of has_request_variables.
_default_manager is the same as objects on most of our models. But
when a model class is stored in a variable, the type system doesn’t
know which model the variable is referring to, so it can’t know that
objects even exists (Django doesn’t add it if the user added a custom
manager of a different name). django-stubs used to incorrectly assume
it exists unconditionally, but it no longer does.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This is important because the "guests" value isn't one that we'd
expect anyone to pick intentionally, and in particular isn't an
available option for the similar/adjacent "email invitations" setting.
This commit renames the existing setting `Who can invite users to this
organization` to `Who can send email invitations to new users` and
also renames all the variables related to this setting that do not
require a change to the API.
This was done for better code readability as a new setting
`Who can create invite links` will be added in future commits.
This commit does the backend changes required for adding a realm
setting based on the groups permission model and the API changes
required for the new setting `Who can create multiuse invite link`.
This commit adds an id_field_name field to the GroupPermissionSetting
type, which will be used to store the string formed by concatenating
setting_name and `_id`.
This commit makes the database changes while creating internal_realm
to be done in a single transaction.
This is needed for deferring the foreign key constraints
to the end of transaction.
This commit removes the 'alert' field from the payload for
Android via GCM/FCM.
The alert strings generated do not get used at all and have
not been used since at least 2019. On Android, we construct
the notification UI ourselves in the client, and we ignore
the alert string.
To make creation of demo organizations feel lightweight for users,
we do not want to require an email address at sign-up. Instead, an
empty string will be used for the new realm owner's email. Currently,
this is implemented for new demo organizations in the development
environment.
Because the user's email address does not exist, we don't enqueue
any of the welcome emails upon account/realm creation, and we
don't create/send new login emails.
This is a part of #19523.
Co-authored-by: Tim Abbott <tabbott@zulip.com>
Co-authored-by: Lauryn Menard <lauryn@zulip.com>
The 'startinline' option is utilized in the `CodeHilite` instantiation
to indicate that the provided PHP code snippet should be highlighted
even if it doesn't start with the opening tag or marker of the
associated programming language (which will rarely be the case in
Zulip, since one discusses a section of a file much more often than a
whole file).
Fixes: https://chat.zulip.org/#narrow/stream/137-feedback/topic/php.20syntax.20highlighting.20should.20not.20require.20.60.3C.3Fphp.60.
Signed-off-by: Akshat <akshat25iiit@gmail.com>
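A minimal sketch of the described usage, assuming Python-Markdown forwards extra keyword options to the Pygments lexer:
```python
from markdown.extensions.codehilite import CodeHilite

fragment = '$greeting = "Hello"; echo $greeting;'  # no leading <?php
html = CodeHilite(fragment, lang="php", startinline=True).hilite()
print(html)
```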
This commit updates the 'get_apns_alert_subtitle' function to
return a common subtitle, i.e., "{full_name} mentioned everyone:"
for wildcard mentions.
The triggers for the stream or topic wildcard mentions include:
* NotificationTriggers.TOPIC_WILDCARD_MENTION_IN_FOLLOWED_TOPIC
* NotificationTriggers.STREAM_WILDCARD_MENTION_IN_FOLLOWED_TOPIC
* NotificationTriggers.TOPIC_WILDCARD_MENTION
* NotificationTriggers.STREAM_WILDCARD_MENTION
We now send stream creation and stream deletion events on
changing a user's role because a user can gain or lose
access to some streams on changing their role.
This commit extracts the code which queries the required streams
into a new function, "get_user_streams". The new function returns a
list of "Stream" objects rather than dictionaries; the do_get_streams
function then converts it into a list of dictionaries.
This change is important because a further commit will use the new
function where we want a list of "Stream" objects and not a list of
dictionaries.
There was a bug in the apply_event code where only streams that
were not private were added to the "never_subscribed" data after
a stream creation event. Instead, a stream should be added to the
"never_subscribed" data irrespective of its permission policy, as
we already send stream creation events only to those users who can
access the stream. Due to this bug, private streams were not being
added to the "never_subscribed" data in apply_event even for admins.
This commit fixes that and also makes sure the "never_subscribed"
list is sorted, which was not done before and was also a bug.
These bugs went unnoticed because the tests did not cover these
cases; this commit also adds tests for them.
The "streams" field in the "/register" response did not include
web-public streams for non-admin users, though the data for those is
eventually included in the subscriptions data sent using the
"subscriptions", "unsubscribed" and "never_subscribed" fields.
This commit adds code to include the web-public streams in the
"streams" field as well, since everyone can access them, which makes
the "streams" data complete.
The name and docstring were just wrong: having a UserMessage row isn't
sufficient for having message access and is actually only relevant in a
private stream with private history. The function is only used in a
single place anyway, in bulk_access_messages.
The comment mentioning this function in handle_remove_push_notification
can be tweaked to just not mention any function specifically and just
say why we're not checking message access.
Users who used to be subscribed to a private stream and have been
removed from it since retain the ability to edit messages/topics, and
delete messages that they used to have access to, if other relevant
organization permissions allow these actions. For example, a user may be
able to edit or delete their old messages they posted in such a private
stream. An administrator will be able to delete old messages (that they
had access to) from the private stream.
We fix this by fixing the logic in has_message_access (which lies at the
core of our message access checks: access_message() and
bulk_access_messages())
to not rely only on a UserMessage row for checking access, but to also
verify stream type and subscription status.
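A greatly simplified, hedged sketch of the corrected rule; the parameter names are illustrative and this ignores guest/web-public subtleties:
```python
def has_stream_message_access(
    *,
    is_private_stream: bool,
    history_public_to_subscribers: bool,
    has_user_message: bool,
    is_subscribed: bool,
) -> bool:
    if not is_private_stream:
        return True
    if history_public_to_subscribers:
        # Private stream with shared history: a current subscription suffices.
        return is_subscribed
    # Private stream with protected history: the user must have received the
    # message and still be subscribed.
    return has_user_message and is_subscribed
```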
**Background**
User groups are expected to comply with the DAG constraint for the
many-to-many inter-group membership. The check for this constraint has
to be performed recursively so that we can find all direct and indirect
subgroups of the user group to be added.
This kind of check is vulnerable to phantom reads, which are possible at
the default read committed isolation level, because we cannot guarantee
that the check is still valid when we are adding the subgroups to the
user group.
**Solution**
To avoid having another transaction concurrently update one of the
to-be-subgroup after the recursive check is done, and before the subgroup
is added, we use SELECT FOR UPDATE to lock the user group rows.
The lock needs to be acquired before any check has been conducted, when
a group membership change is about to occur.
Suppose that we are adding subgroup B to supergroup A; the locking protocol
is specified as follows:
1. Acquire a lock for B and all its direct and indirect subgroups.
2. Acquire a lock for A.
For the removal of user groups, we acquire a lock for the user group to
be removed with all its direct and indirect subgroups. This is the special
case A=B, which is still compliant with the protocol.
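A minimal, hedged sketch of the locking order, assuming Zulip's UserGroup model; a real implementation would lock the subgroup together with all of its recursive subgroups via the same recursive CTE used for the DAG check:
```python
from django.db import transaction

from zerver.models import UserGroup  # assumed model

def lock_groups_for_membership_change(subgroup_id: int, supergroup_id: int):
    assert transaction.get_connection().in_atomic_block
    # 1. Lock B (and, in the real code, its direct and indirect subgroups).
    subgroup = UserGroup.objects.select_for_update().get(id=subgroup_id)
    # 2. Lock A, the group whose membership is about to change.
    supergroup = UserGroup.objects.select_for_update().get(id=supergroup_id)
    return subgroup, supergroup
```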
**Error handling**
We currently rely on Postgres' deadlock detection to abort transactions
and show an error for the users. In the future, we might need some
recovery mechanism or at least better error handling.
**Notes**
An important note is that we need to reuse the recursive CTE query that
finds the direct and indirect subgroups when applying the lock on the
rows. And the lock needs to be acquired the same way for the addition and
removal of direct subgroups.
User membership change (as opposed to user group membership) is not
affected. Read-only queries aren't either. The locks only protect
critical regions where the user group dependency graph might violate
the DAG constraint, where users are not participating.
**Testing**
We implement a transaction test case targeting some typical scenarios
when an internal server error is expected to happen (this means that the
user group view makes the correct decision to abort the transaction when
something goes wrong with locks).
To achieve this, we add a development view intended only for unit tests.
It has a global BARRIER that can be shared across threads, so that we
can synchronize them to consistently reproduce certain potential race
conditions prevented by the database locks.
The transaction test case launches pairs of threads initiating possibly
conflicting requests at the same time. The tests are set up such that
exactly N of them are expected to fail with a certain error message,
though we don't know in advance which ones.
**Security notes**
get_recursive_subgroups_for_groups will no longer fetch user groups from
other realms. As a result, trying to add/remove a subgroup from another
realm results in a UserGroup not found error response.
We also implement subgroup-specific checks in has_user_group_access to
keep permission management in a single place. Do note that the API
currently doesn't have a way to violate that check, because we are only
checking the realm ID now.
We want to make the callers be more explicit about the use of the
user group being accessed, so that the database lock implemented later
can benefit from that visibility.
Adds typing notification constants to the response given by
`POST /register`. Until now, these were hardcoded by clients
based on the documentation for implementing typing notifications
in the main endpoint description for `api/set-typing-status`.
This change also reflects updating the web-app frontend code
to use the new constants from the register response.
Co-authored-by: Samuel Kabuya <samuel.mwangikabuya@kibo.school>
Co-authored-by: Wilhelmina Asante <wilhelmina.asante@kibo.school>
Fixes #11767.
Previously multi-character emoji sequences weren't matched in the
emoji regex, so we'd convert the characters to separate images,
breaking the intended display.
This change allows us to match the full emoji sequence, and
therefore show the correct image.
We have modified the code to directly fetch realm from Message
object instead of "sender" field and thus we no longer need to
fetch "sender__realm" using select_related.
We do not want to access realm from "sender" field so that
we do not need to pass "sender__realm" argument to
select_related call when querying messages. We can instead
pass realm as argument to wildcard_mention_allowed.
We can directly get the realm object from Message object now
and there is no need to get the realm object from "sender"
field of Message object.
After this change, we would not need to fetch "sender__realm"
field using "select_related" and instead only passing "realm"
to select_related when querying Message objects would be enough.
This commit also updates a couple of cases to directly access the
realm ID from the message object and not message.sender. Although
we have already fetched the sender object, so accessing realm_id
from the message directly or from message.sender should not matter,
we can be consistent and directly get the realm from the Message
object whenever possible.
We set the stream_weekly_traffic field to "null" for Subscription
objects in the zephyr mirror realm, as we do not need stream traffic
data in the zephyr mirror realm. This makes the subscription data
consistent with the streams data.
This commit also updates a test to check never_subscribed data for
the zephyr mirror realm.
Instead of having a "realm.is_zephyr_mirror_realm" check for every
get_streams_traffic call, this commit updates get_streams_traffic to
accept realm as a parameter and return "None" for the zephyr mirror realm.
The "followup_day2" email template name is not clear or descriptive
about the purpose of the email. Creates a duplicate of those email
template files with the template name "zulip_onboarding_topics".
Because any existing scheduled emails that use the "followup_day2"
templates will need to be updated before the current templates can
be removed, we don't do a simple file rename here.
The "followup_day1" email template name is not clear or descriptive
about the purpose of the email. Creates a duplicate of those email
template files with the template name "account_registered".
Because any existing scheduled emails that use the "followup_day1"
templates will need to be updated before the current templates can
be removed, we don't do a simple file rename here.
This migration applies under the assumption that extra_data_json has
been populated for all existing and coming audit log entries.
- This removes the manual conversions back and forth for extra_data
throughout the codebase including the orjson.loads(), orjson.dumps(),
and str() calls.
- The custom handler used for converting Decimal is removed since
DjangoJSONEncoder handles that for extra_data.
- We remove None-checks for extra_data because it is now no longer
nullable.
- Meanwhile, we want the bouncer to support processing RealmAuditLog entries for
remote servers before and after the JSONField migration on extra_data.
- Since now extra_data should always be a dict for the newer remote
server, which is now migrated, the test cases are updated to create
RealmAuditLog objects by passing a dict for extra_data before
sending over the analytics data. Note that while JSONField allows for
non-dict values, a proper remote server always passes a dict for
extra_data.
- We still test out the legacy extra_data format because not all
remote servers have migrated to use JSONField extra_data.
This verifies that support for extra_data being a string or None has not
been dropped.
Co-authored-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We remove the bot's subscriptions for private streams to which the
new owner is not subscribed, and keep the ones to which the new
owner is subscribed, when changing the owner.
This commit also changes the code for sending subscription
remove events to use transaction.on_commit, since we call
the function inside a transaction in do_change_bot_owner; this
also requires some changes to tests in test_events.
For topic wildcard mentions, the 'wildcard_mentioned' flag is set
for those user messages having 'user_profile_id' in
'topic_participant_user_ids', i.e. all topic participants.
Earlier, the flag was set if the 'user_profile_id' exists in
'all_topic_wildcard_mention_user_ids'.
'all_topic_wildcard_mention_user_ids' contains the ids of those
users who are topic participants and have enabled notifications
for '@topic' mentions.
The earlier approach was incorrect, as it would set the
'wildcard_mentioned' flag only for those topic participants
who have enabled the notifications for '@topic' mention instead
of setting the flag for all the topic participants.
The bug was introduced in 4c9d26c.
This commit addresses the issue where the topic highlighting
in search results was offset by one character when an
apostrophe was present. The problem stemmed from a disparity
in HTML escaping: the escaping generated by the function
`func.escape_html`, which is used to obtain `topic_matches`, differs
from the escaping performed by the function `django.utils.html.escape`
for apostrophes (').
func.escape_html | django.utils.html.escape
-----------------+--------------------------
     &#39;       |          &#x27;
To fix this, the SQL query is changed to return the HTML-escaped
topic name generated by the function `func.escape_html`.
Fixes: #25633.
Currently, we are displaying the "Complete the organization profile"
banner immediately after the organization was created. It's important to
strongly encourage orgs to configure their profile, so we should delay
showing the banner if the profile has not been configured after 15 days.
This also allows users to check out Zulip and see how it works before
configuring the organization settings.
Fixes: #24122.
The number of affected objects may be quite high, and they are
selected by `id IN (...)` query, and updated with a giant `CASE`.
This turns out to be quadratic, and can cause large queries to take
hours, in a state where they cannot be terminated, when PostgreSQL >11
tries to JIT the query.
Set a batch_size as a stopgap performance fix before moving to
`.update()` as a real fix.
We have historically cached two types of values
on a per-request basis inside of memory:
* linkifiers
* display recipients
Both of these caches were hand-written, and they
both actually cache values that are also in memcached,
so the per-request cache essentially only saves us
from a few memcached hits.
I think the linkifier per-request cache is a necessary
evil. It's an important part of message rendering, and
it's not super easy to structure the code to just get
a single value up front and pass it down the stack.
I'm not so sure we even need the display recipient
per-request cache any more, as we are generally pretty
smart now about hydrating recipient data in terms of
how the code is organized. But I haven't done thorough
research on that hypothesis.
Fortunately, it's not rocket science to just write
a glorified memoize decorator and tie it into key
places in the code:
* middleware
* tests (e.g. asserting db counts)
* queue processors
That's what I did in this commit.
This commit definitely reduces the amount of code
to maintain. I think it also gets us closer to
possibly phasing out this whole technique, but that
effort is beyond the scope of this PR. We could
add some instrumentation to the decorator to see
how often we get a non-trivial number of saved
round trips to memcached.
Note that when we flush linkifiers, we just use
a big hammer and flush the entire per-request
cache for linkifiers, since there is only ever
one realm in the cache.
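A minimal sketch (names hypothetical) of the glorified memoize decorator plus the flush hook wired into middleware, tests, and queue processors:
```python
import functools
from typing import Any, Callable, Dict, Tuple

_per_request_cache: Dict[Tuple[str, Tuple[Any, ...]], Any] = {}

def per_request_cache(func: Callable[..., Any]) -> Callable[..., Any]:
    @functools.wraps(func)
    def wrapper(*args: Any) -> Any:
        # Positional args must be hashable; that's fine for realm/user IDs.
        key = (func.__qualname__, args)
        if key not in _per_request_cache:
            _per_request_cache[key] = func(*args)
        return _per_request_cache[key]

    return wrapper

def flush_per_request_caches() -> None:
    # Called from middleware between requests, from the test framework, and
    # from queue processors between events.
    _per_request_cache.clear()
```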
We want to phase out the use of get_display_recipient
for streams, and this is the last place that I
eliminate it. The next commit will eliminate the
dead code and make mypy types tighter.
This change will make push notifications slightly
slower in some situations, but we avoid all the
complexity of a cache, and this code tends to run
offline.
We could always make this code a bit more efficient
by being a little smarter about what data we fetch
up front. For example, get_apns_alert_title gets
called by a function that already has the stream
name. It's just a bit of a pain to refactor when
you have all the DM codepath mucked up with the
stream codepath.
There's no need for the complexity and extra round
trips to call get_display_recipient in a testing
context.
We also eliminate the unnecessary call to check_string.
This function is poorly named, but that's a sweep
for another day.
The get_display_recipient helper is a clumsy way to get
stream names, and it's not even representative of how
most of our code retrieves stream names.
The new helper also double-checks that the Stream
object has the correct recipient id.
We no longer have to reason about the 12 possible
ways of invoking get_narrow_url. We also avoid
double computation in a couple places.
Finally, we get stricter type checks by just inlining
the calls.
We don't need to call get_display_recipient for
non-stream messages.
I will rename display_recipient in the next commit;
if I were to combine the steps the diff would be too
hard to read.
This commit renames the keyword 'pm' to 'dm' in the
'pm_mention_email_disabled_user_ids' and
'pm_mention_push_disabled_user_ids' attributes of the
'RecipientInfoResult' dataclass.
'pm' and 'dm' are the acronyms for 'private message' and
'direct message' respectively.
It includes 'TODO/compatibility' code to support the old format
fields in the tornado queues during the Zulip server upgrades.
This commit renames the 'PRIVATE_MESSAGE' attribute of the
'NotificationTriggers' class to 'DIRECT_MESSAGE'.
Custom migration to update the existing value in the database.
It includes 'TODO/compatibility' code to support the old
notification trigger value 'private_message' in the
push notification queue during the Zulip server upgrades.
Earlier, 'private_message' was one of the possible values for the
'trigger' property of the `POST /zulip-outgoing-webhook` response;
update the docs to reflect the change in the above-mentioned trigger
value.
This commit adds 'TODO/compatibility' code to support the
old notification trigger values in the push notification queue
during the Zulip server upgrades.
In f4fa82e, we renamed the following notification triggers:
* 'wildcard_mentioned' to 'stream_wildcard_mentioned'
* 'followed_topic_wildcard_mentioned' to
'stream_wildcard_mentioned_in_followed_topic'.
This should have been added in f4fa82e.
This commit adds code to pass all the required arguments to
select_related call for Message objects such that only the
required related fields are fetched from the database.
Previously, we did not pass any arguments to select_related,
so all the directly and indirectly related fields were fetched
when many of them were actually not being used and made the
query unnecessarily complex.
This commit adds code to pass all the required arguments to
select_related call for Subscription objects such that only
the required related fields are fetched from the database.
Previously, we did not pass any arguments to select_related,
so all the directly and indirectly related fields were fetched
when many of them were actually not being used and made the
query unnecessarily complex.
This commit adds code to pass all the required arguments to
select_related call for MissedMessageEmailAddress such that
only the required related fields are fetched from the database.
Previously, we did not pass any arguments to select_related,
so all the directly and indirectly related fields were fetched
when many of them were actually not being used and made the
query unnecessarily complex.
We only need ID of the recipient and not the full object, so we
directly access ID using "stream.recipient_id" instead of using
the complete recipient object.
This commit removes get_huddle_recipient function and we now use
get_or_create_huddle in get_recipient_from_user_profiles.
As a result of this change, we do not fetch the recipient from
Huddle object but instead get it using the "id" and "recipient_id"
fields available from Huddle object like we do for a personal
message. This change allows us to not fetch recipient object
using select_related when querying the Huddle object.
We now fetch recipient object when querying "Huddle" object in
get_or_create_huddle_backend as this query is eventually used
to get the recipient object only in get_huddle_recipient.
This commit also updates the select_related call in the code to
populate Huddle objects in cache to pass "Recipient" as argument.
Previously, no argument was passed to select_related, and since the
Huddle model has no non-null related fields, no related objects were
being fetched.
- Adds instructions block and relative link to Starred messages.
- Adds "Toggle starred messages counter" subheading.
- Adds "Searching for messages" as a related article.
The set of objects in the `users` object can be very large (in some
cases, literally every object in the database) and making them into a
giant `id in (...)` to handle the one tiny corner case which we never
use is silly.
Switch the `--users` codepath to returning a QuerySet as well, so it
can be composed. We pass a QuerySet into send_custom_email as well,
so it can ensure that the realm is `select_related` in, no
matter how the QuerySet was generated.
Substituting the rendered body via Jinja2 means that it cannot
perform any interpolation itself. While the string replacement is
hacky, it is the only solution which avoids running Jinja2 more than
once, and also allows the user-supplied content to have Jinja2
substitutions in it.
In this commit, we introduce a new option in the stream creation
UI: a 'Default stream for new users' checkbox. By default, the
checkbox is set to 'off' and is only visible to admins. This
allows admins to easily designate a stream as the default stream
for new users during stream creation.
Fixes #24048.
Adds a test for when a value for a user's custom profile field is
removed and not set to a new value. The omission of this event in
the tests was noted as a possibility in #22103, which updated the
API documentation for these events having `null` for the field
value.
When adding the test, we discovered that the events logic was not
deleting the field from the user object and instead setting it to
`None`, so fixes that logic as well. A similar bug was fixed
in commit 96c61a1a41 for when custom profile fields are removed
from a realm.
When applying realm_user update events, some of the event fields
for the person object were being updated to the same value in a
loop. Unnests those calls from the loop over the existing fields
so that they are only updated once.
The original nesting was introduced in commit 649fccde6b and
was expanded in other additions to the logic for these events.
This commit adds code to pass stream traffic data using
the "stream_weekly_traffic" field in stream objects.
We already include the traffic data in Subscription objects,
but the traffic data does not depend on the user to stream
relationship and is stream-only information, so it's better
to include it in Stream objects. We may remove the traffic
data and other stream information fields for Subscription
objects in future.
This will help clients correctly display the stream
traffic data in cases where a client receives a stream
creation event and no subscription event, for an already
existing stream which the user did not have access to before.
This commit changes the code to not use the get_client_data
function and instead use the `stream_to_dict` function to
get the stream data in dictionary form. This is a
prep commit to add stream traffic data to Stream objects.
This commit adds a stream_to_dict method, which is the same as
the Stream.to_dict method as of now. This is a prep commit
to include stream traffic data in stream objects.
I move the helper user_ids_to_users to the only
place that it's used, and then I simplify it to
do a direct database query.
These endpoints aren't hit often enough to justify
caching complexity, and for really large user groups,
hitting the cache can actually be counterproductive.
Particularly when you add new users to an existing
group, the bulk of the cost is sending out
notification messages to users.
The only change to the test is that I added an
assertion on the query count.
The cross-realm bots rarely change, and there are only
a few of them, so we just query them all at once and
put them in the cache.
Also, we put the dictionaries in the cache, instead of
the user objects, since there is nothing time-sensitive
about the dictionaries, and they are small. This saves
us a little time computing the avatar url and things
like that, not to mention marshalling costs.
This commit also fixes a theoretical bug where we would
have stale cache entries if somebody somehow modified
the cross-realm bots without bumping KEY_PREFIX.
Internally we no longer pre-fetch the realm objects for
the bots, but we don't get overly precise about picking
individual fields from UserProfile, since we rarely hit
the database and since we don't store raw ORM objects
in the cache.
The test diffs make it look like we are hitting the
cache an extra time, but the tests weren't counting
bulk fetches. Now we only use a single key for all
bots rather than a key per bot.
The bulk_get_users() function was only being used to
get cross-realm bots.
It appears that it was introduced in
f02e5b90f6 for that
specific use case.
Now we make the function more specific and test it more
accurately.
We also eliminate a lot of janky code and comments,
including some code that never had test coverage.
Incidentally, it appears that we did not have any code
to invalidate the cache keys here, and that is still
the case. In practice I assume people rarely
re-configure their cross-realm bots unless they are
upgrading the server, and then KEY_PREFIX comes into
play. 25fd4c5508 seems
to have caused that hopefully harmless regression.
A further step will be to make this cache more coarse,
since there are only a few cross-realm bots. The next
commit will hopefully simplify the code and address the
validation pitfall.
Earlier, the API endpoints related to user_group accepted and returned a
field `can_mention_group_id`, which represents the ID
of the user_group whose members can mention the group.
This commit renames this field to `can_mention_group`.
Earlier, the API endpoints related to streams accepted and returned a
field `can_remove_subscribers_group_id`, which represents the ID
of the user_group whose members can remove subscribers from the stream.
This commit renames this field to `can_remove_subscribers_group`.
Dropping support for url_prefix for RealmPlayground, the server now uses
only url_template for playground creation, retrieval, and audit
logging upon removal.
This does the necessary handling so that url_template is expanded with
the extracted code.
Fixes#25723.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This commit removes the stray strings used to refer to
various types of notification triggers.
We use the attributes of the 'NotificationTriggers' class instead.
Updates the realm field `default_code_block_language` to have a default
value of an empty string instead of None. Also updates the web-app to
check for the empty string and not `null` to indicate no default is set.
This means that both new realms and existing realms that have no default
set will have the same value for this setting: an empty string.
Previously, new realms would have None if no default was set, while realms
that had set and then unset a value for this field would have an empty
string when no default was set.
Expands support for the message ID operand for the "id" operator to be
either a string or an integer. Previously, this operand was always
validated as a string.
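A minimal sketch of accepting the operand as either a string or an integer and normalizing it to an int (the error type is illustrative):
```python
from typing import Union

def normalize_id_operand(operand: Union[str, int]) -> int:
    if isinstance(operand, bool):  # bool is a subclass of int; reject it
        raise ValueError("Invalid message ID")
    if isinstance(operand, int):
        return operand
    if isinstance(operand, str) and operand.isdigit():
        return int(operand)
    raise ValueError("Invalid message ID")
```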
Restore the default django.utils.log.AdminEmailHandler when
ERROR_REPORTING is enabled. Those with more sophisticated needs can
turn it off and use Sentry or a Sentry-compatible system.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We used to access the complete objects for UserProfile foreign
keys like "bot_owner" and "default_sending_stream", where we only
needed their IDs.
This commit fixes some of such instances and now we directly get
the id using "bot_owner_id" and "default_sending_stream_id" so
that we can avoid the unnecessary complexity of accessing the
complete object.
This commit updates the code to pass "realm" and "bot_owner" args to
the select_related call in get_users. We pass "realm" and "bot_owner"
because the caches which this function is used to populate also back
the get_user and get_user_profile_by_api_key functions, and those
select both of these fields when querying for
UserProfile objects.
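As a rough sketch of the pattern (the filter shown here is illustrative,
not the real get_users query):

    from zerver.models import Realm, UserProfile

    def get_users_sketch(emails: list[str], realm: Realm) -> list[UserProfile]:
        # Fetch realm and bot_owner in the same query, since the caches
        # populated from this queryset also serve get_user and
        # get_user_profile_by_api_key.
        return list(
            UserProfile.objects.select_related("realm", "bot_owner").filter(
                email__in=emails, realm=realm
            )
        )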
This commit updates the select_related calls in the queries used to
get UserProfile objects in the get_user function called from
management commands, passing "realm" as the argument to the
select_related call.
There are some management commands like deactivate_user,
change_full_name, etc. which might need fields like
"default_sending_stream" when changing the full name of a bot
or something similar, but we don't think that would happen
often, and we can afford a DB round trip to get
these fields if needed.
Also, note that "realm" is the only non-null foreign key
field in the UserProfile model, so select_related() was only
fetching the realm object previously as well. But we should
still pass "realm" as an argument in the select_related call so
that we can make sure that only the required fields are
selected in case we add more foreign keys to UserProfile
in the future.
This commit updates the select_related call in the fetch_users_by_id
function to pass "realm" as the argument, as we only require the
realm for the UserProfile objects fetched
using fetch_users_by_id.
Also, note that "realm" is the only non-null foreign key field
in the UserProfile model, so select_related() was only fetching
the realm object previously as well. But we should still pass "realm"
as an argument in the select_related call so that we can make sure that
only the required fields are selected in case we add more foreign
keys to UserProfile in the future.
We do not use any related fields for the UserProfile objects
fetched by get_active_users, so we can simply remove the
select_related call.
The user object from get_active_users was used to get the realm,
but since get_active_users is called from a realm object, we can
directly use that realm object. This change also leads to
some changes in the cache code, where we now pass the realm
to the function instead of selecting it from the UserProfile object.
This commit removes select_related call from
get_soft_deactivated_users_for_catch_up as
we do not use any related fields for the
UserProfile objects fetched using this call.
Uploads are well-positioned to use S3's "intelligent tiering" storage
class. Add a setting to let uploaded files declare their desired
storage class at upload time, and document how to move existing files
to the same storage class.
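For illustration only, a boto3 sketch of declaring a storage class at
upload time and moving an existing object; the bucket, key, and setting
name here are made up:

    import boto3

    S3_UPLOADS_STORAGE_CLASS = "INTELLIGENT_TIERING"  # hypothetical setting name

    s3 = boto3.client("s3")

    # Declare the storage class when the file is uploaded.
    s3.put_object(
        Bucket="example-zulip-uploads",
        Key="path/to/file",
        Body=b"...",
        StorageClass=S3_UPLOADS_STORAGE_CLASS,
    )

    # Move an existing object to the same storage class by copying it
    # over itself.
    s3.copy_object(
        Bucket="example-zulip-uploads",
        Key="path/to/file",
        CopySource={"Bucket": "example-zulip-uploads", "Key": "path/to/file"},
        StorageClass=S3_UPLOADS_STORAGE_CLASS,
        MetadataDirective="COPY",
    )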
‘blocklist’ was added in 0.0.35 (with backwards compatibility for the
old name), and type annotations were added in 0.0.91 (with only the
new name).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This code removes a lot of complexity with very likely
positive overall impact on system performance and
negligible downside.
We already cache display recipients on a per-user
level, so there's no need for another cache layer on
top of that that keys them with recipient ids.
We avoid strange things where Alice/Bob and Bob/Charlie
get put into the top layer cache and then we still have
a cache miss on Alice/Charlie despite the lower level
cache being able to support per-user lookups.
This change does introduce an extra database round trip
if any of our messages have a huddle, but the query is
extremely cheap, and we can always try to cache that
function more directly or try to re-use some of our
other huddle-based caches.
As part of this, we clean up the names for the
lower-level per-user cache of display recipients, and
we simplify the cache keys.
We also stop passing in a full Recipient object to the
`bulk_get_huddle_user_ids` functions.
The local impact of this change should be easy to
measure (at least approximately), since we use this
function every time a user gets messages via the
/messages endpoint.
The only overlap between how we fetched streams and
users was to share some really complicated data
structures.
We can also short-circuit some logic if a message
batch is either all-stream or all-DM.
We restrict the columns, avoid quadratic looping,
and don't bother with order_by.
We also return the user ids (per recipient) as
sets, since that's how the only caller uses the
info (albeit implicitly via set.union accepting
a list).
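A rough sketch of the narrowed lookup, assuming the Subscription model;
the helper name and exact filter are assumptions:

    from collections import defaultdict

    from zerver.models import Subscription

    def bulk_get_huddle_user_ids_sketch(
        recipient_ids: list[int],
    ) -> dict[int, set[int]]:
        # Restrict the columns with values_list, skip order_by, and build
        # per-recipient sets in a single linear pass.
        user_ids_by_recipient: dict[int, set[int]] = defaultdict(set)
        rows = Subscription.objects.filter(
            recipient_id__in=recipient_ids
        ).values_list("recipient_id", "user_profile_id")
        for recipient_id, user_profile_id in rows:
            user_ids_by_recipient[recipient_id].add(user_profile_id)
        return dict(user_ids_by_recipient)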
It’s unclear what was supposed to be “safe” about this wrapper. The
hashlib API is fine without it, and we don’t want to encourage further
use of SHA-1.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Translators benefit from the extra information in the field names, and
need the reordering freedom that isn’t available with multiple
positional fields.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
- Adds `message_handle_match` function to handle new pattern for
relative help links to "Drafts" and "Scheduled messages" for logged-in
users: `{relative|message|drafts}` and `{relative|message|scheduled}`.
`remove_denormalized_recipient_column_from_data` removes the
`recipient` data from `zerver_userprofile`, but not from
`zerver_userprofile_mirrordummy`, which was later appended to the list
of `zerver_userprofile` objects. This led to failures when inserting,
as the mirrordummy objects still tried to reference their previous
`recipient_id`s.
Move the merging of the two sets earlier, before we call
`remove_denormalized_recipient_column_from_data`.
If there are two huddles, with users A + B + C + D and A + B + C, and
user D is deleted, it is replaced with a mirrordummy user. If
mirrordummy subscriptions are not included in exports, then the two
huddles have duplicate member sets, and will not be able to be
imported successfully.
Include huddle subscriptions for mirrordummy users in exports.
Long-term idle topic participants are soft-reactivated
after email/push notifications are sent due to an @topic mention.
The reasoning is that, generally, @topic mentions are going to
reach a small set of users who have a decent chance of being
reactivated by the notifications.
This commit completes the notifications part of the @topic
wildcard mention feature.
Notifications are sent to the topic participants for the
@topic wildcard mention.
The previous function was poorly named, asked for a
Realm object when realm_id sufficed, and returned a
tuple of strings that had different semantics.
I also avoid calling it redundantly in a couple of
places, although it was probably rarely the case that
both invocations actually happened if upstream
validations were working.
Note that there is a TypedDict called EmojiInfo, so I
chose EmojiData here. Perhaps a better name would be
TinyEmojiData or something.
I also simplify the reaction tests with a verify
helper.
The active realm emoji are just a subset of all your
realm emoji, so just use a single cache entry per
realm.
Cache misses should be very infrequent per realm.
If a realm has lots of deactivated realm emoji, then
there's a minor expense to deserialize them, but that
is going to be dwarfed by all the other more expensive
operations in message-send.
I also renamed the two related functions. I erred on
the side of using somewhat verbose names, as we don't
want folks to confuse the two use cases. Fortunately
there are somewhat natural affordances to use one or
the other, and mypy helps too.
Finally, I use realm_id instead of realm in places
where we don't need the full Realm object.
This migration is reasonably complex because of various anomalies in existing
data.
Note that there are cases where extra_data does not contain
proper JSON, possibly using single quotes instead. Thus we need to use
"ast.literal_eval" to cover that.
There is also a special case for "event_type == USER_FULL_NAME_CHANGED",
where extra_data is a plain str. This event_type is only used for
RealmAuditLog, so the zilencer migration script does not need to handle
it.
The migration does not handle "event_type == REALM_DISCOUNT_CHANGED"
because ast.literal_eval only allows Python literals. We expect the admin
to populate the jsonified extra_data for extra_data_json manually
beforehand.
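A simplified sketch of the parsing fallback described above; the helper
name is made up, and the real migration also handles the plain-str and
decimal cases separately:

    import ast
    import json

    def parse_extra_data_sketch(extra_data: str) -> object:
        # extra_data is usually JSON, but some rows are Python reprs with
        # single quotes, which ast.literal_eval covers.
        try:
            return json.loads(extra_data)
        except json.JSONDecodeError:
            return ast.literal_eval(extra_data)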
This chunks the backfilling migration to reduce potential block time.
The migration for zilencer is mostly similar to the one for zerver, except that
the backfill helper is added in a wrapper and unrelated events are
removed.
**Logging and error recovery**
We print out a warning when the extra_data_json field of an entry
would have been overwritten by a value inconsistent with what we derived
from extra_data. Usually this only happens when the extra_data was
corrupted before this migration. This prevents data loss by backing up
possibly corrupted data in extra_data_json with the keys
"inconsistent_old_extra_data" and "inconsistent_old_extra_data_json".
More roundtrips to the database are needed for inconsistent data, which are
expected to be infrequent.
This also outputs messages when there are audit log entries with decimals,
indicating that such entries are not backfilled. Do note that audit log
entries with decimals are not populated with "inconsistent_old_extra_data_*"
in the JSONField, because they are not overwritten.
Audit log entries with "extra_data_json" already marked as inconsistent
are skipped in the migration, because anomalies discovered in a
previous run do not need to be overwritten again, which would nest the
extra keys we added.
**Testing**
We create a migration test case utilizing the property of bulk_create
that it doesn't call our modified save method.
We extend ZulipTestCase to support verifying console output at the test
case level. The implementation is crude but the use case should be rare
enough that we don't need it to be too elaborate.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This tracks user group membership changes when the realm is first set
up, whether or not it is through an import. This happens when we add users to
the system user groups by their roles.
For an imported realm, we do extra handling when the data doesn't include
user groups. This gets audited as well.
Django seems to have an aggressive check on the type of a field when
setting it through a relation, requiring the argument to be a UserGroup in
our case.
Reference:
02966a30dd/django/db/models/base.py (L537-L546)
The MissedMessage queue worker is the single callsite of
`handle_missedmessage_emails`, which immediately transforms the list
of events into a dict keyed by message-id.
Skip the intermediate list step, and use defaultdict and a dataclass
to simplify and make explicit the pieces. This removes the unused
user_profile_id and message_id pieces of the data structure.
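A sketch of the resulting shape; the dataclass fields here are
assumptions, not the exact definition:

    from collections import defaultdict
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MissedMessageDataSketch:
        trigger: str
        mentioned_user_group_id: Optional[int] = None

    # Keyed directly by message id; defaultdict lets the worker accumulate
    # entries without building the intermediate list of event dicts.
    events_by_message_id: defaultdict[int, list[MissedMessageDataSketch]] = (
        defaultdict(list)
    )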
This commit adds a boolean field `mentions_topic_wildcard`
to the `MessageRenderingResult` dataclass.
The field is set to true only if message rendering determines
the message has an actual topic wildcard mention in it (and not,
e.g., topic wildcard mention syntax inside a code block).
The rendered content for topic wildcard mention is
'<span class="topic-mention">{wildcard}</span>'.
The 'topic-mention' class is the identifier for the wildcard
mention being a topic wildcard mention.
We don't use 'data-user-id="*"' and "user-mention" class for
topic wildcard mentions and eventually plan to remove them for
stream wildcard mentions too in a separate mini-project.
This commit adds the 'topic_wildcard_mention_user_ids' and
'topic_wildcard_mention_in_followed_topic_user_ids'
attributes to the 'RecipientInfoResult' dataclass.
Only topic participants are notified of @topic mentions.
Topic participants are anyone who sent a message to a topic
or reacted to a message on the topic.
'topic_wildcard_mention_in_followed_topic_user_ids' stores the
ids of the topic participants who follow the topic and have
enabled the wildcard mention notifications for followed topics.
'topic_wildcard_mention_user_ids' stores the ids of the topic
participants for whom 'user_allows_notifications_in_StreamTopic'
with setting 'wildcard_mentions_notify' returns True.
This commit adds a 'has_topic_wildcards' instance variable
to the 'MentionData' class for the detection of
possible topic wildcard mentions.
Fixes part of #22829.
Co-authored-by: Prakhar Pratyush <prakhar841301@gmail.com>
Co-authored-by: orientor <aditya.verma@students.iiit.ac.in>
This includes changing the URL to #settings/preferences, with a
transparent redirect so that existing links, like the one from Welcome
Bot, continue to work.
Pass the HttpRequest explicitly through the two webhooks that log to
the webhook loggers.
get_current_request is now unused, so remove it (in the same commit
for test coverage reasons).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The initial followup_day1 email confirms that the new user account
has been successfully created and should be sent to the user
independently of an organization's setting for send_welcome_emails.
Here we separate out the followup_day1 email into a separate function
from enqueue_welcome_emails and create a helper function for setting
the shared welcome email sender information.
The followup_day1 email is still a scheduled email so that the initial
account creation and log-in process for the user remains unchanged.
Fixes#25268.
Because the third party might not be expecting a 400 from our
webhooks, we now instead use a 200 status code for unknown events,
while reporting the error to Sentry. Because it is no longer an error
response, the response type should now be "success".
Fixes#24721.
This commit removes "@" from name of role-based system groups
since we have added a restriction on having user group names
starting with "@" in the previous commit as they look odd in
mention syntax.
We also add a migration in this commit to update the name of
role-based system groups in existing realms to remove "@"
from the name. This migration also updates the names of
non-system user groups by removing the invalid prefixes
from their names and if there is a group already with that
name, we instead name the group as "group:{group_id}".
Fixes#26148.
We do not allow user group names to start with "@", "role:",
"user:", "stream:" or "channel:".
Group names starting with "@" look odd in mentions, and the
"role:", "user:" and "stream:" prefixes are reserved for
system groups which will be used in the new groups-based
permission model. We do not allow the "channel:" prefix for
now just to be safe, in case we use it instead of the
"stream:" prefix for stream-based groups in the future.
Fixes part of #26148.
Previously, we had a database-level restriction on the length of
user group names. Now we add the same restriction at the API
level as well, so we can return a better error response.
We remove the cache functionality for the
get_realm_stream function, and we also change it to
return a thin Stream object (instead of calling
select_related with no arguments).
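Roughly, the uncached thin fetch now looks like this sketch (the exact
lookup may differ):

    from zerver.models import Stream

    def get_realm_stream_sketch(stream_name: str, realm_id: int) -> Stream:
        # No cache and no select_related: return a thin Stream object.
        return Stream.objects.get(
            name__iexact=stream_name.strip(), realm_id=realm_id
        )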
The main goal here is to remove code complexity, as we
have been prone to at least one caching validation bug
related to how Realm and UserGroup interact. That
particular bug was more theoretical than practical in
terms of its impact, to be clear.
Even if we were to be perfectly disciplined about only
caching thin stream objects and always making sure to
delete cache entries when stream data changed, we would
still be prone to ugly situations like having
transactions get rolled back before we delete the cache
entry. The do_deactivate_stream codepath is a perfect example of
where we have to consider the best time to unset the
cache. If you unset it too early, then you are prone to
races where somebody else churns the cache right before
you update the database. If you set it too late, then
you can have an invalid entry after a rollback or
deadlock situation. If you just eliminate the cache as
a moving part, that whole debate is moot.
As the lack of test changes here indicates, we rarely
fetch streams by name any more in critical sections of
our code.
The one place where we fetch by name is in loading the
home page, but that is **only** when you specify a
stream name. And, of course, that only causes about an
extra millisecond of time.
We want to avoid Django going back to the database to
get a realm object that the caller already has.
It's actually currently the case that we often
pre-fetch realm objects when we get stream objects
using get_stream (using a call to select_related() with
no arguments), but that is an expensive operation that
we want to avoid going forward.
This commit prepares us to just fetch slim objects.
This adds audit log entries when any group-based setting of a user group
is updated. We store both the old and new values in extra_data, along
with the name of that setting. Entries populated during user group creation
are hardcoded to track "can_mention_group".
Potentially we can adjust "set_defaults_for_group_settings" so that it
populates realm audit logs with it, but that is out of scope for this change.
We use an atomic transaction so that the audit logs are committed
together with the updates.
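A hedged sketch of the pattern; the event type constant and the
extra_data keys are assumptions:

    from django.db import transaction
    from django.utils.timezone import now as timezone_now

    from zerver.models import RealmAuditLog, UserGroup, UserProfile

    def update_can_mention_group_sketch(
        user_group: UserGroup, new_setting_group: UserGroup, acting_user: UserProfile
    ) -> None:
        old_setting_group_id = user_group.can_mention_group_id
        with transaction.atomic():
            user_group.can_mention_group = new_setting_group
            user_group.save(update_fields=["can_mention_group"])
            RealmAuditLog.objects.create(
                realm=user_group.realm,
                acting_user=acting_user,
                modified_user_group=user_group,
                # Assumed constant name for the new event type.
                event_type=RealmAuditLog.USER_GROUP_GROUP_BASED_SETTING_CHANGED,
                event_time=timezone_now(),
                # extra_data shown as a plain dict for readability.
                extra_data={
                    "property": "can_mention_group",
                    "old_value": old_setting_group_id,
                    "new_value": new_setting_group.id,
                },
            )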
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This is mostly the same as tracking subgroup changes, except that now
modified_user_group is the subgroup.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
It's worth noting that instead of adding another field to the
RealmAuditLog model, we store the modified subgroup ids in extra_data as
a JSON encoded dict with the key "subgroup_ids". We don't create audit
log entries for supergroup changes at this point.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This also adds audit log entries during user creation and role changes,
because we modify system group memberships there.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We also create RealmAuditLog entries for the initial memberships that
get added along with the creation of a UserGroup. System user groups are
not created with members, so no audit logs are populated for that.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Basically, I eliminate the use of select_all() in a query
that still makes a single round trip. We have good test
enforcement that Django never needs to lazily fetch
objects off the Stream object. (It used to be common
to fetch stream.realm a while back, but we upgraded
bulk_add_subscription, in particular, a while back.)
At least as measured by test_events.py, which has over 1000
calls to fetch initial data for page loads, this should
be about a 10% improvement in how much time the server
spends fetching data.
We mostly avoid a select_related() query that did this nastiness:
INNER JOIN "zerver_realm" ON ("zerver_stream"."realm_id" = "zerver_realm"."id")
INNER JOIN "zerver_usergroup" ON ("zerver_stream"."can_remove_subscribers_group_id" = "zerver_usergroup"."id")
INNER JOIN "zerver_realm" T4 ON ("zerver_usergroup"."realm_id" = T4."id")
INNER JOIN "zerver_usergroup" T5 ON ("zerver_usergroup"."can_mention_group_id" = T5."id")
INNER JOIN "zerver_realm" T6 ON (T5."realm_id" = T6."id")
INNER JOIN "zerver_usergroup" T7 ON (T5."can_mention_group_id" = T7."id")
INNER JOIN "zerver_realm" T8 ON (T7."realm_id" = T8."id")
INNER JOIN "zerver_usergroup" T9 ON (T7."can_mention_group_id" = T9."id")
INNER JOIN "zerver_realm" T10 ON (T9."realm_id" = T10."id")
INNER JOIN "zerver_usergroup" T11 ON (T9."can_mention_group_id" = T11."id")
WHERE "zerver_stream"."id" IN (SELECT U0."stream_id" FROM "zerver_defaultstream" U0 WHERE U0."realm_id" = 2
Future commits will address the codepath for creating users.
I created zerver/lib/default_streams.py, so that various
views and events.py don't have to awkwardly reach into
an "actions" file.
I copied over two functions verbatim from actions/default_streams.py:
get_default_streams_for_realm
streams_to_dicts_sorted
The latter only remains as an internal detail in the new library.
I also created two new helpers:
get_default_stream_ids_for_realm:
This is both faster and easier to use in all the places
where we only need to get a set of default stream ids.
get_default_streams_for_realm_as_dicts:
This just wraps the prior calls to
streams_to_dicts_sorted(get_default_streams_for_realm(...)),
and it doesn't yet address the slowness of the underlying
code.
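A minimal sketch of the first new helper, assuming the DefaultStream
model; the real implementation may differ:

    from zerver.models import DefaultStream

    def get_default_stream_ids_for_realm_sketch(realm_id: int) -> set[int]:
        # Only the ids are needed, so skip fetching full Stream rows.
        return set(
            DefaultStream.objects.filter(realm_id=realm_id).values_list(
                "stream_id", flat=True
            )
        )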
All the "real" code should be functionally the same.
In a few tests I now use this wrapper instead of
calling get_default_streams_for_realm, just to get
slightly deeper coverage.
Updates find_proper_insertion_index to check whether the inline image
classes match at least one of the classes in the element's
attrib["class"], so that cases where an inline preview image has
multiple classes, like YouTube video previews, will have the
correct insertion index.
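The class matching amounts to something like this sketch (names are
hypothetical):

    def has_inline_image_class_sketch(
        element_class_attr: str, image_classes: frozenset[str]
    ) -> bool:
        # attrib["class"] is a space-separated string; match if at least
        # one of the inline image classes is present (e.g. a YouTube
        # preview carries several classes).
        return not image_classes.isdisjoint(element_class_attr.split())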
Fixes#26186.
By relocating helper methods into a mixin class, we can be more flexible
with managing transactions in test cases, without forcing the
django.test.TestCase behavior of always wrapping each test case in an
atomic transaction.
We include a check for side effects in ZulipTransactionTestCase. It only
checks for the set of row ids in all tables before and after each test.
It is not a comprehensive check for side effects, but should be
sufficient for the basics without much performance overhead.
This prep commit replaces the 'wildcard' keyword in the codebase
with 'stream_wildcard' at some places for better readability, as
we plan to introduce 'topic_wildcards' as a part of the
'@topic mention' project.
Currently, 'wildcards = ["all", "everyone", "stream"]', which are
aliases to mention everyone in the stream, hence the set is better
renamed as 'stream_wildcards'.
Eventually, we will have:
'stream_wildcard' as an alias to mention everyone in the stream.
'topic_wildcard' as an alias to mention everyone in the topic.
'wildcard' refers to 'stream_wildcard' and 'topic_wildcard' as a whole.
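In code form, the rename amounts to something like:

    # Aliases that mention everyone in the stream; a corresponding
    # topic_wildcards set will arrive with the '@topic mention' project.
    stream_wildcards = frozenset(["all", "everyone", "stream"])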
The 'get_gcm_alert' and 'get_apns_alert_subtitle' functions
don't include the case when the trigger is
'NotificationTriggers.FOLLOWED_TOPIC_WILDCARD_MENTION'.
This commit updates the functions to include
'NotificationTriggers.FOLLOWED_TOPIC_WILDCARD_MENTION'.