These metadata are essentially all publicly available anyway, and
making uploading them unconditional will simplify some things.
The documentation is not quite accurate in that it claims the server
will upload some metadata that is not actually uploaded yet (but will
be soon). This seems harmless.
Currently, the sender names for outgoing emails sent by Zulip
are hardcoded. They should be configurable for self-hosted systems.
This commit makes the 'Zulip' part a variable in the following
email sender names: 'Zulip Account Security', 'Zulip Digest',
and 'Zulip Notifications' by introducing a settings variable
'SERVICE_NAME' with a default value of f"{EXTERNAL_HOST} Zulip".
Fixes: #23857
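A hypothetical sketch of how such a setting might be consumed (the helper below is illustrative, not Zulip's actual code):
```python
from django.conf import settings

def build_sender_name(suffix: str) -> str:
    # With SERVICE_NAME = "Acme Chat", this yields "Acme Chat Account Security"
    # instead of a hardcoded "Zulip Account Security".
    service_name = getattr(settings, "SERVICE_NAME", "Zulip")
    return f"{service_name} {suffix}"
```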
This prep commit replaces `_` with `ignored` to represent
an unused variable.
In later commits, we are going to use `_` for translation,
which leads to a lint error.
This commit updates get_fake_email_domain to accept realm.host as
an argument instead of the Realm object, since we only use realm.host
to get the fake email domain.
This is a preparatory commit for the limited guest feature as we
would be sending the fake email of the message sender in the message
event object to a guest user who cannot access the sender, and
there we would need to compute the fake email.
549dd8a4c4 changed the regex that we build to contain whitespace for
readability, and strip that back out before returning it.
Unfortunately, this also serves to strip out whitespace in the source
linkifier, causing it to not match expected strings.
Revert 549dd8a4c4.
Fixes: #27854.
Previously, we weren't able to mute cross-realm bots. This was
because, for muting users, we access only those profiles which are
in the realm, excluding the cross-realm system bots.
This is fixed by replacing the access_user_by_id method with a new
method access_user_by_id_including_cross_realm for this specific test.
Fixes #27823.
Earlier, for push notifications having LaTeX math
like "$$1 \oplus 0 = 1$$", the notification had the math
included multiple times.
This commit fixes the incorrect behavior by replacing
the KaTeX with the raw LaTeX source.
Fixes part of #25289.
This commit refactors the current hotspot subsystem to use a more
robust dataclass `Hotspot` defined in `lib/hotspots.py`. This fixes
mypy errors and makes the code more readable.
This commit introduces non-intro hotspots.
They differ from intro hotspots in the
following ways:
* All the non-intro hotspots are sent at once, instead of
sending them one by one like intro hotspots.
* They only activate when a specific event occurs,
unlike intro hotspots, which activate after the
previous hotspot is read.
Now, the topic wildcard mention follows the following
rules:
* If the topic has fewer than 15 participants, anyone
can use @topic mentions.
* For more than 15, the org setting 'wildcard_mention_policy'
determines who can use @topic mentions.
Earlier, topic wildcard mentions followed the same restriction
as stream wildcard mentions, which was incorrect.
Fixes part of #27700.
This makes it possible for a self-hosted realm administrator to
directly access a logged-in page on the push notifications bouncer
service, enabling billing, support contacts, and other administrative
details for enterprise customers to be managed without manual setup.
has_billing_access already has the is_realm_owner check:
```python
@property
def has_billing_access(self) -> bool:
    return self.is_realm_owner or self.is_billing_admin
```
We did not remove the objects for deactivated streams from
the subscriptions field in apply_event. We need to do this because
we do not send "subscription/remove" events to subscribers
when deactivating streams.
Guests might lose access to deactivated users if the user
is not involved in any DM with the guest. This commit adds
code to send "realm_user/remove" events for such cases.
We now send user creation events to recipient users
when sending DMs if recipients gain access to either
sender or other participating users in the DM.
This commit adds code to send "realm_user/remove" event
when a guest user loses access to a user due to the user
being unsubscribed from one or more streams.
This commit adds code to send user creation events to
guests who gain access to new subscribers and to the
new guest subscribers who gain access to existing
stream subscribers.
The presence and user status update events are only sent to accessible
users, i.e. guests do not receive presence and user status updates for
users they cannot access.
This commit adds code to make sure that update events for changing
a user's role, email, etc. are not sent to guests who cannot access
the modified user.
This commit adds code to update access_user_by_id to raise
error if guest tries to access an inaccessible user.
One notable behavioral change due to this is that we do
not allow a guest to mute or unmute a deactivated user if
that user was not involved in DMs.
This reduces the giant load spike at 5 minutes past the hour, when all
remote servers currently attempt to submit their records.
We do not wish to slew over a full hour, because we want to ensure
that we do not hold the lock when the next hour's analytics runs. It
is also not necessary to have that much variation; 10 minutes is
picked as an arbitrary "long enough" time to spread requests over.
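A minimal sketch of one way to spread submissions over a 10-minute window with a stable per-server offset (the function below is illustrative, not the actual implementation):
```python
import hashlib

def submission_offset_seconds(server_uuid: str, window_minutes: int = 10) -> int:
    # Hash the server's identifier so each server gets a stable offset
    # within the window, spreading the hourly load across servers.
    digest = hashlib.sha256(server_uuid.encode()).hexdigest()
    return int(digest, 16) % (window_minutes * 60)
```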
Earlier, for emails having LaTeX math like
"$$d^* = +\infty$$", the bad rendering led to the math
being included multiple times in the email body.
This was due to displaying KaTeX HTML without the CSS.
This commit fixes the incorrect behavior by replacing
the KaTeX with the raw LaTeX source.
Fixes part of #25289.
This is a useful helper using the same API as
send_analytics_to_push_bouncer(), but uploading only realms info. This
is useful to upload realms info without the risk of taking a long time
to process the request due to too much of the *Count analytics data.
These were written before the draft endpoints were converted to use
@typed_endpoint and pydantic-based DraftData(BaseModel) for param
validation. Update them to avoid the confusion of talking about dicts
and dict_validator functions when those are no longer a thing.
This reverts commit 091e2f177b.
This version of python_to_js_linkifier fails for at least some real
linkifiers. We'll likely re-introduce this after a bit more debugging.
This makes it possible to send notifications to more than one app ID
from the same server: for example, the main Zulip mobile app and the
new Flutter-based app, which has a separate app ID for use through its
beta period so that it can be installed alongside the existing app.
This commit adds code to send stream deletion events when
unsubscribing non-admin users from private streams and
when unsubscribing guests from public streams since
non-admins cannot access unsubscribed private streams
and guests cannot access unsubscribed public streams.
It was discovered by the Zulip development team that active users who
had previously been subscribed to a stream incorrectly continued being
able to use the Zulip API to access metadata for that stream. As a
result, users who had been removed from a stream, but still had an
account in the organization, could still view metadata for that
stream (including the stream name, description, settings, and an email
address used to send emails into the stream via the incoming email
integration). This potentially allowed users to see changes to a
stream’s metadata after they had lost access to the stream.
This bug was present in all Zulip releases prior to today's Zulip
Server 7.5.
This commit removes the "email_address" field from Subscription objects;
we will instead add a new endpoint in the next commit to get the email
address for a stream with a proper access check.
This change also fixes the bug where we would include the email address
for an unsubscribed private stream even when the user did not have
permission to send messages to the stream, and having the email allowed
the unsubscribed user to send messages to the stream.
Note that the unsubscribed user can still send messages to the stream
if they had noted down the email before being unsubscribed
and the stream token is not changed after unsubscribing the user.
Since the server-side implementation no longer uses look-ahead
or (more importantly) look-behind, it is possible to implement it
exactly in JavaScript. This removes a common class of mismatches
which would prevent local echo.
This requires reworking the topic linking algorithm to match the
server's as well. The tests and behaviour are adjusted accordingly --
previously, the JS implementation would have linked `#foo` with a
`foo` regex on the linkifier, but the server implementation would not
have.
This cache was only used in one place, which is infrequently
called (only when sending messages, or searching explicitly for a list
of users), and the overhead of maintaining the cache is not worth it
just to avoid the well-indexed lookup of the huddle.
We now pass bogus data for inaccessible users when sending
the users data in "realm_users" field of "register" response
or when using endpoints like "GET /users" to get data of
all the users in realm.
We will add a client capability field in future commits
such that new clients would receive data only for accessible
users and they can form the bogus data by themselves.
This commit adds a new setting for controlling who can access
all users in the realm, which will have "Everyone" and
"Members only" options.
Fixes part of #10970.
This is a CountStat for tracking how many mobile notifications the
server requested.
1. On a self-hosted server, that means requesting from the push bouncer.
2. On a server that's its own push bouncer, that's just the number
directly sent.
This number has room for inaccuracy due to incrementing by the number of
user devices on a self-hosted server, as it doesn't account for errors
that may occur in the GCM/APNs low-level sending codepaths on the bouncer.
Also tests that a server that's its own push bouncer correctly
increments its mobile_pushes_sent::day CountStat, by basing it on the
values returned from the send_apple/android_push_notification functions
which tell us the actual number of successfully sent notifications.
Since the return values of send_..._push_notification are now
used in those codepaths, we need to tweak our mocks in some unrelated
tests to set up some return value to avoid errors.
Rename the existing 'wildcard_mentioned' flag to
'stream_wildcard_mentioned'.
The 'wildcard_mentioned' flag is deprecated and exists for
backwards compatibility.
We have two separate flags for stream and topic wildcard mentions,
i.e., 'stream_wildcard_mentioned' and 'topic_wildcard_mentioned',
respectively.
* stream wildcard mentions: `@all`, `@everyone`, and `@stream`
* topic wildcard mentions: `@topic`
The `wildcard_mentioned` flag is included in the events and
API response if either `stream_wildcard_mentioned` or
`topic_wildcard_mentioned` is set.
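In other words, the deprecated flag is derived from the two new flags, roughly:
```python
def legacy_wildcard_mentioned(
    stream_wildcard_mentioned: bool, topic_wildcard_mentioned: bool
) -> bool:
    # Deprecated flag, kept only for backwards compatibility with older clients.
    return stream_wildcard_mentioned or topic_wildcard_mentioned
```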
Add the new model for recording basic information about Realms on the
remote server, to go with the other analytics data. Also adds necessary changes
to the bouncer endpoint and the send_analytics_to_push_bouncer()
function to submit such Realm information.
Previously, when a deactivated user was mentioned, they weren't
rendered as a pill. This is because the dataset for validating mentions
only included active users, which is fixed by removing that filter.
To allow only silent mentions of them, an extra is_active property
is added to the FullNameInfo class, populated from the query,
which tells whether the user is deactivated. This is used to convert any
mentions of them to silent mentions in the backend Markdown.
Fixes #26857.
This commit updates format_user_row to return a TypedDict.
This commit is a prep commit for the feature of restricting user
access, so that the code is easy to read and understand when
we add that feature.
This commit updates user_profile_to_user_row to return a TypedDict
and also updates the return type of get_realm_user_dicts to be a
TypedDict.
This commit is a prep commit for the feature of restricting user
access, so that the code is easy to read and understand when
we add that feature.
This is a prep commit for adding the feature of restricting
user access to guests, such that we can keep the code
easy to read and understand when that feature is added.
Earlier, email message notifications included prior messages sent
to the same topic for context. This is more confusing than helpful
when the user is likely to have already received notifications
for all the prior messages in the conversation (or read them
in the Zulip UI).
Now, we include prior context only when the user is mentioned via a
personal, group, stream, or topic wildcard mention.
Fixes #27479.
The event for stream typing notifications is no longer sent
to the long_term_idle subscribers of the stream.
This helps reduce Tornado's work of parsing super-long
JSON-encoded lists of user IDs in large streams. Now the lists
are shorter.
This will be used in the gear menu to inform admins of their
sponsorship application status.
This includes some additional tweaks for when to show
billing and plans to users.
- Replaces the "Via Markdown" tab with "Via drag-and-drop", and
modifies the instructions to explain that you can drag and drop
anywhere in the app, whether or not the compose box is open.
- Adds "Via paste" tab for the copy-pasting instructions.
Fixes #26894.
Previously, cross-realm bots were not displayed as mention pills.
This is because the data set for validating mentions considers
only the realm id, which is None in the case of cross-realm bots.
Hence, we add an OR Q object to also check whether the email
belongs to a cross-realm bot, in case the realm id is None.
Fixes #26913.
Earlier, the 'wildcard_mentioned' flag was set for both the
stream and topic wildcard mentions.
Now, the 'topic_wildcard_mentioned' flag is set for topic
wildcard mentions, and the 'wildcard_mentioned' flag is set for
stream wildcard mentions.
We will rename the 'wildcard_mentioned' flag to
'stream_wildcard_mentioned' in a later commit.
This commit renames the two unused and historical bits of the
'flags' bitfield of the 'UserMessage' and 'ArchivedUserMessage'
tables.
* 'summarize_in_home' to 'topic_wildcard_mentioned'
* 'summarize_in_stream' to 'group_mentioned'
The 'group_mentioned' flag doesn't affect the feature,
but completing the work here helps to save future migration
and indexing efforts on the UserMessage table, as we plan to
use this flag in the future for group mentions.
The unused bits may have old data; we'll clear that in
a separate commit.
It creates the 'zerver_usermessage_any_mentioned_message_id'
index concurrently.
We now send "realm_user/update" (and "realm_bot/update" for bots)
events with "is_active" field when deactivating and reactivating
users, including bots.
We would want to use "remove" event for a user losing access
to another user for #10970, so it is better to use "update"
event for deactivation as we only update "is_active" field
in the user objects and the clients still have the data for
deactivated users.
Previously, we used to send "add" event for reactivation along
with complete user objects, but clients should have the data
for deactivated users as well, so an "update" event is enough
like we do when deactivating users.
This commit adds code to pass configuration objects for group
permission settings in register response to clients such that
we do not need to duplicate that data in clients and can avoid
future bugs due to inconsistency.
The "server_supported_permission_settings" field is included
in the response if "realm" is present in "fetch_event_types",
as this is what we do for other server-related fields.
This commit moves constants for system group names to a new
"SystemGroups" class so that we can use these group names
in multiple classes in models.py without worrying about the
order of defining them.
We now pass the complete configuration object for a setting to
access_user_group_for_setting instead of passing the configuration
object's fields as different variables.
This commit renames permissions_configuration variable to
permission_configuration since the object contains config for
a single permission setting and thus permission_configuration
seems like a better name.
Previous behavior:
- Guests did not receive stream creation events for new
web-public streams.
- Guests did not receive peer_add and peer_remove events
for web-public and subscribed public streams.
This commit fixes the behavior to be:
- Guests now receive stream creation events for new
web-public streams.
- Guests now receive peer_add and peer_remove events for
web-public and subscribed public streams.
We use `Realm.default_language` value, which is set by selecting
the 'Organization language', to internationalize the introductory
messages of the initial streams.
Fixes #25729.
In this commit, we add a new dropdown 'Organization language' on
the `/new` and `/realm/register` pages. This dropdown allows setting
the language of the organization during its creation. This allows
messages from Welcome Bot and introductory messages in streams to be
internationalized.
Fixes a part of #25729.
Adds a new onboarding email `onboarding_team_to_zulip` for the user
who created the new Zulip organization.
Co-authored-by: Alya Abbott <alya@zulip.com>
The comment has drifted away from where it should be placed within the
code and also talks about RealmCounts specifically, while we have other
object types that this equally applies to.
The former name is kind of misleading - this function is for the remote
server to send analytics to the push bouncer. Under our usual
terminology, a "remote server" is a self-hosted Zulip server. So data is
sent FROM not TO a remote server.
Documents the procedure to subscribe / unsubscribe a user via their
profile and general stream settings.
Both methods are separated into tabs in the documentation.
Fixes #26902
Originally, this was how the notification emails worked, but that was changed
in 797a7ef97b, with this old behavior
available as an option.
The footer and from address of emails that are sent when this
setting is set to True are confusing, especially when more people
are involved in a stream. Since we have changed the way we send
emails, it should be removed. It's also not widely used.
Fixes#26609.
This commit renames "default" views to "home" views in the setting
labels, keyboard shortcuts list, help documentation and its urls.
This commit does not change variable and class names, the setting
field in the database, API docs, or the changelog.
Fixes part of #27251.
Add an optional `automatic_new_visibility_policy` enum field
in the success response to indicate the new visibility policy
value due to the `automatically_follow_topics_policy` and
`automatically_unmute_topics_in_muted_streams_policy` user settings
during the send message action.
Only present if there is a change in the visibility policy.
- Adds desktop/web instructions.
- Adds #inbox relative link for logged-in users.
- Moves Inbox up in the left sidebar just under "Reading strategies".
- Moves Inbox article content to Markdown include.
- Adds "From the Inbox view" section to "Finding a topic to read",
"Getting started with Zulip", and "Reading strategies".
- Documents Inbox as a new option for the default web app view.
- Removes unused Markdown link.
- Tweaks subheading to better match help center patterns.
- Add Inbox option in "Configure default settings for new users".
- Adds new tabbed section and instructions for marking messages as
read and reading topics via the Inbox view.
Fixes #26903.
Co-authored-by: Alya Abbott <alya@zulip.com>
This is a prep commit that extracts the following two methods
from '/actions/scheduled_messages' to reuse in the next commit.
* extract_stream_id
* extract_direct_message_recipient_ids
The 'to' parameter for 'POST /typing' will follow the same pattern
in the next commit as we currently have for the 'to' parameter in
'POST /scheduled_messages', so we can reuse these functions.
This commit replaces the value `private` with `direct` in the
`message_type` field for the `typing` events sent when a user
starts or stops typing a message.
This commit adds two user settings, named
* `automatically_follow_topics_policy`
* `automatically_unmute_topics_in_muted_streams_policy`
The settings control the user's preference on which topics they
will automatically 'follow' or 'unmute in muted streams'.
The policies offer four options:
1. Topics I participate in
2. Topics I send a message to
3. Topics I start
4. Never (default)
There is no support for configuring the settings through the UI yet.
Earlier, when we used 'self.send_message()' in the backend tests,
the sent message was not marked as read for the sender.
Reason: To set the read flag, we have to check if
'message.sent_by_human()'. It returns False because the
'sending_client' for tests is "test suite" and the 'sent_by_human'
function doesn't include the "test suite" client name as a human client.
This commit adds "test suite" to that list.
Also fixes a bug in when apply_unread_message_event was called, which
was revealed by this change.
Updates the testing for draft event schemas to be fully checked by
`zerver/tests/test_events.py` and `tools/check-schema`.
Also, corrects the type for the timestamp field in Draft objects
in the OpenAPI documentation.
Updates the testing for scheduled message event schemas to be fully
checked by `zerver/tests/test_events.py` and `tools/check-schema`.
Adds the missing 'failed' field to the scheduled message events
in `web/tests/lib/events.js` as well.
We add a `Content-Disposition: inline` header to commonly supported
video MIME types so that when we `Open` them in the lightbox, they
play in a new tab.
This will require a follow-up database migration to apply to
previously uploaded videos.
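A hedged sketch of the header decision (the MIME-type list and helper name are illustrative, not the exact Zulip code):
```python
INLINE_VIDEO_MIME_TYPES = {"video/mp4", "video/webm"}  # illustrative subset

def content_disposition_for(mime_type: str, filename: str) -> str:
    # Videos are served inline so the browser can play them in a new tab;
    # everything else keeps the safer attachment behavior.
    if mime_type in INLINE_VIDEO_MIME_TYPES:
        return "inline"
    return f'attachment; filename="{filename}"'
```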
This excludes the legacy webhooks from the
"realm_incoming_webhook_bots" object, as those do not have the same URL
format as modern webhook integrations.
This fixes a regression introduced in
9954db4b59, where the realm's default
language would be ignored for users created via API/LDAP/SAML,
resulting in all such users having English as their default language.
The API/LDAP/SAML account creation code paths don't have a request,
and thus cannot pull default language from the user's browser.
We have the `realm.default_language` field intended for this use case,
but it was not being passed through the system.
Rather than pass `realm.default_language` through from each caller, we
make the low-level user creation code set this field, as that seems
more robust to the creation of future callers.
Making request a mandatory kwarg avoids confusion about the meaning of
parameters, especially with `request` acquiring the ability to be None
in the next commit.
This reverts b8581e2895. The mobile
client on Android parses this field using:
```kotlin
timeMs = data.require("time").parseLong("time") * 1000
```
This throws an error if the value is not a `long` (i.e. an integer),
resulting in dropped notifications on Android from servers which had
deployed b8581e2895.
Switch back to sending an integer, but keep the behaviour from
fd6091ad17 where we send the timestamp in the payload of both
Android and Apple push notifications.
Rather than fetch all UserMessage rows for all streams, and subtract
those out in Python-space from the list of all Message rows the user
may have received -- do this via a "NOT EXISTS" subquery. This is
much better indexed (performing in fractions of milliseconds rather
than hundreds), and also consumes much less memory.
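The shape of the query, as a hedged Django ORM sketch (model names follow Zulip's schema, but this is illustrative rather than the exact code):
```python
from django.db.models import Exists, OuterRef

from zerver.models import Message, UserMessage

def stream_messages_without_usermessage(user_profile, recipient_ids):
    has_user_message = UserMessage.objects.filter(
        user_profile=user_profile,
        message_id=OuterRef("id"),
    )
    # "NOT EXISTS" pushes the work into a well-indexed subquery instead of
    # subtracting huge ID sets in Python.
    return Message.objects.filter(recipient_id__in=recipient_ids).filter(
        ~Exists(has_user_message)
    )
```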
This kind of payload that's loaded from JSON in the body of the request
is not only used for webhooks, but also in the push bouncer, and may get
used elsewhere too - so a general name is better.
Earlier, 'is_row_muted' returned 'true' if the message was in
a muted stream or muted topic.
If the message is in an unmuted or followed topic in a muted
stream, such topics should be treated as not muted topics
in an unmuted stream.
This commit fixes the incorrect behavior.
Now, for wildcard mentions, 'unread_msgs.mentions' excludes
the IDs in muted streams only if the message is in a default or
muted topic.
Also, 'unread_msgs.count' takes into account the unreads in unmuted
or followed topics in muted streams too.
Documents that this bug was fixed in the API changelog.
Update 'get_muted_stream_ids' to return a set of IDs
instead of a list.
This will help to avoid linear time search operations later
while using 'if stream_id in muted_streams_ids'.
This prep commit renames the 'build_topic_mute_checker' function
to 'build_get_topic_visibility_policy' and updates it to support
all the visibility policies.
The function prefetches the visibility policies the user has
configured for various topics and prepares a dict named
'topic_to_visibility_policy' to be used later on.
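A simplified sketch of the prefetch-into-a-dict pattern (field and constant names reflect my reading of the UserTopic model and may differ slightly from the actual code):
```python
from zerver.models import UserTopic

def build_get_topic_visibility_policy(user_profile):
    # Prefetch all of the user's per-topic policies in one query.
    topic_to_visibility_policy = {
        (stream_id, topic_name.lower()): visibility_policy
        for stream_id, topic_name, visibility_policy in UserTopic.objects.filter(
            user_profile=user_profile
        ).values_list("stream_id", "topic_name", "visibility_policy")
    }

    def get_topic_visibility_policy(stream_id: int, topic_name: str) -> int:
        return topic_to_visibility_policy.get(
            (stream_id, topic_name.lower()), UserTopic.VisibilityPolicy.INHERIT
        )

    return get_topic_visibility_policy
```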
These queries benefit from the increased specificity of using the
realm / recipient / sender indexes. The argument from 11a1cb9630
does not apply in these cases, since there are only 2 usermessage rows
for each matching message row for DMs, and a few more than that for
huddles.
This query has two halves: messages sent by the user, and messages
received by the user. The former uses the already-specific
usermessage privatemessage flag index; the latter relies on the
recipient index on messages.
Add the realm_id to the latter half, so that the recipient_id is
paired with the realm_id.
Clarifies that the `all` field in the `op: "add"` event is only
relevant for the `"read"` message flag, and that it will be false
for all other specified flags in these events.
Deprecates the `all` field in the `op: "remove"` event and documents
that it is false for all specified flags.
Updates the deprecated `operation` field description and makes
a few other small revisions to the event text for clarity and
accuracy.
This commit adds a `jitsi_server_url` field to the Realm model, which
will be used to save the URL of the custom Jitsi Meet server. In
the database, `None` will encode the server-level default. We can't
readily use `None` in the API, as it could be confused with "field not
sent". Therefore, we will use the string "default" for this purpose.
We have also introduced `server_jitsi_server_url` in the `/register`
API. This will be used to display the server's default Jitsi server
URL in the settings UI.
The existing `jitsi_server_url` will now be calculated as
`realm_jitsi_server_url || server_jitsi_server_url`.
Fixes a part of #17914.
Co-authored-by: Gaurav Pandey <gauravguitarrocks@gmail.com>
The unique index on `(user_id, message_id)` on the
`zerver_usermessage` table is rather specific, and even the PostgreSQL
extended statistics are not enough for it to realize there is a
correlation between the `realm_id` in the message table and the
`user_id` in the usermessage table. This means that adding the
`realm_id` limit when there is a join to `zerver_usermessage` flips
the query plan from a nested loop of unique usermessage index-only
scan, with an index scan of the messages pkey -- to a parallel hash
join of the messages limit with an index scan of just the user_id limit
on usermessages. It thinks this is necessary because it thinks that
the `realm_id` limit may remove a large number of messages from the
usermessage set -- which is totally untrue.
Remove the `realm_id` limit if we have a usermessage join.
This endpoint verifies that the services that Zulip needs to function
are running, and Django can talk to them. It is designed to be used
as a readiness probe[^1] for Zulip, either by Kubernetes, or some other
reverse-proxy load-balancer in front of Zulip. Because of this, it
limits access to only localhost and the IP addresses of configured
reverse proxies.
Tests are limited because we cannot stop running services (which would
impact other concurrent tests) and there would be extremely limited
utility in mocking the very specific methods we're calling to raise
the exceptions that we're looking for.
[^1]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
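A hedged sketch of the general shape of such a readiness view (this is not Zulip's actual implementation; only the PostgreSQL probe is shown, with the others elided):
```python
from django.db import connection
from django.http import HttpRequest, JsonResponse

def health(request: HttpRequest) -> JsonResponse:
    try:
        with connection.cursor() as cursor:
            cursor.execute("SELECT 1")  # Is PostgreSQL reachable?
    except Exception:
        return JsonResponse({"status": "error", "failed": "postgresql"}, status=500)
    # Analogous probes would cover RabbitMQ, Redis, and memcached.
    return JsonResponse({"status": "ok"})
```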
The `expected` flag was incredibly confusing, as you
couldn't tell from the calling code what you were
actually expecting to happen.
I avoid the context manager idiom in order to force
the callers to create simple helper functions, and
I de-duplicate some code in some places.
I also force the caller to explicitly soft-deactivate
the user with one simple line of code, so that the
person reading the test doesn't have to research
the side effects of the helper. (And I make it
very easy for new authors to follow the practice
going forward.)
This is also somewhat of a prep commit to avoid
the obfuscated use of refresh_from_db.
The get_user function is poorly named, but I don't want to
sweep the entire codebase yet.
It's also nice to have a test wrapper for little experiments
like profiling tests or hunting down calls to refresh_from_db.
It's possible that we would also just change the new wrapper
to more directly call Django. The `get_user` function isn't
used in a ton of real-world places, so we might want the test
code to just bypass the cache.
I add a bunch of cute helper methods to make
the test a bit more readable.
And then I make sure to get clean objects,
which precludes the need for our callback
functions to refresh the user objects.
And finally I make sure that our validation
functions don't cause any round trips (assuming
we have fetched objects using a standard
Zulip helper, which example_user ensures.)
In feature levels 153 and 154, a new value of "partially_completed"
for `result` in a success response (HTTP status code 200) was added for two
endpoints that process messages in batches: /api/delete-topic and
/api/mark-all-as-read.
Prior to these changes, `result` was either "success" or "error" for
all responses, which was a useful API invariant to have for clients.
So, here we remove "partially_completed" as a potential value for
"result" in a response. And instead, for the two endpoints noted
above, we return a boolean field "complete" to indicate if the
response successfully deleted/marked as read all the targeted
messages (complete: true) or if only some of the targeted messages
were processed (complete: false).
The "code" field for an error string that was also returned as part
of a partially completed response is removed in these changes as
well.
The web app does not currently use the /api/mark-all-as-read
endpoint, but it does use the /api/delete-topic endpoint, so these
changes update that to check the `complete` boolean instead of the
string value for `result`.
This adds support for syncing user role via the newly added "role"
attribute, which can be set to either of
['owner', 'administrator', 'moderator', 'member', 'guest'].
Removes durable=True from the atomic decorator of do_change_user_role,
as django-scim2 runs PATCH operations in an atomic block.
Since the cache is flushed when the cutoff or realm changes, the
maximum size of the cache should cap out at the number of streams in
the realm. Raise the max cache size, now that this will not simply
lead to useless cache space for smaller servers.
There is now no longer any reason to have the scheduled_email
enqueuing wait until all of the users' contexts have been generated.
Switch to returning the contexts as an iterator, and send them as we
compute them.
The query plan for fetching recent messages from the arbitrary set of
streams formed by the intersection of 30 random users can be quite
bad, and can descend into a sequential scan on `zerver_recipient`.
Worse, this work of pulling recent messages out is redone if the
stream appears in the next batch of 30 users.
Instead, pull the recent messages for a stream on a one-by-one basis,
but cache them in an in-memory cache. Since digests are enqueued in
30-user batches but still one-realm-at-a-time, work will be saved both
in terms of faster query plans whose results can also be reused across
batches.
This requires that we pull the stream-id to stream-name mapping for
_all_ streams in the realm at once, but that is well-indexed and
unlikely to cause performance issues -- in fact, it may be faster
than pulling a random subset of the streams in the realm.
The type annotation for functools.partial uses unchecked Any for all
the function parameters (both early and late). returns.curry.partial
uses a mypy plugin to check the parameters safely.
https://returns.readthedocs.io/en/latest/pages/curry.html
Signed-off-by: Anders Kaseorg <anders@zulip.com>
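A small usage sketch of the difference:
```python
from returns.curry import partial

def greet(greeting: str, name: str) -> str:
    return f"{greeting}, {name}!"

hello = partial(greet, "Hello")
print(hello("Zulip"))  # "Hello, Zulip!"
# hello(42) is flagged by mypy via the returns plugin, whereas
# functools.partial would not type-check the bound arguments
# against greet's signature at all.
```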
Matching the topic exactly, as opposed to case-insensitively, is not a
common operation, and one that we want to make difficult to do
accidentally. Inline the single use case of it.
This algorithm existed in multiple places, with different queries.
Since we only access properties in the UserMessage table, we
standardize on the much simpler and faster Index Only Scan, rather
than a merge join.
When searching for links inside a topic name, the question mark (?)
was used to split the topic. If a URL had a query after the URL
(e.g., "?foo=bar"), then the query was trimmed from the URL.
Removing the question mark from `basic_link_splitter` is sufficient
to fix this issue. The `get_web_link_regex` function then removes
the trailing punctuation if any, including literal question marks.
Fixes #26368.
When there was no space right after `/todo` but there was content on a
new line, the message would be rendered plainly, not as a todo widget.
This was because we split on only the space character to then check if
the first token was a valid widget.
Now we split on both spaces and newlines to extract the widget name,
irrespective of whether it is followed by a space or a newline. This
results in the message being rendered as a todo widget as expected.
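The parsing change, roughly (a hedged sketch, not the exact widget code):
```python
def extract_widget_name(content: str) -> str:
    # str.split() with no separator splits on any run of whitespace,
    # so both "/todo buy milk" and "/todo\nbuy milk" yield "/todo".
    tokens = content.split(maxsplit=1)
    return tokens[0] if tokens else ""

assert extract_widget_name("/todo\nbuy milk") == "/todo"
```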
Rename existing shortened references to demo organizations, like
`is_demo_org` or `demo-org-warning`, that have been used in the
codebase so far and replace them to be like the `models.py`
variable: `Realm.demo_organization_scheduled_deletion_date`.
This ReDoS was not exploitable, as its content is only read from
checked-in files; regardless, simplify it to not backtrack. We also
do not actually have any location which uses leading or trailing
whitespace, so remove those optional bits.
This function is used by almost all webhooks.
To support it, we use the "api_ignore_parameter" flag so that positional
arguments like topic and body that are not intended to be parsed from
the request can be ignored.
This demonstrates the use of BaseModel to replace a check_dict_only
validator.
We also add support for referring to $defs in the OpenAPI tests. In the
future, we can descend down each object instead of mapping them to dict
for more accurate checks.
We want to reject ambiguous type annotations that set ApiParamConfig
inside a Union. If a parameter is Optional and has a default of None, we
prefer Annotated[Optional[T], ...] over Optional[Annotated[T, ...]].
This implements a check that detects Optional[Annotated[T, ...]] and
raises an assertion error if ApiParamConfig is in the annotation. It also
checks if the type annotation contains any ApiParamConfig objects that
are ignored, which can happen if the Annotated type is nested inside
another type like List, Union, etc.
Note that because
param: Annotated[Optional[T], ...] = None
and
param: Optional[Annotated[Optional[T], ...]] = None
are equivalent at runtime prior to Python 3.11, there is no way for us
to distinguish the two. So we cannot detect that at runtime.
See also: https://github.com/python/cpython/issues/90353
The goal of typed_endpoint is to replicate most features supported by
has_request_variables, and to improve on top of it. There are some
unresolved issues that we don't plan to work on currently. For example,
typed_endpoint does not support ignored_parameters_supported for 400
responses, and it does not run validators on path-only arguments.
Unlike has_request_variables, typed_endpoint supports error handling by
processing validation errors from Pydantic.
Most features supported by has_request_variables are supported by
typed_endpoint in various ways.
To define a view function, use a syntax like this, with Annotated when
there is any metadata you want to associate with a parameter. Do note
that parameters that are not keyword-only are ignored when parsing the
request:
```
@typed_endpoint
def view(
request: HttpRequest,
user_profile: UserProfile,
*,
foo: Annotated[int, ApiParamConfig(path_only=True)],
bar: Json[int],
other: Annotated[
Json[int],
ApiParamConfig(
whence="lorem",
documentation_status=INTENTIONALLY_UNDOCUMENTED
)
] = 10,
) -> HttpResponse:
....
```
There are also some shorthands for the commonly used annotated types,
which are encouraged when applicable for better readability and less
typing:
```
WebhookPayload = Annotated[Json[T], ApiParamConfig(argument_type_is_body=True)]
PathOnly = Annotated[T, ApiParamConfig(path_only=True)]
```
Then the view function above can be rewritten as:
```
@typed_endpoint
def view(
request: HttpRequest,
user_profile: UserProfile,
*,
foo: PathOnly[int],
bar: Json[int],
other: Annotated[
Json[int],
ApiParamConfig(
whence="lorem",
documentation_status=INTENTIONALLY_UNDOCUMENTED
)
] = 10,
) -> HttpResponse:
....
```
There are some intentional restrictions:
- A single parameter cannot have more than one ApiParamConfig
- Path-only parameters cannot have default values
- argument_type_is_body is incompatible with whence
- Arguments named "request", "user_profile", "args", "kwargs",
etc. are ignored by typed_endpoint.
- positional-only arguments are not supported by typed_endpoint. Only
keyword-only parameters are expected to be parsed from the request.
- Pydantic's strict mode is always enabled, because we don't want to
coerce input parsed from JSON into other types unnecessarily.
- Using strict mode all the time also means that we should always use
Json[int] instead of int, because it is only possible for the request
to have data of type str, and a type annotation of int will always
reject such data.
typed_endpoint's handling of ignored_parameters_unsupported is mostly
identical to that of has_request_variables.
_default_manager is the same as objects on most of our models. But
when a model class is stored in a variable, the type system doesn’t
know which model the variable is referring to, so it can’t know that
objects even exists (Django doesn’t add it if the user added a custom
manager of a different name). django-stubs used to incorrectly assume
it exists unconditionally, but it no longer does.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
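An illustration of the distinction:
```python
from typing import Type

from django.db.models import Model

def count_rows(model: Type[Model]) -> int:
    # `model.objects` would not type-check here, since a custom manager
    # might have a different name; `_default_manager` always exists.
    return model._default_manager.count()
```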
This is important because the "guests" value isn't one that we'd
expect anyone to pick intentionally, and in particular isn't an
available option for the similar/adjacent "email invitations" setting.
This commit renames the existing setting `Who can invite users to this
organization` to `Who can send email invitations to new users` and
also renames all the variables related to this setting that do not
require a change to the API.
This was done for better code readability as a new setting
`Who can create invite links` will be added in future commits.
This commit does the backend changes required for adding a realm
setting based on groups permission model and does the API changes
required for the new setting `Who can create multiuse invite link`.
This commit adds id_field_name field to GroupPermissionSetting
type which will be used to store the string formed by concatenation
of setting_name and `_id`.
This commit makes the database changes while creating internal_realm
to be done in a single transaction.
This is needed for deferring the foreign key constraints
to the end of transaction.
This commit removes the 'alert' field from the payload for
Android via GCM/FCM.
The alert strings generated do not get used at all and have
not been used since at least 2019. On Android, we construct
the notification UI ourselves in the client, and we ignore
the alert string.
To make creation of demo organizations feel lightweight for users,
we do not want to require an email address at sign-up. Instead an
empty string will be used for the new realm owner's email. Currently,
this is implemented for new demo organizations in the development
environment.
Because the user's email address does not exist, we don't enqueue
any of the welcome emails upon account/realm creation, and we
don't create/send new login emails.
This is a part of #19523.
Co-authored-by: Tim Abbott <tabbott@zulip.com>
Co-authored-by: Lauryn Menard <lauryn@zulip.com>
The 'startinline' option is utilized in the `CodeHilite` instantiation
to indicate that the provided PHP code snippet should be highlighted
even if it doesn't start with the opening tag or marker of the
associated programming language (which will rarely be the case in
Zulip, since one discusses a section of a file much more often than a
whole file).
Fixes: https://chat.zulip.org/#narrow/stream/137-feedback/topic/php.20syntax.20highlighting.20should.20not.20require.20.60.3C.3Fphp.60.
Signed-off-by: Akshat <akshat25iiit@gmail.com>
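For reference, the underlying Pygments option looks like this (an illustration, not the exact Zulip call site):
```python
from pygments import highlight
from pygments.formatters import HtmlFormatter
from pygments.lexers import PhpLexer

snippet = 'echo "hello world";'
# Without startinline=True, PhpLexer only highlights code after "<?php".
html = highlight(snippet, PhpLexer(startinline=True), HtmlFormatter())
```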
This commit updates the 'get_apns_alert_subtitle' function to
return a common subtitle, i.e., "{full_name} mentioned everyone:"
for wildcard mentions.
The triggers for the stream or topic wildcard mentions include:
* NotificationTriggers.TOPIC_WILDCARD_MENTION_IN_FOLLOWED_TOPIC
* NotificationTriggers.STREAM_WILDCARD_MENTION_IN_FOLLOWED_TOPIC
* NotificationTriggers.TOPIC_WILDCARD_MENTION
* NotificationTriggers.STREAM_WILDCARD_MENTION
We now send stream creation and stream deletion events on
changing a user's role because a user can gain or lose
access to some streams on changing their role.
This commit extracts the code which queries the required streams
to a new function "get_user_streams". The new function returns
a list of "Stream" objects rather than dictionaries, and the
do_get_streams function then converts it into a list of dictionaries.
This change is important because we will use the new function
in a further commit where we want a list of "Stream" objects and
not a list of dictionaries.
There was a bug in the apply_event code where only streams that
are not private were added to the "never_subscribed" data after
a stream creation event. Instead, a stream should be added to the
"never_subscribed" data irrespective of the permission policy of
the stream, as we already send stream creation events only to
those users who can access the stream. Due to this bug,
private streams were not being added to the "never_subscribed"
data in apply_event even for admins. This commit fixes that
and also makes sure the "never_subscribed" list is sorted,
which was not done before and was also a bug.
The bugs mentioned above were unnoticed as the tests did not
cover these cases and this commit also adds tests for those
cases.
The "streams" field in "/register" response did not include web-public
streams for non-admin users but the data for those are eventually
included in the subscriptions data sent using "subscriptions",
"unsubscribed" and "never_subscribed" fields.
This commit adds code to include the web-public streams in the "streams"
field as well, since everyone can access those, making the "streams"
data complete.
The name and docstring were just wrong: having a UserMessage row isn't
sufficient for having message access, and is actually only relevant in a
private stream with private history. The function is only used in a
single place anyway, in bulk_access_messages.
The comment mentioning this function in handle_remove_push_notification
can be tweaked to just not mention any function specifically and just
say why we're not checking message access.
Users who used to be subscribed to a private stream and have been
removed from it since retain the ability to edit messages/topics, and
delete messages that they used to have access to, if other relevant
organization permissions allow these actions. For example, a user may be
able to edit or delete their old messages they posted in such a private
stream. An administrator will be able to delete old messages (that they
had access to) from the private stream.
We fix this by fixing the logic in has_message_access (which lies at the
core of our message access checks - access_message() and
bulk_access_messages())
to not rely on only a UserMessage row for checking access but also
verify stream type and subscription status.
**Background**
User groups are expected to comply with the DAG constraint for the
many-to-many inter-group membership. The check for this constraint has
to be performed recursively so that we can find all direct and indirect
subgroups of the user group to be added.
This kind of check is vulnerable to phantom reads, which are possible at
the default read committed isolation level, because we cannot guarantee
that the check is still valid when we are adding the subgroups to the
user group.
**Solution**
To avoid having another transaction concurrently update one of the
to-be-added subgroups after the recursive check is done, and before the
subgroup is added, we use SELECT FOR UPDATE to lock the user group rows.
The lock needs to be acquired before a group membership change is about
to occur, and before any check has been conducted.
Suppose that we are adding subgroup B to supergroup A, the locking protocol
is specified as follows:
1. Acquire a lock for B and all its direct and indirect subgroups.
2. Acquire a lock for A.
For the removal of user groups, we acquire a lock for the user group to
be removed with all its direct and indirect subgroups. This is the special
case A=B, which is still compliant with the protocol.
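A simplified sketch of that locking order in Django terms (the recursive-subgroup helper is assumed, and names may differ from the actual code):
```python
from django.db import transaction

from zerver.models import UserGroup

def lock_for_subgroup_addition(supergroup_id: int, subgroup_id: int) -> None:
    with transaction.atomic():
        # 1. Lock B and all of its direct and indirect subgroups.
        subgroup_ids = get_recursive_subgroup_ids(subgroup_id)  # assumed helper
        list(UserGroup.objects.select_for_update().filter(id__in=subgroup_ids))
        # 2. Lock A.
        UserGroup.objects.select_for_update().get(id=supergroup_id)
        # ... then run the DAG check and apply the membership change
        # while the locks are held.
```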
**Error handling**
We currently rely on Postgres' deadlock detection to abort transactions
and show an error for the users. In the future, we might need some
recovery mechanism or at least better error handling.
**Notes**
An important note is that we need to reuse the recursive CTE query that
finds the direct and indirect subgroups when applying the lock on the
rows. And the lock needs to be acquired the same way for the addition and
removal of direct subgroups.
User membership change (as opposed to user group membership) is not
affected. Read-only queries aren't either. The locks only protect
critical regions where the user group dependency graph might violate
the DAG constraint, where users are not participating.
**Testing**
We implement a transaction test case targeting some typical scenarios
when an internal server error is expected to happen (this means that the
user group view makes the correct decision to abort the transaction when
something goes wrong with locks).
To achieve this, we add a development view intended only for unit tests.
It has a global BARRIER that can be shared across threads, so that we
can synchronize them to consistently reproduce certain potential race
conditions prevented by the database locks.
The transaction test case launches pairs of threads initiating possibly
conflicting requests at the same time. The tests are set up such that exactly N
of them are expected to succeed, with the rest failing with a certain error
message (though we don't know in advance which ones).
**Security notes**
get_recursive_subgroups_for_groups will no longer fetch user groups from
other realms. As a result, trying to add/remove a subgroup from another
realm results in a UserGroup not found error response.
We also implement subgroup-specific checks in has_user_group_access to
keep permission management in a single place. Do note that the API
currently doesn't have a way to violate that check, because we are only
checking the realm ID now.
We want to make the callers be more explicit about the use of the
user group being accessed, so that the database lock implemented later
can benefit from that visibility.
Adds typing notification constants to the response given by
`POST /register`. Until now, these were hardcoded by clients
based on the documentation for implementing typing notifications
in the main endpoint description for `api/set-typing-status`.
This change also reflects updating the web-app frontend code
to use the new constants from the register response.
Co-authored-by: Samuel Kabuya <samuel.mwangikabuya@kibo.school>
Co-authored-by: Wilhelmina Asante <wilhelmina.asante@kibo.school>
Fixes #11767.
Previously multi-character emoji sequences weren't matched in the
emoji regex, so we'd convert the characters to separate images,
breaking the intended display.
This change allows us to match the full emoji sequence, and
therefore show the correct image.
We have modified the code to directly fetch realm from Message
object instead of "sender" field and thus we no longer need to
fetch "sender__realm" using select_related.
We do not want to access realm from "sender" field so that
we do not need to pass "sender__realm" argument to
select_related call when querying messages. We can instead
pass realm as argument to wildcard_mention_allowed.
We can directly get the realm object from Message object now
and there is no need to get the realm object from "sender"
field of Message object.
After this change, we would not need to fetch "sender__realm"
field using "select_related" and instead only passing "realm"
to select_related when querying Message objects would be enough.
This commit also updates a couple of cases to directly access
realm ID from the message object and not message.sender. We have
fetched the sender object already, so accessing realm_id
from message directly or from message.sender should not matter,
but we can be consistent and directly get the realm from the Message
object whenever possible.
We set stream_weekly_traffic field to "null" for Subscription
objects in zephyr mirror realm as we do not need stream traffic
data in zephyr mirror realm. This makes the subscription data
consistent with the streams data.
This commit also updates tests to check never_subscribed data for
the zephyr mirror realm.
Instead of having a "realm.is_zephyr_mirror_realm" check for every
get_streams_traffic call, this commit updates get_streams_traffic to
accept realm as parameter and return "None" for zephyr mirror realm.
The "followup_day2" email template name is not clear or descriptive
about the purpose of the email. Creates a duplicate of those email
template files with the template name "zulip_onboarding_topics".
Because any existing scheduled emails that use the "followup_day2"
templates will need to be updated before the current templates can
be removed, we don't do a simple file rename here.
The "followup_day1" email template name is not clear or descriptive
about the purpose of the email. Creates a duplicate of those email
template files with the template name "account_registered".
Because any existing scheduled emails that use the "followup_day1"
templates will need to be updated before the current templates can
be removed, we don't do a simple file rename here.
This migration applies under the assumption that extra_data_json has
been populated for all existing and coming audit log entries.
- This removes the manual conversions back and forth for extra_data
throughout the codebase including the orjson.loads(), orjson.dumps(),
and str() calls.
- The custom handler used for converting Decimal is removed since
DjangoJSONEncoder handles that for extra_data.
- We remove None-checks for extra_data because it is now no longer
nullable.
- Meanwhile, we want the bouncer to support processing RealmAuditLog entries for
remote servers before and after the JSONField migration on extra_data.
- Since now extra_data should always be a dict for the newer remote
server, which is now migrated, the test cases are updated to create
RealmAuditLog objects by passing a dict for extra_data before
sending over the analytics data. Note that while JSONField allows for
non-dict values, a proper remote server always passes a dict for
extra_data.
- We still test out the legacy extra_data format because not all
remote servers have migrated to use JSONField extra_data.
This verifies that support for extra_data being a string or None has not
been dropped.
Co-authored-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We remove bot's subscriptions for private streams to which the
new owner is not subscribed and keep the ones to which the new
owner is subscribed on changing owner.
This commit also changes the code for sending subscription
remove events to use transaction.on_commit since we call
the function inside a transaction in do_change_bot_owner and
this also requires some changes in tests in test_events.
For topic wildcard mentions, the 'wildcard_mentioned' flag is set
for those user messages having 'user_profile_id' in
'topic_participant_user_ids', i.e. all topic participants.
Earlier, the flag was set if the 'user_profile_id' exists in
'all_topic_wildcard_mention_user_ids'.
'all_topic_wildcard_mention_user_ids' contains the ids of those
users who are topic participants and have enabled notifications
for '@topic' mentions.
The earlier approach was incorrect, as it would set the
'wildcard_mentioned' flag only for those topic participants
who have enabled the notifications for '@topic' mention instead
of setting the flag for all the topic participants.
The bug was introduced in 4c9d26c.
This commit addresses the issue where the topic highlighting
in search results was offset by one character when an
apostrophe was present. The problem stemmed from the disparity
between the HTML escaping generated by the function `func.escape_html`,
which is used to obtain `topic_matches`, and the escaping performed
by the function `django.utils.html.escape` for apostrophes ('):
func.escape_html | django.utils.html.escape
-----------------+--------------------------
&#39;            | &#x27;
To fix this, the SQL query is changed to return the HTML-escaped
topic name generated by the function `func.escape_html`.
Fixes: #25633.
Currently, we are displaying the "Complete the organization profile"
banner immediately after the organization was created. It's important to
strongly encourage orgs to configure their profile, so we should delay
showing the banner if the profile has not been configured after 15 days.
This also allows users to check out Zulip and see how it works before
configuring the organization settings.
Fixes: #24122.
The number of affected objects may be quite high, and they are
selected by `id IN (...)` query, and updated with a giant `CASE`.
This turns out to be quadratic, and can cause large queries to take
hours, in a state where they cannot be terminated, when PostgreSQL >11
tries to JIT the query.
Set a batch_size as a stopgap performance fix before moving to
`.update()` as a real fix.
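In Django ORM terms, the stopgap is just the batch_size argument to bulk_update() (a generic sketch, not the exact call site):
```python
from typing import Iterable, List, Type

from django.db.models import Model

def batched_bulk_update(model: Type[Model], rows: Iterable[Model], fields: List[str]) -> None:
    # batch_size caps how many rows end up in each generated
    # "UPDATE ... SET f = CASE ... WHERE id IN (...)" statement.
    model._default_manager.bulk_update(list(rows), fields, batch_size=10_000)
```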
We have historically cached two types of values
on a per-request basis inside of memory:
* linkifiers
* display recipients
Both of these caches were hand-written, and they
both actually cache values that are also in memcached,
so the per-request cache essentially only saves us
from a few memcached hits.
I think the linkifier per-request cache is a necessary
evil. It's an important part of message rendering, and
it's not super easy to structure the code to just get
a single value up front and pass it down the stack.
I'm not so sure we even need the display recipient
per-request cache any more, as we are generally pretty
smart now about hydrating recipient data in terms of
how the code is organized. But I haven't done thorough
research on that hypothesis.
Fortunately, it's not rocket science to just write
a glorified memoize decorator and tie it into key
places in the code:
* middleware
* tests (e.g. asserting db counts)
* queue processors
That's what I did in this commit.
This commit definitely reduces the amount of code
to maintain. I think it also gets us closer to
possibly phasing out this whole technique, but that
effort is beyond the scope of this PR. We could
add some instrumentation to the decorator to see
how often we get a non-trivial number of saved
round trips to memcached.
Note that when we flush linkifiers, we just use
a big hammer and flush the entire per-request
cache for linkifiers, since there is only ever
one realm in the cache.
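The shape of such a decorator, as a hedged sketch (Zulip's actual helper differs in details):
```python
from functools import wraps
from typing import Any, Callable, Dict

_per_request_cache: Dict[str, Dict[Any, Any]] = {}

def per_request_cached(cache_name: str) -> Callable[[Callable[..., Any]], Callable[..., Any]]:
    def decorator(func: Callable[..., Any]) -> Callable[..., Any]:
        @wraps(func)
        def wrapper(*args: Any) -> Any:
            cache = _per_request_cache.setdefault(cache_name, {})
            if args not in cache:
                cache[args] = func(*args)
            return cache[args]
        return wrapper
    return decorator

def flush_per_request_caches() -> None:
    # Hooked into middleware, tests, and queue processors.
    _per_request_cache.clear()
```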
We want to phase out the use of get_display_recipient
for streams, and this is the last place that I
eliminate it. The next commit will eliminate the
dead code and make mypy types tighter.
This change will make push notifications slightly
slower in some situations, but we avoid all the
complexity of a cache, and this code tends to run
offline.
We could always make this code a bit more efficient
by being a little smarter about what data we fetch
up front. For example, get_apns_alert_title gets
called by a function that already has the stream
name. It's just a bit of a pain to refactor when
you have all the DM codepath mucked up with the
stream codepath.
There's no need for the complexity and extra round
trips to call get_display_recipient in a testing
context.
We also eliminate the unnecessary call to check_string.
This function is poorly named, but that's a sweep
for another day.
The get_display_recipient helper is a clumsy way to get
stream names, and it's not even representative of how
most of our code retrieves stream names.
The new helper also double-checks that the Stream
object has the correct recipient id.
We no longer have to reason about the 12 possible
ways of invoking get_narrow_url. We also avoid
double computation in a couple places.
Finally, we get stricter type checks by just inlining
the calls.
We don't need to call get_display_recipient for
non-stream messages.
I will rename display_recipient in the next commit;
if I were to combine the steps the diff would be too
hard to read.
This commit renames the keyword 'pm' to 'dm' in the
'pm_mention_email_disabled_user_ids' and
'pm_mention_push_disabled_user_ids' attributes of the
'RecipientInfoResult' dataclass.
'pm' and 'dm' are the acronyms for 'private message' and
'direct message' respectively.
It includes 'TODO/compatibility' code to support the old format
fields in the tornado queues during the Zulip server upgrades.
This commit renames the 'PRIVATE_MESSAGE' attribute of the
'NotificationTriggers' class to 'DIRECT_MESSAGE'.
Custom migration to update the existing value in the database.
It includes 'TODO/compatibility' code to support the old
notification trigger value 'private_message' in the
push notification queue during the Zulip server upgrades.
Earlier 'private_message' was one of the possible values for the
'trigger' property of the '[`POST /zulip-outgoing-webhook`]' response;
Update the docs to reflect the change in the above-mentioned trigger
value.