549dd8a4c4 changed the regex that we build to contain whitespace for
readability, and strip that back out before returning it.
Unfortunately, this also serves to strip out whitespace in the source
linkifier, causing it to not match expected strings.
Revert 549dd8a4c4.
Fixes: #27854.
Adds details about the requested organization URL and type to the
registration confirmation email that's sent when creating a new
Zulip organization.
Fixes #25899.
If the request's `Accept:` header signals a preference for serving
images over text, return an image representing the 404/403 instead of
serving a `text/html` response.
Fixes: #23739.
Previously, we weren't able to mute the cross-realm bots. This was
because, when muting users, we accessed only those profiles that are
in the realm, excluding the cross-realm system bots.
This is fixed by replacing the access_user_by_id method with a new
method access_user_by_id_including_cross_realm for this specific test.
Fixes #27823.
Earlier, for push notifications containing LaTeX math
like "$$1 \oplus 0 = 1$$", the notification had the math
included multiple times.
This commit fixes the incorrect behavior by replacing
the KaTeX with the raw LaTeX source.
Fixes part of #25289.
This commit refactors the current hotspot subsystem to use a more
robust dataclass `Hotspot` defined in `lib/hotspots.py`. This fixes
mypy errors as well as makes the code more readable.
This commit introduces non-intro hotspots.
They are a bit different from intro hotspots in the
following ways:
* All the non-intro hotspots are sent at once instead of
sending them one by one like intro hotspots.
* They only activate when a specific event occurs,
unlike intro hotspots, which activate after the
previous hotspot is read.
Now, the topic wildcard mention follows the following
rules:
* If the topic has fewer than 15 participants, anyone
can use @topic mentions.
* Otherwise, the org setting 'wildcard_mention_policy'
determines who can use @topic mentions.
Earlier, topic wildcard mentions followed the same restriction
as stream wildcard mentions, which was incorrect.
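A minimal sketch of the rule above; the helper and parameter names are
illustrative, and it assumes the caller has already evaluated the realm's
'wildcard_mention_policy' for the sender:

```python
# Hypothetical sketch of the topic wildcard mention rule described above.
TOPIC_PARTICIPANT_THRESHOLD = 15

def can_use_topic_wildcard_mention(
    num_topic_participants: int, sender_passes_wildcard_mention_policy: bool
) -> bool:
    # Small topics: anyone can use @topic mentions.
    if num_topic_participants < TOPIC_PARTICIPANT_THRESHOLD:
        return True
    # Larger topics: defer to the realm's 'wildcard_mention_policy'.
    return sender_passes_wildcard_mention_policy
```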
Fixes part of #27700.
This commit updates the backend code to allow changing
can_access_all_users_group setting in the development environment
and also adds a dropdown in the webapp UI which is only shown in
the development environment.
This commit moves a major portion of the 'update_plan'
view to a new shared 'BillingSession.do_update_plan' method.
This refactoring will help in minimizing duplicate code
while supporting both realm and remote_server customers.
This makes it possible for a self-hosted realm administrator to
directly access a logged-in page on the push notifications bouncer
service, enabling billing, support contacts, and other administrative
tasks for enterprise customers to be managed without manual setup.
has_billing_access already has the is_realm_owner check:
    @property
    def has_billing_access(self) -> bool:
        return self.is_realm_owner or self.is_billing_admin
We previously did not allow setting signup_notifications_stream and
notifications_stream settings to private streams that the admin is not
subscribed to, even when admins have access to metadata of all the
streams in the realm and can see them in the dropdown options as well.
This commit fixes it to allow admins to set these settings to private
streams that the admin is not subscribed to.
Previously, the notifications had "commented" as the action word
for every event.
As part of these changes, we extract a shared comment action function
in GitHub Integration that's used for both issue and discussion
comment events.
Instead of adding the assignee to the end of the message body,
we update the message body where the verb is so that the link
formatting at the end of the message is not broken, for example:
"user_a assigned user_b to [issue #XXX title text is here](link)."
This matches the formatting for the issue assigned message body.
Instead of adding the assignee to the end of the message body,
we update the message body where the verb is so that the link
formatting at the end of the message is not broken, for example:
"user_a assigned user_b to [issue #XXX title text is here](link)."
Also updates the issue title in the test fixture so that it tests
that only the first instance of "assigned" or "unassigned" in the
issue title is updated for the assignee text.
Also adds punctuation to the issue title in the test fixture to
test the expected behavior for titles that end in a value from
`string.punctuation`.
We did not remove the objects for deactivated streams from
subscriptions field in apply_event. We need to do this because
we do not send "subscription/remove" events to subscribers
when deactivating streams.
Guests might lose access to deactivated users if the user
is not involved in any DM with the guest. This commit adds
code to send "realm_user/remove" events for such cases.
We now send user creation events to recipient users
when sending DMs if recipients gain access to either
sender or other participating users in the DM.
This commit adds code to send "realm_user/remove" event
when a guest user loses access to a user due to the user
being unsubscribed from one or more streams.
This commit adds code to send user creation events to
guests who gain access to new subscribers and to the
new guest subscribers who gain access to existing
stream subscribers.
The presence and user status update events are only sent to accessible
users, i.e. guests do not receive presence and user status updates for
users they cannot access.
This commit adds code to make sure that update events for changing
a user's role, email, etc. are not sent to guests who cannot access
the modified user.
We do not send the original user data in user creation events
to guests if user access is restricted in the realm, as they would
receive the information about the user if the user is subscribed to
some common streams after account creation.
This commit adds code to update access_user_by_id to raise an
error if a guest tries to access an inaccessible user.
One notable behavioral change due to this is that we do
not allow a guest to mute or unmute a deactivated user if
that user was not involved in DMs.
Pull request comment alerts were previously sent to a topic for an issue,
which resulted in two different topics for the same PR.
Fixes: #26086.
Co-authored-by: Lauryn Menard <lauryn@zulip.com>
Updated the repo name and pull request number/title for the new
pull request commit fixture to be the same as the one used for the
other pull request test fixtures (e.g. pull_request__opened) so
that the TOPIC_PR can be used in the subsequent updates.
Co-authored-by: Lauryn Menard <lauryn@zulip.com>
This may happen if there are multiple servers with the same UUID
submitting data (e.g. if they were cloned after initial creation), or
if there is one server, but `./manage.py clear_analytics_tables` was
used to truncate the analytics tables.
In the case of `clear_analytics_tables`, the data submitted likely has
identical historical values with new remote `id` values; preserving
the originally-submitted contemporaneous data is the best option. For
the case of submissions from multiple servers, there is no completely
sensible outcome, so the best we can do is detect the case and move
on.
Since we have a lock on the RemoteZulipServer, we know that no other
inserts are happening, so counting before and after will return the
true number of rows inserted (which `bulk_create` cannot do in the
face of `ignore_conflicts`[^1]). We compare this to the expected
number of new inserted rows to detect dropped duplicates.
[^1]: See https://code.djangoproject.com/ticket/30138.
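A rough sketch of the counting approach under the lock; the model is passed
in as a parameter here purely to keep the sketch self-contained:

```python
# Hypothetical sketch of detecting dropped duplicates around bulk_create
# with ignore_conflicts; `model` stands in for the analytics model class.
from django.db import transaction

def insert_rows_counting_duplicates(model, server, rows) -> int:
    """Return how many of `rows` were dropped as duplicates.

    Assumes the caller already holds a lock on the RemoteZulipServer row,
    so no concurrent inserts can change these counts under us.
    """
    with transaction.atomic():
        before = model.objects.filter(server=server).count()
        model.objects.bulk_create(rows, ignore_conflicts=True)
        after = model.objects.filter(server=server).count()
    # bulk_create cannot report this itself when ignore_conflicts=True.
    return len(rows) - (after - before)
```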
This reduces the giant load spike at 5 minutes past the hour, when all
remote servers currently attempt to submit their records.
We do not wish to slew over a full hour, because we want to ensure
that we do not hold the lock when the next hour's analytics runs. It
is also not necessary to have that much variation; 10 minutes is
picked as an arbitrary "long enough" time to spread requests over.
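One way the slew could be implemented is a stable per-server offset derived
from the server's UUID; this is only an illustrative sketch, not necessarily
the mechanism actually used:

```python
# Illustrative sketch: derive a stable offset in [0, 600) seconds from the
# server's UUID, so each server submits at a consistent but staggered time
# within the first 10 minutes of the hour.
import hashlib
import uuid

def submission_offset_seconds(server_uuid: uuid.UUID, window_seconds: int = 600) -> int:
    digest = hashlib.sha256(server_uuid.bytes).digest()
    return int.from_bytes(digest[:4], "big") % window_seconds
```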
Earlier, for the emails having latex math like
"$$d^* = +\infty$$", the bad rendering led to the math
being included multiple times in the email body.
This was due to displaying KaTeX HTML without the CSS.
This commit fixes the incorrect behavior by replacing
the KaTeX with the raw LaTeX source.
Fixes part of #25289.
This is a useful helper using the same API as
send_analytics_to_push_bouncer(), but uploading only realms info. This
is useful to upload realms info without the risk of taking a long time
to process the request due to too much of the *Count analytics data.
The original behavior of this setting was to disable LDAP
authentication for any realms not configured to use it. This was an
arbitrary choice, and its only value was to potentially help catch
typos for users who are lazy about testing their configuration.
Since it makes it very inconvenient to potentially host multiple
organizations with different LDAP configurations, remove that
behavior.
This commit adds a new option 'DMs, mentions, and followed topics'
to 'desktop_icon_count_display' setting.
The total unread count of DMs, mentions, and followed topics appears
in desktop sidebar and browser tab when this option is configured.
Some existing options are relabeled and renumbered. We finally have:
* All unread messages
* DMs, mentions, and followed topics
* DMs and mentions
* None
Fixes #27503.
While the server implementation has accepted this value for a few
months as part of building the feature, following topics was not a
fully supported feature of the Zulip server before
3f2ab44f94, just before feature
level 219.
So that's probably the correct level to document as the first feature
level at which we recommend that clients supporting the followed
topics feature process the value.
These new models are incomplete and totally untested, but merging this
will provide valuable scaffolding for doing smaller PRs working on
individual gaps, and reveals a clear set of TODOs/refactoring/model
changes needed to support where we want to end up.
Co-authored-by: Tim Abbott <tabbott@zulip.com>
These were written before the draft endpoints were converted to use
@typed_endpoint and pydantic-based DraftData(BaseModel) for param
validation. Update them to avoid the confusion of talking about dicts
and dict_validator functions when those are no longer a thing.
This reverts commit 091e2f177b.
This version of python_to_js_linkifier fails for at least some real
linkifiers. We'll likely re-introduce this after a bit more debugging.
This makes it possible to send notifications to more than one app ID
from the same server: for example, the main Zulip mobile app and the
new Flutter-based app, which has a separate app ID for use through its
beta period so that it can be installed alongside the existing app.
This commit adds code to send stream deletion events when
unsubscribing non-admin users from private streams and
when unsubscribing guests from public streams since
non-admins cannot access unsubscribed private streams
and guests cannot access unsubscribed public streams.
It was discovered by the Zulip development team that active users who
had previously been subscribed to a stream incorrectly continued being
able to use the Zulip API to access metadata for that stream. As a
result, users who had been removed from a stream, but still had an
account in the organization, could still view metadata for that
stream (including the stream name, description, settings, and an email
address used to send emails into the stream via the incoming email
integration). This potentially allowed users to see changes to a
stream’s metadata after they had lost access to the stream.
This bug was present in all Zulip releases prior to today's Zulip
Server 7.5.
This commit adds a new API endpoint to get a stream's email address,
which is also used by the web app to get the email when a user tries
to open the stream email modal.
The stream email is returned only to the users who have access
to it. Specifically, for private streams, only subscribed users
have access to its email. And for public streams, all non-guest
users and only subscribed guests have access to its email.
All users can access the email of web-public streams.
This commit removes the "email_address" field from Subscription objects;
we will instead add a new endpoint in the next commit to get the email
address for a stream with a proper access check.
This change also fixes the bug where we would include the email address
for an unsubscribed private stream as well, when the user did not have
permission to send messages to the stream, and having the email allowed
the unsubscribed user to send messages to the stream.
Note that the unsubscribed user can still send messages to the stream
if the user had noted down the email before being unsubscribed
and the stream token is not changed after unsubscribing the user.
Since the server-side implementation no longer uses look-ahead
or (more importantly) look-behind, it is possible to implement it
exactly in JavaScript. This removes a common class of mismatches
that would prevent local echo.
This requires reworking the topic linking algorithm, to match the
server's as well. The tests and behaviour are adjusted accordingly --
previously, the JS implementation would have linked `#foo` with a
`foo` regex on the linkifier, but the server implementation would not
have.
This commit adds code to unset is_web_public and is_realm_public fields
on attachments when deactivating a stream as we do not want to allow
spectators to access them after the stream is deactivated.
This commit also adds a comment explaining why we don't use
do_change_stream_permission to set the privacy fields on deactivating
a stream.
Fixes #27634.
We did not unset the is_realm_public field on attachments when unarchiving
streams, but we do unset the is_web_public field. This commit adds code to
unset the is_realm_public field as well, as we make the stream private
while unarchiving it.
This cache was only used in one place, which is infrequently
called (only when sending messages, or searching explicitly for a list
of users) and the overhead of maintaining the cache is not worth
trying to avoid the well-indexed lookup of the huddle.
We now pass bogus data for inaccessible users when sending
the users data in "realm_users" field of "register" response
or when using endpoints like "GET /users" to get data of
all the users in the realm.
We will add a client capability field in future commits
so that new clients receive data only for accessible
users and can construct the bogus data themselves.
This commit adds a new setting for controlling who can access
all users in the realm, which will have "Everyone" and
"Members only" options.
Fixes part of #10970.
This is a CountStat for tracking how many mobile notifications the
server requested.
1. On a self-hosted server, that means requesting from the push bouncer.
2. On a server that's its own push bouncer, that's just the number
directly sent.
This number has room for inaccuracy due to incrementing by the number of
user devices on a self-hosted server, as it doesn't account for errors
that may occur in the GCM/APNs low-level sending codepaths on the bouncer.
Also tests that a server that's its own push bouncer correctly
increments its mobile_pushes_sent::day CountStat, by basing it on the
values returned from the send_apple/android_push_notification functions
which tell us the actual number of successfully sent notifications.
Since the return values of send_..._push_notification are now
used in those codepaths, we need to tweak our mocks in some unrelated
tests to set up some return value to avoid errors.
Rename the existing 'wildcard_mentioned' flag to
'stream_wildcard_mentioned'.
The 'wildcard_mentioned' flag is deprecated and exists for
backwards compatibility.
We have two separate flags for stream and topic wildcard mentions,
i.e., 'stream_wildcard_mentioned' and 'topic_wildcard_mentioned',
respectively.
* stream wildcard mentions: `@all`, `@everyone`, and `@stream`
* topic wildcard mentions: `@topic`
The `wildcard_mentioned` flag is included in the events and
API response if either `stream_wildcard_mentioned` or
`topic_wildcard_mentioned` is set.
In c37871ac3a, we renamed the
two unused and historical bits of the 'flags' bitfield of
the 'UserMessage' table:
* 'summarize_in_home' to 'topic_wildcard_mentioned'
* 'summarize_in_stream' to 'group_mentioned'
This commit clears out the old data for those bits.
Additionally, we are clearing 'force_expand' and 'force_collapse'
unused flags to save future work.
Add the new model for recording basic information about Realms on remote
server, to go with the other analytics data. Also adds necessary changes
to the bouncer endpoint and the send_analytics_to_push_bouncer()
function to submit such Realm information.
Previously, when a deactivated user was mentioned, they weren't
rendered as a pill. This is because the dataset for validating mentions
only included active users, which is fixed by removing that filter.
To allow only silent mentions of them, an extra is_active property
is added to the FullNameInfo class, populated from the query,
which tells whether the user is deactivated. This is used to convert any
mentions of them to silent mentions in the backend Markdown.
Fixes #26857.
This commit updates format_user_row to return a TypedDict.
This commit is a prep commit for the feature of restricting user
access, so that the code is easy to read and understand when
we add that feature.
This commit updates user_profile_to_user_row to return a TypedDict
and also updates the return type of get_realm_user_dicts to be a
TypedDict.
This commit is a prep commit for the feature of restricting user
access, so that the code is easy to read and understand when
we add that feature.
This is a prep commit for adding the feature of restricting
user access to guests such that we can keep the code
easy to read and understand when that feature is added.
We'll need this information in order to properly direct APNs
notifications. Happily, the Zulip server always sends it when
registering an APNs token; and it appears it always has done so
since the commit:
cddee49e7 Add support infrastructure for push notification bouncer service.
back in 2016. So there's no compatibility issue from requiring it.
This missing `REQ` call has meant we just drop this parameter:
even though the remote Zulip server passes it (for all APNs tokens),
we never notice and never store it. Fix that.
We're going to need to use this information, so we shouldn't just
assume a value; the client should tell us the actual value.
Conveniently, the Zulip mobile app does already pass this parameter
and has since forever. So we can just start requiring it, with no
compatibility constraint.
We already always pass this parameter from the mobile client,
so this makes the tests more realistic already. And we'll shortly
be making this parameter required.
Updates the Slack integration page to not describe adding a stream
or topic parameter to the URL query since that's not supported by
the current integration implementation.
Updates the Slack-compatible webhook integration page to have the
extra notes about the integration at the top of the page. Also,
removes the reference to a screenshot of the webhook since there
isn't one.
Earlier, email message notifications included prior messages sent
to the same topic for context. This is more confusing than helpful
when the user is likely to have already received notifications
for all the prior messages in the conversation (or to have read them
in the Zulip UI).
Now, we include prior context only when the user is mentioned via
personal, group, stream or topic wildcard mention.
Fixes #27479.
This commit improves the test to explicitly verify that multiple
messages that were sent in quick succession to a topic are included
in the email body when we have email notifications enabled for a
given stream.
Earlier, the test was only verifying the email subject and the fact
that only one email was sent.
It is important to verify the fact that all the messages sent to a
topic in quick succession should be included in the email body.
The event for stream typing notifications is no longer sent
to the long_term_idle subscribers of the stream.
This helps to reduce Tornado's work of parsing super-long
JSON-encoded lists of user IDs in large streams. Now the lists
are shorter.
This will be used in gear menu to inform admin of their
sponsorship application status.
This includes some additional tweaks for when to show
billing and plans to users.
- Replaces the "Via Markdown" tab with "Via drag-and-drop", and
modifies the instructions to explain that you can drag and drop
anywhere in the app, whether or not the compose box is open.
- Adds "Via paste" tab for the copy-pasting instructions.
Fixes #26894.
The order of group ids doesn't matter here, and thus the
compared values can have the ids in a different order and the test
should still pass. So, using `set` to compare the unordered
lists seems like the right fix here.
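For instance (the values here are illustrative):

```python
# Order-insensitive comparison: both sides contain the same ids, just in a
# different order, so comparing as sets makes the test deterministic.
expected_group_ids = [4, 7, 9]
actual_group_ids = [9, 4, 7]
assert set(actual_group_ids) == set(expected_group_ids)
```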
Previously, cross-realm bots were not displayed as mention pills.
This is because the data set for validating mentions considers
only the realm id, which is None in the case of cross-realm bots.
Hence, we add an OR Q object to also check whether
the email is part of the cross-realm bots' emails, in case the
realm id is None.
Fixes #26913.
Earlier, the 'wildcard_mentioned' flag was set for both the
stream and topic wildcard mentions.
Now, the 'topic_wildcard_mentioned' flag is set for topic
wildcard mentions, and the 'wildcard_mentioned' flag is set for
stream wildcard mentions.
We will rename the 'wildcard_mentioned' flag to
'stream_wildcard_mentioned' in a later commit.
This commit renames the two unused and historical bits of the
'flags' bitfield of the 'UserMessage' and 'ArchivedUserMessage'
tables.
* 'summarize_in_home' to 'topic_wildcard_mentioned'
* 'summarize_in_stream' to 'group_mentioned'
The 'group_mentioned' flag doesn't affect the feature,
but completing the work here helps to save future migration
and indexing efforts on the UserMessage table, as we plan to
use this flag in the future for group mentions.
The unused bits may have old data; we'll clear that in
a separate commit.
It creates the 'zerver_usermessage_any_mentioned_message_id'
index concurrently.
We now send "realm_user/update" (and "realm_bot/update" for bots)
events with "is_active" field when deactivating and reactivating
users, including bots.
We would want to use "remove" event for a user losing access
to another user for #10970, so it is better to use "update"
event for deactivation as we only update "is_active" field
in the user objects and the clients still have the data for
deactivated users.
Previously, we used to send "add" event for reactivation along
with complete user objects, but clients should have the data
for deactivated users as well, so an "update" event is enough
like we do when deactivating users.
38f2a2f475 updated the comment but not the code. Using
`self.client.post` instead of `self.client_post` means that we do not
set the host headers correctly.
This commit adds code to pass configuration objects for group
permission settings in register response to clients such that
we do not need to duplicate that data in clients and can avoid
future bugs due to inconsistency.
The "server_supported_permission_settings" field is included
in the response if "realm" is present in "fetch_event_types",
as this is what we do for other server-related fields.
This commit moves constants for system group names to a new
"SystemGroups" class so that we can use these group names
in multiple classes in models.py without worrying about the
order of defining them.
We now pass the complete configuration object for a setting to
access_user_group_for_setting instead of passing the configuration
object's fields as different variables.
This commit renames permissions_configuration variable to
permission_configuration since the object contains config for
a single permission setting and thus permission_configuration
seems like a better name.
Updates the description of update_message_flags op: add event for
details about actions that send/trigger the event and other details
that are useful for client implementations.
Also, links to the above updates in the op: remove variant of the
update_message_flags event.
Previous behavior:
- Guests did not receive stream creation events for new
web-public streams.
- Guests did not receive peer_add and peer_remove events
for web-public and subscribed public streams.
This commit fixes the behavior to be:
- Guests now receive stream creation events for new
web-public streams.
- Guests now receive peer_add and peer_remove events for
web-public and subscribed public streams.
This commit updates code in bulk_remove_subscriptions and
bulk_add_subscriptions to return early if there are no
subscribers to remove or add to the streams.
This change helps us avoid unnecessary queries, like the
one used to get the subscriber list of streams (which is then used
to send events, but we would not send any events if no subscribers
are added or removed), and some more similar queries.
We use `Realm.default_language` value, which is set by selecting
the 'Organization language', to internationalize the introductory
messages of the initial streams.
Fixes #25729.
In this commit, we add a new dropdown 'Organization language' on
the `/new` and `/realm/register` pages. This dropdown allows setting
the language of the organization during its creation. This allows
messages from Welcome Bot and introductory messages in streams to be
internationalized.
Fixes a part of #25729.
This commit renames default_view and escape_navigates_to_default_view
settings to web_home_view and web_escape_navigates_to_home_view in
database and API to match our recent renaming of user-facing
strings related to this.
We also rename the variables, functions, comments in code and class
names and IDs for elements related to this.
Adds a new onboarding email `onboarding_team_to_zulip` for the user
who created the new Zulip organization.
Co-authored-by: Alya Abbott <alya@zulip.com>
Previously, we had checked that deprecated parameters and return
values had been marked as `deprecated: true` in the OpenAPI
documentation and had a description with a deprecated note.
Here we extend that check at the top level to deprecated endpoints.
The backend test that catches a failed assertion for this check
is `test_api_doc_endpoints` in zerver/tests/test_docs.py, as that
test checks for a success response for all pages linked in the sidebar
of the API documentation.
The comment has drifted away from where it should be placed within the
code and also talks about RealmCounts specifically, while we have other
object types that this equally applies to.
The former name is kind of misleading - this function is for the remote
server to send analytics to the push bouncer. Under our usual
terminology, a "remote server" is a self-hosted Zulip server. So data is
sent FROM not TO a remote server.
This commit updates the API docs for the optional parameter
'automatic_new_visibility_policy' in the `POST /messages` response.
Changes include:
* Explicitly mention that the new visibility policy is still
sent as a `user_topic` event.
* Adds a link as a way to understand the meaning of the enum
values '2' and '3'.
Co-authored-by: Tim Abbott <tabbott@zulip.com>
Now that we're enabling the feature in the UI, we should set
these to the planned long-term defaults for these settings.
Also, this commit cleans up the '0476' and '0477' migration
files related to user_topic policies.
'0476' sets 'null=True'
'0477' is noop
'0482' sets the default values and performs backfilling.
Co-authored-by: Tim Abbott <tabbott@zulip.com>
Documents the procedure to subscribe / unsubscribe a user via their
profile and general stream settings.
Both methods are separated into tabs in the documentation.
Fixes #26902.
Originally, this was how the notification emails worked, but that was changed
in 797a7ef97b, with this old behavior
available as an option.
The footer and from address of emails that are sent when this
setting is set to True are confusing, especially when more people
are involved in a stream, and since we have changed the way we send
emails, it should be removed. It’s also not widely used.
Fixes#26609.
This commit renames "default" views to "home" views in the setting
labels, keyboard shortcuts list, help documentation and its urls.
This commit does not do changes in variable and class names, setting
field in database, API docs and changelog.
Fixes part of #27251.
This and the following commit follow the approach used in
3e2ad84bbe.
First migration requires a server restart - after that any new realms
will be created with the columns set.
The following migrations are in the next commit:
Second migration does a backfill for older realms and can run in the
background while the server is operating normally.
Third migration enforces null=False now that all realms have the columns
set.
Add an optional `automatic_new_visibility_policy` enum field
in the success response to indicate the new visibility policy
value due to the `automatically_follow_topics_policy` and
`automatically_unmute_topics_in_muted_streams_policy` user settings
during the send message action.
Only present if there is a change in the visibility policy.
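An illustrative success response when a policy change was triggered; the
specific values, and the reading of "3" as FOLLOWED, are assumptions for
illustration only:

```python
# Example shape of the POST /messages success response described above.
example_response = {
    "result": "success",
    "msg": "",
    "id": 12345,
    "automatic_new_visibility_policy": 3,  # assumed here to mean FOLLOWED
}
```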
Python evaluates function parameter defaults at definition time, not
call time. This function wouldn’t work with other realms anyway.
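A standalone illustration of the pitfall (not the original function):

```python
# The default expression runs once, when the function is defined, so every
# later call silently reuses that captured value.
def get_default_realm() -> str:
    print("evaluated")  # printed only once, at definition time
    return "zulip"

def do_something(realm: str = get_default_realm()) -> str:
    return realm

do_something()  # no "evaluated" output here
do_something()  # nor here; both calls reuse the value captured above
```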
Signed-off-by: Anders Kaseorg <anders@zulip.com>
- Adds desktop/web instructions.
- Adds #inbox relative link for logged-in users.
- Moves Inbox up in the left sidebar just under "Reading strategies".
- Moves Inbox article content to Markdown include.
- Adds "From the Inbox view" section to "Finding a topic to read",
"Getting started with Zulip", and "Reading strategies".
- Documents Inbox as a new option for the default web app view.
- Removes unused Markdown link.
- Tweaks subheading to better match help center patterns.
- Adds Inbox option in "Configure default settings for new users".
- Adds new tabbed section and instructions for marking messages as
read and reading topics via the Inbox view.
Fixes #26903.
Co-authored-by: Alya Abbott <alya@zulip.com>
This commit adds support to allow bot-owners to delete messages
sent by their bots if they are allowed to delete their own messages
as per "delete_own_message_policy" setting and the message delete
time limit has not passed.
`stack_info` shows the stack between where the error was raised and
where it was captured -- which is not interesting when we
intentionally raised it, and know where it will be captured.
Omit the `stack_info` when it will just fill the logs with
uninteresting data.
It is not clear why 84723654c8 added these lines, as they are not
related to status codes.
Using an explicit `capture_exception` causes the exception in Sentry
to not have a `logger` field, which is quite useful for filtering.
Transifex's webhook documentation[^1] describes an `event` parameter
which is used to distinguish which event type was received. Dispatch
based on that, and pass that value to UnsupportedWebhookEventTypeError
if need be.
[^1]: https://developers.transifex.com/docs/webhooks
This commit adds a 'stream_id' parameter to the 'POST /typing'
endpoint.
Now, 'to' is used only for "direct" type. In the case of
"stream" type, 'stream_id' and 'topic' are used.
Earlier, 'MissedMessageHookTest' didn't have 'super().setUp()'
and 'super().tearDown()' in the overrided methods 'setUp' and
'tearDown', respectively, that resulted in cached objects being
used between tests and hence flaky test failures.
This commit adds 'super().setUp()' and 'super().tearDown()'.
Change the url in the notification message to point to the settings
interface rather than linking to the export directly.
This is a much better user experience in the case that the export has
been deleted since the time the export was requested.
Fixes: #26923.
232eb8b7cf changed how these pages work, to render inline instead of
serving from a URL, but did not update the SMTP use case; this made
SMTP failures redirect to a 404.
Two registration requests for the same email address can race,
leading to an IntegrityError when making the second user.
Catch this and redirect them to the login page for their existing
email.
This works around the `/usr/bin/pg_dump` failure described in the
previous commit. Since we are now calling the appropriately-versioned
`pg_dump` binary directly, it is no longer "necessary", but is added
as a defense-in-depth.
`/usr/bin/pg_dump` on Ubuntu and Debian is actually a tool which
attempts to choose which `pg_dump` binary to run, from all of the
`postgresql-client-*` packages that are installed. However,
its logic is confused by passing empty `--host` and `--port` options
-- rather than looking at the running server instance on the host, it
assumes some remote host and chooses the highest-versioned
`pg_dump` which is installed.
Because Zulip writes binary database backups, they are sensitive to
the version of the client `pg_dump` binary that is used -- and the output
may not be backwards compatible. Using a PostgreSQL 16 `pg_dump`
writes archive format 1.15, which cannot be read by a PostgreSQL 15
`pg_restore`.
Zulip does not currently support PostgreSQL 16 as a server. This
means that backups on servers with `postgresql-client-16` installed
did not successfully round-trip Zulip backups -- their backups are
written using PostgreSQL 16's client, and the `pg_restore` chosen on
restore was correctly chosen as the one whose version matched the
server (PostgreSQL 15 or below), and thus did not understand the new
archive format.
Existing `./manage.py backups` taken since `postgresql-client-16` were
installed are thus not directly usable by the `restore-backup` script.
They are not useless, however, since they can theoretically be
converted into a format readable by PostgreSQL 15 -- by importing into
a PostgreSQL 16 instance, and re-dumping with a PostgreSQL 15
`pg_dump`.
Fix this issue by hard-coding the path to the binary whose version matches
the version of the server we are connected to. This may theoretically
fail if we are connected to a remote PostgreSQL instance and we do not
have a `postgresql-client` package locally installed which matches the
remote PostgreSQL server's version. However, choosing a matching
version is the only way to ensure that it will be able to be imported
cleanly -- and it is preferable that we fail the backup process rather
than write backups that we cannot easily restore from.
Fixes: #27160.
The goal is to reduce load on Sentry if the service is timing out, and
to reduce uwsgi load from long requests. This circuit-breaker is
per-Django-process, so may require more than 2 failures overall before
it trips, and may also "partially" trip for some (but not all)
workers. Since all of this is best-effort, this is fine.
Because this is only for load reduction, we only circuit-breaker on
timeouts, and not unexpected HTTP response codes or the like.
See also #26229, which would move all browser-submitted Sentry
reporting into a single process, which would allow circuit-breaking to
be more effective.
This prevents a failure to submit a client-side Sentry trace from
turning into a server-side client trace. If Sentry is down, we merely
log the error to our error logs and carry on.
When the `type` of the message being composed is "stream",
this commit updates the `to` parameter to accept the ID of
the stream in which the message is being typed.
Earlier, it accepted a single-element list containing the ID
of the stream.
Sending the element instead of a list containing the single element
makes more sense.
This is a prep commit that extracts the following two methods
from '/actions/scheduled_messages' to reuse in the next commit.
* extract_stream_id
* extract_direct_message_recipient_ids
The 'to' parameter for 'POST /typing' will follow the same pattern
in the next commit as we currently have for the 'to' parameter in
'POST /scheduled_messages', so we can reuse these functions.
This commit removes the compatibility support for "private"
being a valid value for the 'type' parameter in 'POST /typing'.
"direct" and "stream" are the only valid values.
This commit replaces the value `private` with `direct` in the
`message_type` field for the `typing` events sent when a user
starts or stops typing a message.
This commit includes the message's sender id in the
'topic_participant_user_ids' set.
The 'participants_for_topic' function doesn't include the sender_id,
if the user is sending their first message in the topic, because
'participants_for_topic' queries the 'Message' table, but the message
is actually sent at a later stage in the codepath, resulting in
missing the sender_id in this case.
This is needed to set the 'wildcard_mentioned' flag for the sender's
user message in the case of topic wildcard mentions.
This doesn't lead to sending email and push notifications to the
sender because we have a check to skip notifications if the user
to receive notifications is the sender itself.
This should have been included in c0c30bc.
This commit adds two user settings, named
* `automatically_follow_topics_policy`
* `automatically_unmute_topics_in_muted_streams_policy`
The settings control the user's preference on which topics they
will automatically 'follow' or 'unmute in muted streams'.
The policies offer four options:
1. Topics I participate in
2. Topics I send a message to
3. Topics I start
4. Never (default)
There is no support for configuring the settings through the UI yet.
Earlier, when we used 'self.send_message()' in the backend tests,
the sent message was not marked as read for the sender.
Reason: To set the read flag, we have to check if
'message.sent_by_human()'. It returns False because the
'sending_client' for tests is "test suite" and the 'sent_by_human'
function doesn't include the "test suite" client name as a human client.
This commit adds "test suite" to that list.
Also fixes a bug in when apply_unread_message_event was called that
was revealed by this change.
Instead of having "business" as the default organization type
for demo organizations in the dev environment, we set it to
"unspecified". This way a more generic Zulip guide email will
be sent as part of the onboarding process for users invited
to try out the demo organization if the owner has not yet
updated the organization type.
Updates the testing for draft event schemas to be fully checked by
`zerver/tests/test_events.py` and `tools/check-schema`.
Also, corrects the type for the timestamp field in Draft objects
in the OpenAPI documentation.
Updates the testing for scheduled message event schemas to be fully
checked by `zerver/tests/test_events.py` and `tools/check-schema`.
Adds the missing 'failed' field to the scheduled message events
in `web/tests/lib/events.js` as well.
We add `Content-Disposition: inline` header to commonly supported
video MIME types so that when we `Open` them in the lightbox, they
play in a new tab.
This will require a follow-up database migration to apply to
previously uploaded videos.
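A sketch of the idea; the MIME type list and helper name here are
illustrative, not the actual implementation:

```python
# Illustrative: commonly supported video types get an inline disposition so
# the browser plays them in a new tab instead of downloading them.
INLINE_VIDEO_MIME_TYPES = {"video/mp4", "video/webm"}

def content_disposition_for(mime_type: str, filename: str) -> str:
    if mime_type in INLINE_VIDEO_MIME_TYPES:
        return "inline"
    return f'attachment; filename="{filename}"'
```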
This excludes the legacy webhook from the
"realm_incoming_webhook_bots" object as those do not have the same URL
format as modern webhook integrations.
This change adds support for importing guest users from a Mattermost
export file into Zulip. The function now checks the user's teams and
roles to determine whether the user is a guest on the team, and sets
the user's role accordingly. This ensures that the imported user data
includes the correct role for each user.
Fixes #23720.
This fixes a regression introduced in
9954db4b59, where the realm's default
language would be ignored for users created via API/LDAP/SAML,
resulting in all such users having English as their default language.
The API/LDAP/SAML account creation code paths don't have a request,
and thus cannot pull default language from the user's browser.
We have the `realm.default_language` field intended for this use case,
but it was not being passed through the system.
Rather than pass `realm.default_language` through from each caller, we
make the low-level user creation code set this field, as that seems
more robust to the creation of future callers.
Making request a mandatory kwarg avoids confusion about the meaning of
parameters, especially with `request` acquiring the ability to be None
in the upcoming next commit.
None of these tests seem to want to have tick=True, which is the
default. Letting the clock tick without a reason introduces the
possibility of nondeterministic test failures depending on the execution
time.
This reverts b8581e2895. The mobile
client on Android parses this field using:
```kotlin
timeMs = data.require("time").parseLong("time") * 1000
```
This throws an error if the value is not a `long` (i.e. an integer),
resulting in dropped notifications on Android from servers which had
deployed b8581e2895.
Switch back to sending an integer, but keep the behaviour from
fd6091ad17 where we send the timestamp in the payload of both
Android and Apple push notifications.
Rather than fetch all UserMessage rows for all streams, and subtract
those out in Python-space from the list of all Message rows the user
may have received -- do this via a "NOT EXISTS" subquery. This is
much better indexed (performing in fractions of milliseconds rather
than hundreds), and also consumes much less memory.
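A hedged sketch of the query shape in the Django ORM; the model classes are
passed in as parameters and the field names are simplified, so this may not
match Zulip's actual code:

```python
# Rough sketch of the "NOT EXISTS" shape described above.
from django.db.models import Exists, OuterRef

def messages_missing_usermessage_rows(Message, UserMessage, user_profile, recipient_ids):
    # Subquery: a UserMessage row already exists for this user and message.
    has_usermessage = UserMessage.objects.filter(
        user_profile=user_profile, message_id=OuterRef("id")
    )
    # Messages the user may have received but has no UserMessage row for.
    return Message.objects.filter(
        recipient_id__in=recipient_ids,
    ).filter(~Exists(has_usermessage))
```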
Adds support for bulk-adjusting a single user's membership in multiple
user groups in a single transaction in the low-level actions
functions, for future use by work on #9957.
In commit 3e369bcf9, the `code` field for api/deactivate-own-user
was incorrectly documented as "BAD_REQUEST", which is the code for
the similar error returned by api/deactivate-user.
Corrects the error code to be "CANNOT_DEACTIVATE_LAST_USER" and
adds documentation for the two other fields returned by this
error response.
Note that the descriptions for the fields added in the error
response schema will not be rendered in our current documentation.
They are rendered in other third-party tools and are therefore
good to have in our OpenAPI documentation. The description that
will be rendered in our documentation is the general error response
schema description and that is also updated for details about the
extra fields in this error response.
This kind of payload that's loaded from json in the body of the request
is not only used for webhooks, but also in the push bouncer, and may get
used elsewhere too - so a general name is better.
Earlier, 'is_row_muted' returned 'true' if the message was in
a muted stream or muted topic.
If the message is in an unmuted or followed topic in a muted
stream, such topics should be treated as not muted topics
in an unmuted stream.
This commit fixes the incorrect behavior.
Now, for wildcard mentions, 'unread_msgs.mentions' excludes
the IDs in muted streams only if the message is in a default or
muted topic.
Also, 'unread_msgs.count' takes into account the unreads in unmuted
or followed topics in muted streams too.
Documents in the API changelog that this bug was fixed.
Update 'get_muted_stream_ids' to return a set of IDs
instead of a list.
This will help to avoid linear time search operations later
while using 'if stream_id in muted_streams_ids'.
This prep commit renames the 'build_topic_mute_checker' function
to 'build_get_topic_visibility_policy' and updates it to support
all the visibility policies.
The function prefetches the visibility policies the user has
configured for various topics and prepares a dict named
'topic_to_visibility_policy' to be used later on.
A comment was added in f797604 to convey that the unread count
at that time doesn't exclude the unreads in muted topics.
848c080 added the support to exclude the muted topic;
however, the comment was not updated.
This commit updates the comment to reflect the current behavior.
This is an exception that we should be generally catching like the
others, which will give our standard /login/ redirect and proper logging
- as opposed to a 500 if we don't catch.
Directly addresses a bug we encountered in the wild, where a SAMLResponse
was submitted without issuers specified in a valid way, causing this
exception. The added test tests this specific type of scenario.
These queries benefit from the increased specificity of using the
realm / recipient / sender indexes. The argument from 11a1cb9630
does not apply in these cases, since there are only 2 usermessage rows
for each matching message row for DMs, and few more than that for
huddles.
This query has two halves: messages sent by the user, and messages
received by the user. The former uses the already-specific
usermessage privatemessage flag index; the latter relies on the
recipient index on messages.
Add the realm_id to the latter half, so that the recipient_id is
paired with the realm_id.
This commit updates the text for a dropdown option `Unmuted streams`
to `Unmuted streams and topics` for `Show unread counts for` user
preference settings for better clarity.
Clarifies that the `all` field in the `op: "add"` event is only
relevant for the `"read"` message flag, and that it will be false
for all other specified flags in these events.
Deprecates the `all` field in the `op: "remove"` event and documents
that it is false for all specified flags.
Updates the deprecated `operation` field description and makes
a few other small revisions to the event text for clarity and
accuracy.
This commit adds a `jitsi_server_url` field to the Realm model, which
will be used to save the URL of the custom Jitsi Meet server. In
the database, `None` will encode the server-level default. We can't
readily use `None` in the API, as it could be confused with "field not
sent". Therefore, we will use the string "default" for this purpose.
We have also introduced `server_jitsi_server_url` in the `/register`
API. This will be used to display the server's default Jitsi server
URL in the settings UI.
The existing `jitsi_server_url` will now be calculated as
`realm_jitsi_server_url || server_jitsi_server_url`.
Fixes a part of #17914.
Co-authored-by: Gaurav Pandey <gauravguitarrocks@gmail.com>
The unique index on `(user_id, message_id)` on the
`zerver_usermessage` table is rather specific, and even the PostgreSQL
extended statistics are not enough for it to realize there is a
correlation between the `realm_id` in the message table and the
`user_id` in the usermessage table. This means that adding the
`realm_id` limit when there is a join to `zerver_usermessage` flips
the query plan from a nested loop of unique usermessage index-only
scan, with an index scan of the messages pkey -- to a parallel hash
join of the messages limit with an index scan of just the user_id limit
on usermessages. It thinks this is necessary because it thinks that
the `realm_id` limit may remove a large number of messages from the
usermessage set -- which is totally untrue.
Remove the `realm_id` limit if we have a usermessage join.
Removes the JsonErrorBase and JsonError schemas as all error
responses in the API docs use the CodedErrorBase or CodedError
schemas.
Removes the AddSubscriptionsResponse schema since it's no longer
incorrectly used as a shared schema for error responses, and
instead documents the specific success response properties in the
endpoint.
Adds an InvalidStreamError schema for errors that return a 'msg'
field with the string: "Invalid stream ID". Updates endpoints that
have this error 'str' documented to use the shared schema.
Updates documentation of ResourceNotFoundErrors for unknown draft
and scheduled message IDs to include the 'code' field, have an
HTTP status code of 404 in the documentation, and to follow the
general description format of errors in the API documentation.
This endpoint verifies that the services that Zulip needs to function
are running, and Django can talk to them. It is designed to be used
as a readiness probe[^1] for Zulip, either by Kubernetes, or some other
reverse-proxy load-balancer in front of Zulip. Because of this, it
limits access to only localhost and the IP addresses of configured
reverse proxies.
Tests are limited because we cannot stop running services (which would
impact other concurrent tests) and there would be extremely limited
utility in mocking the very specific methods we're calling to raise
the exceptions that we're looking for.
[^1]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
The `expected` flag was incredibly confusing, as you
couldn't tell from the calling code what you were
actually expecting to happen.
I avoid the context manager idiom in order to force
the callers to create simple helper functions, and
I de-duplicate some code in some places.
I also force the caller to explicitly soft-deactivate
the user with one simple line of code, so that the
person reading the test doesn't have to research
the side effects of the helper. (And I make it
very easy for new authors to follow the practice
going forward.)
This is also somewhat of a prep commit to avoid
the obfuscated use of refresh_from_db.
The get_user function is poorly named, but I don't want to
sweep the entire codebase yet.
It's also nice to have a test wrapper for little experiments
like profiling tests or hunting down calls to refresh_from_db.
It's possible that we would also just change the new wrapper
to more directly call Django. The `get_user` function isn't
used in a ton of real-world places, so we might want the test
code to just bypass the cache.
I add a bunch of cute helper methods to make
the test a bit more readable.
And then I make sure to get clean objects,
which precludes the need for our callback
functions to refresh the user objects.
And finally I make sure that our validation
functions don't cause any round trips (assuming
we have fetched objects using a standard
Zulip helper, which example_user ensures.)
In feature levels 153 and 154, a new value of "partially_completed"
for `result` in a success (HTTP status code 200) was added for two
endpoints that process messages in batches: /api/delete-topic and
/api/mark-all-as-read.
Prior to these changes, `result` was either "success" or "error" for
all responses, which was a useful API invariant to have for clients.
So, here we remove "partially_completed" as a potential value for
"result" in a response. And instead, for the two endpoints noted
above, we return a boolean field "complete" to indicate if the
response successfully deleted/marked as read all the targeted
messages (complete: true) or if only some of the targeted messages
were processed (complete: false).
The "code" field for an error string that was also returned as part
of a partially completed response is removed in these changes as
well.
The web app does not currently use the /api/mark-all-as-read
endpoint, but it does use the /api/delete-topic endpoint, so these
changes update that to check the `complete` boolean instead of the
string value for `result`.
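Illustrative client-side handling of the new field (response shown as a
plain dict; the values are examples):

```python
# Example handling of the new "complete" boolean for the batched endpoints.
response = {"result": "success", "msg": "", "complete": False}

if response["result"] == "success" and not response["complete"]:
    # Only some of the targeted messages were processed; the client should
    # repeat the request to finish the batch.
    pass
```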
For arrays of objects in return values of API endpoints, any
general description of the objects in the arrays should be
documented in the description of the array. A description at the
level of the items in the array will not be rendered in the API
documentation. Descriptions of each property of the object will
be rendered, but these are specific to the property and not the
object as a whole.
Updates the pms, streams and huddles arrays of objects included
in the unread_msgs object of the register response so that the
descriptions are at the array level in the OpenAPI documentation.
When unread_msgs data was added to the register queue response, see
commit 4f0110e, the `user_ids_string` field in the `huddles` array
of objects with information about unread group direct messages, had
the user IDs in the string sorted numerically.
Documents that these strings include the current user's ID and are
sorted numerically and separated by commas so that the documentation
is clear for client implementations.
This adds support for syncing user role via the newly added "role"
attribute, which can be set to either of
['owner', 'administrator', 'moderator', 'member', 'guest'].
Removes durable=True from the atomic decorator of do_change_user_role,
as django-scim2 runs PATCH operations in an atomic block.
This is a prep commit to separate the single test
'test_stream_send_message_events' into two separate tests named
'test_stream_send_message_events' & test_stream_update_message_events'
to verify the events related to send and update message, respectively.
As a part of introducing two new user settings
* 'automatically_follow_topics_policy'
* 'automatically_unmute_topics_policy'
in the next commit, we will extend 'test_stream_send_message_events'.
This logical separation helps in avoiding a single, super-long test.
This commit removes the stray values, i.e., [1, 2, 3], used
in the tests for desktop_icon_count_display.
We use 'UserProfile.DESKTOP_ICON_COUNT_DISPLAY_CHOICES' instead.
'test_change_user_settings' in 'UserDisplayActionTest' excludes
the notification settings and tests only the display settings.
The code block excluding the notification settings doesn't exclude
'modern_notification_settings'. It only excludes the
'notification_settings_legacy'.
This commit replaces 'notification_settings_legacy' with
'notification_setting_types', which consists of all the
notification settings.
Expands API changelog feature level 134 entry and adds the related
Changes notes to the events documentation for the updates made in
commit f4fcedd: "stream op: create" and "subscription op: peer_add"
events being sent when a private stream is made public.
Those changes were made after the feature level 133 updates, but
before the feature level 134 updates, which is why 134 is the
feature level for the change that is documented for clients.
In commit ada2991f1c, when a user gains access to a stream due to
a role change, in addition to sending "stream op: create" events,
"subscription op: peer_add" events are sent for streams that the
user gains access to due to their role change. Updates the API
changelog entry for feature level 205.
Updates the "subscription op: peer_add" event documentation to be
more accurate for the general use cases of this event, which
are to provide updated subscriber information for streams that
a user has access to.
Since the cache is flushed when the cutoff or realm changes, the
maximum size of the cache should cap out at the number of streams in
the realm. Raise the max cache size, now that this will not simply
lead to useless cache space for smaller servers.
There is now no longer any reason to have the scheduled_email
enqueuing wait until all of the users' contexts have been generated.
Switch to returning the contexts as an iterator, and send them as we
compute them.
The query plan for fetching recent messages from the arbitrary set of
streams formed by the intersection of 30 random users can be quite
bad, and can descend into a sequential scan on `zerver_recipient`.
Worse, this work of pulling recent messages out is redone if the
stream appears in the next batch of 30 users.
Instead, pull the recent messages for a stream on a one-by-one basis,
but cache them in an in-memory cache. Since digests are enqueued in
30-user batches but still one-realm-at-a-time, work will be saved both
in terms of faster query plans whose results can also be reused across
batches.
This requires that we pull the stream-id to stream-name mapping for
_all_ streams in the realm at once, but that is well-indexed and
unlikely to cause performance issues -- in fact, it may be faster
than pulling a random subset of the streams in the realm.
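A simplified sketch of the caching shape described above; the fetch function
is a stand-in for the real query:

```python
# Simplified sketch: recent message IDs are pulled per stream, but memoized
# so later 30-user batches in the same realm's digest run reuse the result.
from functools import lru_cache
from typing import Tuple

def fetch_recent_message_ids_for_stream(stream_id: int, cutoff: int) -> Tuple[int, ...]:
    # Placeholder for the real database query.
    return ()

@lru_cache(maxsize=None)
def recent_message_ids_for_stream(stream_id: int, cutoff: int) -> Tuple[int, ...]:
    return fetch_recent_message_ids_for_stream(stream_id, cutoff)
```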