Transifex's webhook documentation[^1] describes an `event` parameter
which is used to distinguish which event type was received. Dispatch
based on that, and pass that value to UnsupportedWebhookEventTypeError
if need be.
[^1]: https://developers.transifex.com/docs/webhooks
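As an illustrative sketch of that dispatch pattern (the handler names and event values here are assumptions, not Transifex's actual event list; `UnsupportedWebhookEventTypeError` is the existing exception class):
```python
from typing import Any, Callable, Dict

from zerver.lib.exceptions import UnsupportedWebhookEventTypeError


def handle_translation_completed(payload: Dict[str, Any]) -> None: ...
def handle_review_completed(payload: Dict[str, Any]) -> None: ...


EVENT_HANDLERS: Dict[str, Callable[[Dict[str, Any]], None]] = {
    "translation_completed": handle_translation_completed,
    "review_completed": handle_review_completed,
}


def dispatch_transifex_event(event: str, payload: Dict[str, Any]) -> None:
    handler = EVENT_HANDLERS.get(event)
    if handler is None:
        # Surface the unexpected event type in our standard error handling.
        raise UnsupportedWebhookEventTypeError(event)
    handler(payload)
```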
This commit adds a 'stream_id' parameter to the 'POST /typing'
endpoint.
Now, 'to' is used only for "direct" type. In the case of
"stream" type, 'stream_id' and 'topic' are used.
Earlier, 'MissedMessageHookTest' didn't have 'super().setUp()'
and 'super().tearDown()' in the overridden methods 'setUp' and
'tearDown', respectively, which resulted in cached objects being
used between tests and hence flaky test failures.
This commit adds 'super().setUp()' and 'super().tearDown()'.
Change the URL in the notification message to point to the settings
interface rather than linking to the export directly.
This is a much better user experience in the case that the export has
been deleted since the time it was requested.
Fixes: #26923.
232eb8b7cf changed how these pages work, to render inline instead of
serving from a URL, but did not update the SMTP use case; this made
SMTP failures redirect to a 404.
Two registration requests for the same email address can race,
leading to an IntegrityError when making the second user.
Catch this and redirect them to the login page for their existing
email.
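The shape of the fix, as a minimal sketch (not the actual view code; Zulip's registration flow has more steps):
```python
from django.contrib.auth import get_user_model
from django.db import IntegrityError, transaction
from django.http import HttpRequest, HttpResponse
from django.shortcuts import redirect


def register_account(request: HttpRequest, email: str) -> HttpResponse:
    try:
        with transaction.atomic():
            # May race with a concurrent registration for the same email.
            get_user_model().objects.create(email=email)
    except IntegrityError:
        # The email was registered concurrently; send them to log in instead.
        return redirect(f"/login/?email={email}")
    return redirect("/")
```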
This works around the `/usr/bin/pg_dump` failure described in the
previous commit. Since we are now calling the appropriately-versioned
`pg_dump` binary directly, it is no longer "necessary", but is added
as a defense-in-depth measure.
`/usr/bin/pg_dump` on Ubuntu and Debian is actually a tool which
attempts to choose which `pg_dump` binary to run from all of the
installed `postgresql-client-*` packages. However,
its logic is confused by passing empty `--host` and `--port` options
-- instead of looking at the running server instance on the server, it
instead assumes some remote host and chooses the highest versioned
`pg_dump` which is installed.
Because Zulip writes binary database backups, they are sensitive to
the version of the client `pg_dump` binary that is used -- and the output
may not be backwards compatible. Using a PostgreSQL 16 `pg_dump`
writes archive format 1.15, which cannot be read by a PostgreSQL 15
`pg_restore`.
Zulip does not currently support PostgreSQL 16 as a server. This
means that backups on servers with `postgresql-client-16` installed
did not successfully round-trip Zulip backups -- their backups were
written using PostgreSQL 16's client, while the `pg_restore` chosen on
restore was, correctly, the one whose version matched the server
(PostgreSQL 15 or below), and thus did not understand the new
archive format.
Existing `./manage.py backups` taken since `postgresql-client-16` were
installed are thus not directly usable by the `restore-backup` script.
They are not useless, however, since they can theoretically be
converted into a format readable by PostgreSQL 15 -- by importing into
a PostgreSQL 16 instance, and re-dumping with a PostgreSQL 15
`pg_dump`.
Fix this issue by hard-coding the path to the binary whose version matches
the version of the server we are connected to. This may theoretically
fail if we are connected to a remote PostgreSQL instance and we do not
have a `postgresql-client` package locally installed which matches the
remote PostgreSQL server's version. However, choosing a matching
version is the only way to ensure that it will be able to be imported
cleanly -- and it is preferable that we fail the backup process rather
than write backups that we cannot easily restore from.
Fixes: #27160.
The goal is to reduce load on Sentry if the service is timing out, and
to reduce uwsgi load from long requests. This circuit-breaker is
per-Django-process, so may require more than 2 failures overall before
it trips, and may also "partially" trip for some (but not all)
workers. Since all of this is best-effort, this is fine.
Because this is only for load reduction, we only circuit-break on
timeouts, and not on unexpected HTTP response codes or the like.
See also #26229, which would move all browser-submitted Sentry
reporting into a single process, which would allow circuit-breaking to
be more effective.
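A per-process circuit breaker of this shape might look like the following sketch (the thresholds and names are illustrative):
```python
import time

# Per-Django-process state; never shared across workers, so the breaker
# may trip "partially" for some workers but not others.
_failures = 0
_tripped_until = 0.0
FAILURE_THRESHOLD = 2
COOLDOWN_SECONDS = 60.0


def record_sentry_timeout() -> None:
    global _failures, _tripped_until
    _failures += 1
    if _failures >= FAILURE_THRESHOLD:
        _tripped_until = time.monotonic() + COOLDOWN_SECONDS
        _failures = 0


def sentry_circuit_open() -> bool:
    # While open, skip proxying reports to Sentry entirely.
    return time.monotonic() < _tripped_until
```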
This prevents failure to submit a client-side Sentry trace from
turning into a server-side client trace. If Sentry is down, we merely
log the error to our error logs and carry on.
When the `type` of the message being composed is "stream",
this commit updates the `to` parameter to accept the ID of
the stream in which the message is being typed.
Earlier, it accepted a single-element list containing the ID
of the stream.
Sending the element instead of a list containing the single element
makes more sense.
This is a prep commit that extracts the following two methods
from '/actions/scheduled_messages' to reuse in the next commit.
* extract_stream_id
* extract_direct_message_recipient_ids
The 'to' parameter for 'POST /typing' will follow the same pattern
in the next commit as we currently have for the 'to' parameter in
'POST /scheduled_messages', so we can reuse these functions.
This commit removes the compatibility support for "private"
being a valid value for the 'type' parameter in 'POST /typing'.
"direct" and "stream" are the only valid values.
This commit replaces the value `private` with `direct` in the
`message_type` field for the `typing` events sent when a user
starts or stops typing a message.
This commit includes the message's sender id in the
'topic_participant_user_ids' set.
The 'participants_for_topic' function doesn't include the sender_id
if the user is sending their first message in the topic, because
'participants_for_topic' queries the 'Message' table, but the message
is actually sent at a later stage in the codepath, resulting in
missing the sender_id in this case.
This is needed to set the 'wildcard_mentioned' flag for the sender's
user message in the case of topic wildcard mentions.
This doesn't lead to sending email and push notifications to the
sender, because we have a check to skip notifications if the user
who would receive them is the sender.
This should have been included in c0c30bc.
This commit adds two user settings, named
* `automatically_follow_topics_policy`
* `automatically_unmute_topics_in_muted_streams_policy`
The settings control the user's preference on which topics they
will automatically 'follow' or 'unmute in muted streams'.
The policies offer four options:
1. Topics I participate in
2. Topics I send a message to
3. Topics I start
4. Never (default)
There is no support for configuring the settings through the UI yet.
Earlier, when we used 'self.send_message()' in the backend tests,
the sent message was not marked as read for the sender.
Reason: To set the read flag, we have to check if
'message.sent_by_human()'. It returns False because the
'sending_client' for tests is "test suite", and the 'sent_by_human'
function's list of human client names doesn't include "test suite".
This commit adds "test suite" to that list.
Also fixes a bug in the timing of apply_unread_message_event calls
that was revealed by this change.
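The relevant check is along these lines (a simplified sketch; the real list of human client names is longer):
```python
HUMAN_CLIENT_NAMES = {"website", "zulipmobile", "zulipelectron", "test suite"}


def sent_by_human(sending_client_name: str) -> bool:
    # Messages from human clients are marked as read for the sender;
    # messages from bots and mirrors are not.
    return sending_client_name.lower() in HUMAN_CLIENT_NAMES
```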
Instead of having "business" as the default organization type
for demo organizations in the dev environment, we set it to
"unspecified". This way a more generic zulip guide email will
be sent as part of the onboarding process for users invited
to try out the demo organization if the owner has not yet
updated the organization type.
Updates the testing for draft event schemas to be fully checked by
`zerver/tests/test_events.py` and `tools/check-schema`.
Also, corrects the type for the timestamp field in Draft objects
in the OpenAPI documentation.
Updates the testing for scheduled message event schemas to be fully
checked by `zerver/tests/test_events.py` and `tools/check-schema`.
Adds the missing 'failed' field to the scheduled message events
in `web/tests/lib/events.js` as well.
We add a `Content-Disposition: inline` header to commonly supported
video MIME types so that when we `Open` them in the lightbox, they
play in a new tab.
This will require a follow-up database migration to apply to
previously uploaded videos.
This excludes legacy webhooks from the
"realm_incoming_webhook_bots" object, as those do not have the same
URL format as modern webhook integrations.
This change adds support for importing guest users from a Mattermost
export file into Zulip. The function now checks the user's teams and
roles to determine whether the user is a guest on the team, and sets
the user's role accordingly. This ensures that the imported user data
includes the correct role for each user.
Fixes #23720.
This fixes a regression introduced in
9954db4b59, where the realm's default
language would be ignored for users created via API/LDAP/SAML,
resulting in all such users having English as their default language.
The API/LDAP/SAML account creation code paths don't have a request,
and thus cannot pull default language from the user's browser.
We have the `realm.default_language` field intended for this use case,
but it was not being passed through the system.
Rather than pass `realm.default_language` through from each caller, we
make the low-level user creation code set this field, as that seems
more robust to the creation of future callers.
Making request a mandatory kwarg avoids confusion about the meaning of
parameters, especially with `request` acquiring the ability to be None
in the next commit.
None of these tests seem to want to have tick=True, which is the
default. Letting the clock tick without a reason introduces the
possibility of nondeterministic test failures depending on the execution
time.
This reverts b8581e2895. The mobile
client on Android parses this field using:
```kotlin
timeMs = data.require("time").parseLong("time") * 1000
```
This throws an error if the value is not a `long` (i.e. an integer),
resulting in dropped notifications on Android from servers which had
deployed b8581e2895.
Switch back to sending an integer, but keep the behaviour from
fd6091ad17 where we send the timestamp in the payload of both
Android and Apple push notifications.
Rather than fetch all UserMessage rows for all streams, and subtract
those out in Python-space from the list of all Message rows the user
may have received -- do this via a "NOT EXISTS" subquery. This is
much better indexed (performing in fractions of milliseconds rather
than hundreds), and also consumes much less memory.
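In the Django ORM, a `NOT EXISTS` subquery of this shape can be written with `~Exists(...)`; a sketch, with the inputs passed in as assumed parameters:
```python
from typing import List

from django.db.models import Exists, OuterRef, QuerySet

from zerver.models import Message, UserMessage


def messages_without_usermessage(
    user_profile_id: int, recipient_ids: List[int]
) -> QuerySet:
    # NOT EXISTS correlated subquery: messages in these streams for which
    # the user has no UserMessage row.  This stays on the (user, message)
    # unique index instead of materializing all UserMessage rows in Python.
    return Message.objects.filter(recipient_id__in=recipient_ids).filter(
        ~Exists(
            UserMessage.objects.filter(
                user_profile_id=user_profile_id,
                message_id=OuterRef("id"),
            )
        )
    )
```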
Adds support for bulk-adjusting a single user's membership in multiple
user groups in a single transaction in the low-level actions
functions, for future use by work on #9957.
In commit 3e369bcf9, the `code` field for api/deactivate-own-user
was incorrectly documented as "BAD_REQUEST", which is the code for
the similar error returned by api/deactivate-user.
Corrects the error code to be "CANNOT_DEACTIVATE_LAST_USER" and
adds documentation for the two other fields returned by this
error response.
Note that the descriptions for the fields added in the error
response schema will not be rendered in our current documentation.
They are rendered in other third-party tools and are therefore
good to have in our OpenAPI documentation. The description that
will be rendered in our documentation is the general error response
schema description, which is also updated with details about the
extra fields in this error response.
This kind of payload that's loaded from JSON in the body of the request
is not only used for webhooks, but also in the push bouncer, and may get
used elsewhere too -- so a general name is better.
Earlier, 'is_row_muted' returned 'true' if the message was in
a muted stream or muted topic.
If the message is in an unmuted or followed topic in a muted
stream, such a topic should be treated like an unmuted topic
in an unmuted stream.
This commit fixes the incorrect behavior.
Now, for wildcard mentions, 'unread_msgs.mentions' excludes
the IDs in muted streams only if the message is in a default or
muted topic.
Also, 'unread_msgs.count' takes into account the unreads in unmuted
or followed topics in muted streams too.
Documents that this bug was fixed in the API changelog.
Update 'get_muted_stream_ids' to return a set of IDs
instead of a list.
This will help to avoid linear time search operations later
while using 'if stream_id in muted_streams_ids'.
This prep commit renames the 'build_topic_mute_checker' function
to 'build_get_topic_visibility_policy' and updates it to support
all the visibility policies.
The function prefetches the visibility policies the user has
configured for various topics and prepares a dict named
'topic_to_visibility_policy' to be used later on.
A comment was added in f797604 to convey that the unread count
at that time doesn't exclude the unreads in muted topics.
848c080 added support for excluding muted topics;
however, the comment was not updated.
This commit updates the comment to reflect the current behavior.
This is an exception that we should be generally catching like the
others, which will give our standard /login/ redirect and proper
logging -- as opposed to a 500 if we don't catch it.
Directly addresses a bug we observed in the wild, where a SAMLResponse
was submitted without issuers specified in a valid way, causing this
exception. The added test covers this specific type of scenario.
These queries benefit from the increased specificity of using the
realm / recipient / sender indexes. The argument from 11a1cb9630
does not apply in these cases, since there are only 2 usermessage rows
for each matching message row for DMs, and a few more than that for
huddles.
This query has two halves: messages sent by the user, and messages
received by the user. The former uses the already-specific
usermessage privatemessage flag index; the latter relies on the
recipient index on messages.
Add the realm_id to the latter half, so that the recipient_id is
paired with the realm_id.
This commit updates the text for a dropdown option `Unmuted streams`
to `Unmuted streams and topics` for the `Show unread counts for` user
preference setting, for better clarity.
Clarifies that the `all` field in the `op: "add"` event is only
relevant for the `"read"` message flag, and that it will be false
for all other specified flags in these events.
Deprecates the `all` field in the `op: "remove"` event and documents
that it is false for all specified flags.
Updates the deprecated `operation` field description and makes
a few other small revisions to the event text for clarity and
accuracy.
This commit adds a `jitsi_server_url` field to the Realm model, which
will be used to save the URL of the custom Jitsi Meet server. In
the database, `None` will encode the server-level default. We can't
readily use `None` in the API, as it could be confused with "field not
sent". Therefore, we will use the string "default" for this purpose.
We have also introduced `server_jitsi_server_url` in the `/register`
API. This will be used to display the server's default Jitsi server
URL in the settings UI.
The existing `jitsi_server_url` will now be calculated as
`realm_jitsi_server_url || server_jitsi_server_url`.
Fixes a part of #17914.
Co-authored-by: Gaurav Pandey <gauravguitarrocks@gmail.com>
The unique index on `(user_id, message_id)` on the
`zerver_usermessage` table is rather specific, and even the PostgreSQL
extended statistics are not enough for it to realize there is a
correlation between the `realm_id` in the message table and the
`user_id` in the usermessage table. This means that adding the
`realm_id` limit when there is a join to `zerver_usermessage` flips
the query plan from a nested loop of unique usermessage index-only
scan, with an index scan of the messages pkey -- to a parallel hash
join of the messages limit with an index scan of just the user_id limit
on usermessages. It thinks this is necessary because it thinks that
the `realm_id` limit may remove a large number of messages from the
usermessage set -- which is totally untrue.
Remove the `realm_id` limit if we have a usermessage join.
Removes the JsonErrorBase and JsonError schemas as all error
responses in the API docs use the CodedErrorBase or CodedError
schemas.
Removes the AddSubscriptionsResponse schema since it's no longer
incorrectly used as a shared schema for error responses, and
instead documents the specific success response properties in the
endpoint.
Adds an InvalidStreamError schema for errors that return a 'msg'
field with the string "Invalid stream ID". Updates endpoints that
have this error string documented to use the shared schema.
Updates documentation of ResourceNotFoundErrors for unknown draft
and scheduled message IDs to include the 'code' field, have an
HTTP status code of 404 in the documentation, and to follow the
general description format of errors in the API documentation.
This endpoint verifies that the services that Zulip needs to function
are running, and Django can talk to them. It is designed to be used
as a readiness probe[^1] for Zulip, either by Kubernetes or some other
reverse-proxy load-balancer in front of Zulip. Because of this, it
limits access to only localhost and the IP addresses of configured
reverse proxies.
Tests are limited because we cannot stop running services (which would
impact other concurrent tests) and there would be extremely limited
utility to mocking the very specific methods we're calling to raise
the exceptions that we're looking for.
[^1]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
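A sketch of the sort of check such an endpoint performs (illustrative, not the exact implementation):
```python
from django.db import connection
from django.http import HttpRequest, HttpResponse, JsonResponse


def health(request: HttpRequest) -> HttpResponse:
    # Verify PostgreSQL connectivity with a trivial query; an exception
    # here propagates as a non-200 response, failing the probe.
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
    # Analogous checks would cover Redis, RabbitMQ, and memcached.
    return JsonResponse({"status": "ok"})
```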
The `expected` flag was incredibly confusing, as you
couldn't tell from the calling code what you were
actually expecting to happen.
I avoid the context manager idiom in order to force
the callers to create simple helper functions, and
I de-duplicate some code in some places.
I also force the caller to explicitly soft-deactivate
the user with one simple line of code, so that the
person reading the test doesn't have to research
the side effects of the helper. (And I make it
very easy for new authors to follow the practice
going forward.)
This is also somewhat of a prep commit to avoid
the obfuscated use of refresh_from_db.
The get_user function is poorly named, but I don't want to
sweep the entire codebase yet.
It's also nice to have a test wrapper for little experiments
like profiling tests or hunting down calls to refresh_from_db.
It's possible that we would also just change the new wrapper
to more directly call Django. The `get_user` function isn't
used in a ton of real-world places, so we might want the test
code to just bypass the cache.
I add a bunch of cute helper methods to make
the test a bit more readable.
And then I make sure to get clean objects,
which precludes the need for our callback
functions to refresh the user objects.
And finally I make sure that our validation
functions don't cause any round trips (assuming
we have fetched objects using a standard
Zulip helper, which example_user ensures.)
In feature levels 153 and 154, a new value of "partially_completed"
for `result` in a successful response (HTTP status code 200) was added
for two endpoints that process messages in batches: /api/delete-topic and
/api/mark-all-as-read.
Prior to these changes, `result` was either "success" or "error" for
all responses, which was a useful API invariant to have for clients.
So, here we remove "partially_completed" as a potential value for
"result" in a response. And instead, for the two endpoints noted
above, we return a boolean field "complete" to indicate if the
response successfully deleted/marked as read all the targeted
messages (complete: true) or if only some of the targeted messages
were processed (complete: false).
The "code" field for an error string that was also returned as part
of a partially completed response is removed in these changes as
well.
The web app does not currently use the /api/mark-all-as-read
endpoint, but it does use the /api/delete-topic endpoint, so these
changes update that to check the `complete` boolean instead of the
string value for `result`.
For arrays of objects in return values of API endpoints, any
general description of the objects in the arrays should be
documented in the description of the array. A description at the
level of the items in the array will not be rendered in the API
documentation. Descriptions of each property of the object will
be rendered, but these are specific to the property and not the
object as a whole.
Updates the pms, streams and huddles arrays of objects included
in the unread_msgs object of the register response so that the
descriptions are at the array level in the OpenAPI documentation.
When unread_msgs data was added to the register queue response (see
commit 4f0110e), the `user_ids_string` field in the `huddles` array
of objects with information about unread group direct messages had
the user IDs in the string sorted numerically.
Documents that these strings include the current user's ID and are
sorted numerically and separated by commas so that the documentation
is clear for client implementations.
This adds support for syncing user role via the newly added "role"
attribute, which can be set to one of
['owner', 'administrator', 'moderator', 'member', 'guest'].
Removes durable=True from the atomic decorator of do_change_user_role,
as django-scim2 runs PATCH operations in an atomic block.
This is a prep commit to separate the single test
'test_stream_send_message_events' into two separate tests named
'test_stream_send_message_events' & test_stream_update_message_events'
to verify the events related to send and update message, respectively.
As a part of introducing two new user settings
* 'automatically_follow_topics_policy'
* 'automatically_unmute_topics_policy'
in the next commit, we will extend 'test_stream_send_message_events'.
This logical separation helps in avoiding a single, super-long test.
This commit removes the stray values, i.e., [1, 2, 3], used
in the tests for desktop_icon_count_display.
We use 'UserProfile.DESKTOP_ICON_COUNT_DISPLAY_CHOICES' instead.
'test_change_user_settings' in 'UserDisplayActionTest' excludes
the notification settings and tests only the display settings.
The code block excluding the notification settings doesn't exclude
'modern_notification_settings'. It only excludes the
'notification_settings_legacy'.
This commit replaces 'notification_settings_legacy' with
'notification_setting_types', which consists of all the
notification settings.
Expands API changelog feature level 134 entry and adds the related
Changes notes to the events documentation for the updates made in
commit f4fcedd: "stream op: create" and "subscription op: peer_add"
events being sent when a private stream is made public.
Those changes were made after the feature level 133 updates, but
before the feature level 134 updates, which is why 134 is the
feature level for the change that is documented for clients.
In commit ada2991f1c, when a user gains access to a stream due to
a role change, in addition to sending "stream op: create" events,
"subscription op: peer_add" events are sent for streams that the
user gains access to due to their role change. Updates the API
changelog entry for feature level 205.
Updates the "subcription op: peer_add" event documentation to be
more accurate in for the general use cases of this event, which
are to provide updated subscriber information for streams that
a user has access to.
Since the cache is flushed when the cutoff or realm changes, the
maximum size of the cache should cap out at the number of streams in
the realm. Raise the max cache size, now that this will not simply
lead to useless cache space for smaller servers.
There is no longer any reason to have the scheduled_email
enqueuing wait until all of the users' contexts have been generated.
Switch to returning the contexts as an iterator, and send them as we
compute them.
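Schematically, the change is from building a list of all contexts up front to yielding each context as it is computed (names are illustrative):
```python
from typing import Any, Dict, Iterator, List


def get_digest_contexts(user_ids: List[int]) -> Iterator[Dict[str, Any]]:
    for user_id in user_ids:
        context = {"user_id": user_id}  # ...compute one user's context...
        # Hand it to the caller immediately, so the scheduled_email can
        # be enqueued without waiting for every context to be generated.
        yield context
```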
The query plan for fetching recent messages from the arbitrary set of
streams formed by the intersection of 30 random users can be quite
bad, and can descend into a sequential scan on `zerver_recipient`.
Worse, this work of pulling recent messages out is redone if the
stream appears in the next batch of 30 users.
Instead, pull the recent messages for a stream on a one-by-one basis,
but cache them in an in-memory cache. Since digests are enqueued in
30-user batches but still one-realm-at-a-time, work is saved both
through faster query plans and through reusing their results across
batches.
This requires that we pull the stream-id to stream-name mapping for
_all_ streams in the realm at once, but that is well-indexed and
unlikely to cause performance issues -- in fact, it may be faster
than pulling a random subset of the streams in the realm.
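The caching shape is roughly as follows (a sketch with assumed names):
```python
from typing import Dict, List


def fetch_recent_message_ids(stream_id: int, cutoff: float) -> List[int]:
    ...  # one simple, well-indexed query against zerver_message


# Recent message IDs per stream, shared across the 30-user batches of a
# single realm's digest run.
_recent_messages: Dict[int, List[int]] = {}


def recent_messages_for_stream(stream_id: int, cutoff: float) -> List[int]:
    if stream_id not in _recent_messages:
        _recent_messages[stream_id] = fetch_recent_message_ids(stream_id, cutoff)
    return _recent_messages[stream_id]
```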
This is common in cases where the reverse proxy itself is making
health-check requests to the Zulip server; these requests have no
X-Forwarded-* headers, so would normally hit the error case of
"request through the proxy, but no X-Forwarded-Proto header".
Add an additional special-case for when the request's originating IP
address is resolved to be the reverse proxy itself; in these cases,
HTTP requests with no X-Forwarded-Proto are acceptable.
The type annotation for functools.partial uses unchecked Any for all
the function parameters (both early and late). returns.curry.partial
uses a mypy plugin to check the parameters safely.
https://returns.readthedocs.io/en/latest/pages/curry.html
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This is designed to help PostgreSQL have better specificity and
locality in its indexes. Subsequent commits will adjust the code to
make sure that we use these indexes rather than the `realm_id`-less
versions.
We do not add a `realm_id` variation to the full-text index, since
it is a GIN index; multi-column GIN indexes are not terribly
performant, require the `btree_gin` extension for `int` types (which
requires superuser privileges on PostgreSQL 12 and earlier), and
cannot be consistently added concurrently on running instances.
After all indexes have been made, we also run `CREATE STATISTICS` in
order to give PostgreSQL the opportunity to realize that recipient and
sender are highly correlated with message realm, allowing it to
estimate that `(realm_id, recipient_id)` is likely as specific as
matching a given `recipient_id`, instead of as likely as matching
`realm_id` times matching a `recipient_id`. Finally, those statistics
must be filled by `ANALYZE zerver_message`, which is run last.
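In a Django migration, the statistics step might look like this sketch (the statistics name, column pair, and dependency are illustrative):
```python
from django.db import migrations


class Migration(migrations.Migration):
    dependencies = [("zerver", "0452_add_realm_id_indexes")]  # hypothetical

    operations = [
        migrations.RunSQL(
            "CREATE STATISTICS zerver_message_realm_recipient "
            "ON realm_id, recipient_id FROM zerver_message"
        ),
        # Fill the new statistics so the planner can use them right away.
        migrations.RunSQL("ANALYZE zerver_message"),
    ]
```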
Matching the topic exactly, as opposed to case-insensitively, is not a
common operation, and one that we want to make difficult to do
accidentally. Inline the single use case of it.
We now have a `realm_id` on Message; use it, rather than having to
check the sender's realm. This is theoretically different for
cross-realm bots, but these changes are all in tests where that does
not apply.
This algorithm existed in multiple places, with different queries.
Since we only access properties in the UserMessage table, we
standardize on the much simpler and faster Index Only Scan, rather
than a merge join.
When searching for links inside a topic name, the question mark (?)
was used to split the topic. If a URL had a query string
(e.g., "?foo=bar"), then the query was trimmed from the URL.
Removing the question mark from `basic_link_splitter` is sufficient
to fix this issue. The `get_web_link_regex` function then removes
the trailing punctuation if any, including literal question marks.
Fixes #26368.
When there was no space right after `/todo` but there was content on a
new line, the message would be rendered plainly, not as a todo widget.
This was because we split on only the space character to then check if
the first token was a valid widget.
Now we split on both spaces and newlines to extract the widget name,
irrespective of whether it is followed by a space or a newline. This
results in the message being rendered as a todo widget as expected.
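The fix is essentially to split on the first space or newline when extracting the widget token; a simplified sketch:
```python
import re


def get_widget_token(content: str) -> str:
    # Split on the first space *or* newline, so "/todo\nbuy milk"
    # still yields "/todo" as the candidate widget name.
    return re.split(r"[ \n]", content, maxsplit=1)[0]


assert get_widget_token("/todo\nbuy milk") == "/todo"
assert get_widget_token("/todo buy milk") == "/todo"
```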
Rename existing shortened references to demo organizations, like
`is_demo_org` or `demo-org-warning`, that have been used in the
codebase so far, and replace them to match the `models.py`
variable: `Realm.demo_organization_scheduled_deletion_date`.
This ReDoS was not exploitable, as its content is only read from
checked-in files; regardless, simplify it to not backtrack. We also
do not actually have any location which uses leading or trailing
whitespace, so remove those optional bits.
Our logic for extracting strings from templates did not properly
handle the syntax for code containing whitespace control characters,
resulting in a couple of strings from subscribe_to_more_streams.hbs not
being processed.
The Librato webhook requires a mapping (which should be considered
immutable) with a default value. Ruff reports a false-positive due to
the Json wrapper.
Instead of a WildValue, the JSON/Sentry webhooks expect the request body
to be a dict.
For the JSON webhook, json.dumps accepts other types of input as well and
the constraint is not necessary, but this serves as a good example of an
alternative use of WebhookPayload to describe a payload that is intended
to be parsed from the entire request body as JSON, into a type other than
WildValue.
Transifex has parameters that need to be parsed from JSON and converted
to int. Note that we use Optional[Json[int]] instead of
Json[Optional[int]] to replicate the behavior of json_validator. This
caveat is explained in a new test called test_json_optional.
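The distinction, as a sketch: `Json[Optional[int]]` accepts the literal JSON string "null", while with `Optional[Json[int]]`, None can only come from an omitted parameter's default (matching json_validator):
```python
from typing import Optional

from pydantic import Json, TypeAdapter

# Json[Optional[int]] parses the string "null" into None...
assert TypeAdapter(Json[Optional[int]]).validate_python("null") is None

# ...whereas Optional[Json[int]] rejects "null", since it is not a JSON
# integer; None can only arise from an omitted parameter's default.
assert TypeAdapter(Optional[Json[int]]).validate_python("5") == 5
```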
These webhooks have some URL query params that do not need additional
validation or parsing from JSON. So WebhookPayload is not applicable to
these webhooks.
This converts most webhook integration views to use @typed_endpoint instead
of @has_request_variables, rewriting REQ parameters. For these
webhooks, it simply requires switching the decorator, rewriting the
type annotation of payload/message to WebhookPayload[WildValue], and
removing the REQ default that defines the to_wild_value converter.
This function is used by almost all webhooks.
To support it, we use the "api_ignore_parameter" flag so that positional
arguments like topic and body that are not intended to be parsed from
the request can be ignored.
This demonstrates the use of BaseModel to replace a check_dict_only
validator.
We also add support to referring to $defs in the OpenAPI tests. In the
future, we can descend down each object instead of mapping them to dict
for more accurate checks.
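For example, a check_dict_only validator maps onto a model roughly like this sketch (the field names are illustrative):
```python
from pydantic import BaseModel, ConfigDict, ValidationError


class Filter(BaseModel):
    # Equivalent in spirit to check_dict_only: unknown keys are rejected.
    model_config = ConfigDict(extra="forbid")

    stream_id: int
    topic: str


Filter.model_validate({"stream_id": 1, "topic": "lunch"})  # passes
try:
    Filter.model_validate({"stream_id": 1, "topic": "lunch", "extra": 1})
except ValidationError:
    pass  # unknown key "extra" is rejected, as check_dict_only would do
```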
This demonstrates some basic use cases of the Json[...] wrapper with
@typed_endpoint.
Along with this change we extend test_openapi so that schema checking
based on function signatures will still work with this new decorator.
Pydantic's TypeAdapter supports dumping the JSON schema of any given type,
which is leveraged here to validate against our own OpenAPI definitions.
Parts of the implementation will be covered in later commits as we
migrate more functions to use @typed_endpoint.
See also:
https://docs.pydantic.dev/latest/api/type_adapter/#pydantic.type_adapter.TypeAdapter.json_schema
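The mechanism, in miniature:
```python
from typing import List

from pydantic import TypeAdapter

# TypeAdapter can dump the JSON schema for an arbitrary annotation, which
# can then be compared against our OpenAPI parameter definitions.
schema = TypeAdapter(List[int]).json_schema()
assert schema == {"type": "array", "items": {"type": "integer"}}
```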
For the OpenAPI schema, we preprocess it mostly the same way. For the
parameter types though, we no longer need to use
get_standardized_argument_type to normalize type annotation, because
Pydantic dumps a JSON schema that is compliant with OpenAPI schema
already, which makes it a lot more convenient for us to compare the
types with our OpenAPI definitions.
Do note that there are some exceptions where our definitions do not match
the generated one. For example, we use JSON to parse int and bool parameters,
but we don't mark them to use "application/json" in our definitions.
We want to reject ambiguous type annotations that set ApiParamConfig
inside a Union. If a parameter is Optional and has a default of None, we
prefer Annotated[Optional[T], ...] over Optional[Annotated[T, ...]].
This implements a check that detects Optional[Annotated[T, ...]] and
raises an assertion error if ApiParamConfig is in the annotation. It also
checks if the type annotation contains any ApiParamConfig objects that
are ignored, which can happen if the Annotated type is nested inside
another type like List, Union, etc.
Note that because
param: Annotated[Optional[T], ...] = None
and
param: Optional[Annotated[Optional[T], ...]] = None
are equivalent at runtime prior to Python 3.11, there is no way for us
to distinguish the two. So we cannot detect that at runtime.
See also: https://github.com/python/cpython/issues/90353
The goal of typed_endpoint is to replicate most features supported by
has_request_variables, and to improve on top of it. There are some
unresolved issues that we don't plan to work on currently. For example,
typed_endpoint does not support ignored_parameters_supported for 400
responses, and it does not run validators on path-only arguments.
Unlike has_request_variables, typed_endpoint supports error handling by
processing validation errors from Pydantic.
Most features supported by has_request_variables are supported by
typed_endpoint in various ways.
To define a view function, use a syntax like this, with Annotated if
there is any metadata you want to associate with a parameter; do note
that parameters that are not keyword-only are ignored when parsing the
request:
```
@typed_endpoint
def view(
request: HttpRequest,
user_profile: UserProfile,
*,
foo: Annotated[int, ApiParamConfig(path_only=True)],
bar: Json[int],
other: Annotated[
Json[int],
ApiParamConfig(
whence="lorem",
documentation_status=INTENTIONALLY_UNDOCUMENTED
)
] = 10,
) -> HttpResponse:
....
```
There are also some shorthands for the commonly used annotated types,
which are encouraged when applicable for better readability and less
typing:
```
WebhookPayload = Annotated[Json[T], ApiParamConfig(argument_type_is_body=True)]
PathOnly = Annotated[T, ApiParamConfig(path_only=True)]
```
Then the view function above can be rewritten as:
```
@typed_endpoint
def view(
request: HttpRequest,
user_profile: UserProfile,
*,
foo: PathOnly[int],
bar: Json[int],
other: Annotated[
Json[int],
ApiParamConfig(
whence="lorem",
documentation_status=INTENTIONALLY_UNDOCUMENTED
)
] = 10,
) -> HttpResponse:
....
```
There are some intentional restrictions:
- A single parameter cannot have more than one ApiParamConfig
- Path-only parameters cannot have default values
- argument_type_is_body is incompatible with whence
- Arguments named "request", "user_profile", "args", "kwargs",
etc. are ignored by typed_endpoint.
- positional-only arguments are not supported by typed_endpoint. Only
keyword-only parameters are expected to be parsed from the request.
- Pydantic's strict mode is always enabled, because we don't want to
coerce input parsed from JSON into other types unnecessarily.
- Using strict mode all the time also means that we should always use
Json[int] instead of int, because it is only possible for the request
to have data of type str, and a type annotation of int will always
reject such data.
typed_endpoint's handling of ignored_parameters_unsupported is mostly
identical to that of has_request_variables.
_default_manager is the same as objects on most of our models. But
when a model class is stored in a variable, the type system doesn’t
know which model the variable is referring to, so it can’t know that
objects even exists (Django doesn’t add it if the user added a custom
manager of a different name). django-stubs used to incorrectly assume
it exists unconditionally, but it no longer does.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Commit cf0eb46afc added this to let
Django understand the CREATE INDEX CONCURRENTLY statement that had
been hidden in a RunSQL query in migration 0244. However, migration
0245 explained that same index to Django in a different way by setting
db_index=True. Move that to 0244 where the index is actually created,
using SeparateDatabaseAndState.
Also remove the part of the SQL in 0245 that was mirrored by dummy
state_operations, and replace it with real operations.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This is important because the "guests" value isn't one that we'd
expect anyone to pick intentionally, and in particular isn't an
available option for the similar/adjacent "email invitations" setting.
Earlier, whenever a new invitation was created, an event was sent
only to admin users. So, if invites created by a non-admin user
changed, the invite panel did not live-update.
This commit makes changes to also send the event to the non-admin
user whose invites changed.
This commit renames the existing setting `Who can invite users to this
organization` to `Who can send email invitations to new users` and
also renames all the variables related to this setting that do not
require a change to the API.
This was done for better code readability as a new setting
`Who can create invite links` will be added in future commits.
This commit does the backend changes required for adding a realm
setting based on groups permission model and does the API changes
required for the new setting `Who can create multiuse invite link`.
This commit adds an id_field_name field to the GroupPermissionSetting
type, which will be used to store the string formed by concatenating
the setting_name and `_id`.
This was already enforced via separate logic that requires an owner to
invite an owner, but it makes the intent of the code a lot more clear
if we don't have this value mysteriously absent.
Earlier, there was a function to check whether an owner is
required to create invitations for the role specified in the
invite, while the check for administrator was done without
any function call.
This commit adds a new function to check whether an
owner or administrator is required for creating
invitations for the specified role, and
refactors the code to use that new function.
This commit makes the database changes involved in creating
internal_realm happen in a single transaction.
This is needed for deferring the foreign key constraints
to the end of the transaction.
Previously (with ERROR_REPORTING = True), we’d stuff the entire
traceback of the initial exception into the subject line of an error
email, and then also send a separate email for the JSON 500 response.
Instead, log one error with the standard Django format.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Rewrite the test so that we don't have a dedicated URL for testing.
dev_update_subgroups is called directly from the tests without using the
test client.
'test_get_message_payload_gcm_stream_message' verifies the payload
for notifications generated (for stream messages) due to any of the
push notification triggers, including
'NotificationTriggers.STREAM_PUSH'.
Earlier, 'test_get_message_payload_gcm_stream_notifications' tested
the same thing as 'test_get_message_payload_gcm_stream_message' with
the only difference that it included content that was not truncated.
This commit removes the test
'test_get_message_payload_gcm_stream_notifications' and updates
the test 'test_get_message_payload_gcm_stream_message' to cover
both the cases, i.e., truncated as well as not truncated content.
This commit removes the 'alert' field from the payload for
Android via GCM/FCM.
The alert strings generated do not get used at all and have
not been used since at least 2019. On Android, we construct
the notification UI ourselves in the client, and we ignore
the alert string.
Creates a process for demo organization owners to add an email address
and password to their account.
Uses the same flow as changing an email (via user settings) at the
beginning, but then sends a different email template to the user
for the email confirmation process.
We also encourage users to set their full name field in the modal for
adding an email in a demo organization. We disable the submit button
on the form if either input (email or full name) is empty.
When the user clicks the 'confirm and set password' button in the
email sent to confirm the email address sent via the form, their
email is updated via confirm_email_change, but the user is redirected
to the reset password page for their account (instead of the page for
confirming an email change has happened).
Once the user successfully sets a password, then they will be
prompted to log in with their newly configured email and password.
Since an email address is not required to create a demo organization,
we need a Zulip API email address for the web-app to use until the
owner configures an email for their account.
Here, we set the owner's `email_address_visibility` to "Nobody" when
the owner's account is created so that the Zulip API email field in
their profile is a fake email address string.
To make creation of demo organizations feel lightweight for users,
we do not want to require an email address at sign-up. Instead, an
empty string will be used for the new realm owner's email. We
currently implement that for new demo organizations in the
development environment.
Because the user's email address does not exist, we don't enqueue
any of the welcome emails upon account/realm creation, and we
don't create/send new login emails.
This is a part of #19523.
Co-authored-by: Tim Abbott <tabbott@zulip.com>
Co-authored-by: Lauryn Menard <lauryn@zulip.com>
Updates the API error response when there is an unknown or
deactivated user in the `principals` parameter for either the
`/api/subscribe` or `/api/unsubscribe` endpoints. We now use
the `access_user_by_email` and `access_user_by_id` code paths,
which return an HTTP response of 400 and a "BAD_REQUEST" code.
Previously, an HTTP response of 403 was returned with a special
"UNAUTHORIZED_PRINCIPAL" code in the error response. This code
was not documented in the API documentation and is removed as
a potential JsonableError code with these changes.
Fixes #26593.
Updates API changelog entries for feature level 205 for minor
revisions and the addition of help center links. Also, revises
the Changes notes for the stream creation and deletion events
for the same feature level.
The 'startinline' option is utilized in the `CodeHilite` instantiation
to indicate that the provided PHP code snippet should be highlighted
even if it doesn't start with the opening tag or marker of the
associated programming language (which will rarely be the case in
Zulip, since one discusses a section of a file much more often than a
whole file).
Fixes: https://chat.zulip.org/#narrow/stream/137-feedback/topic/php.20syntax.20highlighting.20should.20not.20require.20.60.3C.3Fphp.60.
Signed-off-by: Akshat <akshat25iiit@gmail.com>
This commit adds a test to verify the payload
'get_message_payload_apns' returns when the notification trigger is
'NotificationTriggers.FOLLOWED_TOPIC_PUSH'.
This commit updates the 'get_apns_alert_subtitle' function to
return a common subtitle, i.e., "{full_name} mentioned everyone:"
for wildcard mentions.
The triggers for the stream or topic wildcard mentions include:
* NotificationTriggers.TOPIC_WILDCARD_MENTION_IN_FOLLOWED_TOPIC
* NotificationTriggers.STREAM_WILDCARD_MENTION_IN_FOLLOWED_TOPIC
* NotificationTriggers.TOPIC_WILDCARD_MENTION
* NotificationTriggers.STREAM_WILDCARD_MENTION
This PR implements the audio call feature for Zoom. This is done by explicitly
telling Zoom to create a meeting where the host's video and participants' video
are off by default.
Another key change is that when creating a video call, the host's and
participants' video will be on by default. The old code doesn't specify that
setting, so meetings actually start with video being off. This new behavior has
less work for users to do. They don't have to turn on video when joining a call
advertised as "video call". It still respects users' preferences because they
can still configure their own personal setting that overrides the meeting
defaults.
The Zoom API documentation can be found at
https://developers.zoom.us/docs/api/rest/reference/zoom-api/methods/#operation/meetingCreate
Fixes #26549.
Adds a Changes note for when `other_user_id` was added to the `pms`
object.
Changes a few uses of "you" to be "current user" instead.
Clarifies type of direct message (one-on-one or group) and that
messages are unread messages.
Fixes the field in both the pms and huddles objects to be correctly
documented as `unread_message_ids`, instead of `message_ids`.
The documentation of the similar field in the stream object of
`unread_msgs` was corrected in commit 27ddb554fb.
We now send stream creation and stream deletion events on
changing a user's role because a user can gain or lose
access to some streams on changing their role.
This commit extracts the code which queries the required streams
to a new function "get_user_streams". The new function returns
a list of "Stream" objects and not dictionaries; the
do_get_streams function then converts it into a list of dictionaries.
This change is important because we will use the new function
in a further commit where we want a list of "Stream" objects and
not a list of dictionaries.
There was a bug in the apply_event code where only a stream which
is not private was added to the "never_subscribed" data after
a stream creation event. Instead, it should be added to the
"never_subscribed" data irrespective of the permission policy of
the stream as we already send stream creation events only to
those users who can access the stream. Due to the current
bug, private streams were not being added to "never_subscribed"
data in apply_event for admins as well. This commit fixes it
and also makes sure the "never_subscribed" list is sorted,
which was not done before and was also a bug.
The bugs mentioned above went unnoticed as the tests did not
cover these cases; this commit also adds tests for those cases.
The "streams" field in "/register" response did not include web-public
streams for non-admin users but the data for those are eventually
included in the subscriptions data sent using "subscriptions",
"unsubscribed" and "never_subscribed" fields.
This commit adds code to include the web-public streams in the
"streams" field as well, since everyone can access those; this makes
the "streams" data complete.
The name and docstring were just wrong: having a UserMessage row isn't
sufficient for having message access, and is actually only relevant in a
private stream with private history. The function is only used in a
single place anyway, in bulk_access_messages.
The comment mentioning this function in handle_remove_push_notification
can be tweaked to just not mention any function specifically and just
say why we're not checking message access.
Users who used to be subscribed to a private stream and have been
removed from it since retain the ability to edit messages/topics, and
delete messages that they used to have access to, if other relevant
organization permissions allow these actions. For example, a user may be
able to edit or delete their old messages they posted in such a private
stream. An administrator will be able to delete old messages (that they
had access to) from the private stream.
We fix this by updating the logic in has_message_access (which lies at the
core of our message access checks - access_message() and
bulk_access_messages())
to not rely on only a UserMessage row for checking access but also
verify stream type and subscription status.
**Background**
User groups are expected to comply with the DAG constraint for the
many-to-many inter-group membership. The check for this constraint has
to be performed recursively so that we can find all direct and indirect
subgroups of the user group to be added.
This kind of check is vulnerable to phantom reads which is possible at
the default read committed isolation level because we cannot guarantee
that the check is still valid when we are adding the subgroups to the
user group.
**Solution**
To avoid having another transaction concurrently update one of the
to-be-subgroup after the recursive check is done, and before the subgroup
is added, we use SELECT FOR UPDATE to lock the user group rows.
The lock needs to be acquired before any check has been conducted,
whenever a group membership change is about to occur.
Suppose that we are adding subgroup B to supergroup A, the locking protocol
is specified as follows:
1. Acquire a lock for B and all its direct and indirect subgroups.
2. Acquire a lock for A.
For the removal of user groups, we acquire a lock for the user group to
be removed with all its direct and indirect subgroups. This is the special
case A=B, which is still compliant with the protocol.
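A sketch of acquiring the locks in that order (illustrative; the exact signature of get_recursive_subgroups_for_groups, described under the security notes below, is assumed):
```python
from django.db import transaction

from zerver.lib.user_groups import get_recursive_subgroups_for_groups
from zerver.models import UserGroup


def lock_for_subgroup_addition(subgroup_id: int, supergroup_id: int) -> None:
    with transaction.atomic():
        # Step 1: lock B and all of its direct and indirect subgroups,
        # reusing the recursive CTE that the DAG check performs.
        list(get_recursive_subgroups_for_groups([subgroup_id]).select_for_update())
        # Step 2: lock A itself.
        UserGroup.objects.select_for_update().get(id=supergroup_id)
        # ...run the DAG check and apply the membership change here...
```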
**Error handling**
We currently rely on Postgres' deadlock detection to abort transactions
and show an error for the users. In the future, we might need some
recovery mechanism or at least better error handling.
**Notes**
An important note is that we need to reuse the recursive CTE query that
finds the direct and indirect subgroups when applying the lock on the
rows. And the lock needs to be acquired the same way for the addition and
removal of direct subgroups.
User membership change (as opposed to user group membership) is not
affected. Read-only queries aren't either. The locks only protect
critical regions where the user group dependency graph might violate
the DAG constraint, where users are not participating.
**Testing**
We implement a transaction test case targeting some typical scenarios
when an internal server error is expected to happen (this means that the
user group view makes the correct decision to abort the transaction when
something goes wrong with locks).
To achieve this, we add a development view intended only for unit tests.
It has a global BARRIER that can be shared across threads, so that we
can synchronize them to consistently reproduce certain potential race
conditions prevented by the database locks.
The transaction test case launches pairs of threads initiating possibly
conflicting requests at the same time. The tests are set up such that
exactly N of them are expected to succeed, with the rest failing with a
certain error message (though we don't know in advance which ones).
**Security notes**
get_recursive_subgroups_for_groups will no longer fetch user groups from
other realms. As a result, trying to add/remove a subgroup from another
realm results in a UserGroup not found error response.
We also implement subgroup-specific checks in has_user_group_access to
keep permission managing in a single place. Do note that the API
currently doesn't have a way to violate that check, because we are only
checking the realm ID now.
We want to make the callers be more explicit about the use of the
user group being accessed, so that the database lock implemented
later can benefit from the visibility.
Updates the main description of the `api/set-typing-status` endpoint
for the new fields in the register response for the typing start,
stop, and expired time intervals. Previously these were hardcoded
by client-side code rather than provided by the server.
Also updates the developer documentation for typing indicators in
the subsystems docs. This refreshes a few parts of that doc that
were already out of date, as well as adds the information about
the new register response fields noted above.
Adds typing notification constants to the response given by
`POST /register`. Until now, these were hardcoded by clients
based on the documentation for implementing typing notifications
in the main endpoint description for `api/set-typing-status`.
This change also reflects updating the web-app frontend code
to use the new constants from the register response.
Co-authored-by: Samuel Kabuya <samuel.mwangikabuya@kibo.school>
Co-authored-by: Wilhelmina Asante <wilhelmina.asante@kibo.school>
Fixes #11767.
Previously multi-character emoji sequences weren't matched in the
emoji regex, so we'd convert the characters to separate images,
breaking the intended display.
This change allows us to match the full emoji sequence, and
therefore show the correct image.
We have modified the code to directly fetch the realm from the Message
object instead of the "sender" field, and thus we no longer need to
fetch "sender__realm" using select_related.
There is no need to get the realm via the sender, as the
ScheduledMessage object also has a realm field.
There is no direct benefit to this change, but it is nice to
maintain the pattern which we want to follow in the code
in tests as well.
We do not want to access realm from "sender" field so that
we do not need to pass "sender__realm" argument to
select_related call when querying messages. We can instead
pass realm as argument to wildcard_mention_allowed.
We can directly get the realm object from the Message object now,
and there is no need to get the realm object from the "sender"
field of the Message object.
After this change, we no longer need to fetch the "sender__realm"
field using "select_related"; instead, passing only "realm"
to select_related when querying Message objects is enough.
This commit also updates a couple of cases to directly access
realm ID from the message object and not message.sender. We have
already fetched the sender object, so accessing realm_id from the
message directly or from message.sender should not matter, but it is
good to be consistent and directly get the realm from the Message
object whenever possible.
We do not set the realm on Message objects defined for markdown tests,
and this works because we currently access the realm from the sender
object. This commit changes the code to set the realm in Message
objects, as we will be accessing the realm from the Message object
directly in further commits.
Previously, if a user tried to create a webhook using the Webhooks
plugin in Sentry and used the "Test plugin" to test the webhook,
the server would send a 500 error, even though the integration
worked perfectly. This led users to believe that the integration
was not working.
Fixes #26173.
We set the stream_weekly_traffic field to "null" for Subscription
objects in the zephyr mirror realm, as we do not need stream traffic
data there. This makes the subscription data consistent with the
streams data.
This commit also updates a test to check never_subscribed data for
the zephyr mirror realm.
Instead of having a "realm.is_zephyr_mirror_realm" check for every
get_streams_traffic call, this commit updates get_streams_traffic to
accept realm as a parameter and return "None" for zephyr mirror realms.
Almost all users of JsonSuccessBase seem to also include
SuccessDescription. /server_settings used a different description from
the rest of the JsonSuccessBase users, but the difference is small
enough that using the generic description of the former
SuccessDescription is fine.
Migrates existing ScheduledEmails for onboarding emails that have
either "zerver/emails/followup_day1" or "zerver/emails/followup_day2"
as the email template prefix to instead use the new template
prefixes "zerver/emails/account_registered" and
"zerver/emails/zulip_onboarding_topics".
Now that we're using the new templates for the onboarding emails,
remove "followup_day1" and "followup_day2" from the EMAIL_TYPES
that are used for scheduled emails.
The "followup_day2" email template name is not clear or descriptive
about the purpose of the email. Creates a duplicate of those email
template files with the template name "zulip_onboarding_topics".
Because any existing scheduled emails that use the "followup_day2"
templates will need to be updated before the current templates can
be removed, we don't do a simple file rename here.
The "followup_day1" email template name is not clear or descriptive
about the purpose of the email. Creates a duplicate of those email
template files with the template name "account_registered".
Because any existing scheduled emails that use the "followup_day1"
templates will need to be updated before the current templates can
be removed, we don't do a simple file rename here.
In servers with `application_server.http_only = true` and
`loadbalancer.ips` set, the DetectProxyMisconfiguration middleware
prevents access over HTTP from IP addresses other than the
loadbalancer.
However, this misses the case of access from localhost over HTTP,
which is safe and expected -- for instance, the `email-mirror-postfix`
script used in the email gateway[^1] will post to `http://localhost/`
by default in such configurations. With the
DetectProxyMisconfiguration installed, this will result in a 403
response.
Exempt requests from `127.0.0.1` and `::1` from
proxy-misconfiguration rejections.
[^1]: https://zulip.readthedocs.io/en/latest/production/email-gateway.html
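The exemption amounts to a small additional check in the middleware, along these lines (a sketch):
```python
def is_local_request(remote_addr: str) -> bool:
    # Requests originating on the host itself (e.g. email-mirror-postfix
    # posting to http://localhost/) are safe without X-Forwarded-Proto.
    return remote_addr in ("127.0.0.1", "::1")
```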
Removes a response example in the `POST users/me/subscriptions`
documentation that was listed as a 400 error response. It is
actually a variation on the success response for this endpoint.
The current rendering of our API documentation is not set up to
support `"anyOf"` which would allow for validating examples that
match multiple response schemas.
This migration applies under the assumption that extra_data_json has
been populated for all existing and coming audit log entries.
- This removes the manual conversions back and forth for extra_data
throughout the codebase including the orjson.loads(), orjson.dumps(),
and str() calls.
- The custom handler used for converting Decimal is removed since
DjangoJSONEncoder handles that for extra_data.
- We remove None-checks for extra_data because it is now no longer
nullable.
- Meanwhile, we want the bouncer to support processing RealmAuditLog entries for
remote servers before and after the JSONField migration on extra_data.
- Since extra_data should now always be a dict for a newer (migrated)
remote server, the test cases are updated to create RealmAuditLog
objects by passing a dict for extra_data before sending over the
analytics data. Note that while JSONField allows for non-dict values,
a proper remote server always passes a dict for extra_data.
- We still test out the legacy extra_data format because not all
remote servers have migrated to use JSONField extra_data.
This verifies that support for extra_data being a string or None has not
been dropped.
Co-authored-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This commit removes the private stream subscriptions of the bot if the
original owner is deactivated and we change the owner to the user who
is reactivating the bot. We unsubscribe the bot from private streams
that the new owner is not subscribed to.
Fixes part of #21700.
We remove bot's subscriptions for private streams to which the
new owner is not subscribed and keep the ones to which the new
owner is subscribed on changing owner.
This commit also changes the code for sending subscription
remove events to use transaction.on_commit, since we call
the function inside a transaction in do_change_bot_owner; this
also requires some changes to tests in test_events.
Fixes the `/api/register-queue` endpoint documentation so that
`realm_emoji` has the correct type: an object containing objects.
By correcting the API documentation, we also fix an error in the
test for the events system, which had been relying on the API
documentation having a list as a possible type for `realm_emoji`
in the register response.
Earlier, for topic wildcard mentions, the 'wildcard_mentioned'
flag was set for all the user-messages. (similar to stream wildcard
mention).
The flag should be set for the topic participants only.
The bug was introduced in 4c9d26c.
For topic wildcard mentions, the 'wildcard_mentioned' flag is set
for those user messages having 'user_profile_id' in
'topic_participant_user_ids', i.e. all topic participants.
Earlier, the flag was set if the 'user_profile_id' exists in
'all_topic_wildcard_mention_user_ids'.
'all_topic_wildcard_mention_user_ids' contains the ids of those
users who are topic participants and have enabled notifications
for '@topic' mentions.
The earlier approach was incorrect, as it would set the
'wildcard_mentioned' flag only for those topic participants
who have enabled the notifications for '@topic' mention instead
of setting the flag for all the topic participants.
The bug was introduced in 4c9d26c.
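Illustratively, the corrected flag computation looks something like
this (variable names are assumptions based on the description above):
```python3
# Inside the per-user-message flag computation, for a message
# containing a topic wildcard mention:
wildcard_mentioned = user_profile_id in topic_participant_user_ids
# ...rather than the buggy check against the narrower set:
# wildcard_mentioned = user_profile_id in all_topic_wildcard_mention_user_ids
```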
This commit addresses the issue where the topic highlighting
in search results was offset by one character when an
apostrophe was present. The problem stemmed from a disparity
in HTML escaping: `func.escape_html`, which is used to obtain
`topic_matches`, escapes apostrophes (') differently from
`django.utils.html.escape`.
func.escape_html | django.utils.html.escape
-----------------+--------------------------
     &#39;       |         &#x27;
To fix this, the SQL query is changed to return the HTML-escaped
topic name generated by the function `func.escape_html`.
Fixes: #25633.
Currently, we display the "Complete the organization profile"
banner immediately after the organization is created. It's important to
strongly encourage orgs to configure their profile, so we should delay
showing the banner until the profile has gone unconfigured for 15 days.
This also allows users to check out Zulip and see how it works before
configuring the organization settings.
Fixes: #24122.
Previously, test_openapi_arguments had assumed that an endpoint
not using rest_dispatch used the GET method for the request. This
was not the case for the "/fetch_api_key" and "/dev_fetch_api_key"
endpoints, which is why those endpoints were marked as pending
even though they were documented in `zerver/openapi/zulip.yaml`.
Updates test_openapi_arguments to check a set of endpoints that
are documented and don't use the GET method so that these endpoints
can be tested and removed from the pending_endpoints set.
This adds API support to reorder linkifiers and makes sure that the
returned lists of linkifiers from `GET /events`, `POST /register`, and
`GET /realm/linkifiers` are always sorted in the order in which they
should be processed when rendering linkifiers.
We set the new `order` field to the ID with the migration. This
preserves the order of the existing linkifiers.
Newly added linkifiers are always ordered last. When reordering,
the `order` field of all linkifiers in the same realm is updated, in
a manner similar to how we implement ordering for
`custom_profile_fields`.
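The migration step can be as simple as the following sketch
(`RealmFilter` is the model backing linkifiers; the exact migration
code may differ):
```python3
from django.db.models import F

def backfill_linkifier_order(apps, schema_editor):
    RealmFilter = apps.get_model("zerver", "RealmFilter")
    # Seed the new field from the ID, preserving the existing order.
    RealmFilter.objects.all().update(order=F("id"))
```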
The curl examples of reordering linkifiers require there to be some
linkifiers in the database to be reordered. This adjusts some test cases
so they do not assume that there is no linkifier in the test db.
Each unittest subTest can fail without interrupting the other subTests.
By wrapping the test for each view function, we can get all validation
errors at once, which can be useful if multiple endpoints are updated.
More importantly, if the test fails anywhere inside test_openapi but
before the formatted output is printed, we will not lose the information
of which view function fails the validation. Because we attach the name
of the function to the subTest:
```
FAIL: test_openapi_arguments (zerver.tests.test_openapi.OpenAPIArgumentsTest) [zerver.views.alert_words.add_alert_words]
```
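Roughly, the wrapping looks like this sketch (the validation helper
name is hypothetical):
```python3
for view_function in views_to_check:
    with self.subTest(f"{view_function.__module__}.{view_function.__name__}"):
        # Each view's validation failures are reported independently,
        # without aborting the remaining subTests.
        self.validate_view_parameters(view_function)  # hypothetical
```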
The number of affected objects may be quite high; they are
selected by an `id IN (...)` query, and updated with a giant `CASE`.
This turns out to be quadratic, and can cause large queries to take
hours, in a state where they cannot be terminated, when PostgreSQL >11
tries to JIT the query.
Set a batch_size as a stopgap performance fix before moving to
`.update()` as a real fix.
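Illustratively (model, field, and batch size here are placeholders):
```python3
# Capping the batch keeps each generated UPDATE ... CASE statement
# small enough that PostgreSQL's JIT does not choke on it.
ArchivedUserMessage.objects.bulk_update(rows, ["flags"], batch_size=1000)
```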
We have historically cached two types of values
on a per-request basis inside of memory:
* linkifiers
* display recipients
Both of these caches were hand-written, and they
both actually cache values that are also in memcached,
so the per-request cache essentially only saves us
from a few memcached hits.
I think the linkifier per-request cache is a necessary
evil. It's an important part of message rendering, and
it's not super easy to structure the code to just get
a single value up front and pass it down the stack.
I'm not so sure we even need the display recipient
per-request cache any more, as we are generally pretty
smart now about hydrating recipient data in terms of
how the code is organized. But I haven't done thorough
research on that hypothesis.
Fortunately, it's not rocket science to just write
a glorified memoize decorator and tie it into key
places in the code:
* middleware
* tests (e.g. asserting db counts)
* queue processors
That's what I did in this commit.
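A simplified sketch of such a decorator (the real implementation
differs in details such as key management):
```python3
from functools import wraps

PER_REQUEST_CACHE: dict = {}

def per_request_memoize(f):
    @wraps(f)
    def wrapper(*args):
        key = (f.__module__, f.__name__, args)
        if key not in PER_REQUEST_CACHE:
            PER_REQUEST_CACHE[key] = f(*args)
        return PER_REQUEST_CACHE[key]
    return wrapper

def flush_per_request_caches():
    # Hooked into middleware, test setup, and queue processors.
    PER_REQUEST_CACHE.clear()
```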
This commit definitely reduces the amount of code
to maintain. I think it also gets us closer to
possibly phasing out this whole technique, but that
effort is beyond the scope of this PR. We could
add some instrumentation to the decorator to see
how often we get a non-trivial number of saved
round trips to memcached.
Note that when we flush linkifiers, we just use
a big hammer and flush the entire per-request
cache for linkifiers, since there is only ever
one realm in the cache.
We want to phase out the use of get_display_recipient
for streams, and this is the last place that I
eliminate it. The next commit will eliminate the
dead code and make mypy types tighter.
This change will make push notifications slightly
slower in some situations, but we avoid all the
complexity of a cache, and this code tends to run
offline.
We could always make this code a bit more efficient
by being a little smarter about what data we fetch
up front. For example, get_apns_alert_title gets
called by a function that already has the stream
name. It's just a bit of a pain to refactor when
you have all the DM codepath mucked up with the
stream codepath.
We generally want to avoid extra moving parts when we
stringify objects. We also want to phase out the use
of get_display_recipient for streams.
Note that we still hit get_display_recipient to
stringify DM and huddle objects, and it's kind of ugly
how we do it, but that's outside the scope of my
current PR.
There's no need for the complexity and extra round
trips to call get_display_recipient in a testing
context.
We also eliminate the unnecessary call to check_string.
This function is poorly named, but that's a sweep
for another day.
The get_display_recipient helper is a clumsy way to get
stream names, and it's not even representative of how
most of our code retrieves stream names.
The new helper also double-checks that the Stream
object has the correct recipient id.
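A sketch of what that double-check amounts to (the helper name is an
assumption):
```python3
def get_stream_name(stream, recipient_id):
    # Guard against being handed a Stream that doesn't actually back
    # this recipient before trusting its name.
    assert stream.recipient_id == recipient_id
    return stream.name
```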
We no longer have to reason about the 12 possible
ways of invoking get_narrow_url. We also avoid
double computation in a couple places.
Finally, we get stricter type checks by just inlining
the calls.
We don't need to call get_display_recipient for
non-stream messages.
I will rename display_recipient in the next commit;
if I were to combine the steps the diff would be too
hard to read.
This commit renames the keyword 'pm' to 'dm' in the
'pm_mention_email_disabled_user_ids' and
'pm_mention_push_disabled_user_ids' attributes of the
'RecipientInfoResult' dataclass.
'pm' and 'dm' are the acronyms for 'private message' and
'direct message' respectively.
It includes 'TODO/compatibility' code to support the old format
fields in the tornado queues during the Zulip server upgrades.
This commit renames the 'PRIVATE_MESSAGE' attribute of the
'NotificationTriggers' class to 'DIRECT_MESSAGE'.
Custom migration to update the existing value in the database.
It includes 'TODO/compatibility' code to support the old
notification trigger value 'private_message' in the
push notification queue during the Zulip server upgrades.
Earlier, 'private_message' was one of the possible values for the
'trigger' property of the '[`POST /zulip-outgoing-webhook`]' response;
update the docs to reflect the change in the above-mentioned trigger
value.
This commit adds 'TODO/compatibility' code to support the
old notification trigger values in the push notification queue
during the Zulip server upgrades.
In f4fa82e, we renamed the following notification triggers:
* 'wildcard_mentioned' to 'stream_wildcard_mentioned'
* 'followed_topic_wildcard_mentioned' to
'stream_wildcard_mentioned_in_followed_topic'.
This should have been added in f4fa82e.
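The compatibility shim amounts to normalizing old trigger names as
events are consumed, along these lines (sketch):
```python3
# TODO/compatibility: events enqueued by a pre-rename server may still
# carry the old trigger names during an upgrade.
OLD_TRIGGER_NAMES = {
    "wildcard_mentioned": "stream_wildcard_mentioned",
    "followed_topic_wildcard_mentioned": "stream_wildcard_mentioned_in_followed_topic",
}

def normalize_trigger(trigger: str) -> str:
    return OLD_TRIGGER_NAMES.get(trigger, trigger)
```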
This commit adds code to pass all the required arguments to
select_related call for Message objects such that only the
required related fields are fetched from the database.
Previously, we did not pass any arguments to select_related,
so all the directly and indirectly related fields were fetched
when many of them were actually not being used and made the
query unnecessarily complex.
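Illustratively (the exact field list varies by call site):
```python3
# Fetch only the related rows this codepath actually reads, instead of
# the entire transitive relation graph.
message = Message.objects.select_related(
    "sender", "sending_client", "recipient"
).get(id=message_id)
```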
This commit adds code to pass all the required arguments to
select_related call for Subscription objects such that only
the required related fields are fetched from the database.
Previously, we did not pass any arguments to select_related,
so all the directly and indirectly related fields were fetched
when many of them were actually not being used and made the
query unnecessarily complex.
This commit adds code to pass all the required arguments to
select_related call for MissedMessageEmailAddress such that
only the required related fields are fetched from the database.
Previously, we did not pass any arguments to select_related,
so all the directly and indirectly related fields were fetched
when many of them were actually not being used and made the
query unnecessarily complex.
This commit updates the code to pass "realm" and "recipient" as
arguments to select_related call in get_stream_by_id_in_realm.
Previously, since there were no arguments, it fetched
can_remove_subscribers_group and the related fields of the
"Realm" model as well, which were not being used, but
did not fetch "recipient", as it is a nullable field.
We only need the ID of the recipient and not the full object, so we
directly access the ID using "stream.recipient_id" instead of using
the complete recipient object.
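That is, the access stays on the stored foreign-key column itself:
```python3
stream = get_stream_by_id_in_realm(stream_id, realm)
# Reads the stored FK column; does not load a Recipient instance.
recipient_id = stream.recipient_id
```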
This commit removes get_huddle_recipient function and we now use
get_or_create_huddle in get_recipient_from_user_profiles.
As a result of this change, we do not fetch the recipient from
the Huddle object but instead build it using the "id" and
"recipient_id" fields available on the Huddle object, like we do
for a personal message. This change allows us to not fetch the
recipient object using select_related when querying the Huddle object.
We now fetch the recipient object when querying the "Huddle" object
in get_or_create_huddle_backend, as this query is eventually used
only to get the recipient object, in get_huddle_recipient.
This commit also updates the select_related call in the code to
populate Huddle objects in cache to pass "Recipient" as argument.
Previously, no argument was passed to select_related, and since
Huddle has no non-null related fields, no related objects were
being fetched.
- Adds instructions block and relative link to Starred messages.
- Adds "Toggle starred messages counter" subheading.
- Adds "Searching for messages" as a related article.
Makes a few updates to the text to match current API documentation
styles.
Updates the endpoint example to have accurate stream objects that
are returned in the response.
Fixes formatting of a link in the feature level 199 changelog entry.
Updates stream object examples for the `stream_weekly_traffic`
field added in feature level 199.
This allows us to not have to keep extending the tool for every
one-off use case and set of users; we build a pipeline to generate the
appropriate JSON file, write a template which uses the data it
provides, and run the tool with them together.
The set of objects in the `users` object can be very large (in some
cases, literally every object in the database) and making them into a
giant `id in (...)` to handle the one tiny corner case which we never
use is silly.
Switch the `--users` codepath to returning a QuerySet as well, so it
can be composed. We pass a QuerySet into send_custom_email as well,
so it can ensure that the realm is included via `select_related`, no
matter how the QuerySet was generated.
Substituting the rendered body via Jinja2 means that it cannot
perform any interpolation itself. While the string replacement is
hacky, it is the only solution which avoids running Jinja2 more than
once, and also allows the user-supplied content to have Jinja2
substitutions in it.
In this commit, we introduce a new option in the stream creation
UI - a 'Default stream for new users' checkbox. By default, the
checkbox is set to 'off' and is only visible to admins. This
allows admins to easily designate a stream as the default stream
for new users during stream creation.
Fixes #24048.
This commit adds a 'Default stream for new users' checkbox in
the stream editing UI to allow admins to easily add or remove
a stream as the default stream for new users. Previously, this
functionality required navigating to a separate menu.
Fixes a part of #24048.
Adds a test for when a value for a user's custom profile field is
removed and not set to a new value. The omission of this event in
the tests was noted as a possibility in #22103, which updated the
API documentation for these events having `null` for the field
value.
Adding the test revealed that the events logic was not
deleting the field from the user object but instead setting it to
`None`, so this fixes that logic as well. There was a similar bug fixed
in commit 96c61a1a41 for when custom profile fields are removed
from a realm.
When applying realm_user update events, some of the event fields
for the person object were being updated to the same value in a
loop. Unnests those calls from the loop over the existing fields
so that they are only updated once.
The original nesting was introduced in commit 649fccde6b and
was expanded in other additions to the logic for these events.
This commit explicitly sets the following user settings:
* 'enable_followed_topic_email_notifications'
* 'enable_followed_topic_push_notifications'
to True.
Collectively, this improves the readability of the test and
the following two tests.
In 'test_copy_default_settings_from_another_user', we verify that
'cordelia' and 'iago' have the same values for their user settings,
but 'hamlet' has the defaults.
Earlier, we explicitly set the 'color_scheme' setting for 'hamlet' as
'UserProfile.COLOR_SCHEME_NIGHT', which is not needed.
As we verify, 'hamlet' should have the defaults.
So just verifying if the 'color_scheme' setting for 'hamlet' is
'UserProfile.COLOR_SCHEME_AUTOMATIC' (default) fulfils our purpose.
The extra line of code was introduced in b10f156.
This commit adds code to pass stream traffic data using
the "stream_weekly_traffic" field in stream objects.
We already include the traffic data in Subscription objects,
but the traffic data does not depend on the user to stream
relationship and is stream-only information, so it's better
to include it in Stream objects. We may remove the traffic
data and other stream information fields from Subscription
objects in the future.
This will help clients correctly display stream traffic data
in the case where a client receives a stream creation event but
no subscription event, for an already existing stream which the
user did not previously have access to.
This commit improves the description for stream_weekly_traffic
field in API documentation to make it clear to the readers about
how to interpret the value.
This commit changes the code to not use the get_client_data
function and instead use the `stream_to_dict` function to
get the stream data in dictionary form. This is a
prep commit to add stream traffic data to Stream objects.
This commit adds a stream_to_dict method, which is the same as
the Stream.to_dict method for now. This is a prep commit
to include stream traffic data in stream objects.
Earlier, while changing group-level group-based settings,
there was no check whether the new value for the setting was
the same as the current value.
This commit adds that check: a setting value is now changed
only when it is not equal to the present value.
Previously, this code:
```python3
old_archived_attachments = ArchivedAttachment.objects.annotate(
has_other_messages=Exists(
Attachment.objects.filter(id=OuterRef("id"))
.exclude(messages=None)
.exclude(scheduled_messages=None)
)
).filter(messages=None, create_time__lt=delta_weeks_ago, has_other_messages=False)
```
...protected from removal any ArchivedAttachment objects where there
was an Attachment which had _both_ a message _and_ a scheduled
message, instead of _either_ a message _or_ a scheduled message.
Since files are removed from disk when the ArchivedAttachment rows are
deleted, this meant that if an upload was referenced in two messages,
and one was deleted, the file was permanently deleted when the
ArchivedMessage and ArchivedAttachment were cleaned up, despite being
still referenced in live Messages and Attachments.
Switch from `.exclude(messages=None).exclude(scheduled_messages=None)`
to `.exclude(messages=None, scheduled_messages=None)` which "OR"s
those conditions appropriately.
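With that change applied, the query reads:
```python3
old_archived_attachments = ArchivedAttachment.objects.annotate(
    has_other_messages=Exists(
        Attachment.objects.filter(id=OuterRef("id")).exclude(
            messages=None, scheduled_messages=None
        )
    )
).filter(messages=None, create_time__lt=delta_weeks_ago, has_other_messages=False)
```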
Pull the relevant test into its own file, and expand it significantly
to cover this, and other, corner cases.
I move the helper user_ids_to_users to the only
place that it's used, and then I simplify it to
do a direct database query.
These endpoints aren't hit often enough to justify
caching complexity, and for really large user groups,
hitting the cache can actually be counterproductive.
Particularly when you add new users to an existing
group, the bulk of the cost is sending out
notification messages to users.
The only change to the test is that I added an
assertion on the query count.
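A sketch of the simplified helper (the error handling shown is
illustrative):
```python3
def user_ids_to_users(user_ids, realm):
    # One direct query, rather than a round trip through the user cache.
    user_profiles = list(UserProfile.objects.filter(id__in=user_ids, realm=realm))
    if len(user_profiles) != len(set(user_ids)):
        raise JsonableError(_("Invalid user ID"))  # illustrative message
    return user_profiles
```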
The most expensive thing for adding user groups is sending
all the notification messages, but we at least want to make
sure that the basic stuff runs in constant time.
The cross-realm bots rarely change, and there are only
a few of them, so we just query them all at once and
put them in the cache.
Also, we put the dictionaries in the cache, instead of
the user objects, since there is nothing time-sensitive
about the dictionaries, and they are small. This saves
us a little time computing the avatar url and things
like that, not to mention marshalling costs.
This commit also fixes a theoretical bug where we would
have stale cache entries if somebody somehow modified
the cross-realm bots without bumping KEY_PREFIX.
Internally we no longer pre-fetch the realm objects for
the bots, but we don't get overly precise about picking
individual fields from UserProfile, since we rarely hit
the database and since we don't store raw ORM objects
in the cache.
The test diffs make it look like we are hitting the
cache an extra time, but the tests weren't counting
bulk fetches. Now we only use a single key for all
bots rather than a key per bot.
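A sketch of the coarser caching shape (the names and caching API here
are illustrative; Zulip's cache helpers differ in detail):
```python3
from django.conf import settings
from django.core.cache import cache

def get_cross_realm_bot_dicts():
    result = cache.get("cross_realm_bot_dicts")
    if result is None:
        bots = UserProfile.objects.filter(
            email__in=settings.CROSS_REALM_BOT_EMAILS
        )
        # Cache small, time-insensitive dicts (avatar URL precomputed),
        # not raw ORM objects -- and all under one key.
        result = [user_profile_to_dict(bot) for bot in bots]  # hypothetical
        cache.set("cross_realm_bot_dicts", result)
    return result
```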
The bulk_get_users() function was only being used to
get cross-realm bots.
It appears that it was introduced in
f02e5b90f6 for that
specific use case.
Now we make the function more specific and test it more
accurately.
We also eliminate a lot of janky code and comments,
including some code that never had test coverage.
Incidentally, it appears that we did not have any code
to invalidate the cache keys here, and that is still
the case. In practice I assume people rarely
re-configure their cross-realm bots unless they are
upgrading the server, and then KEY_PREFIX comes into
play. 25fd4c5508 seems
to have caused that hopefully harmless regression.
A further step will be to make this cache more coarse,
since there are only a few cross-realm bots. The next
commit will hopefully simplify the code and address the
validation pitfall.
Earlier, the API endpoints related to user_group accepted and
returned a field `can_mention_group_id`, which represents the ID
of the user_group whose members can mention the group.
This commit renames this field to `can_mention_group`.
Earlier, the API endpoints related to streams accepted and
returned a field `can_remove_subscribers_group_id`, which represents
the ID of the user_group whose members can remove subscribers from
the stream.
This commit renames this field to `can_remove_subscribers_group`.
An exception which escapes from this loop can kill the background
worker thread; this results in consuming the queue (leading to the
illusion of progress) but more and more rows silently piling up in the
ScheduledMessageNotificationEmail table.
Wrap the inside of the `while True` loop in a try/except to make sure
that no exceptions escape and kill the background thread. To prevent
even more indentation, the inner loop is extracted into its own
function. It returns true/false to signal whether `self.stopping` was
set to tell the loop to stop; we cannot check it ourselves in the
outer loop because the lock needs to be held to examine it.
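The resulting shape, roughly (class and method names are illustrative):
```python3
import logging
import threading

class NotificationEmailSender(threading.Thread):
    def run(self) -> None:
        while True:
            try:
                # work_one_round() returns True once it observes
                # self.stopping (checked under the lock).
                if self.work_one_round():
                    break
            except Exception:
                # Never let an exception kill the background thread;
                # log it and keep consuming.
                logging.exception("Exception in email-sending thread")
```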
Previously, the view function was responsible for doing a first pass of
the validations done for RealmPlayground. That is no longer the case. This
refactors do_add_realm_playground into check_add_realm_playground and makes
it responsible for validating the playground fields and handling the
ValidationError raised.
Dropping support for url_prefix for RealmPlayground, the server now uses
only url_template for playground creation, retrieval, and audit
logging upon removal.
This does the necessary handling so that url_template is expanded with
the extracted code.
Fixes #25723.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This commit removes the stray strings used to refer to
various types of notification triggers.
We use the attributes of the 'NotificationTriggers' class instead.
We populate url_template by simply escaping "{" and "}" as well as
appending "{code}" to the end of the legacy url_prefix.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
As an intermediate step before we fully support url_template for realm
playgrounds, we populate url_template in the backend ensuring that all
the new entries will be validated. With a later backfilling migration,
we prepare the database such that all the records will have a valid URL
template.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Having a more precise type annotation helps with ensuring the migration
to use URL templates gets type checked.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This commit updates `0455_set_default_for_can_mention_group`
migration to be more efficient when running for a large number
of UserGroup objects.
Previously, we did a loop over all UserGroup objects and
then did a `bulk_update`. All of this happened in a single
transaction, and the transaction was being held for an
unacceptably long time on a server with a large number
of user groups. Also, the SQL generated by Django for
`bulk_update` took almost quadratic time to evaluate,
as the SQL had a linear-length "CASE" statement which was
being resolved for each row.
We instead now use ".update" so that we can write the migration
without using a loop, and update the objects in batches of size
1000 so that we do not hold a transaction for a very long time.
This also helps in avoiding the inefficient SQL that was being
executed due to using `bulk_update`.
We also update the queries to exclude the groups that already
have `can_mention_group` set to a non-null value, which helps
the migration complete quickly when it is run more than
once.
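The batched form looks roughly like this (the target value is
realm-dependent in the real migration and is shown here as a
placeholder):
```python3
BATCH_SIZE = 1000
while True:
    ids = list(
        UserGroup.objects.filter(can_mention_group=None)
        .values_list("id", flat=True)[:BATCH_SIZE]
    )
    if not ids:
        break
    # Excluding already-set rows above makes re-runs complete quickly.
    UserGroup.objects.filter(id__in=ids).update(can_mention_group=everyone_group_id)
```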