232eb8b7cf changed how these pages work, to render inline instead of
serving from a URL, but did not update the SMTP use case; this made
SMTP failures redirect to a 404.
Two registration requests for the same email address can race,
leading to an IntegrityError when creating the second user.
Catch this and redirect them to the login page for their existing
email.
The goal is to reduce load on Sentry if the service is timing out, and
to reduce uwsgi load from long requests. This circuit-breaker is
per-Django-process, so may require more than 2 failures overall before
it trips, and may also "partially" trip for some (but not all)
workers. Since all of this is best-effort, this is fine.
Because this is only for load reduction, we only circuit-break on
timeouts, and not unexpected HTTP response codes or the like.
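As a rough illustration of the per-process, timeout-only policy described
above, a minimal circuit breaker could look like the sketch below (names
and thresholds are illustrative; this is not Zulip's actual implementation):

    import time

    # Process-local state: each Django/uwsgi worker trips independently,
    # so the overall service may only "partially" trip.
    _consecutive_timeouts = 0
    _open_until = 0.0

    def submit_to_sentry(payload: bytes, post_with_timeout) -> None:
        global _consecutive_timeouts, _open_until
        if time.monotonic() < _open_until:
            return  # Breaker is open in this process; drop the report.
        try:
            post_with_timeout(payload)
        except TimeoutError:
            _consecutive_timeouts += 1
            if _consecutive_timeouts >= 2:
                # Back off for a minute; only timeouts trip the breaker,
                # not unexpected HTTP response codes.
                _open_until = time.monotonic() + 60
        else:
            _consecutive_timeouts = 0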
See also #26229, which would move all browser-submitted Sentry
reporting into a single process, which would allow circuit-breaking to
be more effective.
This prevents failure to submit a client-side Sentry trace from
turning into a server-side error. If Sentry is down, we merely
log the error to our error logs and carry on.
When the `type` of the message being composed is "stream",
this commit updates the `to` parameter to accept the ID of
the stream in which the message is being typed.
Earlier, it accepted a single-element list containing the ID
of the stream.
Sending the element instead of a list containing the single element
makes more sense.
This is a prep commit that extracts the following two methods
from '/actions/scheduled_messages' to reuse in the next commit.
* extract_stream_id
* extract_direct_message_recipient_ids
The 'to' parameter for 'POST /typing' will follow the same pattern
in the next commit as we currently have for the 'to' parameter in
'POST /scheduled_messages', so we can reuse these functions.
This commit removes the compatibility support for "private"
being a valid value for the 'type' parameter in 'POST /typing'.
"direct" and "stream" are the only valid values.
This commit replaces the value `private` with `direct` in the
`message_type` field for the `typing` events sent when a user
starts or stops typing a message.
This commit adds two user settings, named
* `automatically_follow_topics_policy`
* `automatically_unmute_topics_in_muted_streams_policy`
The settings control the user's preference on which topics they
will automatically 'follow' or 'unmute in muted streams'.
The policies offer four options:
1. Topics I participate in
2. Topics I send a message to
3. Topics I start
4. Never (default)
There is no support for configuring the settings through the UI yet.
Instead of having "business" as the default organization type
for demo organizations in the dev environment, we set it to
"unspecified". This way a more generic zulip guide email will
be sent as part of the onboarding process for users invited
to try out the demo organization if the owner has not yet
updated the organization type.
Making request a mandatory kwarg avoids confusion about the meaning of
parameters, especially with `request` acquiring the ability to be None
in the next commit.
Adds support for bulk-adjusting a single user's membership in multiple
user groups in a single transaction in the low-level actions
functions, for future use by work on #9957.
This commit adds a `jitsi_server_url` field to the Realm model, which
will be used to save the URL of the custom Jitsi Meet server. In
the database, `None` will encode the server-level default. We can't
readily use `None` in the API, as it could be confused with "field not
sent". Therefore, we will use the string "default" for this purpose.
We have also introduced `server_jitsi_server_url` in the `/register`
API. This will be used to display the server's default Jitsi server
URL in the settings UI.
The existing `jitsi_server_url` will now be calculated as
`realm_jitsi_server_url || server_jitsi_server_url`.
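A hedged sketch of that fallback (the function name is hypothetical, not
from this commit):

    def effective_jitsi_server_url(
        realm_jitsi_server_url, server_jitsi_server_url
    ):
        # In the database, None encodes the server-level default; over the
        # API, the same case is represented by the string "default".
        if realm_jitsi_server_url in (None, "default"):
            return server_jitsi_server_url
        return realm_jitsi_server_url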
Fixes a part of #17914.
Co-authored-by: Gaurav Pandey <gauravguitarrocks@gmail.com>
This endpoint verifies that the services that Zulip needs to function
are running, and Django can talk to them. It is designed to be used
as a readiness probe[^1] for Zulip, either by Kubernetes, or some other
reverse-proxy load-balancer in front of Zulip. Because of this, it
limits access to only localhost and the IP addresses of configured
reverse proxies.
Tests are limited because we cannot stop running services (which would
impact other concurrent tests) and there would be extremely limited
utility to mocking the very specific methods we're calling to raise
the exceptions that we're looking for.
[^1]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
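For illustration, a readiness check of this sort might be structured
roughly like the following Django sketch (the view name and the set of
checks are assumptions, not the actual endpoint's code):

    from django.db import connection
    from django.http import HttpRequest, JsonResponse

    def readiness(request: HttpRequest) -> JsonResponse:
        try:
            # Verify that Django can reach PostgreSQL.
            with connection.cursor() as cursor:
                cursor.execute("SELECT 1")
            # Analogous checks would cover Redis, memcached, and RabbitMQ.
        except Exception as err:
            return JsonResponse(
                {"status": "error", "reason": str(err)}, status=503
            )
        return JsonResponse({"status": "ok"})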
In feature levels 153 and 154, a new value of "partially_completed"
for `result` in a success response (HTTP status code 200) was added for two
endpoints that process messages in batches: /api/delete-topic and
/api/mark-all-as-read.
Prior to these changes, `result` was either "success" or "error" for
all responses, which was a useful API invariant to have for clients.
So, here we remove "partially_completed" as a potential value for
"result" in a response. And instead, for the two endpoints noted
above, we return a boolean field "complete" to indicate if the
response successfully deleted/marked as read all the targeted
messages (complete: true) or if only some of the targeted messages
were processed (complete: false).
The "code" field for an error string that was also returned as part
of a partially completed response is removed in these changes as
well.
The web app does not currently use the /api/mark-all-as-read
endpoint, but it does use the /api/delete-topic endpoint, so these
changes update that to check the `complete` boolean instead of the
string value for `result`.
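For example, a client using the Zulip Python bindings might poll a batched
endpoint until it reports completion; a sketch under the assumption that
call_endpoint is used with the endpoint path described above:

    import zulip

    client = zulip.Client(config_file="~/.zuliprc")

    # Keep re-requesting until the server reports all messages processed.
    while True:
        result = client.call_endpoint(url="mark_all_as_read", method="POST")
        if result["result"] != "success":
            raise RuntimeError(result)
        if result.get("complete", True):
            break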
The type annotation for functools.partial uses unchecked Any for all
the function parameters (both early and late). returns.curry.partial
uses a mypy plugin to check the parameters safely.
https://returns.readthedocs.io/en/latest/pages/curry.html
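For illustration, this is roughly the difference in checking (a sketch,
not code from this commit):

    import functools

    from returns.curry import partial

    def greet(name: str, punctuation: str) -> str:
        return f"Hello, {name}{punctuation}"

    # functools.partial: the bound argument is typed as Any, so this
    # mismatch has historically not been flagged.
    loose = functools.partial(greet, 42)

    # returns.curry.partial: with the returns mypy plugin enabled, the
    # bound argument is checked against `name: str`, so this is an error.
    strict = partial(greet, 42)  # error: expected "str", got "int"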
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This algorithm existed in multiple places, with different queries.
Since we only access properties in the UserMessage table, we
standardize on the much simpler and faster Index Only Scan, rather
than a merge join.
This demonstrates the use of BaseModel to replace a check_dict_only
validator.
We also add support for referring to $defs in the OpenAPI tests. In the
future, we can descend down each object instead of mapping them to dict
for more accurate checks.
This demonstrates some basic use cases of the Json[...] wrapper with
@typed_endpoint.
Along with this change we extend test_openapi so that schema checking
based on function signatures will still work with this new decorator.
Pydantic's TypeAdapter supports dumping the JSON schema of any given type,
which is leveraged here to validate against our own OpenAPI definitions.
Parts of the implementation will be covered in later commits as we
migrate more functions to use @typed_endpoint.
See also:
https://docs.pydantic.dev/latest/api/type_adapter/#pydantic.type_adapter.TypeAdapter.json_schema
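A small example of the TypeAdapter feature being leveraged (illustrative
only, not code from the Zulip codebase):

    from pydantic import TypeAdapter

    adapter = TypeAdapter(list[int])
    print(adapter.json_schema())
    # {'items': {'type': 'integer'}, 'type': 'array'}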
For the OpenAPI schema, we preprocess it mostly the same way. For the
parameter types though, we no longer need to use
get_standardized_argument_type to normalize type annotation, because
Pydantic dumps a JSON schema that is compliant with OpenAPI schema
already, which makes it a lot more convenient for us to compare the types
with our OpenAPI definitions.
Do note that there are some exceptions where our definitions do not match
the generated one. For example, we use JSON to parse int and bool parameters,
but we don't mark them to use "application/json" in our definitions.
This is important because the "guests" value isn't one that we'd
expect anyone to pick intentionally, and in particular isn't an
available option for the similar/adjacent "email invitations" setting.
This commit renames the existing setting `Who can invite users to this
organization` to `Who can send email invitations to new users` and
also renames all the variables related to this setting that do not
require a change to the API.
This was done for better code readability as a new setting
`Who can create invite links` will be added in future commits.
This commit makes the backend changes required for adding a realm
setting based on the groups permission model and the API changes
required for the new setting `Who can create multiuse invite link`.
This commit adds an id_field_name field to the GroupPermissionSetting
type, which will be used to store the string formed by concatenating
setting_name and `_id`.
This was already enforced via separate logic that requires an owner to
invite an owner, but it makes the intent of the code a lot more clear
if we don't have this value mysteriously absent.
Earlier, there was a function to check whether an owner is required
to create invitations for the role specified in the invite, while the
check for administrator was done without any function call.
This commit adds a new function to check whether
owner or administrator is required for creating
invitations for the specified role and
refactors the code to use that new function.
Rewrite the test so that we don't have a dedicated URL for testing.
dev_update_subgroups is called directly from the tests without using the
test client.
Creates process for demo organization owners to add an email address
and password to their account.
Uses the same flow as changing an email (via user settings) at the
beginning, but then sends a different email template to the user
for the email confirmation process.
We also encourage users to set their full name field in the modal for
adding an email in a demo organization. We disable the submit button
on the form if either input, email or full name, is empty.
When the user clicks the 'confirm and set password' button in the
email sent to confirm the email address sent via the form, their
email is updated via confirm_email_change, but the user is redirected
to the reset password page for their account (instead of the page for
confirming an email change has happened).
Once the user successfully sets a password, then they will be
prompted to log in with their newly configured email and password.
Since an email address is not required to create a demo organization,
we need a Zulip API email address for the web-app to use until the
owner configures an email for their account.
Here, we set the owner's `email_address_visibility` to "Nobody" when
the owner's account is created so that the Zulip API email field in
their profile is a fake email address string.
To make creation of demo organizations feel lightweight for users,
we do not want to require an email address at sign-up. Instead, an
empty string will be used for the new realm owner's email. Currently,
this is implemented only for new demo organizations in the development
environment.
Because the user's email address does not exist, we don't enqueue
any of the welcome emails upon account/realm creation, and we
don't create/send new login emails.
This is a part of #19523.
Co-authored-by: Tim Abbott <tabbott@zulip.com>
Co-authored-by: Lauryn Menard <lauryn@zulip.com>
Updates the API error response when there is an unknown or
deactivated user in the `principals` parameter for either the
`/api/subscribe` or `/api/unsubscribe` endpoints. We now use
the `access_user_by_email` and `access_user_by_id` code paths,
which return an HTTP response of 400 and a "BAD_REQUEST" code.
Previously, an HTTP response of 403 was returned with a special
"UNAUTHORIZED_PRINCIPAL" code in the error response. This code
was not documented in the API documentation and is removed as
a potential JsonableError code with these changes.
Fixes #26593.
This PR implements the audio call feature for Zoom. This is done by explicitly
telling Zoom to create a meeting where the host's video and participants' video
are off by default.
Another key change is that when creating a video call, the host's and
participants' video will be on by default. The old code doesn't specify that
setting, so meetings actually start with video off. This new behavior requires
less work from users: they don't have to turn on video when joining a call
advertised as a "video call". It still respects users' preferences, because they
can still configure their own personal setting that overrides the meeting
defaults.
The Zoom API documentation can be found at
https://developers.zoom.us/docs/api/rest/reference/zoom-api/methods/#operation/meetingCreate
Fixes #26549.
**Background**
User groups are expected to comply with the DAG constraint for the
many-to-many inter-group membership. The check for this constraint has
to be performed recursively so that we can find all direct and indirect
subgroups of the user group to be added.
This kind of check is vulnerable to phantom reads, which are possible
at the default READ COMMITTED isolation level, because we cannot
guarantee that the check is still valid by the time we are adding the
subgroups to the user group.
**Solution**
To prevent another transaction from concurrently updating one of the
to-be-subgroups after the recursive check is done, and before the subgroup
is added, we use SELECT FOR UPDATE to lock the user group rows.
The lock needs to be acquired before any check has been conducted,
whenever a group membership change is about to occur.
Suppose that we are adding subgroup B to supergroup A; the locking protocol
is specified as follows:
1. Acquire a lock for B and all its direct and indirect subgroups.
2. Acquire a lock for A.
For the removal of user groups, we acquire a lock for the user group to
be removed along with all its direct and indirect subgroups. This is the
special case A=B, which is still compliant with the protocol.
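A hedged Django sketch of this protocol (model and helper names are
illustrative, not the exact Zulip code):

    from django.db import transaction

    from zerver.models import UserGroup

    def add_subgroup(supergroup_id: int, subgroup_id: int) -> None:
        with transaction.atomic():
            # Step 1: lock B and all of its direct and indirect subgroups,
            # reusing the same recursive CTE that the DAG check uses.
            # (get_recursive_subgroup_ids is an assumed helper name.)
            descendant_ids = get_recursive_subgroup_ids(subgroup_id)
            list(
                UserGroup.objects.filter(id__in=descendant_ids)
                .select_for_update()
                .order_by("id")
            )
            # Step 2: lock A, the supergroup being modified.
            UserGroup.objects.select_for_update().get(id=supergroup_id)
            # Only after both locks are held is it safe to re-run the DAG
            # check and apply the membership change.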
**Error handling**
We currently rely on Postgres' deadlock detection to abort transactions
and show an error to the users. In the future, we might need some
recovery mechanism or at least better error handling.
**Notes**
An important note is that we need to reuse the recursive CTE query that
finds the direct and indirect subgroups when applying the lock on the
rows. And the lock needs to be acquired the same way for the addition and
removal of direct subgroups.
User membership change (as opposed to user group membership) is not
affected. Read-only queries aren't either. The locks only protect
critical regions where the user group dependency graph might violate
the DAG constraint, where users are not participating.
**Testing**
We implement a transaction test case targeting some typical scenarios
when an internal server error is expected to happen (this means that the
user group view makes the correct decision to abort the transaction when
something goes wrong with locks).
To achieve this, we add a development view intended only for unit tests.
It has a global BARRIER that can be shared across threads, so that we
can synchronize them to consistently reproduce certain potential race
conditions prevented by the database locks.
The transaction test case launches pairs of threads initiating possibly
conflicting requests at the same time. The tests are set up such that exactly N
of them are expected to succeed, with the rest failing with a certain error
message (though we don't know in advance which ones will succeed).
**Security notes**
get_recursive_subgroups_for_groups will no longer fetch user groups from
other realms. As a result, trying to add/remove a subgroup from another
realm results in a UserGroup not found error response.
We also implement subgroup-specific checks in has_user_group_access to
keep permission management in a single place. Do note that the API
currently doesn't have a way to violate that check, because we are only
checking the realm ID now.
We want to make the callers more explicit about how the user group
being accessed will be used, so that the database lock implemented
later can benefit from that visibility.
We can directly get the realm object from the Message object now,
and there is no need to get the realm object from the "sender"
field of the Message object.
After this change, we no longer need to fetch the "sender__realm"
field using "select_related"; instead, passing only "realm" to
select_related when querying Message objects is enough.
This commit also updates a couple of cases to access the realm ID
directly from the Message object and not from message.sender. Since
we have already fetched the sender object, accessing realm_id from
the message directly or from message.sender should not matter, but
it is more consistent to get the realm directly from the Message
object whenever possible.
Instead of having a "realm.is_zephyr_mirror_realm" check for every
get_streams_traffic call, this commit updates get_streams_traffic to
accept realm as a parameter and return "None" for zephyr mirror realms.
This migration applies under the assumption that extra_data_json has
been populated for all existing and future audit log entries.
- This removes the manual conversions back and forth for extra_data
throughout the codebase including the orjson.loads(), orjson.dumps(),
and str() calls.
- The custom handler used for converting Decimal is removed since
DjangoJSONEncoder handles that for extra_data.
- We remove None-checks for extra_data because it is now no longer
nullable.
- Meanwhile, we want the bouncer to support processing RealmAuditLog entries for
remote servers before and after the JSONField migration on extra_data.
- Since extra_data should now always be a dict for newer remote
  servers, which have been migrated, the test cases are updated to
  create RealmAuditLog objects by passing a dict for extra_data before
  sending over the analytics data. Note that while JSONField allows
  non-dict values, a proper remote server always passes a dict for
  extra_data.
- We still test out the legacy extra_data format because not all
remote servers have migrated to use JSONField extra_data.
This verifies that support for extra_data being a string or None has not
been dropped.
Co-authored-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This commit addresses the issue where the topic highlighting
in search results was offset by one character when an
apostrophe was present. The problem stemmed from a disparity in
HTML escaping: the escaping generated by the function `func.escape_html`,
which is used to obtain `topic_matches`, differs from the escaping
performed by the function `django.utils.html.escape` for apostrophes (').
func.escape_html | django.utils.html.escape
-----------------+--------------------------
     &#39;       |         &#x27;
To fix this, the SQL query is changed to return the HTML-escaped
topic name generated by the function `func.escape_html`.
Fixes: #25633.
This adds API support to reorder linkifiers and makes sure that the
returned lists of linkifiers from `GET /events`, `POST /register`, and
`GET /realm/linkifiers` are always sorted in the order in which they
should be processed when rendering linkifiers.
We set the new `order` field to the ID with the migration. This
preserves the order of the existing linkifiers.
Newly added linkifiers will always be ordered last. When reordering,
the `order` field of all linkifiers in the same realm is updated, in
a manner similar to how we implement ordering for
`custom_profile_fields`.
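A hedged sketch of that backfill, assuming the linkifier model is zerver's
RealmFilter (the actual migration may differ):

    from django.db.models import F

    def backfill_linkifier_order(apps, schema_editor):
        RealmFilter = apps.get_model("zerver", "RealmFilter")
        # Initialize `order` to the row ID, preserving the existing ordering.
        RealmFilter.objects.all().update(order=F("id"))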
We have historically cached two types of values
on a per-request basis inside of memory:
* linkifiers
* display recipients
Both of these caches were hand-written, and they
both actually cache values that are also in memcached,
so the per-request cache essentially only saves us
from a few memcached hits.
I think the linkifier per-request cache is a necessary
evil. It's an important part of message rendering, and
it's not super easy to structure the code to just get
a single value up front and pass it down the stack.
I'm not so sure we even need the display recipient
per-request cache any more, as we are generally pretty
smart now about hydrating recipient data in terms of
how the code is organized. But I haven't done thorough
research on that hypothesis.
Fortunately, it's not rocket science to just write
a glorified memoize decorator and tie it into key
places in the code:
* middleware
* tests (e.g. asserting db counts)
* queue processors
That's what I did in this commit.
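The decorator is essentially a glorified memoize; a minimal sketch of the
idea (names are illustrative, not the actual implementation):

    from functools import wraps
    from typing import Any, Callable, Dict, Tuple

    _per_request_cache: Dict[Tuple[str, Tuple[Any, ...]], Any] = {}

    def per_request_cached(key: str) -> Callable[[Callable[..., Any]], Callable[..., Any]]:
        def decorator(f: Callable[..., Any]) -> Callable[..., Any]:
            @wraps(f)
            def wrapper(*args: Any) -> Any:
                cache_key = (key, args)
                if cache_key not in _per_request_cache:
                    _per_request_cache[cache_key] = f(*args)
                return _per_request_cache[cache_key]

            return wrapper

        return decorator

    def flush_per_request_caches() -> None:
        # Tied into middleware (end of each request), test helpers
        # (e.g. asserting db counts), and queue processors.
        _per_request_cache.clear()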
This commit definitely reduces the amount of code
to maintain. I think it also gets us closer to
possibly phasing out this whole technique, but that
effort is beyond the scope of this PR. We could
add some instrumentation to the decorator to see
how often we get a non-trivial number of saved
round trips to memcached.
Note that when we flush linkifiers, we just use
a big hammer and flush the entire per-request
cache for linkifiers, since there is only ever
one realm in the cache.
In this commit, we introduce a new option in the stream creation
UI - a 'Default stream for new users' checkbox. By default, the
checkbox is set to 'off' and is only visible to admins. This
allows admins to easily designate a stream as the default stream
for new users during stream creation.
Fixes #24048.
This commit adds a 'Default stream for new users' checkbox in
the stream editing UI to allow admins to easily add or remove
a stream as the default stream for new users. Previously, this
functionality required navigating to a separate menu.
Fixes a part of #24048.
This commit adds code to pass stream traffic data using
the "stream_weekly_traffic" field in stream objects.
We already include the traffic data in Subscription objects,
but the traffic data does not depend on the user to stream
relationship and is stream-only information, so it's better
to include it in Stream objects. We may remove the traffic
data and other stream information fields from Subscription
objects in the future.
This will help clients correctly display the stream
traffic data in cases where a client receives a stream
creation event but no subscription event, for an already
existing stream which the user did not have access to before.
This commit adds a stream_to_dict method, which is currently the
same as the Stream.to_dict method. This is a prep commit
for including stream traffic data in stream objects.
Earlier, while changing group-level group-based settings,
there was no check whether the new value for the setting was the
same as the current value.
This commit adds this check; now a setting value will only be
changed when it is not equal to the present value.
Earlier, the API endpoints related to user_group accepted and returned a
field `can_mention_group_id`, which represents the ID
of the user_group whose members can mention the group.
This commit renames this field to `can_mention_group`.
Earlier, the API endpoints related to streams accepted and returned a
field `can_remove_subscribers_group_id`, which represents the ID
of the user_group whose members can remove subscribers from the stream.
This commit renames this field to `can_remove_subscribers_group`.
Previously, the view function was responsible for doing a first pass of
the validations done for RealmPlayground. That is no longer the case. This
refactors do_add_realm_playground into check_add_realm_playground and makes
it responsible for validating the playground fields and handling any
ValidationError raised.
With support for url_prefix dropped for RealmPlayground, the server now
uses only url_template for playground creation, retrieval, and audit
logging upon removal.
This does the necessary handling so that url_template is expanded with
the extracted code.
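For example, expanding a playground URL template with the extracted code
might look like this sketch using the uritemplate library (the template
here is made up):

    from uritemplate import URITemplate

    url_template = URITemplate("https://play.example.com/run{?code}")
    url_template.expand(code="print('hello')")
    # -> 'https://play.example.com/run?code=print%28%27hello%27%29'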
Fixes #25723.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Updates the realm field `default_code_block_language` to have a default
value of an empty string instead of None. Also updates the web-app to
check for the empty string and not `null` to indicate no default is set.
This means that both new realms and existing realms that have no default
set will have the same value for this setting: an empty string.
Previously, new realms would have None if no default was set, while realms
that had set and then unset a value for this field would have an empty
string when no default was set.
We used to access the complete objects for UserProfile foreign
keys like "bot_owner" and "default_sending_stream", where we only
needed their IDs.
This commit fixes some such instances; we now directly get
the ID using "bot_owner_id" and "default_sending_stream_id" so
that we can avoid the unnecessary complexity of accessing the
complete object.
This commit updates the select_related calls in queries to
get UserProfile objects in dev_login code to pass "realm"
as an argument to the select_related call.
Also, note that "realm" is the only non-null foreign key field
in the UserProfile object, so select_related() was only fetching the
realm object previously as well. But we should still pass "realm"
as an argument in the select_related call so that we can make sure
that only the required fields are selected in case we add more
foreign keys to UserProfile in the future.
Translators benefit from the extra information in the field names, and
need the reordering freedom that isn’t available with multiple
positional fields.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The previous function was poorly named, asked for a
Realm object when realm_id sufficed, and returned a
tuple of strings that had different semantics.
I also avoid calling it multiple times in a couple of
places, although it was probably rarely the case that
both invocations actually happened if upstream
validations were working.
Note that there is a TypedDict called EmojiInfo, so I
chose EmojiData here. Perhaps a better name would be
TinyEmojiData or something.
I also simplify the reaction tests with a verify
helper.
The active realm emoji are just a subset of all your
realm emoji, so just use a single cache entry per
realm.
Cache misses should be very infrequent per realm.
If a realm has lots of deactivated realm emoji, then
there's a minor expense to deserialize them, but that
is gonna be dwarfed by all the other more expensive
operations in message-send.
I also renamed the two related functions. I erred on
the side of using somewhat verbose names, as we don't
want folks to confuse the two use cases. Fortunately
there are somewhat natural affordances to use one or
the other, and mypy helps too.
Finally, I use realm_id instead of realm in places
where we don't need the full Realm object.
The initial followup_day1 email confirms that the new user account
has been successfully created and should be sent to the user
independently of an organization's setting for send_welcome_emails.
Here we separate out the followup_day1 email into a separate function
from enqueue_welcome_emails and create a helper function for setting
the shared welcome email sender information.
The followup_day1 email is still a scheduled email so that the initial
account creation and log-in process for the user remains unchanged.
Fixes#25268.
Previously, we had a database-level restriction on the length of
user group names. Now we add the same restriction at the API
level as well, so we can return a better error response.
This helps reduce the impact on busy uwsgi processes in case there are
slow timeout failures of Sentry servers. The p99 is less than 300ms,
and p99.9 per day peaks at around 1s, so this will not affect more
than .1% of requests in normal operation.
This is not a complete solution (see #26229); it is merely stop-gap
mitigation.
I created zerver/lib/default_streams.py, so that various
views and events.py don't have to awkwardly reach into
an "actions" file.
I copied over two functions verbatim from actions/default_streams.py:
get_default_streams_for_realm
streams_to_dicts_sorted
The latter only remains as an internal detail in the new library.
I also created two new helpers:
get_default_stream_ids_for_realm:
This is both faster and easier to use in all the places
where we only need to get a set of default stream ids.
get_default_streams_for_realm_as_dicts:
This just wraps the prior calls to
streams_to_dicts_sorted(get_default_streams_for_realm(...)),
and it doesn't yet address the slowness of the underlying
code.
All the "real" code should be functionally the same.
In a few tests I now use this wrapper instead of
calling get_default_streams_for_realm, just to get
slightly deeper coverage.
It replaces the "File not found." text with:
"This file does not exist or has been deleted."
At present, when a file is deleted, it results in a confusing
experience when looking at the "File not found." message.
In order to clarify that the situation is not a bug, the message
has been replaced with a better alternative.
Fixes part of Issue #23739.
We now upstream the conversion of legacy tuples
into the callers of do_events_register. For the
codepath that builds the home view, this allows
for cleaner code in the caller. For the /register
endpoint, we have to do the conversion, but that
isn't super ugly, as that's an appropriate place
to deal with legacy formats and clean them up.
We do have to have do_events_register downgrade
the format back to tuples to pass them into
request_event_queue, because I don't want to
change any serialization formats. The conversion
is quite simple, and it has test coverage.
This commit adds default_group_name field to GroupPermissionSetting
type which will be used to store the name of the default group for
that setting which would in most cases be one of the role-based
system groups. This will be helpful when we have multiple such
settings and need to set the defaults while creating
realms and streams.