There is no longer any reason to have the scheduled_email
enqueuing wait until all of the users' contexts have been generated.
Switch to returning the contexts as an iterator, and send them as we
compute them.
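A sketch of the new shape, with hypothetical helper names
(build_digest_context, queue_digest_email) standing in for the real ones:
```
from typing import Any, Dict, Iterator, List

def get_digest_contexts(users: List["UserProfile"]) -> Iterator[Dict[str, Any]]:
    for user in users:
        # Yield each context as soon as it is computed...
        yield build_digest_context(user)  # illustrative helper

# ...so the caller can enqueue each email without waiting for the batch:
def enqueue_digests(users: List["UserProfile"]) -> None:
    for user, context in zip(users, get_digest_contexts(users)):
        queue_digest_email(user, context)  # illustrative helper
```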
The query plan for fetching recent messages from the arbitrary set of
streams formed by the intersection of 30 random users can be quite
bad, and can descend into a sequential scan on `zerver_recipient`.
Worse, this work of pulling recent messages out is redone if the
stream appears in the next batch of 30 users.
Instead, pull the recent messages for a stream on a one-by-one basis,
but cache them in an in-memory cache. Since digests are enqueued in
30-user batches but still one realm at a time, this saves work both
through faster query plans and by reusing their results across
batches.
This requires that we pull the stream-id to stream-name mapping for
_all_ streams in the realm at once, but that is well-indexed and
unlikely to cause performance issues -- in fact, it may be faster
than pulling a random subset of the streams in the realm.
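A sketch of the per-stream caching shape described above, assuming an
illustrative fetch_recent_messages helper that runs the single-stream query:
```
from typing import Dict, List

# The cache lives for the duration of one realm's digest run, so later
# 30-user batches reuse results from earlier ones.
recent_messages: Dict[int, List[int]] = {}

def recent_messages_for_stream(stream_id: int) -> List[int]:
    if stream_id not in recent_messages:
        # One simple, well-indexed query per stream, instead of one
        # complex query per 30-user batch.
        recent_messages[stream_id] = fetch_recent_messages(stream_id)
    return recent_messages[stream_id]
```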
This is common in cases where the reverse proxy itself is making
health-check requests to the Zulip server; these requests have no
X-Forwarded-* headers, so would normally hit the error case of
"request through the proxy, but no X-Forwarded-Proto header".
Add a special case for when the request's originating IP address
resolves to the reverse proxy itself; in these cases, HTTP requests
with no X-Forwarded-Proto are acceptable.
The type annotation for functools.partial uses unchecked Any for all
the function parameters (both early and late). returns.curry.partial
uses a mypy plugin to check the parameters safely.
https://returns.readthedocs.io/en/latest/pages/curry.html
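For example (a sketch; the exact inferred types depend on the plugin
version):
```
from returns.curry import partial

def greet(greeting: str, name: str) -> str:
    return f"{greeting}, {name}!"

hello = partial(greet, "Hello")  # mypy infers Callable[[str], str]
print(hello("Zulip"))            # Hello, Zulip!
# partial(greet, 1) is rejected by the mypy plugin, while
# functools.partial(greet, 1) type-checks as Any and fails only at runtime.
```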
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This is designed to help PostgreSQL have better specificity and
locality in its indexes. Subsequent commits will adjust the code to
make sure that we use these indexes rather than the `realm_id`-less
versions.
We do not add a `realm_id` variation to the full-text index, since
it is a GIN index; multi-column GIN indexes are not terribly
performant, require the `btree_gin` extension for `int` types (which
requires superuser privileges on PostgreSQL 12 and earlier), and
cannot be consistently added concurrently on running instances.
After all indexes have been made, we also run `CREATE STATISTICS` in
order to give PostgreSQL the opportunity to realize that recipient and
sender are highly correlated with message realm, allowing it to
estimate that `(realm_id, recipient_id)` is likely as specific as
matching a given `recipient_id`, instead of as likely as matching
`realm_id` times matching a `recipient_id`. Finally, those statistics
must be filled by `ANALYZE zerver_message`, which is run last.
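As a sketch, the statistics step could look like this in a Django
migration (the statistics name, column list, and dependency are
illustrative, not the actual migration):
```
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [("zerver", "XXXX_prior_migration")]  # placeholder

    operations = [
        migrations.RunSQL(
            # Record that recipient_id and sender_id are functionally
            # dependent on realm_id, then populate the statistics.
            sql="""
                CREATE STATISTICS IF NOT EXISTS zerver_message_realm_deps
                    (dependencies)
                    ON realm_id, recipient_id, sender_id
                    FROM zerver_message;
                ANALYZE zerver_message;
            """,
            reverse_sql="DROP STATISTICS IF EXISTS zerver_message_realm_deps;",
        ),
    ]
```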
Matching the topic exactly, as opposed to case-insensitively, is not a
common operation, and one that we want to make difficult to do
accidentally. Inline the single use case of it.
We now have a `realm_id` on Message; use it, rather than having to
check the sender's realm. This is theoretically different for
cross-realm bots, but these changes are all in tests where that does
not apply.
This algorithm existed in multiple places, with different queries.
Since we only access properties in the UserMessage table, we
standardize on the much simpler and faster Index Only Scan, rather
than a merge join.
When searching for links inside a topic name, the question mark (?)
was used to split the topic. If a URL had a query string
(e.g., "?foo=bar"), the query string was trimmed from the URL.
Removing the question mark from `basic_link_splitter` is sufficient
to fix this issue. The `get_web_link_regex` function then removes
the trailing punctuation if any, including literal question marks.
Fixes #26368.
When there was no space right after `/todo` but there was content on a
new line, the message would be rendered plainly, not as a todo widget.
This was because we split on only the space character to then check if
the first token was a valid widget.
Now we split on both spaces and newlines to extract the widget name,
irrespective of whether it is followed by a space or a newline. This
results in the message being rendered as a todo widget as expected.
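A sketch of the difference (get_widget_token is an illustrative name,
not the actual function):
```
import re

def get_widget_token(content: str) -> str:
    # Split on the first space or newline, not just the first space.
    return re.split(r"[ \n]", content, maxsplit=1)[0]

assert get_widget_token("/todo buy milk") == "/todo"
assert get_widget_token("/todo\nbuy milk") == "/todo"  # was "/todo\nbuy" before
```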
Rename the shortened references to demo organizations used in the
codebase so far, like `is_demo_org` or `demo-org-warning`, to match
the `models.py` variable:
`Realm.demo_organization_scheduled_deletion_date`.
This ReDoS was not exploitable, as its content is only read from
checked-in files; regardless, simplify it to not backtrack. We also
do not actually have any location that uses leading or trailing
whitespace, so remove those optional bits.
Our logic for extracting strings from templates did not properly
handle the syntax for code containing whitespace control characters,
resulting in a couple of strings from subscribe_to_more_streams.hbs
not being processed.
The Librato webhook requires a mapping (which should be considered
immutable) with a default value. Ruff reports a false-positive due to
the Json wrapper.
Instead of a WildValue, the JSON/Sentry webhooks expect the request body to be
a dict.
For the JSON webhook, json.dumps accepts other types of input as well and the
constraint is not necessary, but this serves as a good example of an
alternative use of WebhookPayload to describe a payload that is intended to be
parsed from the entire request body as JSON, into a type other than WildValue.
Transifex has parameters that need to be parsed from JSON and converted
to int. Note that we use Optional[Json[int]] instead of
Json[Optional[int]] to replicate the behavior of json_validator. This
caveat is explained in a new test called test_json_optional.
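A sketch of that caveat, checked directly with Pydantic's TypeAdapter
(my understanding of the Json wrapper's behavior):
```
from typing import Optional
from pydantic import Json, TypeAdapter, ValidationError

# Json[Optional[int]] parses the JSON first, so a "null" payload becomes None:
assert TypeAdapter(Json[Optional[int]]).validate_python("null") is None

# Optional[Json[int]] accepts None only as the omitted-parameter default;
# an explicit "null" payload must still parse to an int:
adapter = TypeAdapter(Optional[Json[int]])
assert adapter.validate_python(None) is None
try:
    adapter.validate_python("null")
except ValidationError:
    pass  # rejected, as json_validator would reject null for an int parameter
```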
These webhooks have some URL query params that do not need additional
validation or parsing from JSON, so WebhookPayload is not applicable to
them.
This converts most webhook integration views to use @typed_endpoint instead
of @has_request_variables, rewriting REQ parameters. For these
webhooks, it simply requires switching the decorator, rewriting the
type annotation of payload/message to WebhookPayload[WildValue], and
removing the REQ default that defines the to_wild_value converter.
This function is used by almost all webhooks.
To support it, we use the "api_ignore_parameter" flag so that positional
arguments like topic and body that are not intended to be parsed from
the request can be ignored.
This demonstrates the use of BaseModel to replace a check_dict_only
validator.
We also add support for referring to $defs in the OpenAPI tests. In the
future, we can descend into each object instead of mapping them to dict
for more accurate checks.
This demonstrates some basic use cases of the Json[...] wrapper with
@typed_endpoint.
Along with this change we extend test_openapi so that schema checking
based on function signatures will still work with this new decorator.
Pydantic's TypeAdapter supports dumping the JSON schema of any given type,
which is leveraged here to validate against our own OpenAPI definitions.
Parts of the implementation will be covered in later commits as we
migrate more functions to use @typed_endpoint.
See also:
https://docs.pydantic.dev/latest/api/type_adapter/#pydantic.type_adapter.TypeAdapter.json_schema
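For instance, TypeAdapter can dump a schema for an arbitrary
annotation:
```
from typing import List, Optional
from pydantic import TypeAdapter

print(TypeAdapter(List[int]).json_schema())
# {'items': {'type': 'integer'}, 'type': 'array'}
print(TypeAdapter(Optional[str]).json_schema())
# {'anyOf': [{'type': 'string'}, {'type': 'null'}]}
```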
For the OpenAPI schema, we preprocess it mostly the same way. For the
parameter types though, we no longer need to use
get_standardized_argument_type to normalize type annotations, because
Pydantic dumps a JSON schema that is already compliant with the OpenAPI
schema, which makes it a lot more convenient for us to compare the
types with our OpenAPI definitions.
Do note that there are some exceptions where our definitions do not match
the generated one. For example, we use JSON to parse int and bool parameters,
but we don't mark them to use "application/json" in our definitions.
We want to reject ambiguous type annotations that set ApiParamConfig
inside a Union. If a parameter is Optional and has a default of None, we
prefer Annotated[Optional[T], ...] over Optional[Annotated[T, ...]].
This implements a check that detects Optional[Annotated[T, ...]] and
raises an assertion error if ApiParamConfig is in the annotation. It also
checks if the type annotation contains any ApiParamConfig objects that
are ignored, which can happen if the Annotated type is nested inside
another type like List, Union, etc.
Note that because
param: Annotated[Optional[T], ...] = None
and
param: Optional[Annotated[Optional[T], ...]] = None
are equivalent at runtime prior to Python 3.11, there is no way for us
to distinguish the two. So we cannot detect that at runtime.
See also: https://github.com/python/cpython/issues/90353
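A rough sketch of the detection using typing introspection (the real
implementation may differ; ApiParamConfig here is a stand-in class):
```
from typing import Annotated, Optional, Union, get_args, get_origin

class ApiParamConfig:  # stand-in for the real metadata class
    pass

def assert_config_not_nested(annotation: object) -> None:
    # Reject Optional[Annotated[T, ApiParamConfig(...)]]: the config
    # would sit ambiguously inside the Union.
    if get_origin(annotation) is Union:
        for arg in get_args(annotation):
            if get_origin(arg) is Annotated and any(
                isinstance(meta, ApiParamConfig) for meta in get_args(arg)[1:]
            ):
                raise AssertionError(
                    "Use Annotated[Optional[T], ...], not Optional[Annotated[T, ...]]"
                )

assert_config_not_nested(Annotated[Optional[int], ApiParamConfig()])    # ok
# assert_config_not_nested(Optional[Annotated[int, ApiParamConfig()]]) # raises
```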
The goal of typed_endpoint is to replicate most features supported by
has_request_variables, and to improve on top of it. There are some
unresolved issues that we don't plan to work on currently. For example,
typed_endpoint does not support ignored_parameters_supported for 400
responses, and it does not run validators on path-only arguments.
Unlike has_request_variables, typed_endpoint supports error handling by
processing validation errors from Pydantic.
Most features supported by has_request_variables are supported by
typed_endpoint in various ways.
To define a function, use syntax like the following, with Annotated
carrying any metadata you want to associate with a parameter. Do note
that parameters that are not keyword-only are ignored from the request:
```
@typed_endpoint
def view(
    request: HttpRequest,
    user_profile: UserProfile,
    *,
    foo: Annotated[int, ApiParamConfig(path_only=True)],
    bar: Json[int],
    other: Annotated[
        Json[int],
        ApiParamConfig(
            whence="lorem",
            documentation_status=INTENTIONALLY_UNDOCUMENTED,
        ),
    ] = 10,
) -> HttpResponse:
    ...
```
There are also some shorthands for the commonly used annotated types,
which are encouraged when applicable for better readability and less
typing:
```
WebhookPayload = Annotated[Json[T], ApiParamConfig(argument_type_is_body=True)]
PathOnly = Annotated[T, ApiParamConfig(path_only=True)]
```
Then the view function above can be rewritten as:
```
@typed_endpoint
def view(
    request: HttpRequest,
    user_profile: UserProfile,
    *,
    foo: PathOnly[int],
    bar: Json[int],
    other: Annotated[
        Json[int],
        ApiParamConfig(
            whence="lorem",
            documentation_status=INTENTIONALLY_UNDOCUMENTED,
        ),
    ] = 10,
) -> HttpResponse:
    ...
```
There are some intentional restrictions:
- A single parameter cannot have more than one ApiParamConfig
- Path-only parameters cannot have default values
- argument_type_is_body is incompatible with whence
- Arguments named "request", "user_profile", "args", "kwargs", and so
on are ignored by typed_endpoint.
- positional-only arguments are not supported by typed_endpoint. Only
keyword-only parameters are expected to be parsed from the request.
- Pydantic's strict mode is always enabled, because we don't want to
coerce input parsed from JSON into other types unnecessarily.
- Using strict mode all the time also means that we should always use
Json[int] instead of int, because it is only possible for the request
to have data of type str, and a type annotation of int will always
reject such data.
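A sketch of the last two points, using TypeAdapter to stand in for the
endpoint machinery:
```
from pydantic import Json, TypeAdapter, ValidationError

# Request values arrive as str; strict mode refuses to coerce "5" to int:
try:
    TypeAdapter(int).validate_python("5", strict=True)
    raise AssertionError("unreachable")
except ValidationError:
    pass

# Json[int] parses the string as JSON first, then validates the int:
assert TypeAdapter(Json[int]).validate_python("5") == 5
```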
typed_endpoint's handling of ignored_parameters_unsupported is mostly
identical to that of has_request_variables.
_default_manager is the same as objects on most of our models. But
when a model class is stored in a variable, the type system doesn’t
know which model the variable is referring to, so it can’t know that
objects even exists (Django doesn’t add it if the user added a custom
manager of a different name). django-stubs used to incorrectly assume
it exists unconditionally, but it no longer does.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Commit cf0eb46afc added this to let
Django understand the CREATE INDEX CONCURRENTLY statement that had
been hidden in a RunSQL query in migration 0244. However, migration
0245 explained that same index to Django in a different way by setting
db_index=True. Move that to 0244 where the index is actually created,
using SeparateDatabaseAndState.
Also remove the part of the SQL in 0245 that was mirrored by dummy
state_operations, and replace it with real operations.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This is important because the "guests" value isn't one that we'd
expect anyone to pick intentionally, and in particular isn't an
available option for the similar/adjacent "email invitations" setting.
Earlier, whenever a new invitation was created, an event was sent only
to admin users. So, if invites created by a non-admin user changed,
the invite panel did not live-update for that user.
This commit makes changes to also send the event to a non-admin user
if invites created by them are changed.
This commit renames the existing setting `Who can invite users to this
organization` to `Who can send email invitations to new users` and
also renames all the variables related to this setting that do not
require a change to the API.
This was done for better code readability as a new setting
`Who can create invite links` will be added in future commits.
This commit makes the backend changes required for adding a realm
setting based on the groups permission model and makes the API changes
required for the new setting `Who can create multiuse invite link`.
This commit adds an id_field_name field to the GroupPermissionSetting
type, which will be used to store the string formed by concatenating
setting_name and `_id`.
This was already enforced via separate logic that requires an owner to
invite an owner, but it makes the intent of the code a lot more clear
if we don't have this value mysteriously absent.
Earlier, there was a function to check whether an owner is required to
create invitations for the role specified in the invite, while the
check for administrator was done without any function call.
This commit adds a new function to check whether an owner or
administrator is required for creating invitations for the specified
role, and refactors the code to use that new function.
This commit makes the database changes involved in creating
internal_realm happen in a single transaction.
This is needed for deferring the foreign key constraints
to the end of the transaction.
Previously (with ERROR_REPORTING = True), we’d stuff the entire
traceback of the initial exception into the subject line of an error
email, and then also send a separate email for the JSON 500 response.
Instead, log one error with the standard Django format.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Rewrite the test so that we don't have a dedicated URL for testing.
dev_update_subgroups is called directly from the tests without using the
test client.
'test_get_message_payload_gcm_stream_message' verifies the payload
for notifications generated (for stream messages) due to any of the
push notification triggers, including
'NotificationTriggers.STREAM_PUSH'.
Earlier, 'test_get_message_payload_gcm_stream_notifications' tested
the same thing as 'test_get_message_payload_gcm_stream_message', the
only difference being that it included content that was not truncated.
This commit removes the test
'test_get_message_payload_gcm_stream_notifications' and updates
the test 'test_get_message_payload_gcm_stream_message' to cover
both the cases, i.e., truncated as well as not truncated content.
This commit removes the 'alert' field from the payload for
Android via GCM/FCM.
The alert strings generated do not get used at all and have
not been used since at least 2019. On Android, we construct
the notification UI ourselves in the client, and we ignore
the alert string.
Creates a process for demo organization owners to add an email address
and password to their account.
Uses the same flow as changing an email (via user settings) at the
beginning, but then sends a different email template to the user
for the email confirmation process.
We also encourage users to set their full name field in the modal for
adding an email in a demo organization. We disable the submit button
on the form if either input (email or full name) is empty.
When the user clicks the 'confirm and set password' button in the
email sent to confirm the email address sent via the form, their
email is updated via confirm_email_change, but the user is redirected
to the reset password page for their account (instead of the page for
confirming an email change has happened).
Once the user successfully sets a password, then they will be
prompted to log in with their newly configured email and password.
Since an email address is not required to create a demo organization,
we need a Zulip API email address for the web-app to use until the
owner configures an email for their account.
Here, we set the owner's `email_address_visibility` to "Nobody" when
the owner's account is created so that the Zulip API email field in
their profile is a fake email address string.
To make creation of demo organizations feel lightweight for users,
we do not want to require an email address at sign-up. Instead, an
empty string will be used for the new realm owner's email. Currently,
this is implemented for new demo organizations in the development
environment.
Because the user's email address does not exist, we don't enqueue
any of the welcome emails upon account/realm creation, and we
don't create/send new login emails.
This is a part of #19523.
Co-authored-by: Tim Abbott <tabbott@zulip.com>
Co-authored-by: Lauryn Menard <lauryn@zulip.com>
Updates the API error response when there is an unknown or
deactivated user in the `principals` parameter for either the
`/api/subscribe` or `/api/unsubscribe` endpoints. We now use
the `access_user_by_email` and `access_user_by_id` code paths,
which return an HTTP response of 400 and a "BAD_REQUEST" code.
Previously, an HTTP response of 403 was returned with a special
"UNAUTHORIZED_PRINCIPAL" code in the error response. This code
was not documented in the API documentation and is removed as
a potential JsonableError code with these changes.
Fixes #26593.
Updates API changelog entries for feature level 205 for minor
revisions and the addition of help center links. Also, revises
the Changes notes for the stream creation and deletion events
for the same feature level.
The 'startinline' option is utilized in the `CodeHilite` instantiation
to indicate that the provided PHP code snippet should be highlighted
even if it doesn't start with the opening tag or marker of the
associated programming language (which will rarely be the case in
Zulip, since one discusses a section of a file much more often than a
whole file).
Fixes: https://chat.zulip.org/#narrow/stream/137-feedback/topic/php.20syntax.20highlighting.20should.20not.20require.20.60.3C.3Fphp.60.
Signed-off-by: Akshat <akshat25iiit@gmail.com>
This commit adds a test to verify the payload
'get_message_payload_apns' returns when the notification trigger is
'NotificationTriggers.FOLLOWED_TOPIC_PUSH'.
This commit updates the 'get_apns_alert_subtitle' function to
return a common subtitle, i.e., "{full_name} mentioned everyone:"
for wildcard mentions.
The triggers for the stream or topic wildcard mentions include:
* NotificationTriggers.TOPIC_WILDCARD_MENTION_IN_FOLLOWED_TOPIC
* NotificationTriggers.STREAM_WILDCARD_MENTION_IN_FOLLOWED_TOPIC
* NotificationTriggers.TOPIC_WILDCARD_MENTION
* NotificationTriggers.STREAM_WILDCARD_MENTION
This PR implements the audio call feature for Zoom. This is done by explicitly
telling Zoom to create a meeting where the host's video and participants' video
are off by default.
Another key change is that, when creating a video call, the host's and
participants' video will now be on by default. The old code didn't specify
that setting, so meetings actually started with video off. This new behavior
means less work for users: they don't have to turn on video when joining a
call advertised as a "video call". It still respects users' preferences,
because they can still configure their own personal settings, which override
the meeting defaults.
The Zoom API documentation can be found at
https://developers.zoom.us/docs/api/rest/reference/zoom-api/methods/#operation/meetingCreate
Fixes #26549.
Adds a Changes note for when `other_user_id` was added to the `pms`
object.
Changes a few uses of "you" to be "current user" instead.
Clarifies type of direct message (one-on-one or group) and that
messages are unread messages.
Fixes the field in both the pms and huddles objects to be correctly
documented as `unread_message_ids`, instead of `message_ids`.
The documentation of the similar field in the stream object of
`unread_msgs` was corrected in commit 27ddb554fb.
We now send stream creation and stream deletion events on
changing a user's role because a user can gain or lose
access to some streams on changing their role.
This commit extracts the code that queries the required streams
into a new function, "get_user_streams". The new function returns a
list of "Stream" objects rather than dictionaries, and do_get_streams
then converts it into a list of dictionaries.
This is important because a further commit will use the new function
where we want a list of "Stream" objects, not a list of dictionaries.
There was a bug in the apply_event code where a stream was added to
the "never_subscribed" data after a stream creation event only if it
was not private. Instead, it should be added to the "never_subscribed"
data irrespective of the stream's permission policy, since we already
send stream creation events only to those users who can access the
stream. Due to this bug, private streams were not being added to the
"never_subscribed" data in apply_event even for admins. This commit
fixes that, and also makes sure the "never_subscribed" list is sorted,
which was not done before and was also a bug.
These bugs went unnoticed because the tests did not cover these cases;
this commit also adds tests for them.
The "streams" field in "/register" response did not include web-public
streams for non-admin users but the data for those are eventually
included in the subscriptions data sent using "subscriptions",
"unsubscribed" and "never_subscribed" fields.
This commit adds code to include the web-public streams in "streams"
field as well as everyone can access those and will make the "streams"
data complete.
The name and docstring were just wrong: having a UserMessage row isn't
sufficient for having message access, and is actually only relevant in a
private stream with private history. The function is only used in a
single place anyway, in bulk_access_messages.
The comment mentioning this function in handle_remove_push_notification
can be tweaked to not mention any function specifically and just say
why we're not checking message access.
Users who used to be subscribed to a private stream, and have since
been removed from it, retain the ability to edit messages/topics and
delete messages that they used to have access to, if other relevant
organization permissions allow these actions. For example, a user may
be able to edit or delete their old messages posted in such a private
stream, and an administrator will be able to delete old messages (that
they had access to) from the private stream.
We fix this by fixing the logic in has_message_access (which lies at
the core of our message access checks, access_message() and
bulk_access_messages()) to not rely on only a UserMessage row for
checking access, but to also verify the stream type and subscription
status.
**Background**
User groups are expected to comply with the DAG constraint for the
many-to-many inter-group membership. The check for this constraint has
to be performed recursively so that we can find all direct and indirect
subgroups of the user group to be added.
This kind of check is vulnerable to phantom reads, which are possible
at the default read committed isolation level, because we cannot
guarantee that the check is still valid when we are adding the
subgroups to the user group.
**Solution**
To avoid having another transaction concurrently update one of the
to-be-subgroups after the recursive check is done and before the
subgroup is added, we use SELECT FOR UPDATE to lock the user group rows.
The lock needs to be acquired before a group membership change is about
to occur, and before any check has been conducted.
Suppose that we are adding subgroup B to supergroup A; the locking
protocol is specified as follows:
1. Acquire a lock for B and all its direct and indirect subgroups.
2. Acquire a lock for A.
For the removal of user groups, we acquire a lock for the user group to
be removed along with all its direct and indirect subgroups. This is the
special case A=B, which is still compliant with the protocol.
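In Django ORM terms, the protocol is roughly the following sketch
(assuming a get_recursive_subgroups-style helper that returns the
recursive CTE queryset, and that zerver.models.UserGroup is imported;
the exact layout is illustrative):
```
from django.db import transaction

@transaction.atomic
def add_subgroup(supergroup: "UserGroup", subgroup: "UserGroup") -> None:
    # 1. Lock B and all of its direct and indirect subgroups, reusing the
    #    same recursive CTE that the DAG check walks.
    list(get_recursive_subgroups(subgroup).select_for_update())
    # 2. Lock A.
    UserGroup.objects.select_for_update().get(id=supergroup.id)
    # Only now is it safe to run the DAG check and insert the membership;
    # a conflicting transaction blocks here (or deadlocks and is aborted).
    perform_dag_check_and_insert(supergroup, subgroup)  # illustrative
```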
**Error handling**
We currently rely on Postgres' deadlock detection to abort transactions
and show an error for the users. In the future, we might need some
recovery mechanism or at least better error handling.
**Notes**
An important note is that we need to reuse the recursive CTE query that
finds the direct and indirect subgroups when applying the lock on the
rows. And the lock needs to be acquired the same way for the addition and
removal of direct subgroups.
User membership change (as opposed to user group membership) is not
affected. Read-only queries aren't either. The locks only protect
critical regions where the user group dependency graph might violate
the DAG constraint, where users are not participating.
**Testing**
We implement a transaction test case targeting some typical scenarios
when an internal server error is expected to happen (this means that the
user group view makes the correct decision to abort the transaction when
something goes wrong with locks).
To achieve this, we add a development view intended only for unit tests.
It has a global BARRIER that can be shared across threads, so that we
can synchronize them to consistently reproduce certain potential race
conditions prevented by the database locks.
The transaction test case launches pairs of threads initiating possibly
conflicting requests at the same time. The tests are set up such that
exactly N of them are expected to succeed, while the rest fail with a
certain error message (we don't know in advance which ones).
**Security notes**
get_recursive_subgroups_for_groups will no longer fetch user groups from
other realms. As a result, trying to add/remove a subgroup from another
realm results in a UserGroup not found error response.
We also implement subgroup-specific checks in has_user_group_access to
keep permission managing in a single place. Do note that the API
currently doesn't have a way to violate that check, because we are only
checking the realm ID now.
We want to make the callers more explicit about how the user group
being accessed will be used, so that the database locks implemented
later can benefit from that visibility.
Updates the main description of the `api/set-typing-status` endpoint
for the new fields in the register response for the typing start,
stop, expired time intervals. Previously these were hardcoded by
the client side code and not the server side code.
Also updates the developer documentation for typing indicators in
the subsystems docs. This refreshes a few parts of that doc that
were already out of date, as well as adds the information about
the new register response fields noted above.
Adds typing notification constants to the response given by
`POST /register`. Until now, these were hardcoded by clients
based on the documentation for implementing typing notifications
in the main endpoint description for `api/set-typing-status`.
This change also reflects updating the web-app frontend code
to use the new constants from the register response.
Co-authored-by: Samuel Kabuya <samuel.mwangikabuya@kibo.school>
Co-authored-by: Wilhelmina Asante <wilhelmina.asante@kibo.school>
Fixes #11767.
Previously multi-character emoji sequences weren't matched in the
emoji regex, so we'd convert the characters to separate images,
breaking the intended display.
This change allows us to match the full emoji sequence, and
therefore show the correct image.
We have modified the code to directly fetch the realm from the Message
object instead of from the "sender" field, and thus we no longer need
to fetch "sender__realm" using select_related.
There is no need to get the realm from the sender, as the
ScheduledMessage object also has a realm field.
There is no direct benefit to this change, but it is nice to maintain
in tests as well the pattern we want to follow in the code.
We do not want to access realm from "sender" field so that
we do not need to pass "sender__realm" argument to
select_related call when querying messages. We can instead
pass realm as argument to wildcard_mention_allowed.
We can directly get the realm object from the Message object now, and
there is no need to get it from the "sender" field of the Message
object.
After this change, we no longer need to fetch the "sender__realm"
field using "select_related"; instead, passing only "realm" to
select_related when querying Message objects is enough.
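In code, the intended pattern is roughly:
```
# Before: the realm came via the sender, requiring a deeper join.
message = Message.objects.select_related("sender__realm").get(id=message_id)
realm = message.sender.realm

# After: Message has realm directly, so a single select_related hop suffices.
message = Message.objects.select_related("realm").get(id=message_id)
realm = message.realm
```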
This commit also updates a couple of cases to directly access the
realm ID from the message object rather than from message.sender.
Since we have already fetched the sender object, accessing realm_id
from the message directly or from message.sender should not matter,
but we can be consistent and directly get the realm from the Message
object whenever possible.
We do not set realm on the Message objects defined for markdown tests,
and this works because we currently access the realm from the sender
object. This commit changes the code to set realm on those Message
objects, as we will be accessing the realm from the Message object
directly in further commits.
Previously, if a user tried to create a webhook using the Webhooks
plugin in Sentry and used the "Test plugin" to test the webhook,
the server would send a 500 error, even though the integration
worked perfectly. This led users to believe that the integration
was not working.
Fixes #26173.
We set the stream_weekly_traffic field to "null" for Subscription
objects in the zephyr mirror realm, as we do not need stream traffic
data there. This makes the subscription data consistent with the
streams data.
This commit also updates the tests to check the never_subscribed data
for the zephyr mirror realm.
Instead of having a "realm.is_zephyr_mirror_realm" check for every
get_streams_traffic call, this commit updates get_streams_traffic to
accept realm as a parameter and return "None" for the zephyr mirror
realm.
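A sketch of the resulting signature (body details elided; the
traffic-query helper name is illustrative):
```
from typing import Dict, Optional, Set

def get_streams_traffic(
    stream_ids: Set[int], realm: "Realm"
) -> Optional[Dict[int, int]]:
    # Short-circuit here instead of making every caller check
    # realm.is_zephyr_mirror_realm before calling.
    if realm.is_zephyr_mirror_realm:
        return None
    return query_stream_traffic(stream_ids)  # illustrative helper
```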
Almost all users of JsonSuccessBase seem to also include
SuccessDescription. /server_settings used a different description from
the rest of the JsonSuccessBase users, but the difference is small
enough that using the generic description of the former
SuccessDescription is fine.
Migrates existing ScheduledEmails for onboarding emails that have
either "zerver/emails/followup_day1" or "zerver/emails/followup_day2"
as the email template prefix to instead use the new template
prefixes "zerver/emails/account_registered" and
"zerver/emails/zulip_onboarding_topics".
Now that we're using the new templates for the onboarding emails,
remove "followup_day1" and "followup_day2" from the EMAIL_TYPES
that are used for scheduled emails.
The "followup_day2" email template name is not clear or descriptive
about the purpose of the email. Creates a duplicate of those email
template files with the template name "zulip_onboarding_topics".
Because any existing scheduled emails that use the "followup_day2"
templates will need to be updated before the current templates can
be removed, we don't do a simple file rename here.
The "followup_day1" email template name is not clear or descriptive
about the purpose of the email. Creates a duplicate of those email
template files with the template name "account_registered".
Because any existing scheduled emails that use the "followup_day1"
templates will need to be updated before the current templates can
be removed, we don't do a simple file rename here.
In servers with `application_server.http_only = true` and
`loadbalancer.ips` set, the DetectProxyMisconfiguration middleware
prevents access over HTTP from IP addresses other than the
loadbalancer.
However, this misses the case of access from localhost over HTTP,
which is safe and expected -- for instance, the `email-mirror-postfix`
script used in the email gateway[^1] will post to `http://localhost/`
by default in such configurations. With the
DetectProxyMisconfiguration installed, this will result in a 403
response.
Exempt requests from `127.0.0.1` and `::1` from proxy-misconfiguration
rejections.
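Inside the middleware, the added special case amounts to something like
this sketch (not the exact code):
```
def is_local_request(request: "HttpRequest") -> bool:
    # Requests from the host itself (e.g., the email gateway posting to
    # http://localhost/) legitimately arrive over plain HTTP with no
    # X-Forwarded-* headers.
    return request.META["REMOTE_ADDR"] in ("127.0.0.1", "::1")
```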
[^1]: https://zulip.readthedocs.io/en/latest/production/email-gateway.html
Removes a response example in the `POST users/me/subscriptions`
documentation that was listed as a 400 error response. It is
actually a variation on the success response for this endpoint.
The current rendering of our API documentation is not set up to
support `"anyOf"` which would allow for validating examples that
match multiple response schemas.
This migration applies under the assumption that extra_data_json has
been populated for all existing and coming audit log entries.
- This removes the manual conversions back and forth for extra_data
throughout the codebase including the orjson.loads(), orjson.dumps(),
and str() calls.
- The custom handler used for converting Decimal is removed since
DjangoJSONEncoder handles that for extra_data.
- We remove None-checks for extra_data because it is now no longer
nullable.
- Meanwhile, we want the bouncer to support processing RealmAuditLog entries for
remote servers before and after the JSONField migration on extra_data.
- Since now extra_data should always be a dict for the newer remote
server, which is now migrated, the test cases are updated to create
RealmAuditLog objects by passing a dict for extra_data before
sending over the analytics data. Note that while JSONField allows for
non-dict values, a proper remote server always passes a dict for
extra_data.
- We still test out the legacy extra_data format because not all
remote servers have migrated to use JSONField extra_data.
This verifies that support for extra_data being a string or None has not
been dropped.
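At call sites, the simplification looks roughly like this sketch (the
event type and field values are illustrative, and realm is assumed to
be an existing Realm object):
```
from django.utils.timezone import now as timezone_now

# Before: extra_data was a string.
#   extra_data=orjson.dumps({"user_ids": [42]}).decode()
#   user_ids = orjson.loads(entry.extra_data)["user_ids"]
# After: dicts round-trip through the JSONField directly.
entry = RealmAuditLog.objects.create(
    realm=realm,
    event_type=RealmAuditLog.USER_GROUP_CREATED,
    event_time=timezone_now(),
    extra_data={"user_ids": [42]},
)
user_ids = entry.extra_data["user_ids"]
```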
Co-authored-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This commit removes the private stream subscriptions of the bot if the
original owner is deactivated and we change the owner to the user who
is reactivating the bot. We unsubscribe the bot from private streams
that the new owner is not subscribed to.
Fixes part of #21700.
We remove the bot's subscriptions for private streams to which the
new owner is not subscribed, and keep the ones to which the new
owner is subscribed, on changing the owner.
This commit also changes the code for sending subscription
remove events to use transaction.on_commit, since we call
the function inside a transaction in do_change_bot_owner;
this also requires some changes in tests in test_events.
Fixes the `/api/register-queue` endpoint documentation so that
`realm_emoji` has the correct type: an object that contains objects.
By correcting the API documentation, we also fix an error in the
test for the events system, which had been relying on the API
documentation listing a list as a possible type for `realm_emoji`
in the register response.
Earlier, for topic wildcard mentions, the 'wildcard_mentioned'
flag was set for all the user-messages (similar to a stream wildcard
mention).
The flag should be set for the topic participants only.
The bug was introduced in 4c9d26c.