We want callers to be more explicit about how the user group being
accessed will be used, so that the database lock implemented later
can benefit from that visibility.
Updates the main description of the `api/set-typing-status` endpoint
to cover the new register-response fields for the typing start,
stop, and expired time intervals. Previously, these were hardcoded
in client-side code rather than provided by the server.
Also updates the developer documentation for typing indicators in
the subsystems docs. This refreshes a few parts of that doc that
were already out of date, and adds information about the new
register response fields noted above.
Adds typing notification constants to the response given by
`POST /register`. Until now, these were hardcoded by clients
based on the documentation for implementing typing notifications
in the main endpoint description for `api/set-typing-status`.
This change also updates the web-app frontend code to use the new
constants from the register response.
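As a rough illustration, a client can now read these intervals from
the register response instead of hardcoding them. This is a sketch
using the Python bindings; the field names are assumptions based on
this description, not verified against the documented API:
```python3
import zulip  # the official Python bindings

client = zulip.Client(config_file="~/.zuliprc")
result = client.register(event_types=["typing"])

# Field names below are illustrative, not the documented API.
started_wait_ms = result["server_typing_started_wait_period_milliseconds"]
stopped_wait_ms = result["server_typing_stopped_wait_period_milliseconds"]
expiry_ms = result["server_typing_started_expiry_period_milliseconds"]
```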
Co-authored-by: Samuel Kabuya <samuel.mwangikabuya@kibo.school>
Co-authored-by: Wilhelmina Asante <wilhelmina.asante@kibo.school>
Fixes #11767.
Previously multi-character emoji sequences weren't matched in the
emoji regex, so we'd convert the characters to separate images,
breaking the intended display.
This change allows us to match the full emoji sequence, and
therefore show the correct image.
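A minimal sketch of the idea; the character ranges below are
illustrative, not the actual emoji regex:
```python3
import re

# A base emoji may be extended by variation selectors, skin-tone
# modifiers, or ZWJ-joined continuations; consume the whole
# sequence as a single match instead of one match per character.
EMOJI = "[\U0001F000-\U0001FAFF\u2600-\u27BF]"
EMOJI_SEQUENCE = re.compile(
    f"{EMOJI}(?:[\uFE0F\U0001F3FB-\U0001F3FF]|\u200D{EMOJI})*"
)

# One match for the whole family emoji, not three separate ones.
assert len(EMOJI_SEQUENCE.findall("👩\u200D👩\u200D👧")) == 1
```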
We have modified the code to fetch the realm directly from the
Message object instead of via the "sender" field, so we no longer
need to fetch "sender__realm" using select_related.
There is no need to get the realm from the sender, as the
ScheduledMessage object also has a realm field.
There is no direct benefit to this change, but it is nice to
maintain in tests the same pattern we want to follow in the code.
We do not want to access the realm via the "sender" field, so that
we do not need to pass "sender__realm" to the select_related call
when querying messages. We can instead pass the realm as an
argument to wildcard_mention_allowed.
We can now get the realm object directly from the Message object,
so there is no need to get it from the "sender" field of the
Message object.
After this change, we no longer need to fetch "sender__realm" via
"select_related"; passing only "realm" to select_related when
querying Message objects is enough.
This commit also updates a couple of cases to access the realm ID
directly from the message object rather than message.sender. Since
we have already fetched the sender object, accessing realm_id from
the message or from message.sender makes no practical difference,
but we should be consistent and get the realm from the Message
object whenever possible.
We do not set realm on the Message objects defined for markdown
tests, which currently works because we access the realm via the
sender object. This commit changes the code to set realm on these
Message objects, as we will be accessing the realm from the Message
object directly in further commits.
Previously, if a user tried to create a webhook using the Webhooks
plugin in Sentry and used the "Test plugin" to test the webhook,
the server would send a 500 error, even though the integration
worked perfectly. This led users to believe that the integration
was not working.
Fixes #26173.
We set the stream_weekly_traffic field to "null" for Subscription
objects in the zephyr mirror realm, as we do not need stream
traffic data there. This makes the subscription data consistent
with streams data.
This commit also updates a test to check never_subscribed data for
the zephyr mirror realm.
Instead of having a "realm.is_zephyr_mirror_realm" check for every
get_streams_traffic call, this commit updates get_streams_traffic to
accept the realm as a parameter and return "None" for zephyr mirror
realms.
Almost all users of JsonSuccessBase seem to also include
SuccessDescription. /server_settings used a different description from
the rest of the JsonSuccessBase users, but the difference is small
enough that using the generic description of the former
SuccessDescription is fine.
Migrates existing ScheduledEmails for onboarding emails that have
either "zerver/emails/followup_day1" or "zerver/emails/followup_day2"
as the email template prefix to instead use the new template
prefixes "zerver/emails/account_registered" and
"zerver/emails/zulip_onboarding_topics".
Now that we're using the new templates for the onboarding emails,
remove "followup_day1" and "followup_day2" from the EMAIL_TYPES
that are used for scheduled emails.
The "followup_day2" email template name is not clear or descriptive
about the purpose of the email. Creates a duplicate of those email
template files with the template name "zulip_onboarding_topics".
Because any existing scheduled emails that use the "followup_day2"
templates will need to be updated before the current templates can
be removed, we don't do a simple file rename here.
The "followup_day1" email template name is not clear or descriptive
about the purpose of the email. Creates a duplicate of those email
template files with the template name "account_registered".
Because any existing scheduled emails that use the "followup_day1"
templates will need to be updated before the current templates can
be removed, we don't do a simple file rename here.
In servers with `application_server.http_only = true` and
`loadbalancer.ips` set, the DetectProxyMisconfiguration middleware
prevents access over HTTP from IP addresses other than the
loadbalancer.
However, this misses the case of access from localhost over HTTP,
which is safe and expected -- for instance, the `email-mirror-postfix`
script used in the email gateway[^1] will post to `http://localhost/`
by default in such configurations. With the
DetectProxyMisconfiguration installed, this will result in a 403
response.
Exempt requests from `127.0.0.1` and `::1` from
proxy-misconfiguration rejections.
[^1]: https://zulip.readthedocs.io/en/latest/production/email-gateway.html
Removes a response example in the `POST users/me/subscriptions`
documentation that was listed as a 400 error response. It is
actually a variation on the success response for this endpoint.
The current rendering of our API documentation is not set up to
support `"anyOf"` which would allow for validating examples that
match multiple response schemas.
This migration applies under the assumption that extra_data_json has
been populated for all existing and coming audit log entries.
- This removes the manual conversions back and forth for extra_data
throughout the codebase including the orjson.loads(), orjson.dumps(),
and str() calls.
- The custom handler used for converting Decimal is removed since
DjangoJSONEncoder handles that for extra_data.
- We remove None-checks for extra_data because it is now no longer
nullable.
- Meanwhile, we want the bouncer to support processing RealmAuditLog entries for
remote servers before and after the JSONField migration on extra_data.
- Since extra_data should now always be a dict for newer (migrated)
remote servers, the test cases are updated to create RealmAuditLog
objects by passing a dict for extra_data before sending over the
analytics data. Note that while JSONField allows for non-dict
values, a proper remote server always passes a dict for extra_data.
- We still test out the legacy extra_data format because not all
remote servers have migrated to use JSONField extra_data.
This verifies that support for extra_data being a string or None has not
been dropped.
Co-authored-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This commit removes the private stream subscriptions of the bot if
the original owner is deactivated and we change the owner to the
user who is reactivating the bot. We unsubscribe the bot from
private streams that the new owner is not subscribed to.
Fixes part of #21700.
We remove the bot's subscriptions for private streams to which the
new owner is not subscribed, and keep the ones to which the new
owner is subscribed, when changing the owner.
This commit also changes the code for sending subscription
remove events to use transaction.on_commit, since we call
the function inside a transaction in do_change_bot_owner;
this also requires some changes to tests in test_events.
Fixes the `/api/register-queue` endpoint documentation so that
`realm_emoji` has the correct type: an object containing objects.
By correcting the API documentation, we also fix an error in the
test for the events system, which had been relying on the API
documentation having a list as a possible type for `realm_emoji`
in the register response.
Earlier, for topic wildcard mentions, the 'wildcard_mentioned'
flag was set for all the user messages (similar to stream wildcard
mentions).
The flag should be set for the topic participants only.
The bug was introduced in 4c9d26c.
For topic wildcard mentions, the 'wildcard_mentioned' flag is set
for those user messages having 'user_profile_id' in
'topic_participant_user_ids', i.e. all topic participants.
Earlier, the flag was set if the 'user_profile_id' existed in
'all_topic_wildcard_mention_user_ids'.
'all_topic_wildcard_mention_user_ids' contains the ids of those
users who are topic participants and have enabled notifications
for '@topic' mentions.
The earlier approach was incorrect, as it would set the
'wildcard_mentioned' flag only for those topic participants
who had enabled notifications for '@topic' mentions, instead
of setting the flag for all the topic participants.
The bug was introduced in 4c9d26c.
This commit addresses the issue where the topic highlighting
in search results was offset by one character when an
apostrophe was present. The problem stemmed from a disparity
between the HTML escaping generated by `func.escape_html`,
which is used to obtain `topic_matches`, and the escaping
performed by `django.utils.html.escape` for apostrophes ('):
func.escape_html | django.utils.html.escape
-----------------+--------------------------
     &#39;       |          &#x27;
To fix this, the SQL query is changed to return the HTML-escaped
topic name generated by `func.escape_html`.
Fixes: #25633.
Currently, we display the "Complete the organization profile"
banner immediately after the organization is created. It's
important to strongly encourage orgs to configure their profile,
so we now delay showing the banner until the organization is at
least 15 days old, if the profile has still not been configured.
This also allows users to check out Zulip and see how it works
before configuring the organization settings.
Fixes: #24122.
Previously, test_openapi_arguments had assumed that an endpoint
not using rest_dispatch used the GET method for the request. This
was not the case for the "/fetch_api_key" and "/dev_fetch_api_key"
endpoints, which is why those endpoints were marked as pending
even though they were documented in `zerver/openapi/zulip.yaml`.
Updates test_openapi_arguments to check a set of endpoints that
are documented and don't use the GET method so that these endpoints
can be tested and removed from the pending_endpoints set.
This adds API support to reorder linkifiers and makes sure that the
returned lists of linkifiers from `GET /events`, `POST /register`, and
`GET /realm/linkifiers` are always sorted in the order in which they
should be processed when rendering linkifiers.
We set the new `order` field to the ID with the migration. This
preserves the order of the existing linkifiers.
Newly added linkifiers are always ordered last. When reordering,
the `order` field of all linkifiers in the same realm is updated,
in a manner similar to how we implement ordering for
`custom_profile_fields`.
The curl examples of reordering linkifiers require there to be some
linkifiers in the database to be reordered. This adjusts some test cases
so they do not assume that there is no linkifier in the test db.
Each unittest subTest can fail without interrupting the other subTests.
By wrapping the test for each view function, we can get all validation
errors at once, which can be useful if multiple endpoints are updated.
More importantly, if the test fails anywhere inside test_openapi but
before the formatted output is printed, we will not lose the information
of which view function fails the validation. Because we attach the name
of the function to the subTest:
```
FAIL: test_openapi_arguments (zerver.tests.test_openapi.OpenAPIArgumentsTest) [zerver.views.alert_words.add_alert_words]
```
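A sketch of the wrapping; the mapping and helper names here are
illustrative:
```python3
import unittest

# Maps dotted view names to view functions; populated elsewhere.
functions_to_check: dict = {}

class OpenAPIArgumentsTest(unittest.TestCase):
    def validate_against_openapi(self, function) -> None:
        ...  # illustrative stand-in for the real validation

    def test_openapi_arguments(self) -> None:
        for function_name, function in sorted(functions_to_check.items()):
            # Each view gets its own subTest, so one failure neither
            # stops the loop nor hides which function failed.
            with self.subTest(function_name):
                self.validate_against_openapi(function)
```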
The number of affected objects may be quite high; they are
selected by an `id IN (...)` query, and updated with a giant `CASE`.
This turns out to be quadratic, and can cause large queries to take
hours, in a state where they cannot be terminated, when PostgreSQL >11
tries to JIT the query.
Set a batch_size as a stopgap performance fix before moving to
`.update()` as a real fix.
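Illustratively (the model, rows, and field are assumptions, not the
actual call site):
```python3
# Before: one statement with a huge `id IN (...)` list and a
# linear-length CASE, which PostgreSQL >11 may spend hours JITing.
# UserMessage.objects.bulk_update(rows, ["flags"])

# Stopgap: bound each generated statement's size.
UserMessage.objects.bulk_update(rows, ["flags"], batch_size=1000)
```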
We have historically cached two types of values
on a per-request basis inside of memory:
* linkifiers
* display recipients
Both of these caches were hand-written, and they
both actually cache values that are also in memcached,
so the per-request cache essentially only saves us
from a few memcached hits.
I think the linkifier per-request cache is a necessary
evil. It's an important part of message rendering, and
it's not super easy to structure the code to just get
a single value up front and pass it down the stack.
I'm not so sure we even need the display recipient
per-request cache any more, as we are generally pretty
smart now about hydrating recipient data in terms of
how the code is organized. But I haven't done thorough
research on that hypothesis.
Fortunately, it's not rocket science to just write
a glorified memoize decorator and tie it into key
places in the code:
* middleware
* tests (e.g. asserting db counts)
* queue processors
That's what I did in this commit.
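A minimal sketch of such a decorator; the names are illustrative,
and the flush hook is what middleware, tests, and queue processors
would call:
```python3
import functools
from typing import Any, Callable, Dict, Tuple, TypeVar

ReturnT = TypeVar("ReturnT")

# Process-local cache, valid only for the duration of one request.
REQUEST_CACHE: Dict[Tuple[Any, ...], Any] = {}

def flush_per_request_caches() -> None:
    """Called from middleware, test setup, and queue processors."""
    REQUEST_CACHE.clear()

def per_request_cache(func: Callable[..., ReturnT]) -> Callable[..., ReturnT]:
    @functools.wraps(func)
    def wrapped(*args: Any) -> ReturnT:
        key = (func.__qualname__, args)
        if key not in REQUEST_CACHE:
            REQUEST_CACHE[key] = func(*args)
        return REQUEST_CACHE[key]
    return wrapped
```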
This commit definitely reduces the amount of code
to maintain. I think it also gets us closer to
possibly phasing out this whole technique, but that
effort is beyond the scope of this PR. We could
add some instrumentation to the decorator to see
how often we get a non-trivial number of saved
round trips to memcached.
Note that when we flush linkifiers, we just use
a big hammer and flush the entire per-request
cache for linkifiers, since there is only ever
one realm in the cache.
We want to phase out the use of get_display_recipient
for streams, and this is the last place that I
eliminate it. The next commit will eliminate the
dead code and make mypy types tighter.
This change will make push notifications slightly
slower in some situations, but we avoid all the
complexity of a cache, and this code tends to run
offline.
We could always make this code a bit more efficient
by being a little smarter about what data we fetch
up front. For example, get_apns_alert_title gets
called by a function that already has the stream
name. It's just a bit of a pain to refactor when
you have all the DM codepath mucked up with the
stream codepath.
We generally want to avoid extra moving parts when we
stringify objects. We also want to phase out the use
of get_display_recipient for streams.
Note that we still hit get_display_recipient to
stringify DM and huddle objects, and it's kind of ugly
how we do it, but that's outside the scope of my
current PR.
There's no need for the complexity and extra round
trips to call get_display_recipient in a testing
context.
We also eliminate the unnecessary call to check_string.
This function is poorly named, but that's a sweep
for another day.
The get_display_recipient helper is a clumsy way to get
stream names, and it's not even representative of how
most of our code retrieves stream names.
The new helper also double-checks that the Stream
object has the correct recipient id.
We no longer have to reason about the 12 possible
ways of invoking get_narrow_url. We also avoid
double computation in a couple places.
Finally, we get stricter type checks by just inlining
the calls.
We don't need to call get_display_recipient for
non-stream messages.
I will rename display_recipient in the next commit;
if I were to combine the steps the diff would be too
hard to read.
This commit renames the keyword 'pm' to 'dm' in the
'pm_mention_email_disabled_user_ids' and
'pm_mention_push_disabled_user_ids' attributes of the
'RecipientInfoResult' dataclass.
'pm' and 'dm' are the acronyms for 'private message' and
'direct message' respectively.
It includes 'TODO/compatibility' code to support the old format
fields in the tornado queues during the Zulip server upgrades.
This commit renames the 'PRIVATE_MESSAGE' attribute of the
'NotificationTriggers' class to 'DIRECT_MESSAGE'.
Custom migration to update the existing value in the database.
It includes 'TODO/compatibility' code to support the old
notification trigger value 'private_message' in the
push notification queue during the Zulip server upgrades.
Earlier 'private_message' was one of the possible values for the
'trigger' property of the '[`POST /zulip-outgoing-webhook`]' response;
Update the docs to reflect the change in the above-mentioned trigger
value.
This commit adds 'TODO/compatibility' code to support the
old notification trigger values in the push notification queue
during the Zulip server upgrades.
In f4fa82e, we renamed the following notification triggers:
* 'wildcard_mentioned' to 'stream_wildcard_mentioned'
* 'followed_topic_wildcard_mentioned' to
'stream_wildcard_mentioned_in_followed_topic'.
This should have been added in f4fa82e.
This commit adds code to pass all the required arguments to
select_related call for Message objects such that only the
required related fields are fetched from the database.
Previously, we did not pass any arguments to select_related,
so all the directly and indirectly related fields were fetched
when many of them were actually not being used and made the
query unnecessarily complex.
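For illustration (the exact field list varies by call site; these
names are assumptions):
```python3
# Before: no arguments, so Django follows every non-null foreign
# key (directly and indirectly), making the query needlessly complex.
message = Message.objects.select_related().get(id=message_id)

# After: fetch only the related rows the caller actually uses.
message = Message.objects.select_related(
    "realm", "sender", "sending_client", "recipient"
).get(id=message_id)
```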
This commit adds code to pass all the required arguments to
select_related call for Message objects such that only the
required related fields are fetched from the database.
Previously, we did not pass any arguments to select_related,
so all the directly and indirectly related fields were fetched
when many of them were actually not being used and made the
query unnecessarily complex.
This commit adds code to pass all the required arguments to
select_related call for Message objects such that only the
required related fields are fetched from the database.
Previously, we did not pass any arguments to select_related,
so all the directly and indirectly related fields were fetched
when many of them were actually not being used and made the
query unnecessarily complex.
This commit adds code to pass all the required arguments to
select_related call for Message objects such that only the
required related fields are fetched from the database.
Previously, we did not pass any arguments to select_related,
so all the directly and indirectly related fields were fetched
when many of them were actually not being used and made the
query unnecessarily complex.
This commit adds code to pass all the required arguments to
select_related call for Subscription objects such that only
the required related fields are fetched from the database.
Previously, we did not pass any arguments to select_related,
so all the directly and indirectly related fields were fetched
when many of them were actually not being used and made the
query unnecessarily complex.
This commit adds code to pass all the required arguments to
select_related call for MissedMessageEmailAddress such that
only the required related fields are fetched from the database.
Previously, we did not pass any arguments to select_related,
so all the directly and indirectly related fields were fetched
when many of them were actually not being used and made the
query unnecessarily complex.
This commit updates the code to pass "realm" and "recipient" as
arguments to the select_related call in get_stream_by_id_in_realm.
Previously, since there were no arguments, it fetched
can_remove_subscribers_group and the related fields of the
"Realm" model as well, which were not being used, but did not
fetch "recipient", as it is a nullable field.
We only need the ID of the recipient and not the full object, so we
directly access the ID using "stream.recipient_id" instead of using
the complete recipient object.
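That is, the usual Django idiom:
```python3
# Before: accessing stream.recipient may fetch the full Recipient row.
recipient_id = stream.recipient.id

# After: read the foreign-key column already present on Stream.
recipient_id = stream.recipient_id
```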
This commit removes the get_huddle_recipient function, and we now
use get_or_create_huddle in get_recipient_from_user_profiles.
As a result of this change, we do not fetch the recipient from the
Huddle object, but instead get it using the "id" and "recipient_id"
fields available on the Huddle object, as we do for personal
messages. This change allows us to avoid fetching the recipient
object via select_related when querying the Huddle object.
We now fetch the recipient object when querying the "Huddle" object
in get_or_create_huddle_backend, as this query is eventually used
to get the recipient object only in get_huddle_recipient.
This commit also updates the select_related call in the code that
populates Huddle objects in the cache to pass "Recipient" as an
argument. Previously, no argument was passed to select_related;
since Huddle has no non-null related fields, no related objects
were being fetched.
- Adds instructions block and relative link to Starred messages.
- Adds "Toggle starred messages counter" subheading.
- Adds "Searching for messages" as a related article.
Makes a few updates to the text to match current API documentation
styles.
Updates the endpoint example to have accurate stream objects that
are returned in the response.
Fixes formatting of link in feature level 199 changelog entry.
Updates stream object examples for the `stream_weekly_traffic`
field added in feature level 199.
This allows us to not have to keep extending the tool for every
one-off use case and set of users; we build a pipeline to generate the
appropriate JSON file, write a template which uses the data it
provides, and run the tool with them together.
The set of objects in the `users` object can be very large (in some
cases, literally every object in the database), and turning them
into a giant `id in (...)` query to handle the one tiny corner case
which we never use is silly.
Switch the `--users` codepath to returning a QuerySet as well, so it
can be composed. We pass a QuerySet into send_custom_email as well,
so it can ensure that the realm is fetched via `select_related`, no
matter how the QuerySet was generated.
Substituting the rendered body via Jinja2 means that it cannot
perform any interpolation itself. While the string replacement is
hacky, it is the only solution which avoids running Jinja2 more than
once, and also allows the user-supplied content to have Jinja2
substitutions in it.
In this commit, we introduce a new option in the stream creation
UI - a 'Default stream for new users' checkbox. By default, the
checkbox is set to 'off' and is only visible to admins. This
allows admins to easily designate a stream as the default stream
for new users during stream creation.
Fixes #24048.
This commit adds a 'Default stream for new users' checkbox in
the stream editing UI to allow admins to easily add or remove
a stream as the default stream for new users. Previously, this
functionality required navigating to a separate menu.
Fixes a part of #24048.
Adds a test for when a value for a user's custom profile field is
removed and not set to a new value. The omission of this event in
the tests was noted as a possibility in #22103, which updated the
API documentation for these events having `null` for the field
value.
Adding the test revealed that the events logic was not deleting
the field from the user object and was instead setting it to
`None`, so this fixes that logic as well. A similar bug was fixed
in commit 96c61a1a41 for when custom profile fields are removed
from a realm.
When applying realm_user update events, some of the event fields
for the person object were being updated to the same value in a
loop. Unnests those calls from the loop over the existing fields
so that they are only updated once.
The original nesting was introduced in commit 649fccde6b and
was expanded in other additions to the logic for these events.
This commit explicitly sets the following user settings:
* 'enable_followed_topic_email_notifications'
* 'enable_followed_topic_push_notifications'
to True.
Collectively, this improves the readability of the test and
the following two tests.
In 'test_copy_default_settings_from_another_user', we verify that
'cordelia' and 'iago' have the same values for their user settings,
but 'hamlet' has the defaults.
Earlier, we explicitly set the 'color_scheme' setting for 'hamlet' as
'UserProfile.COLOR_SCHEME_NIGHT', which is not needed.
As we verify, 'hamlet' should have the defaults.
So just verifying if the 'color_scheme' setting for 'hamlet' is
'UserProfile.COLOR_SCHEME_AUTOMATIC' (default) fulfils our purpose.
The extra line of code was introduced in b10f156.
This commit adds code to pass stream traffic data using
the "stream_weekly_traffic" field in stream objects.
We already include the traffic data in Subscription objects,
but the traffic data does not depend on the user-to-stream
relationship and is stream-only information, so it's better
to include it in Stream objects. We may remove the traffic
data and other stream information fields from Subscription
objects in the future.
This will help clients correctly display the stream traffic
data in cases where a client receives a stream creation event
and no subscription event, for an already existing stream which
the user did not previously have access to.
This commit improves the description of the stream_weekly_traffic
field in the API documentation to make it clear to readers how to
interpret the value.
This commit changes the code to not use the get_client_data
function and instead use the `stream_to_dict` function to
get the stream data in dictionary form. This is a prep commit
to add stream traffic data to Stream objects.
This commit adds a stream_to_dict method which, as of now, is
the same as the Stream.to_dict method. This is a prep commit
to include stream traffic data in stream objects.
Earlier, while changing group-level group-based settings,
there was no check whether the new value for the setting was
the same as the current value.
This commit adds that check: a setting value is now only
changed when it is not equal to the present value.
Previously, this code:
```python3
old_archived_attachments = ArchivedAttachment.objects.annotate(
has_other_messages=Exists(
Attachment.objects.filter(id=OuterRef("id"))
.exclude(messages=None)
.exclude(scheduled_messages=None)
)
).filter(messages=None, create_time__lt=delta_weeks_ago, has_other_messages=False)
```
...protected from removal any ArchivedAttachment objects where there
was an Attachment which had _both_ a message _and_ a scheduled
message, instead of _either_ a message _or_ a scheduled message.
Since files are removed from disk when the ArchivedAttachment rows are
deleted, this meant that if an upload was referenced in two messages,
and one was deleted, the file was permanently deleted when the
ArchivedMessage and ArchivedAttachment were cleaned up, despite being
still referenced in live Messages and Attachments.
Switch from `.exclude(messages=None).exclude(scheduled_messages=None)`
to `.exclude(messages=None, scheduled_messages=None)` which "OR"s
those conditions appropriately.
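The fixed queryset thus reads:
```python3
old_archived_attachments = ArchivedAttachment.objects.annotate(
    has_other_messages=Exists(
        Attachment.objects.filter(id=OuterRef("id"))
        # Excluding rows where _both_ are None keeps any Attachment
        # that has either a message or a scheduled message.
        .exclude(messages=None, scheduled_messages=None)
    )
).filter(messages=None, create_time__lt=delta_weeks_ago, has_other_messages=False)
```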
Pull the relevant test into its own file, and expand it significantly
to cover this, and other, corner cases.
I move the helper user_ids_to_users to the only
place that it's used, and then I simplify it to
do a direct database query.
These endpoints aren't hit often enough to justify
caching complexity, and for really large user groups,
hitting the cache can actually be counterproductive.
Particularly when you add new users to an existing
group, the bulk of the cost is sending out
notification messages to users.
The only change to the test is that I added an
assertion on the query count.
The most expensive thing for adding user groups is sending
all the notification messages, but we at least want to make
sure that the basic stuff runs in constant time.
The cross-realm bots rarely change, and there are only
a few of them, so we just query them all at once and
put them in the cache.
Also, we put the dictionaries in the cache, instead of
the user objects, since there is nothing time-sensitive
about the dictionaries, and they are small. This saves
us a little time computing the avatar url and things
like that, not to mention marshalling costs.
This commit also fixes a theoretical bug where we would
have stale cache entries if somebody somehow modified
the cross-realm bots without bumping KEY_PREFIX.
Internally we no longer pre-fetch the realm objects for
the bots, but we don't get overly precise about picking
individual fields from UserProfile, since we rarely hit
the database and since we don't store raw ORM objects
in the cache.
The test diffs make it look like we are hitting the
cache an extra time, but the tests weren't counting
bulk fetches. Now we only use a single key for all
bots rather than a key per bot.
The bulk_get_users() function was only being used to
get cross-realm bots.
It appears that it was introduced in
f02e5b90f6 for that
specific use case.
Now we make the function more specific and test it more
accurately.
We also eliminate a lot of janky code and comments,
including some code that never had test coverage.
Incidentally, it appears that we did not have any code
to invalidate the cache keys here, and that is still
the case. In practice I assume people rarely
re-configure their cross-realm bots unless they are
upgrading the server, and then KEY_PREFIX comes into
play. 25fd4c5508 seems
to have caused that hopefully harmless regression.
A further step will be to make this cache more coarse,
since there are only a few cross-realm bots. The next
commit will hopefully simplify the code and address the
validation pitfall.
Earlier, the API endpoints related to user_group accepted and
returned a field `can_mention_group_id` representing the ID
of the user_group whose members can mention the group.
This commit renames this field to `can_mention_group`.
Earlier, the API endpoints related to streams accepted and
returned a field `can_remove_subscribers_group_id` representing
the ID of the user_group whose members can remove subscribers
from the stream.
This commit renames this field to `can_remove_subscribers_group`.
An exception which escapes from this loop can kill the background
worker thread; this results in consuming the queue (leading to the
illusion of progress) but more and more rows silently piling up in the
ScheduledMessageNotificationEmail table.
Wrap the inside of the `while True` loop in a try/except to make
sure that no exceptions escape and kill the background thread.
even more indentation, the inner loop is extracted into its own
function. It returns true/false to signal if the `self.stopping` was
set to tell the loop to stop; we cannot check it ourselves in the
outer loop because it needs to hold the lock to be examined.
Previously, the view function was responsible for doing a first
pass of the validations done for RealmPlayground. That is no longer
the case. This refactors do_add_realm_playground into
check_add_realm_playground and makes it responsible for validating
the playground fields and handling any ValidationError raised.
Dropping support for url_prefix for RealmPlayground, the server now
uses only url_template for playground creation, retrieval, and
audit logging upon removal.
This does the necessary handling so that url_template is expanded with
the extracted code.
Fixes #25723.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This commit removes the stray strings used to refer to
various types of notification triggers.
We use the attributes of the 'NotificationTriggers' class instead.
We populate url_template by simply escaping "{" and "}" as well as
appending "{code}" to the end of the legacy url_prefix.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
As an intermediate step before we fully support url_template for realm
playgrounds, we populate url_template in the backend ensuring that all
the new entries will be validated. With a later backfilling migration,
we prepare the database such that all the records will have a valid URL
template.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Having a more precise type annotation helps with ensuring the migration
to use URL templates gets type checked.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This commit updates the `0455_set_default_for_can_mention_group`
migration to be more efficient when running for a large number
of UserGroup objects.
Previously, we looped over all UserGroup objects and then did
a `bulk_update`. All of this happened in a single transaction,
and the transaction was held for an unacceptably long time on
servers with a large number of user groups. Also, the SQL
generated by Django for `bulk_update` took almost quadratic
time to evaluate, as it contained a linear-length "CASE"
statement that was resolved for each row.
We now use ".update" instead, so that we can write the migration
without a loop, and we update the objects in batches of size
1000 so that we do not hold a transaction for a very long time.
This also avoids the inefficient SQL that was being executed
due to `bulk_update`.
We also update the queries to exclude groups that already have
`can_mention_group` set to a non-null value, which helps the
migration complete quickly when it is run more than once.
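A sketch of the batched approach (the model access and target value
are illustrative, not the actual migration code):
```python3
from django.db.models import Max

BATCH_SIZE = 1000
EVERYONE_GROUP_ID = 1  # placeholder for the real per-realm default

def set_default_for_can_mention_group(apps, schema_editor) -> None:
    UserGroup = apps.get_model("zerver", "UserGroup")
    max_id = UserGroup.objects.aggregate(max_id=Max("id"))["max_id"] or 0
    lower_bound = 0
    while lower_bound < max_id:
        # One plain UPDATE per batch: no linear-length CASE statement,
        # and no single transaction held across the whole table.
        UserGroup.objects.filter(
            id__gt=lower_bound,
            id__lte=lower_bound + BATCH_SIZE,
            can_mention_group=None,  # skip rows already migrated
        ).update(can_mention_group_id=EVERYONE_GROUP_ID)
        lower_bound += BATCH_SIZE
```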
Updates the realm field `default_code_block_language` to have a default
value of an empty string instead of None. Also updates the web-app to
check for the empty string and not `null` to indicate no default is set.
This means that both new realms and existing realms that have no default
set will have the same value for this setting: an empty string.
Previously, new realms would have None if no default was set, while realms
that had set and then unset a value for this field would have an empty
string when no default was set.
Expands support for the message ID operand of the "id" operator to
be either a string or an integer. Previously, this operand was
always validated as a string.
Restore the default django.utils.log.AdminEmailHandler when
ERROR_REPORTING is enabled. Those with more sophisticated needs can
turn it off and use Sentry or a Sentry-compatible system.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We used to access the complete objects for UserProfile foreign
keys like "bot_owner" and "default_sending_stream" where we only
needed their IDs.
This commit fixes some of these instances: we now get the ID
directly using "bot_owner_id" and "default_sending_stream_id", so
that we can avoid the unnecessary complexity of accessing the
complete object.
This commit updates the code to pass "realm" and "bot_owner" args
to the select_related call in get_users. We pass these args because
the caches this function populates are used by the get_user and
get_user_profile_by_api_key functions, which also select both of
these fields when querying for UserProfile objects.
This commit updates the select_related calls in queries to get
UserProfile objects in get_user, get_user_by_delivery_email,
get_user_profile_by_id, get_user_profile_by_id_in_realm and
get_user_profile_by_api_key functions to pass "realm" and
"bot_owner" as arguments to select_related call.
These functions are used in different parts of code to get
the UserProfile object and realm is accessed using the user
object at many places.
"bot_owner" field is also used in some places like to check
whether a bot can access a stream, to check whether a user
can change modify another user, in webhooks code to send the
message to the bot owner, and in tests as well. There can be
some places where the bot owner is not required and in most
such cases the code would only be accessed for human users,
which means the bot_owner will be null for these cases and
would avoid complexity and performance issues.
Note that previously, no arguments were passed to select_related
and thus only realm field was fetched during the query.
This commit updates the select_related call in the query to get
the UserProfile object in the get_system_bot function to pass
"realm" as an argument to the select_related call.
The "get_system_bot" function is mostly used to get cross-realm
bots, which are used as senders to send messages.
Fields like default_events_register_stream and recipient
are not required for these cases. The bot_owner field is used
to check access to a stream when sending a message, but
cross-realm bots are handled differently and the bot_owner
check is not required.
Also, note that "realm" is the only non-null foreign key field
in UserProfile object, so select_related() was only fetching
realm object previously as well. But we should still pass
"realm" as argument in select_related call so that we can make
sure that only required fields are selected in case we add
more foreign keys to UserProfile in future.
This commit updates the select_related calls in queries to
get UserProfile objects in get_user function called in
management commands to pass "realm" as argument to
select_related call.
There are some management commands like deactivate_user,
change_full_name, etc. which might need fields like
"default_sending_stream" when changing full name of a bot
or something similar, but we don't think that would happen
often and we can afford to have a DB round trip to get
these fields if needed.
Also, note that "realm" is the only non-null foreign key
field in UserProfile object, so select_related() was only
fetching realm object previously as well. But we should
still pass "realm" as argument in select_related call so
that we can make sure that only required fields are
selected in case we add more foreign keys to UserProfile
in future.
This commit updates the select_related calls in queries
to get UserProfile objects in sync_ldap_user_data code
to pass "realm" as argument to select_related call.
Also, note that "realm" is the only non-null foreign key
field in UserProfile object, so select_related() was only
fetching realm object previously as well. But we should
still pass "realm" as argument in select_related call so
that we can make sure that only required fields are
selected in case we add more foreign keys to UserProfile
in future.
This commit updates select_related call in get_user_profile_by_email
to pass "realm" as argument.
This function is intended to be used for manual manage.py shell
work so we just keep the behavior same as before as "realm" is
the only non-null related field in UserProfile.
This commit updates the select_related calls in queries
to get UserProfile objects in send_custom_email code to
pass "realm" as argument to select_related call.
Also, note that "realm" is the only non-null foreign key
field in UserProfile object, so select_related() was only
fetching realm object previously as well. But we should
still pass "realm" as argument in select_related call so
that we can make sure that only required fields are selected
in case we add more foreign keys to UserProfile in future.
This commit updates the select_related calls in queries to
get UserProfile objects in dev_login code to pass "realm"
as argument to select_related call.
Also, note that "realm" is the only non-null foreign key field
in UserProfile object, so select_related() was only fetching
realm object previously as well. But we should still pass "realm"
as argument in select_related call so that we can make sure that
only required fields are selected in case we add more foreign
keys to UserProfile in future.
This commit updates select_related call to pass "realm" as
argument in select_related call in fetch_users_by_id function
as we only require realm for the UserProfile objects fetched
using fetch_users_by_id.
Also, note that "realm" is the only non-null foreign key field
in UserProfile object, so select_related() was only fetching
realm object previously as well. But we should still pass "realm"
as argument in select_related call so that we can make sure that
only required fields are selected in case we add more foreign
keys to UserProfile in future.
We do not use any related fields for the UserProfile objects
fetched by get_active_users, so we can simply remove the
select_related call.
The user object from get_active_users was used to get the realm,
but since get_active_users is called on a realm object, we can
directly use that realm object. This change also leads to
some changes in the cache code, where we now pass the realm
to the function instead of selecting it from the UserProfile
object.
This commit removes select_related call from
get_soft_deactivated_users_for_catch_up as
we do not use any related fields for the
UserProfile objects fetched using this call.
Uploads are well-positioned to use S3's "intelligent tiering"
storage class. Add a setting to let uploaded files declare their
desired storage class at upload time, and document how to move
existing files to the same storage class.
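For example (the setting name here follows this description and may
not match the shipped name exactly):
```python3
# /etc/zulip/settings.py
# Store newly uploaded files in S3's Intelligent-Tiering storage class.
S3_UPLOADS_STORAGE_CLASS = "INTELLIGENT_TIERING"
```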
‘blocklist’ was added in 0.0.35 (with backwards compatibility for the
old name), and type annotations were added in 0.0.91 (with only the
new name).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This code removes a lot of complexity with very likely
positive overall impact on system performance and
negligible downside.
We already cache display recipients on a per-user
level, so there's no need for another cache layer on
top of that that keys them with recipient ids.
We avoid strange things where Alice/Bob and Bob/Charlie
get put into the top layer cache and then we still have
a cache miss on Alice/Charlie despite the lower level
cache being able to support per-user lookups.
This change does introduce an extra database round trip
if any of our messages have a huddle, but the query is
extremely cheap, and we can always try to cache that
function more directly or try to re-use some of our
other huddle-based caches.
As part of this, we clean up the names for the
lower-level per-user cache of display recipients, and
we simplify the cache keys.
We also stop passing in a full Recipient object to the
`bulk_get_huddle_user_ids` functions.
The local impact of this change should be easy to
measure (at least approximately), since we use this
function every time a user gets messages via the
/messages endpoint.
The only overlap between how we fetched streams and
users was to share some really complicated data
structures.
We can also short-circuit some logic if a message
batch is either all-stream or all-DM.
We restrict the columns, avoid quadratic looping,
and don't bother with order_by.
We also return the user ids (per recipient) as
sets, since that's how the only caller uses the
info (albeit implicitly via set.union accepting
a list).
It’s unclear what was supposed to be “safe” about this wrapper. The
hashlib API is fine without it, and we don’t want to encourage further
use of SHA-1.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Translators benefit from the extra information in the field names, and
need the reordering freedom that isn’t available with multiple
positional fields.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
- Adds `message_handle_match` function to handle new pattern for
relative help links to "Drafts" and "Scheduled messages" for logged-in
users: `{relative|message|drafts}` and `{relative|message|scheduled}`.
`remove_denormalized_recipient_column_from_data` removes the
`recipient` data from `zerver_userprofile`, but did not remove it from
`zerver_userprofile_mirrordummy`, which was later appended to the list
of `zerver_userprofile` objects. This led to failure when inserting,
as the mirrordummy objects still tried to reference their previous
`recipient_id`s.
Move the merging of the two sets earlier, before we call
`remove_denormalized_recipient_column_from_data`.
If there are two huddles, with users A + B + C + D and A + B + C, and
user D is deleted, it is replaced with a mirrordummy user. If
mirrordummy subscriptions are not included in exports, then the two
huddles have duplicate member sets, and will not be able to be
imported successfully.
Include huddle subscriptions for mirrordummy users in exports.
Fundamentally, we should take a write lock on the message, check its
validity for a change, and then make and commit that change.
Previously, `check_update_message` did not operate in a transaction,
but `do_update_message` did -- which led to the ordering:
- `check_update_message` reads Message, not in a transaction
- `check_update_message` verifies properties of the Message
- `do_update_message` starts a transaction
- `do_update_message` takes a read lock on UserMessage
- `do_update_message` writes on UserMessage
- `do_update_message` writes Message
- `do_update_message` commits
This leads to race conditions, where the `check_update_message` may
have verified based on stale data, and `do_update_message` may
improperly overwrite it; as well as deadlocks, where
other (properly-written) codepaths take a write lock on Message
_before_ updating UserMessage, and thus deadlock with
`do_update_message`.
Change `check_update_message` to open a transaction, and take the
write lock when first accessing the Message row. We update the
comment above `do_update_message` to clarify this expectation.
The new ordering is thus:
- `check_update_message` starts a transaction
- `check_update_message` takes a write lock on Message
- `check_update_message` verifies properties of the Message
- `do_update_message` writes on UserMessage
- `do_update_message` writes Message
- `check_update_message` commits
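A minimal sketch of the new shape (signatures abridged):
```python3
from django.db import transaction

@transaction.atomic
def check_update_message(user_profile, message_id):
    # Take the write lock up front, so validation and the write
    # both see (and keep) the same stable row.
    message = Message.objects.select_for_update().get(id=message_id)
    validate_message_edit(user_profile, message)  # abridged
    do_update_message(user_profile, message)
    # The transaction commits, and the lock releases, on return.
```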
The long-term idle topic participants are soft-reactivated
after email/push notifications are sent due to @topic mention.
The reason being that, generally, @topic mentions are going to
reach a small set of users who have a decent chance of being
reactivated by the notifications.
This commit completes the notifications part of the @topic
wildcard mention feature.
Notifications are sent to the topic participants for the
@topic wildcard mention.
The previous function was poorly named, asked for a
Realm object when realm_id sufficed, and returned a
tuple of strings that had different semantics.
I also avoid calling it multiple times in a couple of
places, although it was probably rarely the case that
both invocations actually happened if upstream
validations were working.
Note that there is a TypedDict called EmojiInfo, so I
chose EmojiData here. Perhaps a better name would be
TinyEmojiData or something.
I also simplify the reaction tests with a verify
helper.
The active realm emoji are just a subset of all your
realm emoji, so just use a single cache entry per
realm.
Cache misses should be very infrequent per realm.
If a realm has lots of deactivated realm emoji, then
there's a minor expense to deserialize them, but that
is gonna be dwarfed by all the other more expensive
operations in message-send.
I also renamed the two related functions. I erred on
the side of using somewhat verbose names, as we don't
want folks to confuse the two use cases. Fortunately
there are somewhat natural affordances to use one or
the other, and mypy helps too.
Finally, I use realm_id instead of realm in places
where we don't need the full Realm object.
This migration is reasonably complex because of various anomalies in existing
data.
Note that there are cases where extra_data does not contain proper
JSON, but rather a stringified Python dict, possibly with single
quotes. Thus we need to use "ast.literal_eval" to cover those.
There is also a special case for "event_type == USER_FULL_NAME_CHANGED",
where extra_data is a plain str. This event_type is only used for
RealmAuditLog, so the zilencer migration script does not need to handle
it.
The migration does not handle "event_type == REALM_DISCOUNT_CHANGED"
because ast.literal_eval only allows Python literals. We expect the
admin to manually populate the jsonified extra_data for
extra_data_json beforehand.
This chunks the backfilling migration to reduce potential block time.
The migration for zilencer is mostly similar to the one for zerver; except that
the backfill helper is added in a wrapper and unrelated events are
removed.
**Logging and error recovery**
We print out a warning when the extra_data_json field of an entry
would have been overwritten by a value inconsistent with what we derived
from extra_data. Usually this only happens when the extra_data was
corrupted before this migration. This prevents data loss by backing up
possibly corrupted data in extra_data_json with the keys
"inconsistent_old_extra_data" and "inconsistent_old_extra_data_json".
More roundtrips to the database are needed for inconsistent data, which are
expected to be infrequent.
This also outputs messages when there are audit log entries with decimals,
indicating that such entries are not backfilled. Do note that audit log
entries with decimals are not populated with "inconsistent_old_extra_data_*"
in the JSONField, because they are not overwritten.
Audit log entries with "extra_data_json" marked as inconsistent are
skipped in the migration: when anomalies were discovered in a
previous run, there is no need to overwrite them again, nesting the
extra keys we added.
**Testing**
We create a migration test case utilizing the property of bulk_create
that it doesn't call our modified save method.
We extend ZulipTestCase to support verifying console output at the test
case level. The implementation is crude but the use case should be rare
enough that we don't need it to be too elaborate.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We did not send stream creation events when subscribing
guests to public streams, while we do send them when subscribing
non-admin users to private streams.
This commit adds code to send stream creation events when
subscribing guests to public streams, so the clients can know
that the stream exists. This fixes the bug where a client tries
to process a subscription add event for a stream which it does
not know about.
This commit updates the description for the stream creation event
to mention that the event is also sent when a user gains access
to a stream, either due to being subscribed to it or because a
private stream is made public.
This tracks user group membership changes when the realm is first set
up, either through an import or not. This happens when we add users to
the system user groups by their roles.
For an imported realm, we do extra handling when the data doesn't include
user groups. This gets audited as well.
Django seems to have an aggressive check on the type of a field when
setting it through a relation, requiring the argument to be a
UserGroup in our case.
Reference:
02966a30dd/django/db/models/base.py (L537-L546)
The MissedMessage queue worker is the single callsite of
`handle_missedmessage_emails`, which immediately transforms the list
of events into a dict keyed by message-id.
Skip the intermediate list step, and use defaultdict and a dataclass
to simplify and make explicit the pieces. This removes the unused
user_profile_id and message_id pieces of the data structure.
This commit adds a boolean field `mentions_topic_wildcard`
to the `MessageRenderingResult` dataclass.
The field is set to true only if message rendering determines
the message has an actual topic wildcard mention in it (and not,
e.g., topic wildcard mention syntax inside a code block).
The rendered content for topic wildcard mention is
'<span class="topic-mention">{wildcard}</span>'.
The 'topic-mention' class is the identifier for the wildcard
mention being a topic wildcard mention.
We don't use 'data-user-id="*"' and "user-mention" class for
topic wildcard mentions and eventually plan to remove them for
stream wildcard mentions too in a separate mini-project.
This prep commit merges separate tests for '**@all**',
'**@stream**' and '**@everyone**' stream wildcard mentions
into a single test named 'test_mention_stream_wildcard'.
Similarly, it merges separate tests for '@all', '@stream',
and '@everyone' stream wildcard mentions into a single test
named 'test_mention_at_stream_wildcard'.
The aim is to finally have two separate tests for stream and
topic wildcard mentions (when we introduce topic wildcards)
instead of having separate tests for each mention text
(i.e. all, everyone, stream, topic).
This commit adds the 'topic_wildcard_mention_user_ids' and
'topic_wildcard_mention_in_followed_topic_user_ids'
attributes to the 'RecipientInfoResult' dataclass.
Only topic participants are notified of @topic mentions.
Topic participants are anyone who sent a message to a topic
or reacted to a message on the topic.
'topic_wildcard_mention_in_followed_topic_user_ids' stores the
ids of the topic participants who follow the topic and have
enabled the wildcard mention notifications for followed topics.
'topic_wildcard_mention_user_ids' stores the ids of the topic
participants for whom 'user_allows_notifications_in_StreamTopic'
with setting 'wildcard_mentions_notify' returns True.
This commit adds a 'has_topic_wildcards' instance variable
to the 'MentionData' class for the detection of possible
topic wildcard mentions.
Fixes part of #22829.
Co-authored-by: Prakhar Pratyush <prakhar841301@gmail.com>
Co-authored-by: orientor <aditya.verma@students.iiit.ac.in>
This commit updates the existing tests in 'test_email_notifications'
and 'test_push_notifications' to properly configure user settings
and visibility policies before running the actual tests.
Earlier, the tests were passing, but the corner case expected
to be covered wasn't covered.
This should have been included in
d80779435a.
These comments should not have been included in
a8fd9eb701.
We covered the case "Private message should soft reactivate
the user" earlier in the test. So the comment was rightly added
there.
During stream wildcard or group mention, no such personal mention
is involved; hence, the comments are not needed.
This is a prep commit to replace 'wildcard' with 'stream_wildcard'.
This wasn't included in 179d5cb because we hadn't yet decided to
use a different rendered_content for topic wildcard mentions,
i.e., '<span class="user-mention topic-mention">{wildcard}</span>'.
Our intention was not to create separate tests for both stream
and topic wildcard mentions, as they were expected to have the
same rendered content format.
Previously this limit was 1 week, which was fine for busy
organizations, but for organizations that send a few messages a week,
or have occasional bursts of activity but the last one was a few weeks
ago, this should give a significantly better new user experience.
There are still caps like 1000 messages total and 20
unread, but we're a bit more flexible about time.
This includes changing the URL to #settings/preferences, with a
transparent redirect so that existing links, like the one from Welcome
Bot, continue to work.
Pass the HttpRequest explicitly through the two webhooks that log to
the webhook loggers.
get_current_request is now unused, so remove it (in the same commit
for test coverage reasons).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The initial followup_day1 email confirms that the new user account
has been successfully created and should be sent to the user
independently of an organization's setting for send_welcome_emails.
Here we separate out the followup_day1 email into a separate function
from enqueue_welcome_emails and create a helper function for setting
the shared welcome email sender information.
The followup_day1 email is still a scheduled email so that the initial
account creation and log-in process for the user remains unchanged.
Fixes #25268.
The followup_day2 email is scheduled with a delay as a welcome email
and is therefore more likely to exist as a scheduled email in these
deactivation cases.
Updates comment to not include the number of emails generated so
that it doesn't need to be updated every time a new email is added.
The current count in the comment is already out-of-date.
Because the third party might not be expecting a 400 from our
webhooks, we now instead use a 200 status code for unknown events,
while reporting the error to Sentry. Because it is no longer an
error response, the response type should now be "success".
Fixes #24721.
This commit removes "@" from name of role-based system groups
since we have added a restricion on having user group names
starting with "@" in the previous commit as they look odd in
mention syntax.
We also add a migration in this commit to update the name of
role-based system groups in existing realms to remove "@"
from the name. This migration also updates the names of
non-system user groups by removing the invalid prefixes
from their names and if there is a group already with that
name, we insted name the group as "group:{group_id}".
Fixes#26148.
We do not allow user group names to start with "@", "role:",
"user:", "stream:" and "channel:".
Group names starting with "@" look odd in mentions and
"role:", "user:" and "stream:" prefixes are reserved for
system groups which will be used in the new groups-based
permission model. We do not allow "channel:" prefix for
now just to be safe in a case where we use it instead of
"stream:" prefix for stream based groups in future.
Fixes part of #26148.
Previously, we had a database-level restriction on the length of
user group names. Now we add the same restriction at the API
level as well, so we can return a better error response.
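A sketch of the API-level check (the limit, error text, and helper
name are placeholders):
```python3
from zerver.lib.exceptions import JsonableError

MAX_USER_GROUP_NAME_LENGTH = 100  # placeholder; mirrors the DB column limit

def check_user_group_name(name: str) -> str:
    if len(name) > MAX_USER_GROUP_NAME_LENGTH:
        # Fail in the view with a clear message instead of letting
        # the database raise a DataError.
        raise JsonableError(
            f"User group name cannot exceed {MAX_USER_GROUP_NAME_LENGTH} characters."
        )
    return name
```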
We remove the cache functionality for the
get_realm_stream function, and we also change it to
return a thin Stream object (instead of calling
select_related with no arguments).
The main goal here is to remove code complexity, as we
have been prone to at least one caching validation bug
related to how Realm and UserGroup interact. That
particular bug was more theoretical than practical in
terms of its impact, to be clear.
Even if we were to be perfectly disciplined about only
caching thin stream objects and always making sure to
delete cache entries when stream data changed, we would
still be prone to ugly situations like having
transactions get rolled back before we delete the cache
entry. The do_deactivate_stream is a perfect example of
where we have to consider the best time to unset the
cache. If you unset it too early, then you are prone to
races where somebody else churns the cache right before
you update the database. If you set it too late, then
you can have an invalid entry after a rollback or
deadlock situation. If you just eliminate the cache as
a moving part, that whole debate is moot.
As the lack of test changes here indicates, we rarely
fetch streams by name any more in critical sections of
our code.
The one place where we fetch by name is in loading the
home page, but that is **only** when you specify a
stream name. And, of course, that only causes about an
extra millisecond of time.
This changes bulk_get_streams so that it just uses the
database all the time. Also, we avoid calling
select_related(), so that we just get back thin and
tidy Stream objects with simple queries.
About not caching any more:
It's actually pretty rare that we fetch streams by name
in the main application. It's usually API requests that
send in stream names to find more info about streams.
It also turns out that for large queries (>= ~30 rows
for my measurements) it's more efficient to hit the
database than memcached. The database is super fast at
scale; it's just the startup cost of having Django
construct the query, and then having the database do
query planning or whatever, that slows us down. I don't
know the exact bottleneck, but you can clearly measure
that one-row queries are slow (on the order of a full
millisecond or so) but the marginal cost of additional
rows is minimal assuming you have a decent index (20
microseconds per row on my droplet).
All the query-count changes in the tests revolve around
unsubscribing somebody from a stream, and that's a
particularly odd use case for bulk_get_streams, since
you generally unsubscribe from a single stream at a
time. If there are some use cases where you do want to
unsubscribe from multiple streams, we should move
toward passing in stream ids, at least from the
application. And even if we don't do that, our cost for
most queries is a couple milliseconds.
We want to avoid Django going back to the database to
get a realm object that the caller already has.
It's actually currently the case that we often
pre-fetch realm objects when we get stream objects
using get_stream (using a call to select_related() with
no arguments), but that is an expensive operation that
we want to avoid going forward.
This commit prepares us to just fetch slim objects.
This commit creates separate events for issue milestoned and
demilestoned notifications. This allows the end-users to choose
whether they want these notifications or not.
Fixes #25793.
This adds audit log entries when any group-based setting of a user group
is updated. We store both the old and new values in extra_data, along
with the name of that setting. Entries populated during user group creation
are hardcoded to track "can_mention_group".
Potentially we can adjust "set_defaults_for_group_settings" so that it
populates realm audit logs with it, but that is out of scope for this change.
We use an atomic transaction so that the audit logs are committed
together with the updates.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This adds audit log entries when the name or description of a user group
is updated. We store both the old and new values in extra_data. We wrap
the functions inside an atomic transaction so that the audit logs and
the updates are committed together.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>