Fundamentally, we should take a write lock on the message, check that the
requested change is valid, and then make and commit that change.
Previously, `check_update_message` did not operate in a transaction,
but `do_update_message` did -- which led to the ordering:
- `check_update_message` reads Message, not in a transaction
- `check_update_message` verifies properties of the Message
- `do_update_message` starts a transaction
- `do_update_message` takes a read lock on UserMessage
- `do_update_message` writes on UserMessage
- `do_update_message` writes Message
- `do_update_message` commits
This leads to race conditions, where `check_update_message` may have
validated against stale data that `do_update_message` then improperly
overwrites; as well as to deadlocks, where other (properly written)
codepaths take a write lock on Message _before_ updating UserMessage,
and thus deadlock with `do_update_message`.
Change `check_update_message` to open a transaction, and take the
write lock when first accessing the Message row. We update the
comment above `do_update_message` to clarify this expectation.
The new ordering is thus:
- `check_update_message` starts a transaction
- `check_update_message` takes a write lock on Message
- `check_update_message` verifies properties of the Message
- `do_update_message` writes on UserMessage
- `do_update_message` writes Message
- `check_update_message` commits
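A minimal sketch of that pattern with Django's ORM; the validation helper
below is a hypothetical stand-in for the checks `check_update_message`
performs inline, and the call into `do_update_message` is simplified:

```python
from django.db import transaction

from zerver.models import Message, UserProfile


def check_update_message_sketch(user_profile: UserProfile, message_id: int) -> None:
    with transaction.atomic():
        # Take the row-level write lock before validating, so the
        # properties we check cannot change underneath us.
        message = Message.objects.select_for_update().get(id=message_id)

        validate_requested_change(user_profile, message)  # hypothetical helper

        # do_update_message now runs inside the same transaction and
        # under the same lock, which is released at commit.
        do_update_message(user_profile, message)
```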
Previously, clicking on a message in search results opened
the compose box. This was not great, because search views
show messages outside of their context, and replying to a
message without seeing possibly more recent messages sent
to that narrow can result in a bad experience.
This commit prevents the compose box from opening on click and instead
moves the user to a near view for the message; the Enter keyboard
shortcut behaves the same way.
Fixes #21217.
Removes the overly defensive `assert_number` function, which was being
used to validate the type of `stream_id`. Now that this module is
converted to TypeScript, we no longer need it, since the TypeScript
compiler does the type checking for us.
This commit replaces `build_stream_list()` with
`update_streams_sidebar()`. This change avoids zooming out of the
"more topics" view when toggling your subscription to a different stream.
It also triggers a forced rerender of the left sidebar only when
unsubscribing from the currently active stream. The stream list will be
updated once the user zooms out.
The `stream_list.build_stream_list()` function is responsible for
rendering the stream list in the left sidebar. Previously, it used
`topic_list.clear()` to clear `active_widgets`, which tracks the active
stream(s) in the left sidebar. This led to a bug: rendering the stream
list after adding or removing a stream through `build_stream_list()`
while a stream was zoomed in ("more topics" clicked) didn't fully exit
the "more topics" view, so clicking "BACK TO STREAMS" threw the exception
"Error: Unexpected number of topic lists to zoom out", because
`active_widgets`, cleared by `topic_list.clear()`, had become
inaccessible.
To address this issue, instead of clearing the topics list with
`topic_list.clear()`, the code now ensures the closure of the zoomed
stream by using `topic_zoom.clear_topics()`. This change ensures a
proper exit from the "more topics" view and avoids the exception.
Fixes #25670.
The long-term idle topic participants are soft-reactivated
after email/push notifications are sent due to a @topic mention.
The reasoning is that, generally, @topic mentions reach a small
set of users who have a decent chance of being reactivated by
the notifications.
This commit completes the notifications part of the @topic
wildcard mention feature.
Notifications are sent to the topic participants for the
@topic wildcard mention.
The previous function was poorly named, asked for a
Realm object when realm_id sufficed, and returned a
tuple of strings whose elements had different semantics.
I also avoid calling it twice in a couple of places,
although it was probably rare for both invocations
to actually happen if upstream validations were working.
Note that there is a TypedDict called EmojiInfo, so I
chose EmojiData here. Perhaps a better name would be
TinyEmojiData or something.
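A sketch of the new return shape; the field names below are assumptions
based on the tuple of strings described above, not the exact definition:

```python
from dataclasses import dataclass


@dataclass
class EmojiData:
    emoji_code: str
    reaction_type: str


def get_emoji_data(realm_id: int, emoji_name: str) -> EmojiData:
    # Resolve emoji_name for this realm; lookup details elided.
    ...
```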
I also simplify the reaction tests with a verify
helper.
The active realm emoji are just a subset of all your
realm emoji, so just use a single cache entry per
realm.
Cache misses should be very infrequent per realm.
If a realm has lots of deactivated realm emoji, then
there's a minor expense to deserialize them, but that
is going to be dwarfed by the other, more expensive
operations in message-send.
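An illustrative sketch of the single-cache-entry approach; the function
and cache-key names here are not the actual Zulip helpers, and
`fetch_realm_emoji_from_db` is a hypothetical query helper:

```python
from typing import Any, Dict

from django.core.cache import cache


def get_all_realm_emoji(realm_id: int) -> Dict[str, Dict[str, Any]]:
    key = f"realm_emoji:{realm_id}"
    emoji = cache.get(key)
    if emoji is None:
        # Cache miss: fetch every realm emoji (active or not) once.
        emoji = fetch_realm_emoji_from_db(realm_id)  # hypothetical helper
        cache.set(key, emoji)
    return emoji


def get_active_realm_emoji(realm_id: int) -> Dict[str, Dict[str, Any]]:
    # Filter in memory rather than keeping a second cache entry.
    return {
        name: data
        for name, data in get_all_realm_emoji(realm_id).items()
        if not data["deactivated"]
    }
```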
I also renamed the two related functions. I erred on
the side of using somewhat verbose names, as we don't
want folks to confuse the two use cases. Fortunately
there are somewhat natural affordances to use one or
the other, and mypy helps too.
Finally, I use realm_id instead of realm in places
where we don't need the full Realm object.
This fixes some deprecated API usage, including passing "new" instead of
"true" for the "headless" flag when launching Puppeteer, and using $$
instead of $x for XPath selectors.
We also use {waitUntil: "networkidle2"} to fix the webhook bot avatar
not loading consistently when generating the screenshots (this happened
to webhooks like Slack and Harbor).
Positioning of the clip area is adjusted to take the new grid layout
into account.
This allows the user to skip generating screenshots up to a certain
integration when working through generating screenshots for all
integrations.
It also allows the user to select a number of integrations and generate
screenshots only for those.
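For illustration, a sketch of what such options might look like with
argparse; the flag names are assumptions, not the tool's actual
interface:

```python
import argparse

parser = argparse.ArgumentParser(
    description="Generate integration documentation screenshots."
)
parser.add_argument(
    "--skip-until",
    metavar="INTEGRATION",
    help="Skip generating screenshots until this integration is reached.",
)
parser.add_argument(
    "integrations",
    nargs="*",
    help="If given, only generate screenshots for these integrations.",
)
options = parser.parse_args()
```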
This commit also demonstrates how precise line-heights contribute
to a design: by matching the line-height on the avatar's grid area
to the dimensions of the square avatar image, it's possible to
center the first line of text (me-messages, in this case) with a
non-text element.
This migration is reasonably complex because of various anomalies in existing
data.
Note that there are cases where extra_data is not proper JSON (it may,
for example, use single quotes), so we need to use "ast.literal_eval"
to cover those.
There is also a special case for "event_type == USER_FULL_NAME_CHANGED",
where extra_data is a plain str. This event_type is only used for
RealmAuditLog, so the zilencer migration script does not need to handle
it.
The migration does not handle "event_type == REALM_DISCOUNT_CHANGED",
because ast.literal_eval only allows Python literals. We expect the
admin to manually populate extra_data_json with the JSONified extra_data
beforehand.
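A sketch of the parsing fallback described above (not the literal
migration code; the plain-string and Decimal special cases are handled
separately, as noted):

```python
import ast
import json
from typing import Any, Dict


def parse_extra_data(extra_data: str) -> Dict[str, Any]:
    try:
        return json.loads(extra_data)
    except json.JSONDecodeError:
        # Older rows store a Python repr (single quotes, etc.) rather
        # than JSON; ast.literal_eval parses those safely.
        return ast.literal_eval(extra_data)
```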
This chunks the backfilling migration to reduce potential block time.
The migration for zilencer is mostly similar to the one for zerver,
except that the backfill helper is added in a wrapper and unrelated
events are removed.
**Logging and error recovery**
We print out a warning when the extra_data_json field of an entry
would have been overwritten by a value inconsistent with what we derived
from extra_data. Usually this only happens when the extra_data was
corrupted before this migration. This prevents data loss by backing up
possibly corrupted data in extra_data_json with the keys
"inconsistent_old_extra_data" and "inconsistent_old_extra_data_json".
More roundtrips to the database are needed for inconsistent data, which
is expected to be infrequent.
This also outputs messages when there are audit log entries with decimals,
indicating that such entries are not backfilled. Do note that audit log
entries with decimals are not populated with "inconsistent_old_extra_data_*"
in the JSONField, because they are not overwritten.
For audit log entries whose "extra_data_json" is already marked as
inconsistent, we skip them in the migration: once anomalies have been
discovered in a previous run, there is no need to overwrite them again
and nest the extra keys we added.
**Testing**
We create a migration test case utilizing the property of bulk_create
that it doesn't call our modified save method.
We extend ZulipTestCase to support verifying console output at the test
case level. The implementation is crude but the use case should be rare
enough that we don't need it to be too elaborate.
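A crude sketch of such a console-output check; the helper name and its
placement on ZulipTestCase are assumptions:

```python
import io
from contextlib import contextmanager, redirect_stdout
from typing import Iterator
from unittest import TestCase


class ConsoleOutputTestCase(TestCase):
    @contextmanager
    def assert_prints(self, expected_substring: str) -> Iterator[None]:
        # Capture anything printed inside the block and assert on it.
        buf = io.StringIO()
        with redirect_stdout(buf):
            yield
        self.assertIn(expected_substring, buf.getvalue())
```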
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We did not send the stream creation events when subscribing
guests to public streams, while we did send them when subscribing
non-admin users to private streams.
This commit adds code to send the stream creation events when
subscribing guests to public streams, so that clients know the
stream exists. This fixes the bug where a client tries to process
a subscription add event for a stream it does not know about.
This commit updates the description of the stream creation event
to mention that the event is also sent when a user gains access
to a stream, either by being subscribed to it or when a private
stream is made public.
This satisfies the linter's desire for the `grid-template`
shorthand while keeping the illustrative template areas in
their own little diagram-like property.
This tracks user group membership changes when the realm is first set
up, whether or not it is created through an import. This happens when
we add users to the system user groups by their roles.
For an imported realm, we do extra handling when the data doesn't include
user groups. This gets audited as well.
Django seems to have an aggressive check on the type of a field when
setting it through a relation, requiring the argument to be a UserGroup
in our case.
Reference:
02966a30dd/django/db/models/base.py (L537-L546)
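A sketch of what that check implies in practice, assuming Zulip's
UserGroupMembership model (not the commit's actual code):

```python
from zerver.models import UserGroup, UserGroupMembership, UserProfile


def add_member(user_profile: UserProfile, group: UserGroup) -> UserGroupMembership:
    # Django's related-object descriptor insists on a UserGroup instance
    # here; passing e.g. group.id directly raises
    # ValueError: ... must be a "UserGroup" instance.
    return UserGroupMembership.objects.create(
        user_profile=user_profile, user_group=group
    )
```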
The MissedMessage queue worker is the single callsite of
`handle_missedmessage_emails`, which immediately transforms the list
of events into a dict keyed by message-id.
Skip the intermediate list step, and use defaultdict and a dataclass
to simplify the code and make the pieces explicit. This removes the
unused user_profile_id and message_id pieces of the data structure.
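A sketch of the reshaped data; the field names are illustrative, not
the worker's exact types:

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Any, Dict, List, Optional


@dataclass
class MissedMessageData:
    # Per-message details; the user id and message id live in the dict
    # keys, so they no longer need to be repeated here.
    trigger: str
    mentioned_user_group_id: Optional[int] = None


def events_by_user_and_message(
    events: List[Dict[str, Any]]
) -> Dict[int, Dict[int, MissedMessageData]]:
    by_user: Dict[int, Dict[int, MissedMessageData]] = defaultdict(dict)
    for event in events:
        by_user[event["user_profile_id"]][event["message_id"]] = MissedMessageData(
            trigger=event["trigger"],
            mentioned_user_group_id=event.get("mentioned_user_group_id"),
        )
    return by_user
```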
supervisord's log rotation is only "every x bytes", which is not a good
enough policy for tracking audit logs. The default is also 10 logs
of 50MB each, which is very much not enough for active instances.
Switch to tracking 14 days of daily logs.
This commit adds a boolean field `mentions_topic_wildcard`
to the `MessageRenderingResult` dataclass.
The field is set to true only if message rendering determines
the message has an actual topic wildcard mention in it (and not,
e.g., topic wildcard mention syntax inside a code block).
The rendered content for topic wildcard mention is
'<span class="topic-mention">{wildcard}</span>'.
The 'topic-mention' class is the identifier for the wildcard
mention being a topic wildcard mention.
We don't use 'data-user-id="*"' and "user-mention" class for
topic wildcard mentions and eventually plan to remove them for
stream wildcard mentions too in a separate mini-project.
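A sketch of the flag and markup described above; other fields of the
dataclass are elided, and this is illustrative rather than the exact
definition:

```python
from dataclasses import dataclass

# The rendered form of a topic wildcard mention; the 'topic-mention'
# class is what distinguishes it from a stream wildcard mention.
TOPIC_WILDCARD_MENTION_TEMPLATE = '<span class="topic-mention">{wildcard}</span>'


@dataclass
class MessageRenderingResult:
    rendered_content: str
    # True only when rendering found a real @topic wildcard mention,
    # not, e.g., the syntax inside a code block.
    mentions_topic_wildcard: bool = False
```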
This prep commit merges separate tests for '**@all**',
'**@stream**' and '**@everyone**' stream wildcard mentions
into a single test named 'test_mention_stream_wildcard'.
Similarly, it merges separate tests for '@all', '@stream',
and '@everyone' stream wildcard mentions into a single test
named 'test_mention_at_stream_wildcard'.
The aim is to finally have two separate tests for stream and
topic wildcard mentions (when we introduce topic wildcards)
instead of having separate tests for each mention text
(i.e. all, everyone, stream, topic).
This commit adds the 'topic_wildcard_mention_user_ids' and
'topic_wildcard_mention_in_followed_topic_user_ids'
attributes to the 'RecipientInfoResult' dataclass.
Only topic participants are notified of @topic mentions.
Topic participants are anyone who has sent a message to the topic
or reacted to a message in it.
'topic_wildcard_mention_in_followed_topic_user_ids' stores the
ids of the topic participants who follow the topic and have
enabled the wildcard mention notifications for followed topics.
'topic_wildcard_mention_user_ids' stores the ids of the topic
participants for whom 'user_allows_notifications_in_StreamTopic'
with setting 'wildcard_mentions_notify' returns True.
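A hedged sketch of collecting topic participants as described above;
this is not the exact Zulip query (`subject` is the topic column on
Message):

```python
from typing import Set

from zerver.models import Message, Reaction


def get_topic_participant_ids(recipient_id: int, topic_name: str) -> Set[int]:
    # Messages in this stream recipient + topic; topic names are
    # matched case-insensitively.
    messages = Message.objects.filter(
        recipient_id=recipient_id, subject__iexact=topic_name
    )
    sender_ids = set(messages.values_list("sender_id", flat=True))
    reacter_ids = set(
        Reaction.objects.filter(message__in=messages).values_list(
            "user_profile_id", flat=True
        )
    )
    return sender_ids | reacter_ids
```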