This commit makes the backend changes required to add a realm
setting based on the groups permission model, and the API changes
required for the new setting `Who can create multiuse invite link`.
Creates process for demo organization owners to add an email address
and password to their account.
Uses the same flow as changing an email (via user settings) at the
beginning, but then sends a different email template to the user
for the email confirmation process.
We also encourage users to set their full name field in the modal for
adding an email in a demo organization. We disable the submit button
on the form if either input is empty, email or full name.
When the user clicks the 'confirm and set password' button in the
email sent to confirm the address submitted via the form, their
email is updated via confirm_email_change, but the user is redirected
to the password reset page for their account (instead of the page
confirming that an email change has happened).
Once the user successfully sets a password, then they will be
prompted to log in with their newly configured email and password.
To make creation of demo organizations feel lightweight for users,
we do not want to require an email address at sign-up. Instead, an
empty string will be used for the new realm owner's email. Currently,
this is implemented for new demo organizations in the development
environment.
Because the user's email address does not exist, we don't enqueue
any of the welcome emails upon account/realm creation, and we
don't create/send new login emails.
This is a part of #19523.
Co-authored-by: Tim Abbott <tabbott@zulip.com>
Co-authored-by: Lauryn Menard <lauryn@zulip.com>
We now send stream creation and stream deletion events on
changing a user's role because a user can gain or lose
access to some streams on changing their role.
Users who used to be subscribed to a private stream and have been
removed from it since retain the ability to edit messages/topics, and
delete messages that they used to have access to, if other relevant
organization permissions allow these actions. For example, a user may be
able to edit or delete their old messages they posted in such a private
stream. An administrator will be able to delete old messages (that they
had access to) from the private stream.
We fix this by updating the logic in has_message_access (which lies at
the core of our message access checks, access_message() and
bulk_access_messages()) to not rely only on a UserMessage row for
checking access, but to also verify stream type and subscription status.
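A minimal sketch of the intended rule (names and the simplified booleans
are illustrative, not Zulip's actual has_message_access signature):
```python3
def has_message_access_sketch(
    *, has_user_message_row: bool, is_private_stream: bool, is_subscribed: bool
) -> bool:
    # Illustrative only: a leftover UserMessage row is no longer treated as
    # sufficient proof of access on its own.
    if not is_private_stream:
        # Public streams: visibility is governed by the stream itself.
        return True
    # Private streams: the user must currently be subscribed; a UserMessage
    # row left over from a past subscription is not enough.
    return has_user_message_row and is_subscribed
```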
**Background**
User groups are expected to comply with the DAG constraint for the
many-to-many inter-group membership. The check for this constraint has
to be performed recursively so that we can find all direct and indirect
subgroups of the user group to be added.
This kind of check is vulnerable to phantom reads, which are possible at
the default read committed isolation level, because we cannot guarantee
that the check is still valid by the time we add the subgroups to the
user group.
**Solution**
To avoid having another transaction concurrently update one of the
to-be-added subgroups after the recursive check is done and before the
subgroup is added, we use SELECT FOR UPDATE to lock the user group rows.
The lock needs to be acquired when a group membership change is about
to occur, before any check has been conducted.
Suppose that we are adding subgroup B to supergroup A; the locking protocol
is specified as follows:
1. Acquire a lock for B and all its direct and indirect subgroups.
2. Acquire a lock for A.
For the removal of user groups, we acquire a lock for the user group to
be removed along with all its direct and indirect subgroups. This is the
special case A=B, which is still compliant with the protocol.
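A rough sketch of that ordering using Django's select_for_update(); the
direct_subgroups field name is an assumption for illustration only:
```python3
from django.db import transaction

from zerver.models import UserGroup  # assumed import path

def add_subgroup_with_locks(
    supergroup_id: int, subgroup_id: int, recursive_subgroup_ids: list[int]
) -> None:
    with transaction.atomic():
        # 1. Lock B and all its direct and indirect subgroups (the ids come
        #    from the same recursive CTE used by the DAG check itself).
        list(UserGroup.objects.select_for_update().filter(id__in=recursive_subgroup_ids))
        # 2. Lock A, the supergroup being modified.
        supergroup = UserGroup.objects.select_for_update().get(id=supergroup_id)
        # Only now run the recursive DAG check, then add the subgroup while
        # the locks are held; everything commits together when the block exits.
        supergroup.direct_subgroups.add(subgroup_id)  # field name is an assumption
```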
**Error handling**
We currently rely on Postgres' deadlock detection to abort transactions
and show an error for the users. In the future, we might need some
recovery mechanism or at least better error handling.
**Notes**
An important note is that we need to reuse the recursive CTE query that
finds the direct and indirect subgroups when applying the lock on the
rows. And the lock needs to be acquired the same way for the addition and
removal of direct subgroups.
User membership change (as opposed to user group membership) is not
affected. Read-only queries aren't either. The locks only protect
critical regions where the user group dependency graph might violate
the DAG constraint, where users are not participating.
**Testing**
We implement a transaction test case targeting some typical scenarios
when an internal server error is expected to happen (this means that the
user group view makes the correct decision to abort the transaction when
something goes wrong with locks).
To achieve this, we add a development view intended only for unit tests.
It has a global BARRIER that can be shared across threads, so that we
can synchronize them to consistently reproduce certain potential race
conditions prevented by the database locks.
The transaction test case launches pairs of threads initiating possibly
conflicting requests at the same time. The tests are set up such that
exactly N of the requests are expected to fail with a certain error
message (though we don't know in advance which ones).
**Security notes**
get_recursive_subgroups_for_groups will no longer fetch user groups from
other realms. As a result, trying to add/remove a subgroup from another
realm results in a UserGroup not found error response.
We also implement subgroup-specific checks in has_user_group_access to
keep permission management in a single place. Do note that the API
currently doesn't have a way to violate that check, because we are only
checking the realm ID now.
We want to make the callers more explicit about how the user group
being accessed will be used, so that the database lock implemented
later can benefit from that visibility.
We do not want to access realm from "sender" field so that
we do not need to pass "sender__realm" argument to
select_related call when querying messages. We can instead
pass realm as argument to wildcard_mention_allowed.
We can directly get the realm object from Message object now
and there is no need to get the realm object from "sender"
field of Message object.
After this change, we no longer need to fetch the "sender__realm"
field using "select_related"; passing only "realm" to select_related
when querying Message objects is enough.
This commit also updates a couple of cases to access the realm ID
directly from the message object and not from message.sender. Since
we have already fetched the sender object, accessing realm_id from
the message directly or from message.sender should not matter, but
we prefer to be consistent and get the realm directly from the
Message object whenever possible.
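For example (illustrative query, not the exact call sites):
```python3
from zerver.models import Message  # model referenced in this commit

def realm_ids_for_messages(message_ids: list[int]) -> list[int]:
    # Pass "realm" (not "sender__realm") to select_related, and read realm_id
    # directly off the Message row rather than going through message.sender.
    messages = Message.objects.select_related("realm").filter(id__in=message_ids)
    return [message.realm_id for message in messages]
```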
We set the stream_weekly_traffic field to "null" for Subscription
objects in the zephyr mirror realm, as we do not need stream traffic
data in the zephyr mirror realm. This makes the subscription data
consistent with streams data.
This commit also updates a test to check never_subscribed data for
the zephyr mirror realm.
Instead of having a "realm.is_zephyr_mirror_realm" check for every
get_streams_traffic call, this commit updates get_streams_traffic to
accept realm as a parameter and return "None" for the zephyr mirror realm.
This migration applies under the assumption that extra_data_json has
been populated for all existing and coming audit log entries.
- This removes the manual conversions back and forth for extra_data
throughout the codebase including the orjson.loads(), orjson.dumps(),
and str() calls.
- The custom handler used for converting Decimal is removed since
DjangoJSONEncoder handles that for extra_data.
- We remove None-checks for extra_data because it is now no longer
nullable.
- Meanwhile, we want the bouncer to support processing RealmAuditLog entries for
remote servers before and after the JSONField migration on extra_data.
- Since extra_data should now always be a dict for a newer, migrated
  remote server, the test cases are updated to create RealmAuditLog
  objects by passing a dict for extra_data before sending over the
  analytics data. Note that while JSONField allows for non-dict values,
  a proper remote server always passes a dict for extra_data.
- We still test out the legacy extra_data format because not all
remote servers have migrated to use JSONField extra_data.
This verifies that support for extra_data being a string or None has not
been dropped.
Co-authored-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This commit removes the private stream subscriptions of the bot if the
original owner is deactivated and we change the owner to the user who
is reactivating the bot. We unsubscribe the bot from private streams
that the new owner is not subscribed to.
Fixes part of #21700.
We remove bot's subscriptions for private streams to which the
new owner is not subscribed and keep the ones to which the new
owner is subscribed on changing owner.
This commit also changes the code for sending subscription
remove events to use transaction.on_commit, since we call
the function inside a transaction in do_change_bot_owner;
this also requires some changes to tests in test_events.
Earlier, for topic wildcard mentions, the 'wildcard_mentioned'
flag was set for all the user-messages. (similar to stream wildcard
mention).
The flag should be set for the topic participants only.
The bug was introduced in 4c9d26c.
For topic wildcard mentions, the 'wildcard_mentioned' flag is set
for those user messages having 'user_profile_id' in
'topic_participant_user_ids', i.e. all topic participants.
Earlier, the flag was set if the 'user_profile_id' exists in
'all_topic_wildcard_mention_user_ids'.
'all_topic_wildcard_mention_user_ids' contains the ids of those
users who are topic participants and have enabled notifications
for '@topic' mentions.
The earlier approach was incorrect, as it would set the
'wildcard_mentioned' flag only for those topic participants
who have enabled the notifications for '@topic' mention instead
of setting the flag for all the topic participants.
The bug was introduced in 4c9d26c.
This adds API support to reorder linkifiers and makes sure that the
returned lists of linkifiers from `GET /events`, `POST /register`, and
`GET /realm/linkifiers` are always sorted in the order that they
should be processed when rendering linkifiers.
We set the new `order` field to the ID with the migration. This
preserves the order of the existing linkifiers.
Newly added linkifiers will always be ordered last. When reordering,
the `order` field of all linkifiers in the same realm is updated, in
a manner similar to how we implement ordering for
`custom_profile_fields`.
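The reordering ends up doing something like the following (a sketch; the
function name, view plumbing, and validation are omitted or assumed):
```python3
from zerver.models import Realm, RealmFilter  # linkifiers are RealmFilter rows

def reorder_linkifiers_sketch(realm: Realm, ordered_linkifier_ids: list[int]) -> None:
    # Assign each linkifier its position in the requested ordering, similar
    # to how custom_profile_fields ordering is implemented.
    linkifiers = {f.id: f for f in RealmFilter.objects.filter(realm=realm)}
    for order, linkifier_id in enumerate(ordered_linkifier_ids):
        linkifiers[linkifier_id].order = order
    RealmFilter.objects.bulk_update(linkifiers.values(), ["order"])
```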
This commit renames the keyword 'pm' to 'dm' in the
'pm_mention_email_disabled_user_ids' and
'pm_mention_push_disabled_user_ids' attributes of the
'RecipientInfoResult' dataclass.
'pm' and 'dm' are the acronyms for 'private message' and
'direct message' respectively.
It includes 'TODO/compatibility' code to support the old format
fields in the tornado queues during the Zulip server upgrades.
This commit renames the 'PRIVATE_MESSAGE' attribute of the
'NotificationTriggers' class to 'DIRECT_MESSAGE'.
Custom migration to update the existing value in the database.
It includes 'TODO/compatibility' code to support the old
notification trigger value 'private_message' in the
push notification queue during the Zulip server upgrades.
Earlier, 'private_message' was one of the possible values for the
'trigger' property of the `POST /zulip-outgoing-webhook` response.
Update the docs to reflect the change in the above-mentioned trigger
value.
This commit adds code to pass stream traffic data using
the "stream_weekly_traffic" field in stream objects.
We already include the traffic data in Subscription objects,
but the traffic data does not depend on the user to stream
relationship and is stream-only information, so it's better
to include it in Stream objects. We may remove the traffic
data and other stream information fields for Subscription
objects in future.
This will help clients correctly display the stream
traffic data in the case where a client receives a stream
creation event but no subscription event, for an already
existing stream which the user did not previously have access to.
This commit adds a stream_to_dict method, which is currently the
same as the Stream.to_dict method. This is a prep commit
to include stream traffic data in stream objects.
Previously, this code:
```python3
old_archived_attachments = ArchivedAttachment.objects.annotate(
    has_other_messages=Exists(
        Attachment.objects.filter(id=OuterRef("id"))
        .exclude(messages=None)
        .exclude(scheduled_messages=None)
    )
).filter(messages=None, create_time__lt=delta_weeks_ago, has_other_messages=False)
```
...protected from removal any ArchivedAttachment objects where there
was an Attachment which had _both_ a message _and_ a scheduled
message, instead of _either_ a message _or_ a scheduled message.
Since files are removed from disk when the ArchivedAttachment rows are
deleted, this meant that if an upload was referenced in two messages,
and one was deleted, the file was permanently deleted when the
ArchivedMessage and ArchivedAttachment were cleaned up, despite being
still referenced in live Messages and Attachments.
Switch from `.exclude(messages=None).exclude(scheduled_messages=None)`
to `.exclude(messages=None, scheduled_messages=None)` which "OR"s
those conditions appropriately.
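With that switch, the protecting annotation reads roughly as follows:
```python3
old_archived_attachments = ArchivedAttachment.objects.annotate(
    has_other_messages=Exists(
        Attachment.objects.filter(id=OuterRef("id")).exclude(
            messages=None, scheduled_messages=None
        )
    )
).filter(messages=None, create_time__lt=delta_weeks_ago, has_other_messages=False)
```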
Pull the relevant test into its own file, and expand it significantly
to cover this, and other, corner cases.
Earlier, the API endpoints related to user_group accepted and returned a
field `can_mention_group_id`, which represents the ID
of the user_group whose members can mention the group.
This commit renames this field to `can_mention_group`.
Earlier, the API endpoints related to streams accepted and returned a
field `can_remove_subscribers_group_id`, which represents the ID
of the user_group whose members can remove subscribers from the stream.
This commit renames this field to `can_remove_subscribers_group`.
Previously, the view function was responsible for doing a first pass of
the validations done for RealmPlayground. That is no longer the case. This
refactors do_add_realm_playground into check_add_realm_playground and makes
it responsible for validating the playground fields and handling the
ValidationError raised.
Dropping support for url_prefix for RealmPlayground, the server now uses
only url_template for playground creation, retrieval, and audit
logging upon removal.
This does the necessary handling so that url_template is expanded with
the extracted code.
Fixes #25723.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This commit removes the stray strings used to refer to
various types of notification triggers.
We use the attributes of the 'NotificationTriggers' class instead.
As an intermediate step before we fully support url_template for realm
playgrounds, we populate url_template in the backend ensuring that all
the new entries will be validated. With a later backfilling migration,
we prepare the database such that all the records will have a valid URL
template.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Having a more precise type annotation helps with ensuring the migration
to use URL templates gets type checked.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We used to access the complete objects for UserProfile foreign
keys like "bot_owner" and "default_sending_stream" where we only
needed their IDs.
This commit fixes some such instances; we now directly get the
ID using "bot_owner_id" and "default_sending_stream_id", so
that we can avoid the unnecessary complexity of accessing the
complete object.
We do not use any related fields for the UserProfile objects
fetched by get_active_users, so we can simply remove the
select_related call.
The user objects from get_active_users were used to get the realm,
but since get_active_users is called on a realm object, we can
directly use that realm object. This change also leads to
some changes in the cache code, where we now pass the realm
to the function instead of selecting it from the UserProfile object.
Translators benefit from the extra information in the field names, and
need the reordering freedom that isn’t available with multiple
positional fields.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fundamentally, we should take a write lock on the message, check its
validity for a change, and then make and commit that change.
Previously, `check_update_message` did not operate in a transaction,
but `do_update_message` did -- which led to the ordering:
- `check_update_message` reads Message, not in a transaction
- `check_update_message` verifies properties of the Message
- `do_update_message` starts a transaction
- `do_update_message` takes a read lock on UserMessage
- `do_update_message` writes on UserMessage
- `do_update_message` writes Message
- `do_update_message` commits
This leads to race conditions, where the `check_update_message` may
have verified based on stale data, and `do_update_message` may
improperly overwrite it; as well as deadlocks, where
other (properly-written) codepaths take a write lock on Message
_before_ updating UserMessage, and thus deadlock with
`do_update_message`.
Change `check_update_message` to open a transaction, and take the
write lock when first accessing the Message row. We update the
comment above `do_update_message` to clarify this expectation.
The new ordering is thus:
- `check_update_message` starts a transaction
- `check_update_message` takes a write lock on Message
- `check_update_message` verifies properties of the Message
- `do_update_message` writes on UserMessage
- `do_update_message` writes Message
- `check_update_message` commits
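In sketch form (simplified; the real function takes more parameters and
performs many more checks):
```python3
from django.db import transaction

from zerver.models import Message  # assumed import path

def check_update_message_sketch(message_id: int) -> None:
    with transaction.atomic():
        # Take the write lock up front, before validating anything.
        message = Message.objects.select_for_update().get(id=message_id)
        # ... verify edit permissions and other properties of the locked row ...
        # ... then do_update_message() writes UserMessage and Message rows
        #     inside this same transaction ...
    # The transaction commits when the atomic block exits.
```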
This commit completes the notifications part of the @topic
wildcard mention feature.
Notifications are sent to the topic participants for the
@topic wildcard mention.
The previous function was poorly named, asked for a
Realm object when realm_id sufficed, and returned a
tuple of strings that had different semantics.
I also avoid calling it duplicate times in a couple
places, although it was probably rarely the case that
both invocations actually happened if upstream
validations were working.
Note that there is a TypedDict called EmojiInfo, so I
chose EmojiData here. Perhaps a better name would be
TinyEmojiData or something.
I also simplify the reaction tests with a verify
helper.
The active realm emoji are just a subset of all your
realm emoji, so just use a single cache entry per
realm.
Cache misses should be very infrequent per realm.
If a realm has lots of deactivated realm emoji, then
there's a minor expense to deserialize them, but that
is gonna be dwarfed by all the other more expensive
operations in message-send.
I also renamed the two related functions. I erred on
the side of using somewhat verbose names, as we don't
want folks to confuse the two use cases. Fortunately
there are somewhat natural affordances to use one or
the other, and mypy helps too.
Finally, I use realm_id instead of realm in places
where we don't need the full Realm object.
We did not send stream creation events when subscribing
guests to public streams, while we did send them when subscribing
non-admin users to private streams.
This commit adds code to send stream creation events when
subscribing guests to public streams, so the clients can know
that the stream exists; this fixes the bug where a client tries
to process a subscription add event for a stream which it does
not know about.
This commit adds the 'topic_wildcard_mention_user_ids' and
'topic_wildcard_mention_in_followed_topic_user_ids'
attributes to the 'RecipientInfoResult' dataclass.
Only topic participants are notified of @topic mentions.
Topic participants are anyone who sent a message to a topic
or reacted to a message on the topic.
'topic_wildcard_mention_in_followed_topic_user_ids' stores the
ids of the topic participants who follow the topic and have
enabled the wildcard mention notifications for followed topics.
'topic_wildcard_mention_user_ids' stores the ids of the topic
participants for whom 'user_allows_notifications_in_StreamTopic'
with setting 'wildcard_mentions_notify' returns True.
Previously this limit was 1 week, which was fine for busy
organizations, but for organizations that send a few messages a week,
or have occasional bursts of activity but the last one was a few weeks
ago, this should give a significantly better new user experience.
There are still caps like 1000 messages total and 20
unread, but we're a bit more flexible about time.
The initial followup_day1 email confirms that the new user account
has been successfully created and should be sent to the user
independently of an organization's setting for send_welcome_emails.
Here we separate out the followup_day1 email into a separate function
from enqueue_welcome_emails and create a helper function for setting
the shared welcome email sender information.
The followup_day1 email is still a scheduled email so that the initial
account creation and log-in process for the user remains unchanged.
Fixes #25268.
We remove the cache functionality for the
get_realm_stream function, and we also change it to
return a thin Stream object (instead of calling
select_related with no arguments).
The main goal here is to remove code complexity, as we
have been prone to at least one caching validation bug
related to how Realm and UserGroup interact. That
particular bug was more theoretical than practical in
terms of its impact, to be clear.
Even if we were to be perfectly disciplined about only
caching thin stream objects and always making sure to
delete cache entries when stream data changed, we would
still be prone to ugly situations like having
transactions get rolled back before we delete the cache
entry. The do_deactivate_stream is a perfect example of
where we have to consider the best time to unset the
cache. If you unset it too early, then you are prone to
races where somebody else churns the cache right before
you update the database. If you set it too late, then
you can have an invalid entry after a rollback or
deadlock situation. If you just eliminate the cache as
a moving part, that whole debate is moot.
As the lack of test changes here indicates, we rarely
fetch streams by name any more in critical sections of
our code.
The one place where we fetch by name is in loading the
home page, but that is **only** when you specify a
stream name. And, of course, that only causes about an
extra millisecond of time.
We want to avoid Django going back to the database to
get a realm object that the caller already has.
It's actually currently the case that we often
pre-fetch realm objects when we get stream objects
using get_stream (using a call to select_related() with
no arguments), but that is an expensive operation that
we want to avoid going forward.
This commit prepares us to just fetch slim objects.
This adds audit log entries when any group-based setting of a user group
is updated. We store both the old and new values in extra_data, along
with the name of that setting. Entries populated during user group creation
are hardcoded to track "can_mention_group".
Potentially we can adjust "set_defaults_for_group_settings" so that it
populates realm audit logs with it, but that is out of scope for this change.
We use an atomic transaction so that the audit logs are committed
together with the updates.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This adds audit log entries when the name or description of a user group
is updated. We store both the old and new values in extra_data. We wrap
the functions inside an atomic transaction so that the audit logs and
the updates are committed together.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This is mostly the same as tracking subgroup changes, except that now
modified_user_group is the subgroup.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
It's worth noting that instead of adding another field to the
RealmAuditLog model, we store the modified subgroup ids in extra_data as
a JSON encoded dict with the key "subgroup_ids". We don't create audit
log entries for supergroup changes at this point.
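Roughly like this (event type, field names, and helper signature are
placeholders, apart from the "subgroup_ids" key described above):
```python3
from zerver.models import RealmAuditLog  # assumed import path

def log_subgroup_membership_change(
    realm, acting_user, supergroup, subgroup_ids, event_type, event_time
) -> None:
    RealmAuditLog.objects.create(
        realm=realm,
        acting_user=acting_user,
        modified_user_group=supergroup,  # field name is an assumption
        event_type=event_type,
        event_time=event_time,
        # The modified subgroup ids live in extra_data under "subgroup_ids",
        # rather than in a new column on the model.
        extra_data={"subgroup_ids": sorted(subgroup_ids)},
    )
```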
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This also adds audit log entries during user creation and role change,
because we modify system group memberships there.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We also create RealmAuditLog entries for the initial memberships that
get added along with the creation of a UserGroup. System user groups are
not created with members, so no audit logs are populated for that.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Various cleanups:
* clean up comments
* improve names for constants and variables
* express first ORM query as a single statement
* use set differences to simplify logic
* avoid all the reversing churn
* avoid early-exit idiom since this function is so small
Note that it's plausible that we should just combine the two
queries and let the database exclude the already-used ids,
but that felt a little risky for now. As I mentioned on
Zulip, I think the one-week window has dubious value, but
I am biased by having wasted time chasing down a test
flake related to the time window.
Basically, I eliminate the use of select_related() in a query
that still makes a single round trip. We have good test
enforcement that Django never needs to lazily fetch
objects off the Stream object. (It used to be common
to fetch stream.realm, but we upgraded
bulk_add_subscription, in particular, a while back.)
We extract code from process_new_human_user with
no modifications.
This has all the best outcomes of extracting a function:
* better profile info
* easier to test for query counts (signup gets real noisy)
* simplifies a long, messy function
It has no real drawbacks, since the helper function doesn't need
to pass back any intermediate state to the parent for the rest
of what the parent does.
When you profile test_signup and test_invite, with a decent
sample size, the set_up_streams_for_new_human_user function
does about 20% of the work for process_new_human_user, which
is a lot considering that most tests don't create a ton of
pre-registered or default streams.
I created zerver/lib/default_streams.py, so that various
views and events.py don't have to awkwardly reach into
an "actions" file.
I copied over two functions verbatim from actions/default_streams.py:
get_default_streams_for_realm
streams_to_dicts_sorted
The latter only remains as an internal detail in the new library.
I also created two new helpers:
get_default_stream_ids_for_realm:
This is both faster and easier to use in all the places
where we only need to get a set of default stream ids.
get_default_streams_for_realm_as_dicts:
This just wraps the prior calls to
streams_to_dicts_sorted(get_default_streams_for_realm(...)),
and it doesn't yet address the slowness of the underlying
code.
All the "real" code should be functionally the same.
In a few tests I now use this wrapper instead of
calling get_default_streams_for_realm, just to get
slightly deeper coverage.
This prep commit replaces the 'wildcard' keyword in the codebase
with 'stream_wildcard' at some places for better readability, as
we plan to introduce 'topic_wildcards' as a part of the
'@topic mention' project.
Currently, 'wildcards = ["all", "everyone", "stream"]' which is an
alias to mention everyone in the stream, hence better renamed as
'stream_wildcards'.
Eventually, we will have:
'stream_wildcard' as an alias to mention everyone in the stream.
'topic_wildcard' as an alias to mention everyone in the topic.
'wildcard' refers to 'stream_wildcard' and 'topic_wildcard' as a whole.
This commit adds code to include can_mention_group_id field to
UserGroup objects passed with response of various endpoints
including "/register" endpoint and also in the group object
send with user group creation event.
Fixes a part of #25927.
This commit adds backend code to check whether a user is allowed
to mention a user group while editing a message as per
can_mention_group setting of that group.
Fixes a part of #25927.
This commit adds backend code to check whether user has permission
to mention a group while sending message as per the can_mention_group
setting of the group.
Fixes a part of #25927.
django-stubs 4.2.1 gives transaction.on_commit a more accurate type
annotation, but this exposed that mypy can’t handle the lambda default
parameters that we use to recapture loop variables such as
    for stream_id in public_stream_ids:
        peer_user_ids = …
        event = …
        transaction.on_commit(
            lambda event=event, peer_user_ids=peer_user_ids: send_event(
                realm, event, peer_user_ids
            )
        )
https://github.com/python/mypy/issues/15459
A workaround that mypy accepts is
    transaction.on_commit(
        (
            lambda event, peer_user_ids: lambda: send_event(
                realm, event, peer_user_ids
            )
        )(event, peer_user_ids)
    )
But that’s kind of ugly and potentially error-prone, so let’s make a
helper function for this very common pattern.
    send_event_on_commit(realm, event, peer_user_ids)
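A minimal sketch of such a helper (the import path for send_event is an
assumption):
```python3
from collections.abc import Iterable
from typing import Any

from django.db import transaction

from zerver.tornado.django_api import send_event  # assumed import path

def send_event_on_commit(realm, event: dict[str, Any], user_ids: Iterable[int]) -> None:
    user_ids = list(user_ids)  # capture now, in case the iterable is lazy
    # The zero-argument closure runs only after the transaction commits.
    transaction.on_commit(lambda: send_event(realm, event, user_ids))
```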
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit makes it possible for users to control the wildcard
mention notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting
through the UI yet.
This commit makes it possible for users to control
the push notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting
through the UI yet.
This commit makes it possible for users to control
the email notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting
through the UI yet.
Add five new fields to the UserBaseSettings class for
the "followed topic notifications" feature, similar to
stream notifications. But this commit consists only of
the implementation of email notifications.
Note that we use the DjangoJSONEncoder so that we have builtin support
for parsing Decimal and datetime.
During this intermediate state, the migration that creates
extra_data_json field has been run. We prepare for running the backfilling
migration that populates extra_data_json from extra_data.
This change implements double-write, which is important to keep the
state of extra data consistent. For most extra_data usage, this is
handled by the overridden `save` method on `AbstractRealmAuditLog`, where
we generate extra_data_json using either orjson.loads or
ast.literal_eval.
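A condensed sketch of the double-write idea (not the actual field
definitions or full error handling):
```python3
import ast

import orjson
from django.core.serializers.json import DjangoJSONEncoder
from django.db import models

class AbstractRealmAuditLog(models.Model):
    extra_data = models.TextField(null=True)
    extra_data_json = models.JSONField(default=dict, encoder=DjangoJSONEncoder)

    class Meta:
        abstract = True

    def save(self, *args, **kwargs):
        # Double-write: keep extra_data_json in sync with the legacy string field.
        if self.extra_data is not None:
            try:
                self.extra_data_json = orjson.loads(self.extra_data)
            except orjson.JSONDecodeError:
                # Some legacy entries were written with str(dict), not JSON.
                self.extra_data_json = ast.literal_eval(self.extra_data)
        super().save(*args, **kwargs)
```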
While backfilling ensures that old realm audit log entries have
extra_data_json populated, double-write ensures that any new entries
generated will also have extra_data_json set, so that we can then safely
rename extra_data_json to extra_data while ensuring the non-nullable
invariant.
For completeness, we additionally set RealmAuditLog.NEW_VALUE for
the USER_FULL_NAME_CHANGED event. This cannot be handled with the
overridden `save`.
This addresses: https://github.com/zulip/zulip/pull/23116#discussion_r1040277795
Note that extra_data_json at this point is not used yet. So the test
cases do not need to switch to testing extra_data_json. This is later
done after we rename extra_data_json to extra_data.
Double-write for the remote server audit logs is special, because we only
get the dumped bytes from an external source. Luckily, none of the
payload carries extra_data that is not generated using orjson.dumps for
audit logs of event types in SYNC_BILLING_EVENTS. This can be verified
by looking at:
`git grep -A 6 -E "event_type=.*(USER_CREATED|USER_ACTIVATED|USER_DEACTIVATED|USER_REACTIVATED|USER_ROLE_CHANGED|REALM_DEACTIVATED|REALM_REACTIVATED)"`
Therefore, we just need to populate extra_data_json by doing an
orjson.loads call after a None-check.
Co-authored-by: Zixuan James Li <p359101898@gmail.com>
Part of splitting creating and editing scheduled messages.
Should be merged with final commit in series. Breaks tests.
Splits out editing an existing scheduled message into a new
view function and updated `edit_scheduled_message` function.
Part of splitting creating and editing scheduled messages.
Should be merged with final commit in series. Breaks tests.
Removes `scheduled_message_id` parameter from the create scheduled
message path.
As of commit 38f6807af1, we accept only stream and user IDs for
the recipient information for scheduled messages, which means we
can simplify the type for `message_to` in `check_schedule_message`.
We now allow users to change email address visibility setting
on the "Terms of service" page during first login. This page is
not shown for users creating account using normal registration
process, but is useful for imported users and users created
through API, LDAP, SCIM and management commands.
In the case of a user editing a scheduled message that the server
had failed to send at the scheduled time due to an error, we want
to update the `failed` and `failure_message` fields as the intent
is for the server to retry to send the scheduled message based on
the updated information provided by the user.
In the case that there is an error when sending a scheduled message,
we now send a message from the notification bot to the user who
scheduled the message about the failure/error.
The notification message is not sent if the error when sending the
scheduled message was due to the realm or sender being deactivated.
The code for updating visibility policy values on moving messages
had two bugs.
- There was a typo in the elif condition, where "user_profile" was being
  used instead of "user_profile_with_policy".
  This commit fixes the typo.
- It was assumed that there would be no UserTopic rows for the target
  topic if the target topic didn't exist. But there can be a case
  where some messages were sent to that topic and the user muted
  the topic, and then the messages in that topic were deleted. In
  such a case there can be UserTopic rows for a stream-topic pair
  that does not exist.
This commit fixes the code to handle such cases as well and sets
the visibility policy of the new topic to what was set for the original
topic. This change simplifies the condition to just check whether
new_visibility_policy is equal to target_topic_visibility_policy and
skip if so, and to update the visibility policy otherwise.
Due to this change, we now do not try to mute an already muted
topic when the topic is moved to a topic which didn't exist
previously, and thus we modify the existing test to not expect
any INFO logs.
We do not pass "email_address_visibility" to do_create_realm
anymore. It was previously passed to set the setting for realms in
the development database, but that changed when we made
email_address_visibility a user-level setting instead of a
realm-level setting, since it is now set on the RealmUserDefault
table.
We do not add the user to the default streams if the streams list passed
while sending the invite (both email and multiuse) was empty, since
the invite explicitly chose not to subscribe the user to default
streams.
Previously, it seemed possible for the scheduled messages API to try
to send infinite copies of a message if we had the very poor luck of a
persistent failure happening after a message was sent.
The failure_message field supports being able to display what happened
in the scheduled messages modal, though that's not exposed to the API
yet.
For scheduled stream messages, we already limited the `to`
parameter to be the stream ID, but here we return a JsonableError
in the case of a ValueError when the passed value is not an integer.
For scheduled direct messages, we limit the list for the `to`
parameter to be user IDs. Previously, we accepted emails like
we do when sending messages.
Doing so causes the "username resolved this topic" or "this topic was
moved by username" notifications to be attributed to a random user who
had a visibility policy on the topic.
Fixes #25414.
We add Attachment.scheduled_messages relation to track ScheduledMessages
which reference the attachment.
The import bits can be done after merging this, by updating #25345.
Because education organizations and users have slightly specialized
use cases, we update the Welcome Bot message content sent to new
users and new organization owners for these types of organizations
to link to help center articles/guides geared toward these users
and organizations.
Also, updates the demo organization warning to only go to the new
demo organization owner because the 30 day deletion text is only
definitely accurate when the organization is created.
Fixes #21694.
In commit fc58c35c0, we added a check in various emails for the
settings.CORPORATE_ENABLED value, but that context is only always
included for views/templates with a request.
Here we add that to common_context, which is often used when there
is not a request (like with emails). And we manually add it to the
email context in various cases when there is not a user account to
call with common_context: new user invitations, registration emails,
and realm reactivation emails.
This was previously called delete_event_notify_user_ids, which seemed
to narrow its purpose in a way that was confusing given that it's also
used for other calculations.
Further, calculate it as soon as we know it, not when we're first
going to use it.
This also removes the error in one of these functions that was using a
different constant instead of
PRESENCE_LEGACY_EVENT_OFFSET_FOR_ACTIVITY_SECONDS.
This implements the core of the rewrite described in:
for the backend data model for UserPresence to one that supports much
more efficient queries and is more correct around handling of multiple
clients. The main loss of functionality is that we no longer track
which Client sent presence data (so we will no longer be able to say
using UserPresence "the user was last online on their desktop 15
minutes ago, but was online with their phone 3 minutes ago"). If we
consider that information important for the occasional investigation
query, we can construct that answer data via UserActivity
already. It's not worth making Presence much more expensive/complex
to support it.
For slim_presence clients, this sends the same data format we sent
before, albeit with less complexity involved in constructing it. Note
that we at present will always send both last_active_time and
last_connected_time; we may revisit that in the future.
This commit doesn't include the finalizing migration, which drops the
UserPresenceOld table.
The way to deploy is to start the backfill migration with the server
down and then start the server *without* the user_presence queue worker,
to let the migration finish without having new data interfering with it.
Once the migration is done, the queue worker can be started, leading to
the presence data catching up to the current state as the queue worker
goes over the queued up events and updating the UserPresence table.
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
This swaps out url_format_string from all of our APIs and replaces it
with url_template. Note that the documentation changes in the following
commits will be squashed with this commit.
We change the "url_format" key to "url_template" for the
realm_linkifiers events in event_schema, along with updating
LinkifierDict. "url_template" is the name chosen to normalize
mixed usages of "url_format_string" and "url_format" throughout
the backend.
The markdown processor is updated to stop handling the format string
interpolation and to delegate the template expansion task to the
uri_template library instead.
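For example, expansion then looks roughly like this (assuming the
uri_template package's URITemplate.expand API):
```python3
from uri_template import URITemplate  # PyPI package: uri-template

# A linkifier whose pattern captures a group named "id".
template = URITemplate("https://github.com/zulip/zulip/issues/{id}")
url = template.expand(id="123")
# url == "https://github.com/zulip/zulip/issues/123"
```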
This change affects many test cases. We mostly just replace "%(name)s"
with "{name}", "url_format_string" with "url_template" to make sure that
they still pass. There are some test cases dedicated for testing "%"
escaping, which aren't relevant anymore and are subject to removal.
But for now we keep most of them as-is, and make sure that "%" is always
escaped since we do not use it for variable substitution any more.
Since url_format_string is not populated anymore, a migration is created
to remove this field entirely, and make url_template non-nullable since
we will always populate it. Note that it is possible to have
url_template being null after migration 0422 and before 0424, but
in practice, url_template will not be None after backfilling and the
backend now is always setting url_template.
With the removal of url_format_string, RealmFilter model will now be cleaned
with URL template checks, and the old checks for escapes are removed.
We also modified RealmFilter.clean to skip the validation when the
url_template is invalid. This avoids raising multiple ValidationErrors
when calling full_clean on a linkifier. But we might eventually want
a more centralized approach to data validation instead of having
the same validation in both the clean method and the validator.
Fixes #23124.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This commit updates the logic for migrating user_topic rows
during the move-messages operation when the target topic
already has messages.
Previously, the target_topic's visibility_policy was simply
set to the original_topic's visibility_policy,
and the original_topic's visibility_policy was set to INHERIT.
This commit updates the move-messages code path to determine
the new visibility_policy depending on the visibility policies
of the original and target topics.
The target_topic's visibility_policy is then updated.
The number of db queries has increased by two:
One query corresponds to determining if 'target_topic_has_messages'.
Another query corresponds to 'get_users_with_user_topic_visibility_policy'
to determine 'target_topic_user_profile_to_visibility_policy'.
This commit refactors the code block that moves user_topic
records in 'do_update_message', resulting in
cleaner code.
We directly iterate over the dictionary items
instead of looping over the keys and fetching
values if the key exists.
Refactors instances of `message_type_name` and `message_type`
that are referring to API message type value ("stream" or
"private") to use `recipient_type_name` instead.
Prep commit for adding "direct" as a value for endpoints with a
`type` parameter to indicate whether the message is a stream or
direct message.
So far, we've used the BitField .authentication_methods on Realm
for tracking which backends are enabled for an organization. This
however made it a pain to add new backends (requiring altering the
column and a migration - particularly troublesome if someone wanted to
create their own custom auth backend for their server).
Instead this will be tracked through the existence of the appropriate
rows in the RealmAuthenticationMethods table.
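In sketch form (the model and helper names here are assumptions for
illustration):
```python3
from zerver.models import Realm, RealmAuthenticationMethod  # assumed model name

def auth_enabled_for_realm(realm: Realm, backend_name: str) -> bool:
    # Enabled backends are now rows in the RealmAuthenticationMethods table,
    # not bits in Realm.authentication_methods.
    return RealmAuthenticationMethod.objects.filter(
        realm=realm, name=backend_name
    ).exists()
```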
If the ID of the scheduled message is passed by the client, we
edit the existing scheduled message instead of creating a new one.
However, this will soon be moved into its own API endpoint.
We previously allowed moving messages that have passed the time limit
using "change_all" value for "propagate_mode" parameter. This commit
changes the behavior to not allow moving messages (both stream and
topic edit) that have passed the time limit for non-admin and
non-moderator users.
Previously, editing the topic of "(no topic)" messages was allowed
irrespective of the time limit or the "edit_topic_policy" setting.
Since we are working in the direction of having "no topic" messages
feel reasonable, this commit changes the code to not consider them
a special case; topic editing restrictions now apply to them like
all other messages.
We still highlight the topic edit icon in the recipient bar without
hovering for "no topic" messages, but it is only shown when the user
has permission to edit topics.
Improve the Notification Bot by adding a hyperlink to the new location
of a moved single message. The link will make it easier for users to
find the message in its new context.
Fixes #24604.
This commit updates the move-topic codepath to perform
bulk database operations on the UserTopic record using
user_profiles for each visibility_policy instead of
previously looping over each user_profile one by one.
This commit refactors 'set_user_topic_visibility_policy_in_database'
to perform bulk database operations and the related changes.
There is an increase in database query count because requests
to delete user_topic rows now take two queries instead of one.
This is required for logging the info for a request to delete
a non-existent user_topic row while performing bulk operations
at the same time.
The overall query count will be lower while performing
bulk operations (multiple user_profiles instead of one).
This commit updates the 'do_update_message' codepath to
update the UserTopic records regardless of visibility policy
during the "move-topic" operation.
This is required before offering new visibility policies
in the UI.
Previously, UserTopic records were moved or deleted only
for objects with a MUTED visibility policy.
Fixes: #24574
Since we have updated the registration code to use
PreregistrationRealm objects for realm creation in
previous commits, some of the code has become
redundant and this commit removes it.
We remove the following code -
- The modification to PreregistrationUser objects in
process_new_human_user can now be done unconditionally
because prereg_user is passed only during user creation
and not realm creation. And we do not expect any
PreregistrationUser objects inside the realm during
creation anyway.
- There is no need for the "realm_creation" parameter in the
create_preregistration_user function, since we now
use create_preregistration_realm during realm creation.
Fixes part of #24307.
Previously, when a user moved a message to another topic, the Notification
Bot would post a message saying "This topic was moved here from...". This is
confusing when the topic already contains messages. These changes aim to make
the messages clearer by changing the logic for the Notification Bot. When
there are already messages in the topic, the bot will post "A message was
moved here from..." or "N messages were moved here from...". The bot will
post "This topic was moved here from (somewhere) by (someone)." only when
the topic is empty.
Fixes #23267.
To avoid people calling "create_user_group" instead of
"check_add_user_group", we rename it to make its purpose clearer.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
"check_add_user_group" is a safer helper function than
"create_user_group" to use when creating user_groups. It does
error handling and notify the client with the appropriate event.
Note that the populate_db command still uses "create_user_group"
because we do not need to enqueue events at that point.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Since this function creates a new user group into the database,
it is more appropriate to have it not as a generic "lib" function
but as an "action".
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We previously created RealmAuditLog entries for user notification
settings only. This commit changes the code to create entries for
all user settings. We cannot backfill the entries since we don't
have the data to do that.
When a new realm is created, a notification message is sent to
the realm configured as the settings.SYSTEM_BOT_REALM if there
is a "signups" stream that exists in that realm. This is used
for Zulip Cloud, but is an undocumented feature.
The topic of the message has been the subdomain of the new realm,
and the message content has been "Signups enabled" translated
into the default language of the new realm.
In order to make these messages more explicitly for Zulip Cloud,
the settings.CORPORATE_ENABLED is checked before sending these
messages.
To make these messages more useful, the topic for these
notifications is changed to be "new organizations". The content
of these messages is updated to have the new realm name (with a
link to the admin realm's activity support page for the realm),
subdomain (with a link to the realm), and organization type.
The RealmCount statistics will be empty if the realm was created since
the last daily aggregation. In cases where the daily stats have no
rows, it is likely fast enough to do the real count in the messages
table. This stops unduly penalizing folks who have actually sent
messages, and are just inviting people within the first day.
This commit updates 'set_user_topic_visibility_policy_in_database'
to not raise an error when deleting a UserTopic row and the user
doesn't have a visibility_policy for the topic yet, or when setting
the visibility_policy to its current value.
Also, it includes the changes to not send unnecessary events
in such cases.
Removes the notification message that was sent if a stream named
"signups" exists in the `settings.SYSTEM_BOT_REALM`. This was a
undocumented feature that would send a notification message when
a new user registered with a Zulip organization that was hosted
by an admin realm like Zulip Cloud.
This removes two database queries when a new user is created: one
to get the system bot realm and the other to get the notification
bot in said realm.
Note that there are still notification messages sent when a new
organization is registered with the admin realm if the "signups"
stream exists.
This commit refactors 'do_set_user_topic_visibility_policy'
to remove the if/else block and just have a single call to
'set_user_topic_visibility_policy_in_database'.
The branching out behaviour based on the user_topic
visibility_policy is reduced to one place, i.e.,
'set_user_topic_visibility_policy_in_database'.
This commit refactors the notify_created_user function to
call format_user_row twice with different parameters instead
of modifying the person object returned by format_user_row.
This change makes the code somewhat easier to understand
than it was before.
dc1eeef30a made the column nullable, with the meaning for null of
"use the current `settings.INVITES_DEFAULT_REALM_DAILY_MAX`."
However, 8a95526ced switched to calling `do_change_plan_type` during
realm creation, which sets `realm.max_invites` based on the plan type,
thus ensuring that no new realms have their `_max_invites` set to
null.
Check `max_invites` instead of `_max_invites`. This requires test
adjustments for the fact that `apply_invite_realm_heuristics` is now
run.
This commit adds 'visibility_policy' as a
parameter to user_allows_notifications_in_StreamTopic
function.
This adds logic inside the user_allows_notifications_in_StreamTopic
function, to not return False when a stream is muted
but the topic is UNMUTED.
Adds a method `user_id_to_visibility_policy_dict`
to the 'StreamTopicTarget' class to fetch
(user_id => visibility_policy) in a single db query.
Co-authored-by: Kartik Srivastava <kaushiksri0908@gmail.com>
Co-authored-by: Prakhar Pratyush <prakhar841301@gmail.com>
This commit replaces 'remove_topic_mute' with
'set_user_topic_visibility_policy_in_database' and
updates it to delete UserTopic row with any configured
visibility_policy and not just muting.
In order to support different types of topic visibility policies,
this renames 'add_topic_mute' to
'set_user_topic_visibility_policy_in_database'
and refactors it to accept a parameter 'visibility_policy'.
Create a corresponding UserTopic row for any visibility policy,
not just muting topics.
When a UserTopic row for (user_profile, stream, topic, recipient_id)
already exists, it updates the row with the new visibility_policy.
In the event of a duplicate request (i.e., new_visibility_policy ==
existing_visibility_policy), it raises a JsonableError.
There is an increase in the database query count in the message-edit
code path.
Reason:
Earlier, 'add_topic_mute' used 'bulk_create' which either
creates or raises IntegrityError -- 1 query.
Now, 'set_user_topic_visibility_policy' uses get_or_create
-- 2 queries in the case of creating a new row.
We can't use the previous approach, because now we have to
handle the case of updating the visibility_policy too.
Also, using bulk_* for a single row is not the correct way.
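In sketch form (simplified signature and error text; other required
UserTopic fields are omitted):
```python3
from django.utils.translation import gettext as _

from zerver.lib.exceptions import JsonableError  # assumed import path
from zerver.models import UserTopic  # assumed import path

def set_visibility_policy_sketch(
    user_profile, stream, topic_name: str, recipient_id: int, visibility_policy: int
) -> None:
    row, created = UserTopic.objects.get_or_create(
        user_profile=user_profile,
        stream=stream,
        topic_name=topic_name,
        recipient_id=recipient_id,
        defaults={"visibility_policy": visibility_policy},
    )
    if created:
        return
    if row.visibility_policy == visibility_policy:
        # Duplicate request: the policy is already set to this value.
        raise JsonableError(_("Visibility policy is already set to this value."))
    row.visibility_policy = visibility_policy
    row.save(update_fields=["visibility_policy"])
```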
Co-authored-by: Kartik Srivastava <kaushiksri0908@gmail.com>
Co-authored-by: Prakhar Pratyush <prakhar841301@gmail.com>
Replaces 'do_unmute_topic' with 'do_set_user_topic_visibility_policy'
and associated minor changes.
This change is made to align with the plan to use a single function
'do_set_user_topic_visibility_policy' to manage
user_topic - visibility_policy changes and corresponding event
generation.
This commit is a step in the direction of having a common
function to handle visibility_policy changes and event
generation instead of separate functions for each
visibility policy.
In order to support different types of topic visibility policies,
this renames 'do_topic_mute' to 'do_set_user_topic_visibility_policy'
and refactors it to accept a parameter 'visibility_policy'.
This commit adds backend code to set email_address_visibility when
registering a new user. The realm-level default and the value from
the source profile get overridden by the value the user selected
during signup.
In the case where a stream existed but had no subscribers, the error
message sent to the owner always used `stream_name`, which
may have been None.
Switch to using `stream.name` rather than `stream_name` for this case.
We add a stream_permission_group_settings object, which is
similar to the property_types framework used for realm settings.
This commit also adds a GroupPermissionSetting dataclass for
defining settings inside stream_permission_group_settings.
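A sketch of the shape this takes (field names are illustrative, not the
exact dataclass definition):
```python3
from dataclasses import dataclass

@dataclass
class GroupPermissionSetting:
    # Illustrative metadata; the real dataclass may carry different fields.
    require_system_group: bool
    allow_internet_group: bool
    allow_owners_group: bool

stream_permission_group_settings = {
    "can_remove_subscribers_group": GroupPermissionSetting(
        require_system_group=True,
        allow_internet_group=False,
        allow_owners_group=False,
    ),
}
```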
We add "do_change_stream_group_based_setting" function which
is called in loop to update all the group-based stream settings
and it is now used to update 'can_remove_subscribers_group'
setting instead of "do_change_can_remove_subscribers_group".
We also change the variable name for event_type field of
RealmAuditLog objects to STREAM_GROUP_BASED_SETTING_CHANGED
since this will be used for all group-based stream settings.
'property' field is also added to extra_data field to identify
the setting for which RealmAuditLog object was created.
We will add a migration in further commits which will add the
property field to existing RealmAuditLog objects created for
changing can_remove_subscribers_group setting.
This old 300s value was meaningfully used in 2 places:
1. In the do_change_user_settings presence_enabled codepath when turning
   a user invisible. It doesn't matter much there; 140s works just as
   well, since the point is to make clients see this user as offline.
   And 140s is the threshold used by clients (see the presence.js constant).
2. For calculating whether to set "offline" "status" in
result["presence"]["aggregated"] in get_presence_backend. It's fine
for this to become 140s, since clients shouldn't be looking at the
status value anymore anyway and just do their calculation based on
the timestamps.
Removes the initial check in `_internal_prep_message` of the length
of the message content because the `check_message` in the try block
will call `normalize_body` on the message content string, which
does a more robust check of the message content (empty string, null
bytes, length). If the message content length exceeds the value of
`settings.MAX_MESSAGE_LENGTH`, then it is truncated based on that
value. Updates associated backend test for these changes.
The removed length check would truncate the message content with a
hard coded value instead of using the value for
`settings.MAX_MESSAGE_LENGTH`.
Also, removes an extraneous comment about removing null bytes. If
there are null bytes in the message content, then `normalize_body`
will raise an error.
Note that the previous check had intentionally reduced any message over
the 10000 character limit to 3900 characters, with the code in
question dating to 2012's 100df7e349.
The 3900 character truncating rule was implemented for incoming emails
with the email gateway, and predated other features to help with
overly long messages (better stripping of email footers via Talon,
introduced in f1f48f305e, and
condensing, introduced in c92d664b44).
While we could preserve that logic if desired, it likely is no longer
a necessary or useful variation from our usual truncation rules.
This reverts commit 851d68e0fc.
That commit widened how long the transaction is open, which made it
much more likely that after the user was created in the transaction,
and the memcached caches were flushed, some other request will fill
the `get_realm_user_dicts` cache with data which did not include the
new user (because it had not been committed yet).
If a user creation request lost this race, the user would, upon first
request to `/`, get a blank page and a Javascript error:
Unknown user_id in get_by_user_id: 12345
...where 12345 was their own user-id. This error would persist until
the cache expired (in 7 days) or something else expunged it.
Reverting this does not prevent the race, as the post_save hook's call
to flush_user_profile is still in a transaction (and has been since
168f241ff0), and thus leaves the potential race window open.
However, it greatly shortens the potential window of opportunity, and
is a reasonable short-term stopgap.
This will allow us to re-use this logic later, when we add support for
re-checking notification settings just before sending email/push
notifications to the user.
Also, since this is essentially part of the notifiability logic,
it belongs better in `notification_data.py`, and this change will
hopefully reduce the reading complexity of the message-send codepath.
This commit updates the code to use the user-level email_address_visibility
setting instead of the realm-level one to set or update the value of the
UserProfile.email field and to send emails to clients.
Major changes are -
- UserProfile.email field is set while creating the user according to
RealmUserDefault.email_address_visibility.
- UserProfile.email field is updated according to changes in the setting.
- 'email_address_visibility' is added to person objects in the user add
event and in the avatar change event.
- client_gravatar can be different for different users when computing
avatar_url for messages and user objects, since the email available to
clients depends on the user-level setting.
- For bots, email_address_visibility is set to EVERYONE while creating
them, irrespective of the realm-default value.
- Test changes mostly consist of setting the user-level setting instead
of the realm setting and modifying the checks accordingly.
Previously, user objects contained the delivery_email field
only when the user had access to the real email. delivery_email
was also not present if the visibility setting was set to "everyone",
as the email field itself was set to the real email.
This commit changes the code to always pass the "delivery_email"
field in user objects, with its value being None if the
user does not have access to the real email and the real email otherwise.
The "delivery_email" field value is None for logged-out users.
For bots, the "delivery_email" is always set to the real email,
irrespective of the email_address_visibility setting.
Also, since a user has access to the real email if visibility is set
to "everyone", the "delivery_email" field is passed in that case
too.
There is no change to the email field; it is the same as before.
This commit also adds code to send an event updating the delivery_email
field, when the email_address_visibility setting changes, to all
users whose access to emails changes, and changes the code to
send an event on changing delivery_email to users who have access
to the email.
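Roughly, the per-viewer value can be thought of as the sketch below; the
helper name and the simplified role checks are illustrative assumptions,
not the actual implementation.

    from typing import Optional

    # Stand-ins for the email_address_visibility enum values.
    VISIBILITY_EVERYONE = 1
    VISIBILITY_ADMINS_ONLY = 3


    def delivery_email_for_viewer(
        target_visibility: int,
        target_delivery_email: str,
        target_is_bot: bool,
        viewer_is_admin: bool,
        viewer_is_authenticated: bool,
    ) -> Optional[str]:
        # Bots always expose their real email, regardless of the setting.
        if target_is_bot:
            return target_delivery_email
        # Logged-out users never get real emails.
        if not viewer_is_authenticated:
            return None
        if target_visibility == VISIBILITY_EVERYONE:
            return target_delivery_email
        if target_visibility == VISIBILITY_ADMINS_ONLY and viewer_is_admin:
            return target_delivery_email
        return None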
This commit adds time restriction on moving messages between streams
using the move_messages_between_streams_limit_seconds setting in the
backend. There is no time limit for admins and moderators.
We now use the newly added move_messages_within_stream_limit_seconds
setting to check for how long the user can edit the topic replacing
the previously used 3-day limit. As it was previously, there is no
time limit for admins and moderators.
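The time-limit check itself is conceptually simple; a minimal sketch
(function and parameter names are assumptions) is:

    import datetime
    from typing import Optional


    def within_message_move_limit(
        message_sent: datetime.datetime,
        now: datetime.datetime,
        limit_seconds: Optional[int],
        is_admin_or_moderator: bool,
    ) -> bool:
        # Admins and moderators are exempt from the time limit.
        if is_admin_or_moderator:
            return True
        # None encodes "no limit".
        if limit_seconds is None:
            return True
        return (now - message_sent).total_seconds() <= limit_seconds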
This commit renames parse_message_content_edit_or_delete_limit
to parse_message_time_limit_setting and also renames
MESSAGE_CONTENT_EDIT_OR_DELETE_LIMIT_SPECIAL_VALUES_MAP to
MESSAGE_TIME_LIMIT_SETTING_SPECIAL_VALUES_MAP.
We make this change since this function and object will also be
used for the message move limits, so it makes sense to have a more
generic name.
This commit extracts a function to parse message time-limit type settings
and to set the setting if the new value is None.
This function is currently used for the message_content_edit_limit_seconds
and message_content_delete_limit_seconds settings, and will be used for the
message_move_limit_seconds setting to be added in further commits.
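A sketch of what such a parser might look like, using the special-values
map mentioned above; the map contents and the error handling here are
assumptions.

    from typing import Dict, Optional, Union

    # Assumed contents; the real map may define more special values.
    MESSAGE_TIME_LIMIT_SETTING_SPECIAL_VALUES_MAP: Dict[str, Optional[int]] = {
        "unlimited": None,
    }


    def parse_message_time_limit_setting(
        value: Union[int, str],
        setting_name: str,
    ) -> Optional[int]:
        # String values must be one of the declared special values;
        # "unlimited" maps to None, i.e. no time limit.
        if isinstance(value, str):
            if value in MESSAGE_TIME_LIMIT_SETTING_SPECIAL_VALUES_MAP:
                return MESSAGE_TIME_LIMIT_SETTING_SPECIAL_VALUES_MAP[value]
            raise ValueError(f"Invalid {setting_name}")
        return value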
When 'resolve|unresolve' and 'move stream' actions occur in
the same API call, the 'This topic was marked as resolved|unresolved'
notification is not sent.
Both the 'topic moved' and 'topic resolved' notifications should be generated.
This commit updates the logic of when and where to send the
'topic resolve|unresolve' notification. Unlike the previous logic, the
notification may be sent even in the case where 'new_stream' is not None.
In general, the 'topic resolved|unresolved' notification is sent to
'stream_being_edited'. In this particular case ('new_stream' is not None),
the notification is sent to 'new_stream' after the relevant checks.
A test case is included.
Fixes: #22973
Black 23 enforces some slightly more specific rules about empty line
counts and redundant parenthesis removal, but the result is still
compatible with Black 22.
(This does not actually upgrade our Python environment to Black 23
yet.)
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We change the do_create_user function to use the transaction.atomic
decorator instead of a with block. Due to this change, all
send_event calls are made inside transaction.on_commit (see the
sketch below).
Some other changes -
- Remove the transaction.atomic decorator from send_initial_realm_messages
since it is now called inside a transaction.
- Made changes in tests which test message events and notifications
to make sure on_commit callbacks are executed.
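The resulting pattern looks roughly like this sketch; the body and helper
names (create_user_rows, send_event, active_user_ids) are placeholders for
the real code, not its actual contents.

    from django.db import transaction


    @transaction.atomic
    def do_create_user(realm, email, full_name):
        # Create the UserProfile and related rows inside the transaction.
        user_profile = create_user_rows(realm, email, full_name)

        event = dict(type="realm_user", op="add",
                     person=dict(user_id=user_profile.id))
        # Defer event delivery until the transaction commits, so clients
        # never observe a user that could still be rolled back.
        transaction.on_commit(
            lambda: send_event(realm, event, active_user_ids(realm.id))
        )
        return user_profile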
This commit changes do_reactivate_user such that the complete function
is run inside an atomic transaction, and events are sent after the
transaction is committed using the on_commit helper. This is a prep commit
for unsubscribing the bots of inaccessible private streams when reactivating
them.
A missed message email notification, where the message is the welcome
message sent by the welcome bot on account creation, gets sent when
the user does not focus the browser tab during account creation.
No missed message email or push notifications should be sent for the
messages generated by the welcome bot.
'internal_send_private_message' accepts a parameter
'disable_external_notifications', which is set to 'True' when the sender
is the welcome bot.
A check is introduced in `trivially_should_not_notify` to not notify
if `disable_external_notifications` is true.
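Schematically, the early-return check looks like the sketch below; the
dataclass fields shown are simplified and partly assumed.

    from dataclasses import dataclass


    @dataclass
    class UserMessageNotificationsData:
        disable_external_notifications: bool
        sender_is_muted: bool
        # ... the real dataclass carries many more notifiability flags ...

        def trivially_should_not_notify(self) -> bool:
            # Welcome bot messages set disable_external_notifications=True,
            # so they never produce email or push notifications.
            return self.disable_external_notifications or self.sender_is_muted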
TestCases are updated to include the `disable_external_notifications`
check in the early (False) return patterns of `is_push_notifiable` and
`is_email_notifiable`.
One query is saved for both `test_create_user_with_multiple_streams`
and `test_register`.
Reason: when the welcome bot sends a message after user creation,
`do_send_messages` calls `get_active_presence_idle_user_ids`;
`user_ids` in `get_active_presence_idle_user_ids` remains empty if
`disable_external_notifications` is true, because `is_notifiable` returns
false.
`get_active_presence_idle_user_ids` calls `filter_presence_idle_user_ids`,
and since `user_ids` is empty, the query inside the function doesn't
get executed.
MissedMessageHookTest updated.
Fixes: #22884
This commit makes all the parameters after 'content' in
'internal_send_*', 'internal_prep_*' and '_internal_prep_*'
keyword-only arguments to increase code readability.
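For illustration, the change amounts to adding a `*` marker after
'content'; the trailing parameter names in this sketch are assumptions.

    def internal_send_stream_message(
        sender,
        stream,
        topic,
        content,
        *,  # everything after 'content' must now be passed by keyword
        email_gateway=False,
        limit_unread_user_ids=None,
    ):
        ...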
For alert words, we currently don't send email/push notifications --
only desktop notifications. Thus, we don't need to consider alert words
here, since desktop notifications do not utilize the presence status
calculated at this stage.
Tested manually that alert word desktop notifications work as expected.
When we implement email/push notifications for alert words (issues #5137
and #13127), we can add new fields like
`notifications_data.alert_word_email_notify`, similar to the existing
`notifications_data.wildcard_mention_email_notify`, which will allow us
to keep the alert word notifiability check inside the dataclass, similar
to how the mentions checks are done currently. So, even when that
feature is implemented, the code which this commit removes would be
unnecessary.
This commit renames "can_edit_topic_of_any_message" function
in models.py to "can_move_messages_to_another_topic" and
"user_can_edit_topic_of_any_message" function in settings_data.js
to "user_can_move_messages_to_another_topic".
This change is made since topic editing permission no longer depends
on the message sender, and messages are treated the same
irrespective of whether the user who is editing the topic had sent
the message or not. This also makes the naming consistent with
the label we use for this setting in the webapp and how we
describe this action in the help documentation.
This commit changes the topic edit permissions to not depend on whether
the user editing the message had sent the message or it was sent by someone
else.
We only do backend changes in this commit; frontend changes will be done
in further commits.
Previously, we always allowed topic edits when the user themselves had
sent the message, without considering the edit_topic_policy and the 3-day
time limit. But now we treat all messages the same, and editing is allowed
only according to the edit_topic_policy setting, plus the 3-day time limit
for users who are not admins or moderators.
We change the topic and stream edit permissions in the API to not depend
on the allow_message_editing setting; these edits are now allowed even
if allow_message_editing is set to False, based on other settings
like edit_topic_policy and can_move_messages_between_streams.
Fixes a part of #21739.
This solves the problem that resolving a topic with a long name (>60
characters) will cause the topic name to be truncated, and thus the edit
message code path thinks that the topic is being moved in addition to
being resolved.
We store the pre-truncation topic and use it to check against the
original topic when determining whether a topic is being moved while
getting (un)resolved or not.
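In sketch form (the constant value and helper name are assumptions), the
move detection then becomes:

    RESOLVED_TOPIC_PREFIX = "✔ "
    MAX_TOPIC_NAME_LENGTH = 60  # stand-in for the real constant


    def topic_is_being_moved(orig_topic: str,
                             pre_truncation_new_topic: str) -> bool:
        # Compare the pre-truncation names (with the resolve marker
        # stripped), so resolving a >60 character topic is not misread
        # as a topic move.
        old = orig_topic.removeprefix(RESOLVED_TOPIC_PREFIX)
        new = pre_truncation_new_topic.removeprefix(RESOLVED_TOPIC_PREFIX)
        return old != new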
Fixes #23482
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We intended to send both the "topic was resolved" and the "topic was
moved here" notification when resolving and moving a topic at the same
time in #22312.
The previous implementation did not work as expected and it was only
sending the "topic was moved here" notification.
This removes the check in maybe_send_resolve_notifications for whether
old_topic and new_topic match after stripping RESOLVED_TOPIC_PREFIX, so
that the notification will be sent regardless of whether the topic name
without the prefix stays the same or not.
Note that the weird topic handling ("✔ ✔✔ some topic") described in the
comments, which was added in e231a03eff, is unaffected. In case of
confusion: the lstrip check is not essential to detecting a topic being
unresolved/resolved, as we mainly have that handled in the latter part
of the helper.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This is a follow-up to d201229df8.
do_get_invites_controlled_by_user queries for Confirmations when finding
multiuse invites controlled by a user. This means that a revoked
multiuse invite cannot really be fetched here, because
do_revoke_multi_use_invite deletes the Confirmation object when revoking
the invitations. However, having a defensive assert here should be
useful to make sure this doesn't secretly break in the future if the
query used changes or if there are unexpected revoked multiuse invites
with an existing Confirmation for any (buggy) reason.
This allows us to revoke MultiUseInvites by changing their .status
instead of deleting them (which has been deleting the helpful tracking
information on PreregistrationUsers about which MultiUseInvite they came
from).
We do not create historical UserMessage rows, for messages that didn't
have one, while marking messages as read, and simply ignore those messages.
We do so because there is no use in creating such UserMessage rows and it
just wastes storage.
Note that we still allow marking messages from unsubscribed streams as
read, but only those which have UserMessage rows, to handle the
case where the unread messages were not marked as read while unsubscribing
from the stream due to some race condition. In such cases, messages
will not be included in the unread count shown in the "All messages" menu
(and the stream is in any case not present in the left sidebar), but the
message border on the left is green if viewing the stream after
unsubscribing from it. So, to avoid confusion for users, the messages
will be marked as read when the user scrolls down.
Zulip's unread messages design has an invariant that all unread stream
messages must be in streams the user is subscribed to. For example, we
do not include unread messages from unsubscribed streams in the
"unread_msgs" data structure in the "/register" response, and we mark all
unread messages as read when unsubscribing a user from a stream.
Previously, the mark as unread endpoint allowed violating that
invariant, allowing you to mark messages in any stream as unread.
Doing so caused the "message_details" data structures sent with
"update_message_flags" events to not contain messages from
unsubscribed streams, even though those messages were present in the
set of message IDs. These malformed events, in turn, caused exceptions
in the frontend's processing of such an event.
This change is paired with a separate UI change to not offer the "Mark
as unread" feature in such streams; with just this commit, that will
silently fail.
With some additions to the tests by tabbott.
This guarantees that the Realm is always non-None when we hit the
is_static_or_current_realm_url codepath via
do_change_stream_description, so that we can properly skip rewriting
some images.
Fixes #19405
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
I don't think this is used anywhere outside of tests, but we should have
this logic correct. If this function is used to send a message from a
user to a cross-realm bot, the message.realm should be the realm of the
user.
The normal case, where a user sends a message to a cross-realm bot
through the API, is already handled correctly; this bug is unrelated to it.
Previously, we did not send notifications for topic-only edits.
Now, we add backend support for sending notifications for topic-only
edits as well.
We will add support for this in the webapp in further commits, since the
message edit UI will be updated as well. For now, we just make sure that
no notifications are sent when editing a topic using the pencil icon in
the message header.
We also change the API default for moving a topic to only notify the
new location, not the old one; this matches the current defaults in
the web UI.
Includes many tests.
We also update the puppeteer tests to test only content edits, as
we are going to change the UI to not allow topic editing from the
message edit UI. Also, fixing the existing tests to pass while
doing topic edits is somewhat complex, as a notification message
is also sent to the new topic by default.
Fixes#21712.
Co-authored-by: Aman Agrawal <amanagr@zulip.com>
Co-authored-by: Tim Abbott <tabbott@zulip.com>
The previous commit did this for revoking sessions. send_events should
be handled similarly too, to correctly handle calling do_deactivate_user
inside a transaction.
This commit adds the OPTIONAL .realm attribute to Message
(and ArchivedMessage), with the server changes for making new Messages
have this set. Old Messages still have to be migrated to backfill this,
before it can be non-nullable.
Appropriate test changes to correctly set .realm for Messages the tests
manually create are included here as well.
zerver/migrations/0240_usermessage_migrate_bigint_id_into_id.py needs
to be updated to account for Django 4.1 creating AutoField as an
identity column rather than a serial column.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Previously, we included all three message edit related settings
("allow_message_editing", "message_content_edit_limit_seconds" and
"edit_topic_policy") in the event data and api response irrespective
of which of these settings were changed. Now, we only include changed
settings and separate events are sent for each setting if more than
one of them is changed.
Note that the previous type in event_schema.py for
`message_content_edit_limit_seconds` incorrectly did not allow `None`
as a value, which is used to encode no limit.
This refactors and renames user_ids_muting_topic to accept a
'visibility_policy' parameter and fetch user IDs that have that specific
visibility_policy set for a topic.
Fourth step in making user status `away` a deprecated way to access
`presence_enabled` for clients supporting older servers, and
checkpoint commit prior to deleting the `status` field from the
UserStatus model.
Part of transitioning from 'unavailable' user status feature to
'invisible mode' user presence feature.
When a user toggles a status update for `away=True|False`, we now update
their `presence_enabled` setting to match (`away!=presence_enabled`).
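Concretely, the mapping is just a negation; a one-line sketch of the rule:

    def presence_enabled_for_away(away: bool) -> bool:
        # away=True corresponds to invisible mode, i.e. presence disabled.
        return not away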
First step of making user status `away` updates a deprecated way to
access presence_enabled for clients supporting older servers, and
checkpoint commit before migrating users with a current UserStatus
of `status=AWAY` to have their `presence_enabled` set to `False`.
Note that when user status `away` is updated, we now send 4 events:
user_status, user_settings, presence, and update_global_notifications.
Also, this means that these updates change the UserPresence.status
value, which impacts the test for importing and exporting user
information.
Part of transitioning from 'unavailable' user status feature to
'invisible mode' user presence feature.
We need to move this function to a separate actions file specifically
for `user_status` because otherwise we will have a circular import
between `actions/user_settings.py` and `actions/presence.py` in an
upcoming commit.
Prep commit for migrating "unavailable" user status feature to
"invisible" user presence feature.
To allow a `custom_profile_field` to be displayed in the user profile
popover, we add a new boolean field, "display_in_profile_summary", to its
model class.
In `custom_profile_fields.py`, functions are edited to enforce the relevant
conditions; e.g. currently we can display at most 2 `custom_profile_fields`,
excluding `LONG_TEXT` and `USER` type fields.
Default external account custom profile fields are made updatable for only
this new field, as previously they were not updatable at all.
Fixes part of: #21215
This commit adds the do_change_can_remove_subscriber_group function for
changing the can_remove_subscribers_group field of a stream. We also add
a can_remove_subscribers_group_id field to stream and subscription
objects.
This function will be helpful for writing tests in the next commit.
We will add API and UI support to change this setting in further
commits.
Previously, an active production Zulip server would experience a class
of deadlocks caused by two or more concurrent bulk update operations
on the UserMessage table.
This is because UPDATE ... SET ... WHERE statements that execute in
parallel take row-level UPDATE locks as they get results; since the
query plans may result in getting rows in different orders between two
queries, this can result in deadlocks.
Some databases allow ORDER BY on their UPDATE ... WHERE statements;
PostgreSQL does not. In PostgreSQL, the answer is to do a sub-select
with an ORDER BY ... FOR UPDATE to ensure consistent ordering on row
locks.
We do this for all code paths using bitand or bitor as part of bulk
editing of message flags, which should ensure that these concurrent
operations obtain row-level locks on the table in the same order.
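In Django ORM terms, the pattern is roughly the sketch below; it assumes
the zerver UserMessage model with a BitField-style flags column, and the
query details shown are illustrative rather than the exact generated SQL.

    from django.db import transaction
    from django.db.models import F

    from zerver.models import UserMessage


    def mark_messages_read(user_profile, message_ids):
        with transaction.atomic(savepoint=False):
            # Lock the target rows in a consistent order (via the ordered
            # sub-select with FOR UPDATE) before the bulk UPDATE, so that
            # concurrent flag updates cannot deadlock on lock ordering.
            locked_ids = list(
                UserMessage.objects.filter(
                    user_profile=user_profile, message_id__in=message_ids
                )
                .order_by("message_id")
                .select_for_update()
                .values_list("id", flat=True)
            )
            UserMessage.objects.filter(id__in=locked_ids).update(
                flags=F("flags").bitor(UserMessage.flags.read)
            )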
Fixes #19054.
This is a preparatory commit for #18941.
Importing `do_delete_message` from `message_edit.py` was causing a
circular import error. In order to avoid that, we create a separate
message_delete.py file which has all the functions related to deleting
messages.
The tests for deleting messages are present in
`zerver/tests/test_message_edit.py`.
Fixes a part of #18941
In zerver.management.commands.logout_all_users,
we pass a values queryset containing the ids into
this function, which is not actually a list. This
broadens the type annotation so that the ValuesQuerySet
is accepted.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This uses a more specific type, `_StrPromise`, to replace `Promise`,
providing typing information for lazy translation strings.
In places where the callee evaluates the `_StrPromise` object in all
cases, we simply force the evaluation with `str()`. This includes
`JsonableError`, which ends up handled by the error handler middleware,
and `internal_send_stream_message`, which depends on `check_stream_topic`,
requiring the `topic` to be evaluated anyway. In other situations, the
callee is expected to be able to handle `StrPromise` explicitly.
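A small sketch of the forcing pattern (the error class here is a generic
stand-in, not Zulip's JsonableError):

    from django.utils.translation import gettext_lazy

    error_message = gettext_lazy("Invalid message(s)")  # a _StrPromise at runtime


    def fail_with(msg) -> None:
        # Force evaluation where the callee ultimately needs a plain str.
        raise ValueError(str(msg))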
Signed-off-by: Zixuan James Li <p359101898@gmail.com>