Updates the realm field `default_code_block_language` to have a default
value of an empty string instead of None. Also updates the web-app to
check for the empty string and not `null` to indicate no default is set.
This means that both new realms and existing realms that have no default
set will have the same value for this setting: an empty string.
Previously, new realms would have None if no default was set, while realms
that had set and then unset a value for this field would have an empty
string when no default was set.
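A minimal sketch of the resulting model field (the field name comes from this commit; the exact field type and options are assumptions):

```python
from django.db import models

class Realm(models.Model):
    # "" now uniformly means "no default is set", both for new realms
    # and for realms that set and later unset a value.
    default_code_block_language = models.TextField(default="")
```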
Expands support for the message ID operand of the "id" operator to be
either a string or an integer. Previously, this operand was always
validated as a string.
Restore the default django.utils.log.AdminEmailHandler when
ERROR_REPORTING is enabled. Those with more sophisticated needs can
turn it off and use Sentry or a Sentry-compatible system.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We used to access the complete objects for UserProfile foreign
keys like "bot_owner" and "default_sending_stream", where we only
needed their IDs.
This commit fixes some such instances, and we now directly get the
id using "bot_owner_id" and "default_sending_stream_id" so that we
can avoid the unnecessary complexity of accessing the complete
object.
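For illustration, a sketch of the pattern being fixed (`user_profile` is a hypothetical variable; `bot_owner_id` is the foreign-key column attribute Django generates automatically):

```python
# Before: touching .bot_owner makes Django lazily fetch the whole
# UserProfile row for the owner.
owner_id = user_profile.bot_owner.id if user_profile.bot_owner else None

# After: the id is already present on the fetched row; no extra query.
owner_id = user_profile.bot_owner_id
```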
This commit updates code to pass "realm" and "bot_owner" args to
select_related call in get_users. We pass "realm" and "bot_owner"
args to get_users because the caches which this function is used
to populate are used for get_user and get_user_profile_by_api_key
functions and they also select both these fields when querying for
UserProfile objects.
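A sketch of what the adjusted query plausibly looks like (the helper name and filter are illustrative, not the actual get_users signature):

```python
from zerver.models import UserProfile  # import path per the Zulip codebase

def get_users_for_cache(user_ids: list[int]):
    # Select the same related rows that get_user and
    # get_user_profile_by_api_key need, so cached objects are complete.
    return UserProfile.objects.select_related("realm", "bot_owner").filter(
        id__in=user_ids
    )
```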
This commit updates the select_related calls in the queries to
get UserProfile objects in get_user, as called in management
commands, to pass "realm" as an argument to select_related.
There are some management commands, like deactivate_user and
change_full_name, which might need fields like
"default_sending_stream" when changing the full name of a bot
or something similar, but we don't think that would happen
often, and we can afford a DB round trip to get these fields
if needed.
Also, note that "realm" is the only non-null foreign key
field in the UserProfile model, so select_related() was only
fetching the realm object previously as well. But we should
still pass "realm" as an argument in the select_related call
so that we can make sure that only the required fields are
selected in case we add more foreign keys to UserProfile
in the future.
This commit updates the select_related call in the
fetch_users_by_id function to pass "realm" as an argument,
as we only require the realm for the UserProfile objects
fetched using fetch_users_by_id.
Also, note that "realm" is the only non-null foreign key field
in the UserProfile model, so select_related() was only fetching
the realm object previously as well. But we should still pass
"realm" as an argument in the select_related call so that we can
make sure that only the required fields are selected in case we
add more foreign keys to UserProfile in the future.
We do not use any related fields for the UserProfile objects
fetched by get_active_users, so we can simply remove the
select_related call.
The user object from get_active_users was used to get the realm,
but since get_active_users is called on a realm object, we can
directly use that realm object. This change also leads to
some changes in the cache code, where we now pass the realm
to the function instead of selecting it from the UserProfile
object.
This commit removes the select_related call from
get_soft_deactivated_users_for_catch_up, as we do not use any
related fields for the UserProfile objects fetched using this
call.
Uploads are well-positioned to use S3's "intelligent tiering"
storage class. Add a setting to let uploaded files declare their
desired storage class at upload time, and document how to move
existing files to the same storage class.
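A hedged sketch of the upload-time knob using boto3 (the bucket, key, and wiring to the new setting are placeholders):

```python
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "/tmp/example.png",
    "example-uploads-bucket",  # placeholder bucket
    "user_uploads/example.png",  # placeholder key
    # The new setting would control this value; INTELLIGENT_TIERING is
    # the storage class this commit recommends for uploads.
    ExtraArgs={"StorageClass": "INTELLIGENT_TIERING"},
)
```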
‘blocklist’ was added in 0.0.35 (with backwards compatibility for the
old name), and type annotations were added in 0.0.91 (with only the
new name).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This code removes a lot of complexity with very likely
positive overall impact on system performance and
negligible downside.
We already cache display recipients on a per-user
level, so there's no need for another cache layer on
top of that that keys them with recipient ids.
We avoid strange things where Alice/Bob and Bob/Charlie
get put into the top layer cache and then we still have
a cache miss on Alice/Charlie despite the lower level
cache being able to support per-user lookups.
This change does introduce an extra database round trip
if any of our messages have a huddle, but the query is
extremely cheap, and we can always try to cache that
function more directly or try to re-use some of our
other huddle-based caches.
As part of this, we clean up the names for the
lower-level per-user cache of display recipients, and
we simplify the cache keys.
We also stop passing in a full Recipient object to the
`bulk_get_huddle_user_ids` functions.
The local impact of this change should be easy to
measure (at least approximately), since we use this
function every time a user gets messages via the
/messages endpoint.
The only overlap between how we fetched streams and
users was to share some really complicated data
structures.
We can also short-circuit some logic if a message
batch is either all-stream or all-DM.
We restrict the columns, avoid quadratic looping,
and don't bother with order_by.
We also return the user ids (per recipient) as
sets, since that's how the only caller uses the
info (albeit implicitly via set.union accepting
a list).
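A sketch of the narrowed query under those constraints (model and column names assume Zulip's Subscription schema; the helper name is illustrative):

```python
from collections import defaultdict

from zerver.models import Subscription  # import path per the Zulip codebase

def huddle_user_ids_by_recipient(recipient_ids: set[int]) -> dict[int, set[int]]:
    # Two columns only, no order_by, one round trip; a single linear
    # pass replaces the old quadratic looping.
    rows = Subscription.objects.filter(recipient_id__in=recipient_ids).values_list(
        "recipient_id", "user_profile_id"
    )
    result: dict[int, set[int]] = defaultdict(set)
    for recipient_id, user_profile_id in rows:
        result[recipient_id].add(user_profile_id)
    return result
```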
It’s unclear what was supposed to be “safe” about this wrapper. The
hashlib API is fine without it, and we don’t want to encourage further
use of SHA-1.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Translators benefit from the extra information in the field names, and
need the reordering freedom that isn’t available with multiple
positional fields.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
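A generic illustration (not this commit's actual strings) of why named fields help:

```python
from django.utils.translation import gettext as _

# With named fields, a translation can render "({email}) {full_name}"
# if the target language needs that order; positional "{} ({})" cannot
# be reordered by the translator.
notice = _("{full_name} ({email}) accepted your invitation").format(
    full_name="Ada Lovelace", email="ada@example.com"
)
```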
- Adds `message_handle_match` function to handle new pattern for
relative help links to "Drafts" and "Scheduled messages" for logged-in
users: `{relative|message|drafts}` and `{relative|message|scheduled}`.
`remove_denormalized_recipient_column_from_data` removes the
`recipient` data from `zerver_userprofile`, but did not remove it from
`zerver_userprofile_mirrordummy`, which was later appended to the list
of `zerver_userprofile` objects. This led to failure when inserting,
as the mirrordummy objects still tried to reference their previous
`recipient_id`s.
Move the merging of the two sets earlier, before we call
`remove_denormalized_recipient_column_from_data`.
If there are two huddles, with users A + B + C + D and A + B + C, and
user D is deleted, it is replaced with a mirrordummy user. If
mirrordummy subscriptions are not included in exports, then the two
huddles have duplicate member sets, and will not be able to be
imported successfully.
Include huddle subscriptions for mirrordummy users in exports.
The long-term idle topic participants are soft-reactivated
after email/push notifications are sent due to a @topic mention.
The reasoning is that, generally, @topic mentions are going to
reach a small set of users who have a decent chance of being
reactivated by the notifications.
This commit completes the notifications part of the @topic
wildcard mention feature.
Notifications are sent to the topic participants for the
@topic wildcard mention.
The previous function was poorly named, asked for a
Realm object when realm_id sufficed, and returned a
tuple of strings that had different semantics.
I also avoid calling it duplicate times in a couple
places, although it was probably rarely the case that
both invocations actually happened if upstream
validations were working.
Note that there is a TypedDict called EmojiInfo, so I
chose EmojiData here. Perhaps a better name would be
TinyEmojiData or something.
I also simplify the reaction tests with a verify
helper.
The active realm emoji are just a subset of all your
realm emoji, so just use a single cache entry per
realm.
Cache misses should be very infrequent per realm.
If a realm has lots of deactivated realm emoji, then
there's a minor expense to deserialize them, but that
is gonna be dwarfed by all the other more expensive
operations in message-send.
I also renamed the two related functions. I erred on
the side of using somewhat verbose names, as we don't
want folks to confuse the two use cases. Fortunately
there are somewhat natural affordances to use one or
the other, and mypy helps too.
Finally, I use realm_id instead of realm in places
where we don't need the full Realm object.
This migration is reasonably complex because of various anomalies in
the existing data.
Note that there are cases where extra_data does not contain proper
JSON, possibly using single quotes. Thus we need to use
"ast.literal_eval" to cover those.
There is also a special case for "event_type == USER_FULL_NAME_CHANGED",
where extra_data is a plain str. This event_type is only used for
RealmAuditLog, so the zilencer migration script does not need to handle
it.
The migration does not handle "event_type == REALM_DISCOUNT_CHANGED"
because ast.literal_eval only allows Python literals. We expect the
admin to manually populate the jsonified extra_data for
extra_data_json beforehand.
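A minimal sketch of the parsing strategy described above (the function name is hypothetical):

```python
import ast

import orjson

def parse_legacy_extra_data(extra_data: str) -> object:
    try:
        # Most rows contain proper JSON.
        return orjson.loads(extra_data)
    except orjson.JSONDecodeError:
        # Other rows are Python-literal reprs (e.g. single-quoted
        # dicts); ast.literal_eval covers those, but not Decimal
        # values, which is why REALM_DISCOUNT_CHANGED is excluded.
        return ast.literal_eval(extra_data)
```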
This chunks the backfilling migration to reduce potential block time.
The migration for zilencer is mostly similar to the one for zerver,
except that the backfill helper is added in a wrapper and unrelated
events are removed.
**Logging and error recovery**
We print out a warning when the extra_data_json field of an entry
would have been overwritten by a value inconsistent with what we derived
from extra_data. Usually this only happens when the extra_data was
corrupted before this migration. This prevents data loss by backing up
possibly corrupted data in extra_data_json with the keys
"inconsistent_old_extra_data" and "inconsistent_old_extra_data_json".
More roundtrips to the database are needed for inconsistent data, which are
expected to be infrequent.
This also outputs messages when there are audit log entries with decimals,
indicating that such entries are not backfilled. Do note that audit log
entries with decimals are not populated with "inconsistent_old_extra_data_*"
in the JSONField, because they are not overwritten.
For audit log entries with "extra_data_json" marked as inconsistent,
we skip them in the migration: once we have discovered anomalies in a
previous run, there is no need to overwrite them again and nest the
extra keys we added.
**Testing**
We create a migration test case utilizing the property of bulk_create
that it doesn't call our modified save method.
We extend ZulipTestCase to support verifying console output at the test
case level. The implementation is crude but the use case should be rare
enough that we don't need it to be too elaborate.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This tracks user group membership changes when the realm is first set
up, whether through an import or not. This happens when we add users
to the system user groups by their roles.
For an imported realm, we do extra handling when the data doesn't include
user groups. This gets audited as well.
Django seems to have an aggressive check on the type of a field when
setting it through a relation, requiring the argument to be a
UserGroup in our case.
Reference:
02966a30dd/django/db/models/base.py (L537-L546)
The MissedMessage queue worker is the single callsite of
`handle_missedmessage_emails`, which immediately transforms the list
of events into a dict keyed by message-id.
Skip the intermediate list step, and use defaultdict and a dataclass
to simplify and make explicit the pieces. This removes the unused
user_profile_id and message_id pieces of the data structure.
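A sketch of the reshaped structure (the class and field names are assumptions based on this description):

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class MissedMessageEvent:
    # Explicit fields instead of an opaque event dict; the unused
    # user_profile_id/message_id members are simply gone.
    trigger: str
    mentioned_user_group_id: Optional[int] = None

# Keyed by message id from the start; no intermediate list of events.
events_by_message_id: Dict[int, List[MissedMessageEvent]] = defaultdict(list)
```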
This commit adds a boolean field `mentions_topic_wildcard`
to the `MessageRenderingResult` dataclass.
The field is set to true only if message rendering determines
the message has an actual topic wildcard mention in it (and not,
e.g., topic wildcard mention syntax inside a code block).
The rendered content for topic wildcard mention is
'<span class="topic-mention">{wildcard}</span>'.
The 'topic-mention' class is the identifier for the wildcard
mention being a topic wildcard mention.
We don't use 'data-user-id="*"' and "user-mention" class for
topic wildcard mentions and eventually plan to remove them for
stream wildcard mentions too in a separate mini-project.
This commit adds the 'topic_wildcard_mention_user_ids' and
'topic_wildcard_mention_in_followed_topic_user_ids'
attributes to the 'RecipientInfoResult' dataclass.
Only topic participants are notified of @topic mentions.
Topic participants are anyone who sent a message to a topic
or reacted to a message on the topic.
'topic_wildcard_mention_in_followed_topic_user_ids' stores the
ids of the topic participants who follow the topic and have
enabled the wildcard mention notifications for followed topics.
'topic_wildcard_mention_user_ids' stores the ids of the topic
participants for whom 'user_allows_notifications_in_StreamTopic'
with setting 'wildcard_mentions_notify' returns True.
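A hedged sketch of how the participant set might be computed (Zulip stores topic names in Message.subject; the helper name and exact filters are assumptions, not the actual implementation):

```python
from zerver.models import Message, Reaction  # import paths per the Zulip codebase

def topic_participant_ids(recipient_id: int, topic_name: str) -> set[int]:
    messages = Message.objects.filter(
        recipient_id=recipient_id, subject__iexact=topic_name
    )
    # Anyone who sent a message to the topic...
    sender_ids = set(messages.values_list("sender_id", flat=True))
    # ...or reacted to a message on the topic.
    reactor_ids = set(
        Reaction.objects.filter(message__in=messages).values_list(
            "user_profile_id", flat=True
        )
    )
    return sender_ids | reactor_ids
```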
This commit adds a 'has_topic_wildcards' instance variable
to the 'MentionData' class for detecting possible topic
wildcard mentions.
Fixes part of #22829.
Co-authored-by: Prakhar Pratyush <prakhar841301@gmail.com>
Co-authored-by: orientor <aditya.verma@students.iiit.ac.in>
This includes changing the URL to #settings/preferences, with a
transparent redirect so that existing links, like the one from Welcome
Bot, continue to work.
Pass the HttpRequest explicitly through the two webhooks that log to
the webhook loggers.
get_current_request is now unused, so remove it (in the same commit
for test coverage reasons).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The initial followup_day1 email confirms that the new user account
has been successfully created and should be sent to the user
independently of an organization's setting for send_welcome_emails.
Here we separate out the followup_day1 email into a separate function
from enqueue_welcome_emails and create a helper function for setting
the shared welcome email sender information.
The followup_day1 email is still a scheduled email so that the initial
account creation and log-in process for the user remains unchanged.
Fixes #25268.
Because the third party might not be expecting a 400 from our
webhooks, we now instead use a 200 status code for unknown events,
while sending the error to Sentry. Because it is no longer an error
response, the response type should now be "success".
Fixes #24721.
This commit removes "@" from name of role-based system groups
since we have added a restricion on having user group names
starting with "@" in the previous commit as they look odd in
mention syntax.
We also add a migration in this commit to update the names of
role-based system groups in existing realms to remove "@"
from the name. This migration also updates the names of
non-system user groups by removing the invalid prefixes
from their names; if there is already a group with that
name, we instead name the group "group:{group_id}".
Fixes #26148.
We do not allow user group names to start with "@", "role:",
"user:", "stream:" and "channel:".
Group names starting with "@" look odd in mentions and
"role:", "user:" and "stream:" prefixes are reserved for
system groups which will be used in the new groups-based
permission model. We do not allow the "channel:" prefix for
now, just to be safe, in case we use it instead of the
"stream:" prefix for stream-based groups in the future.
Fixes part of #26148.
Previously we had a database-level restriction on the length of
user group names. Now we add the same restriction at the API
level as well, so we can return a better error response.
We remove the cache functionality for the
get_realm_stream function, and we also change it to
return a thin Stream object (instead of calling
select_related with no arguments).
The main goal here is to remove code complexity, as we
have been prone to at least one caching validation bug
related to how Realm and UserGroup interact. That
particular bug was more theoretical than practical in
terms of its impact, to be clear.
Even if we were to be perfectly disciplined about only
caching thin stream objects and always making sure to
delete cache entries when stream data changed, we would
still be prone to ugly situations like having
transactions get rolled back before we delete the cache
entry. do_deactivate_stream is a perfect example of
where we have to consider the best time to unset the
cache. If you unset it too early, then you are prone to
races where somebody else churns the cache right before
you update the database. If you set it too late, then
you can have an invalid entry after a rollback or
deadlock situation. If you just eliminate the cache as
a moving part, that whole debate is moot.
As the lack of test changes here indicates, we rarely
fetch streams by name any more in critical sections of
our code.
The one place where we fetch by name is in loading the
home page, but that is **only** when you specify a
stream name. And, of course, that only causes about an
extra millisecond of time.
We want to avoid Django going back to the database to
get a realm object that the caller already has.
It's actually currently the case that we often
pre-fetch realm objects when we get stream objects
using get_stream (using a call to select_related() with
no arguments), but that is an expensive operation that
we want to avoid going forward.
This commit prepares us to just fetch slim objects.
This adds audit log entries when any group-based setting of a user
group is updated. We store both the old and new values in extra_data,
along with the name of that setting. Entries populated during user
group creation are hardcoded to track "can_mention_group".
Potentially we can adjust "set_defaults_for_group_settings" so that it
populates realm audit logs with it, but that is out of scope for this change.
We use an atomic transaction so that the audit logs are committed
together with the updates.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This is mostly the same as tracking subgroup changes, except that now
modified_user_group is the subgroup.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
It's worth noting that instead of adding another field to the
RealmAuditLog model, we store the modified subgroup ids in extra_data as
a JSON encoded dict with the key "subgroup_ids". We don't create audit
log entries for supergroup changes at this point.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This also adds audit log entries during user creation and role
changes, because we modify system group memberships there.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We also create RealmAuditLog entries for the initial memberships that
get added along with the creation of a UserGroup. System user groups are
not created with members so no audit logs are populated for that.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Basically, I eliminate the use of select_related() (with no
arguments) in a query that still makes a single round trip. We have
good test enforcement that Django never needs to lazily fetch
objects off the Stream object. (It used to be common to fetch
stream.realm a while back, but we upgraded bulk_add_subscription,
in particular, a while back.)
At least as measured by test_events.py, which has over 1000
calls to fetch initial data for page loads, this should
be about a 10% improvement in how much time the server
spends fetching data.
We mostly avoid a select_related() query that did this nastiness:
INNER JOIN "zerver_realm" ON ("zerver_stream"."realm_id" = "zerver_realm"."id")
INNER JOIN "zerver_usergroup" ON ("zerver_stream"."can_remove_subscribers_group_id" = "zerver_usergroup"."id")
INNER JOIN "zerver_realm" T4 ON ("zerver_usergroup"."realm_id" = T4."id")
INNER JOIN "zerver_usergroup" T5 ON ("zerver_usergroup"."can_mention_group_id" = T5."id")
INNER JOIN "zerver_realm" T6 ON (T5."realm_id" = T6."id")
INNER JOIN "zerver_usergroup" T7 ON (T5."can_mention_group_id" = T7."id")
INNER JOIN "zerver_realm" T8 ON (T7."realm_id" = T8."id")
INNER JOIN "zerver_usergroup" T9 ON (T7."can_mention_group_id" = T9."id")
INNER JOIN "zerver_realm" T10 ON (T9."realm_id" = T10."id")
INNER JOIN "zerver_usergroup" T11 ON (T9."can_mention_group_id" = T11."id")
WHERE "zerver_stream"."id" IN (SELECT U0."stream_id" FROM "zerver_defaultstream" U0 WHERE U0."realm_id" = 2
Future commits will address the codepath for creating users.
I created zerver/lib/default_streams.py, so that various
views and events.py don't have to awkwardly reach into
an "actions" file.
I copied over two functions verbatim from actions/default_streams.py:
get_default_streams_for_realm
streams_to_dicts_sorted
The latter only remains as an internal detail in the new library.
I also created two new helpers:
get_default_stream_ids_for_realm:
This is both faster and easier to use in all the places
where we only need to get a set of default stream ids.
get_default_streams_for_realm_as_dicts:
This just wraps the prior calls to
streams_to_dicts_sorted(get_default_streams_for_realm(...)),
and it doesn't yet address the slowness of the underlying
code.
All the "real" code should be functionally the same.
In a few tests I now use this wrapper instead of
calling get_default_streams_for_realm, just to get
slightly deeper coverage.
Updates find_proper_insertion_index to check whether the inline image
classes match at least one of the classes in the element's
attrib["class"], so that cases where an inline preview image has
multiple classes, like YouTube video previews, will have the
correct insertion index.
Fixes #26186.
By relocating helper methods into a mixin class, we can be more
flexible with managing transactions in test cases, without forcing
the django.test.TestCase behavior of always putting the test case
into an atomic transaction.
We include a check for side effects in ZulipTransactionTestCase. It only
checks for the set of row ids in all tables before and after each test.
It is not a comprehensive check for side effects, but should be
sufficient for the basics without much performance overhead.
This prep commit replaces the 'wildcard' keyword in the codebase
with 'stream_wildcard' at some places for better readability, as
we plan to introduce 'topic_wildcards' as a part of the
'@topic mention' project.
Currently, 'wildcards = ["all", "everyone", "stream"]', which are
aliases to mention everyone in the stream, hence better renamed as
'stream_wildcards'.
Eventually, we will have:
'stream_wildcard' as an alias to mention everyone in the stream.
'topic_wildcard' as an alias to mention everyone in the topic.
'wildcard' refers to 'stream_wildcard' and 'topic_wildcard' as a whole.
The 'get_gcm_alert' and 'get_apns_alert_subtitle' functions
don't include the case when the trigger is
'NotificationTriggers.FOLLOWED_TOPIC_WILDCARD_MENTION'.
This commit updates the functions to include
'NotificationTriggers.FOLLOWED_TOPIC_WILDCARD_MENTION'.
This commit fixes the incorrect calculation of the
'senders' list.
The effect of 'followed_topic_wildcard_mention'
wasn't considered earlier.
The bug was introduced in b052c8980e.
This commit uses 'NotificationTriggers' class attributes
instead of directly using loose strings.
This should have been ideally included in the commit
c3319a5231.
This commit adds code to include the can_mention_group_id field in
UserGroup objects passed with the response of various endpoints,
including the "/register" endpoint, and also in the group object
sent with the user group creation event.
Fixes a part of #25927.
This commit adds backend code to check whether the user has
permission to mention a group while sending a message, as per the
can_mention_group setting of the group.
Fixes a part of #25927.
We now upstream the conversion of legacy tuples
into the callers of do_events_register. For the
codepath that builds the home view, this allows
for cleaner code in the caller. For the /register
endpoint, we have to do the conversion, but that
isn't super ugly, as that's an appropriate place
to deal with legacy formats and clean them up.
We do have to have do_events_register downgrade
the format back to tuples to pass them into
request_event_queue, because I don't want to
change any serialization formats. The conversion
is quite simple, and it has test coverage.
This is a first step toward two goals:
* support dictionary-like narrows when registering events
* use readable dataclasses internally
This is gonna be a somewhat complicated exercise due to how
events get serialized, but fortunately this interim step
doesn't require any serious shims, so it improves the codebase
even if the long-term goals may take a while to get sorted
out.
The two places where we have to use a helper to convert narrows
from tuples to dataclasses will eventually rely on their callers
to do the conversion, but I don't want to re-work the entire
codepath yet.
Note that the new NarrowTerm dataclass makes it more explicit
that the internal functions currently either don't care about
negated flags or downright don't support them. This way mypy
protects us from assuming that we can just add negated support
at the outer edges.
OTOH I do make a tiny effort here to slightly restructure
narrow_filter in a way that paves the way for negation support.
The bigger goal by far, though, is to at least support the
dictionary format.
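Roughly the shape of the new dataclass per this description (the exact definition, including frozen-ness, is an assumption):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NarrowTerm:
    # e.g. operator="stream", operand="design"; there is deliberately
    # no `negated` field yet, so mypy stops callers from assuming
    # negation support exists.
    operator: str
    operand: str
```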
We no longer pass in a big opaque event to narrow_filter
(which is inside build_narrow_filter). We instead explicitly
pass in message and flags. This leads to a bit more type
safety, and it's also more flexible. There's no reason to
build an entire event just to see if a message belongs to
a narrow.
The changes to the test work around the fact that the fixtures
are sloppy with types. I plan a subsequent commit to clean
up those tests significantly.
Subsequent commits will add "on_delete=models.RESTRICT"
relationships, which will result in the AlertWord
objects being deleted after Realm has been deleted from
the database.
In order to handle this, we update realm_alert_words_cache_key,
realm_alert_words_automaton_cache_key, and flush_realm_alert_words
functions to accept realm_id as parameter instead of realm
object, so that the code for flushing the cache works even
after the realm is deleted. This change is fine because
eventually only realm_id is used by these functions, and there
is no need for the complete realm object.
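A sketch of the id-based key (the key format shown is an assumption):

```python
def realm_alert_words_cache_key(realm_id: int) -> str:
    # Deriving the key from realm_id alone means cache flushing still
    # works after the Realm row itself is gone.
    return f"realm_alert_words:{realm_id}"
```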
Subsequent commits will add "on_delete=models.RESTRICT"
relationships, which will result in the Attachment
objects being deleted after Realm has been deleted from
the database.
In order to handle this, we update
get_realm_used_upload_space_cache_key function to accept
realm_id as parameter instead of realm object, so that
the code for flushing the cache works even after the
realm is deleted. This change is fine because eventually
only realm_id is used by this function, and there is no
need for the complete realm object.
Subsequent commits will add "on_delete=models.RESTRICT"
relationships, which will result in the UserProfile
objects being deleted after Realm has been deleted from
the database.
In order to handle this, we update bot_dicts_in_realm_cache_key
function to accept realm_id as parameter instead of realm
object, so that the code for flushing the cache works even
after the realm is deleted. This change is fine because
eventually only realm_id is used by this function, and there is
no need for the complete realm object.
Make the import of `Realm`, `Stream` and `UserGroup` objects be
done in a single transaction, to make the import process in general
more atomic.
This also removes the need to temporarily unset the Stream references
on the Realm object. Since Django creates foreign key constraints
with `DEFERRABLE INITIALLY DEFERRED`, an insertion of a Realm row can
reference not-yet-existing Stream rows as long as the row is created
before the transaction commits.
Discussion - https://chat.zulip.org/#narrow/stream/101-design/topic/New.20permissions.20model/near/1585274.
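A sketch of the shape of this change (the importer callables are hypothetical parameters, not the actual import functions):

```python
from django.db import transaction

def import_core_objects(import_realm, import_streams, import_user_groups, data):
    # Hypothetical wrapper: the three importers run in one transaction.
    with transaction.atomic():
        # FK constraints are DEFERRABLE INITIALLY DEFERRED, so the
        # Realm row may reference Stream rows inserted later in this
        # same transaction; the checks only run at COMMIT time.
        realm = import_realm(data)
        import_streams(data, realm)
        import_user_groups(data, realm)
```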
This commit adds a default_group_name field to the
GroupPermissionSetting type, which will be used to store the name of
the default group for that setting, which would in most cases be one
of the role-based system groups. This will be helpful when we have
multiple settings and need to set the defaults while creating realms
and streams.
For tests that use the dev server, like test-api and
test-js-with-puppeteer, we don't have consumers for the queues. As
they eventually time out, we get unnecessary error messages. This
adds a new flag, disable_timeout, to disable this behavior for the
test cases.
The `tabbed_instructions` widget used for both language toggles in our
API documentation and app toggles in our Help Center documentation
misleadingly calls the identifier for the tab `language` in local
variables and its interface.
- Renames local variables `language` -> `tab_key`.
- Renames HTML data attributes `data-language` -> `data-tab-key`.
Fixes #24669.
This is primarily to prevent impersonation, such as `zulipteam`. We
only enable these protections for CORPORATE_ENABLED, since `zulip` is
a reasonable test name for self-hosters.
9d97af6ebb addressed the one major source of inconsistent data which
would be solved by simply re-attempting the ScheduledEmail row. Every
other instance that we have seen since then has been a corrupt or
modified database in some way, which does not self-resolve. This
results in an endless stream of emails to the administrator, and no
forward progress.
Drop this to a warning, and make it remove the offending row. This
ensures we make forward progress.
This commit makes it possible for users to control the wildcard
mention notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting
through the UI yet.
This commit makes it possible for users to control
the push notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting
through the UI yet.
This commit makes it possible for users to control
the email notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting
through the UI yet.
Add five new fields to the UserBaseSettings class for
the "followed topic notifications" feature, similar to
stream notifications, but this commit includes only the
implementation of email notifications.
Failing to remove all of the rules which were added causes action at a
distance with other tests. The two methods were also only used by
test code, making their existence in zerver.lib.rate_limiter clearly
misplaced.
This fixes one instance of a mis-balanced add/remove, which caused
tests to start failing if run non-parallel and one more anonymous
request was added within a rate-limit-enabled block.
The user group dependency graph should always be a DAG.
This commit adds code to make sure we keep the graph a DAG
while adding subgroups to a user group.
Fixes #25913.
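A sketch of the kind of check involved (the names and traversal are illustrative, not the actual implementation):

```python
from typing import Callable, Iterable

def adding_edges_keeps_dag(
    supergroup_id: int,
    new_subgroup_ids: Iterable[int],
    direct_subgroup_ids: Callable[[int], Iterable[int]],
) -> bool:
    # Adding supergroup -> subgroup edges creates a cycle exactly when
    # the supergroup is already reachable from one of the new subgroups.
    stack = list(new_subgroup_ids)
    seen: set[int] = set()
    while stack:
        group_id = stack.pop()
        if group_id == supergroup_id:
            return False
        if group_id in seen:
            continue
        seen.add(group_id)
        stack.extend(direct_subgroup_ids(group_id))
    return True
```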
We want to make sure that the system groups, once created, will always
have the GroupGroupMemberships fully set up.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Note that we use the DjangoJSONEncoder so that we have builtin
support for serializing Decimal and datetime.
During this intermediate state, the migration that creates
extra_data_json field has been run. We prepare for running the backfilling
migration that populates extra_data_json from extra_data.
This change implements double-write, which is important to keep the
state of extra data consistent. For most extra_data usage, this is
handled by the overridden `save` method on `AbstractRealmAuditLog`,
where we generate extra_data_json using either orjson.loads or
ast.literal_eval.
While backfilling ensures that old realm audit log entries have
extra_data_json populated, double-write ensures that any new entries
generated will also have extra_data_json set, so that we can then
safely rename extra_data_json to extra_data while ensuring the
non-nullable invariant.
For completeness, we additionally set RealmAuditLog.NEW_VALUE for
the USER_FULL_NAME_CHANGED event. This cannot be handled with the
overridden `save`.
This addresses: https://github.com/zulip/zulip/pull/23116#discussion_r1040277795
Note that extra_data_json at this point is not used yet. So the test
cases do not need to switch to testing extra_data_json. This is later
done after we rename extra_data_json to extra_data.
Double-write for the remote server audit logs is special, because we only
get the dumped bytes from an external source. Luckily, none of the
payloads carry extra_data that is not generated using orjson.dumps for
audit logs of event types in SYNC_BILLING_EVENTS. This can be verified
by looking at:
`git grep -A 6 -E "event_type=.*(USER_CREATED|USER_ACTIVATED|USER_DEACTIVATED|USER_REACTIVATED|USER_ROLE_CHANGED|REALM_DEACTIVATED|REALM_REACTIVATED)"`
Therefore, we just need to populate extra_data_json by doing an
orjson.loads call after a None-check.
Co-authored-by: Zixuan James Li <p359101898@gmail.com>
This in-progress feature was started in 2018 and hasn't
been worked on much since. It's already in a broken state,
which makes it hard to iterate on the existing search bar
since it's hard to know how those changes will affect search
pills.
We do still want to add search pills eventually, and when
we work on that, we can refer to this diff to re-add the
changes.
An implicit coercion from an untyped dict to the TypedDict was hiding
a type error: CapturedQuery.sql was really str, not bytes. We should
always prefer dataclass over TypedDict to prevent such errors.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
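A sketch of the dataclass alternative (the fields shown are assumptions based on this description):

```python
from dataclasses import dataclass

@dataclass
class CapturedQuery:
    # Unlike a TypedDict, constructing a dataclass is type-checked at
    # every call site, so a str-vs-bytes mix-up in `sql` cannot hide
    # behind an untyped dict coercion.
    sql: str
    time: float
```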
Twitter removed their v1 API. We take care to keep the existing cached
results around for now, and to not poison that cache, since we might
be able to replace this with something that can still use the existing
cache.
This commit removes realm_community_topic_editing_limit_seconds
field from register response since topic edit limit is now
controlled by move_messages_within_streams_limit_seconds
setting.
We also remove DEFAULT_COMMUNITY_TOPIC_EDITING_LIMIT_SECONDS
constant since it is no longer used.
This prevents `get_user_profile_by_api_key` from doing a sequential
scan.
Doing this requires moving the generation of initial api_key values
into the column definition, so that even bare calls to
`UserProfile.objects.create` (e.g. from tests) appropriately
generate a random initial value.
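A sketch of moving the default into the column definition (the generator shown is illustrative; the real one may differ):

```python
import secrets

from django.db import models

def generate_api_key() -> str:
    # Illustrative generator; the real implementation may differ.
    return secrets.token_hex(16)

class UserProfile(models.Model):
    # A callable default means even bare UserProfile.objects.create()
    # calls get a fresh random value rather than a shared sentinel.
    api_key = models.CharField(max_length=32, default=generate_api_key)
```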
Deleting a message can race with sending a push notification for it.
b47535d8bb handled the case where the Message row has gone away --
but in such cases, it is also possible for `access_message` to
succeed, but for the save of `user_message.flags` to fail, because the
UserMessage row has been deleted by then.
Take a lock on the Message row over the accesses of, and updates to,
the relevant UserMessage row. This guarantees that the
message's (non-)existence is consistent across that transaction.
Partial fix for #16502.
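A hedged sketch of the locking pattern (model names per Zulip; the function name and the specific flag update are illustrative):

```python
from django.db import transaction

from zerver.models import Message, UserMessage  # import paths per the Zulip codebase

def mark_push_notification_handled(message_id: int, user_profile_id: int) -> None:
    with transaction.atomic():
        # Lock the Message row: a concurrent delete must now happen
        # entirely before or entirely after this transaction, so the
        # message's (non-)existence is consistent within it.
        message = Message.objects.select_for_update().get(id=message_id)
        user_message = UserMessage.objects.get(
            user_profile_id=user_profile_id, message_id=message.id
        )
        user_message.flags.active_mobile_push_notification = False  # illustrative
        user_message.save(update_fields=["flags"])
```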
We now set tos_version to "-1" for imported users and the ones
created using the API or other methods like LDAP, SCIM, and
management commands. This value will help us allow users to
change the email address visibility setting during first login.
With the private messages -> direct messages migration, we should
rename the "Starting a new private thread" help center article.
- Renames article to "Starting a new direct message"
- Updates relevant section in /help/getting-started-with-zulip
- Fixes typo in /help/send-group-dm
- Updates file names and adds URL redirect.
Fixes #25506.
Backfill subscription realm audit log SUBSCRIPTION_CREATED events for
users who are currently subscribed but don't have any subscription
events, presumably due to some historical bug. This is important
because those rows are necessary when reactivating a user who is
currently soft-deactivated.
For each stream, we find the subscribed users who have no
subscription-related realm audit log entries, and create a
`backfill=True` subscription audit log entry which is the latest it
could have been, based on UserMessage rows. We then optionally insert
a `DEACTIVATION` if the current subscription is not active.
We were missing a few checks for raw_unread_msgs being present before
trying to parse and update it.
The test only covers 2/3 of the cases, but I wasn't convinced it was
worth adding another test just for the corner case of removing a
message flag; this seems fairly unlikely to regress.
Previously, it seemed possible for the scheduled messages API to try
to send infinite copies of a message if we had the very poor luck of a
persistent failure happening after a message was sent.
The failure_message field supports being able to display what happened
in the scheduled messages modal, though that's not exposed to the API
yet.
The comment was outdated; currently we import UserProfiles before
realm_tables, because some models in realm_tables have a dependency
on UserProfile.
It also makes sense to elaborate a bit more in the comment that it's
just an outline of the ordering, not an exhaustive list.
Fixes #25414.
We add an Attachment.scheduled_messages relation to track
ScheduledMessages that reference the attachment.
The import bits can be done after merging this, by updating #25345.
Because education organizations and users have slightly specialized
use cases, we update the Welcome Bot message content sent to new
users and new organization owners for these types of organizations
to link to help center articles/guides geared toward these users
and organizations.
Also, updates the demo organization warning to only go to the new
demo organization owner, because the 30-day deletion text is only
definitely accurate when the organization is created.
Fixes #21694.
Updates the objects in the API for scheduled messages so that those
for stream messages return the `to` property as an integer since it
is always the unique stream ID and so that those for direct messages
do not have a `topic` property since direct messages never have a
topic.
Also makes a small update so that the web app scheduled messages
overlay has the correct stream ID.