The initial followup_day1 email confirms that the new user account
has been successfully created, and should be sent to the user
regardless of the organization's send_welcome_emails setting.
Here we split the followup_day1 email out of enqueue_welcome_emails
into its own function, and create a helper function for setting
the shared welcome email sender information.
The followup_day1 email is still a scheduled email so that the initial
account creation and log-in process for the user remains unchanged.
Fixes #25268.
The followup_day2 email is scheduled with a delay as a welcome email
and is therefore more likely to exist as a scheduled email in these
deactivation cases.
Updates the comment to not include the number of emails generated, so
that it doesn't need to be updated every time a new email is added.
The current count in the comment is already out of date.
Because the third party might not be expecting a 400 from our
webhooks, we now use a 200 status code for unknown events, while
reporting the error to Sentry. Because it is no longer an error
response, the response type should now be "success".
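A rough sketch of the new handling, with hypothetical handler and
helper names (json_success is assumed to be the standard success
response helper):

    import logging

    def handle_webhook_event(request, event_type, payload):
        handler = EVENT_HANDLERS.get(event_type)  # EVENT_HANDLERS is assumed
        if handler is None:
            # Reported to Sentry via the logging integration, but the
            # third party still sees a healthy 200 response.
            logging.error("Unsupported webhook event type: %s", event_type)
            return json_success(request)
        handler(payload)
        return json_success(request)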
Fixes #24721.
This commit removes "@" from the names of role-based system groups,
since the previous commit added a restriction on user group names
starting with "@", as such names look odd in mention syntax.
We also add a migration in this commit to update the names of
role-based system groups in existing realms to remove "@"
from the name. This migration also updates the names of
non-system user groups by removing the invalid prefixes
from their names; if there is already a group with the
resulting name, we instead name the group "group:{group_id}".
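A sketch of the migration's collision handling; remove_invalid_prefixes
is a hypothetical helper name:

    new_name = remove_invalid_prefixes(group.name)  # e.g. strips a leading "@"
    if UserGroup.objects.filter(realm_id=group.realm_id, name=new_name).exists():
        # Fall back to a guaranteed-unique name.
        new_name = f"group:{group.id}"
    group.name = new_name
    group.save(update_fields=["name"])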
Fixes #26148.
We do not allow user group names to start with "@", "role:",
"user:", "stream:" or "channel:".
Group names starting with "@" look odd in mentions, and the
"role:", "user:" and "stream:" prefixes are reserved for
system groups which will be used in the new groups-based
permission model. We also disallow the "channel:" prefix for
now, just to be safe in case we use it instead of the
"stream:" prefix for stream-based groups in the future.
Fixes part of #26148.
Previously we had a database-level restriction on the length of
user group names. Now we add the same restriction at the API
level as well, so we can return a better error response.
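A sketch of the API-level check, assuming a MAX_NAME_LENGTH constant
that mirrors the database column limit:

    if len(name) > UserGroup.MAX_NAME_LENGTH:
        raise JsonableError(
            _("User group name cannot exceed {} characters.").format(
                UserGroup.MAX_NAME_LENGTH
            )
        )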
We remove the cache functionality for the
get_realm_stream function, and we also change it to
return a thin Stream object (instead of calling
select_related with no arguments).
The main goal here is to remove code complexity, as we
have been prone to at least one caching validation bug
related to how Realm and UserGroup interact. That
particular bug was more theoretical than practical in
terms of its impact, to be clear.
Even if we were to be perfectly disciplined about only
caching thin stream objects and always making sure to
delete cache entries when stream data changed, we would
still be prone to ugly situations like having
transactions get rolled back before we delete the cache
entry. do_deactivate_stream is a perfect example
where we have to consider the best time to unset the
cache. If you unset it too early, then you are prone to
races where somebody else churns the cache right before
you update the database. If you set it too late, then
you can have an invalid entry after a rollback or
deadlock situation. If you just eliminate the cache as
a moving part, that whole debate is moot.
As the lack of test changes here indicates, we rarely
fetch streams by name any more in critical sections of
our code.
The one place where we fetch by name is in loading the
home page, but that is **only** when you specify a
stream name. And, of course, that only adds about an
extra millisecond of latency.
This changes bulk_get_streams so that it just uses the
database all the time. Also, we avoid calling
select_related(), so that we just get back thin and
tidy Stream objects with simple queries.
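A sketch of the simplified query; the exact signature and the
case-sensitivity details are glossed over here:

    def bulk_get_streams(realm: Realm, stream_names: set[str]) -> dict[str, Stream]:
        # No cache and no select_related(): one simple query that
        # returns thin Stream objects.
        streams = Stream.objects.filter(realm=realm, name__in=stream_names)
        return {stream.name: stream for stream in streams}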
About not caching any more:
It's actually pretty rare that we fetch streams by name
in the main application. It's usually API requests that
send in stream names to find more info about streams.
It also turns out that for large queries (>= ~30 rows
for my measurements) it's more efficient to hit the
database than memcached. The database is super fast at
scale; it's just the startup cost of having Django
construct the query, and then having the database do
query planning or whatever, that slows us down. I don't
know the exact bottleneck, but you can clearly measure
that one-row queries are slow (on the order of a full
millisecond or so) but the marginal cost of additional
rows is minimal assuming you have a decent index (20
microseconds per row on my droplet).
All the query-count changes in the tests revolve around
unsubscribing somebody from a stream, and that's a
particularly odd use case for bulk_get_streams, since
you generally unsubscribe from a single stream at a
time. If there are some use cases where you do want to
unsubscribe from multiple streams, we should move
toward passing in stream ids, at least from the
application. And even if we don't do that, our cost for
most queries is a couple milliseconds.
We want to avoid Django going back to the database to
get a realm object that the caller already has.
It's actually currently the case that we often
pre-fetch realm objects when we get stream objects
using get_stream (using a call to select_related() with
no arguments), but that is an expensive operation that
we want to avoid going forward.
This commit prepares us to just fetch slim objects.
This commit creates separate events for issue milestoned and
demilestoned notifications. This allows the end-users to choose
whether they want these notifications or not.
Fixes #25793.
This adds audit log entries when any group-based setting of a user group
is updated. We store both the old and new values in extra_data, along
with the name of that setting. Entries populated during user group creation
are hardcoded to track "can_mention_group".
Potentially we can adjust "set_defaults_for_group_settings" so that it
populates realm audit log entries as well, but that is out of scope for
this change.
We use an atomic transaction so that the audit logs are committed
together with the updates.
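A rough sketch of the shape of this change; the event-type constant
and extra_data layout are assumptions:

    from django.db import transaction

    @transaction.atomic
    def do_change_user_group_permission_setting(
        user_group, setting_name, new_setting_group, *, acting_user
    ):
        old_setting_group = getattr(user_group, setting_name)
        setattr(user_group, setting_name, new_setting_group)
        user_group.save(update_fields=[setting_name])
        # Committed atomically with the update above.
        RealmAuditLog.objects.create(
            realm=user_group.realm,
            acting_user=acting_user,
            event_type=RealmAuditLog.USER_GROUP_GROUP_BASED_SETTING_CHANGED,
            event_time=timezone_now(),
            modified_user_group=user_group,
            extra_data={
                "property": setting_name,
                "old_value": old_setting_group.id,
                "new_value": new_setting_group.id,
            },
        )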
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This adds audit log entries when the name or description of a user group
is updated. We store both the old and new values in extra_data. We wrap
the functions inside an atomic transaction so that the audit logs and
the updates are committed together.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This is mostly the same as tracking subgroup changes, except that now
modified_user_group is the subgroup.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
It's worth noting that instead of adding another field to the
RealmAuditLog model, we store the modified subgroup ids in extra_data as
a JSON encoded dict with the key "subgroup_ids". We don't create audit
log entries for supergroup changes at this point.
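Sketch of the extra_data encoding; the event-type name is an
assumption:

    import orjson

    RealmAuditLog.objects.create(
        realm=supergroup.realm,
        modified_user_group=supergroup,
        event_type=RealmAuditLog.USER_GROUP_DIRECT_SUBGROUP_MEMBERSHIP_ADDED,
        event_time=timezone_now(),
        # A JSON-encoded dict with the key "subgroup_ids", as described above.
        extra_data=orjson.dumps(
            {"subgroup_ids": [group.id for group in subgroups]}
        ).decode(),
    )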
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This also adds audit log entries during user creation and role changes,
because we modify system group memberships there.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We also create RealmAuditLog entries for the initial memberships that
get added along with the creation of a UserGroup. System user groups are
not created with members, so no audit log entries are populated for those.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This helps reduce the impact on busy uwsgi processes in case there are
slow timeout failures of Sentry servers. The p99 is less than 300ms,
and p99.9 per day peaks at around 1s, so this will not affect more
than 0.1% of requests in normal operation.
This is not a complete solution (see #26229); it is merely stop-gap
mitigation.
Various cleanups:
* clean up comments
* improve names for constants and variables
* express first ORM query as a single statement
* use set differences to simplify logic
* avoid all the reversing churn
* avoid early-exit idiom since this function is so small
Note that it's plausible that we should just combine the two
queries and let the database exclude the already-used ids,
but that felt a little risky for now. As I mentioned on
Zulip, I think the one-week window has dubious value, but
I am biased by having wasted time chasing down a test
flake related to the time window.
Basically, I eliminate the use of select_related() in a query
that still makes a single round trip. We have good test
enforcement that Django never needs to lazily fetch
objects off the Stream object. (It used to be common
to fetch stream.realm, but we upgraded
bulk_add_subscription, in particular, a while back.)
We extract code from process_new_human_user with
no modifications.
This has all the best outcomes of extracting a function:
* better profile info
* easier to test for query counts (signup gets real noisy)
* simplifies a long, messy function
It has no real drawbacks, since the helper function doesn't need
to pass back any intermediate state to the parent for the rest
of what the parent does.
When you profile test_signup and test_invite, with a decent
sample size, the set_up_streams_for_new_human_user function
does about 20% of the work for process_new_human_user, which
is a lot considering that most tests don't create a ton of
pre-registered or default streams.
At least as measured by test_events.py, which has over 1000
calls to fetch initial data for page loads, this should
be about a 10% improvement in how much time the server
spends fetching data.
We mostly avoid a select_related() query that did this nastiness:
INNER JOIN "zerver_realm" ON ("zerver_stream"."realm_id" = "zerver_realm"."id")
INNER JOIN "zerver_usergroup" ON ("zerver_stream"."can_remove_subscribers_group_id" = "zerver_usergroup"."id")
INNER JOIN "zerver_realm" T4 ON ("zerver_usergroup"."realm_id" = T4."id")
INNER JOIN "zerver_usergroup" T5 ON ("zerver_usergroup"."can_mention_group_id" = T5."id")
INNER JOIN "zerver_realm" T6 ON (T5."realm_id" = T6."id")
INNER JOIN "zerver_usergroup" T7 ON (T5."can_mention_group_id" = T7."id")
INNER JOIN "zerver_realm" T8 ON (T7."realm_id" = T8."id")
INNER JOIN "zerver_usergroup" T9 ON (T7."can_mention_group_id" = T9."id")
INNER JOIN "zerver_realm" T10 ON (T9."realm_id" = T10."id")
INNER JOIN "zerver_usergroup" T11 ON (T9."can_mention_group_id" = T11."id")
WHERE "zerver_stream"."id" IN (SELECT U0."stream_id" FROM "zerver_defaultstream" U0 WHERE U0."realm_id" = 2
Future commits will address the codepath for creating users.
I created zerver/lib/default_streams.py, so that various
views and events.py don't have to awkwardly reach into
an "actions" file.
I copied over two functions verbatim from actions/default_streams.py:
get_default_streams_for_realm
streams_to_dicts_sorted
The latter only remains as an internal detail in the new library.
I also created two new helpers:
get_default_stream_ids_for_realm:
This is both faster and easier to use in all the places
where we only need a set of default stream ids (see the
sketch after this list).
get_default_streams_for_realm_as_dicts:
This just wraps the prior calls to
streams_to_dicts_sorted(get_default_streams_for_realm(...)),
and it doesn't yet address the slowness of the underlying
code.
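A minimal sketch of the faster helper (the real query may differ):

    def get_default_stream_ids_for_realm(realm_id: int) -> set[int]:
        # values_list avoids constructing full Stream objects entirely.
        return set(
            DefaultStream.objects.filter(realm_id=realm_id).values_list(
                "stream_id", flat=True
            )
        )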
All the "real" code should be functionally the same.
In a few tests I now use this wrapper instead of
calling get_default_streams_for_realm, just to get
slightly deeper coverage.
Updates find_proper_insertion_index to check that the inline image
classes match at least one of the classes in the element's
attrib["class"], so that cases where an inline preview image has
multiple classes, like YouTube video previews, will have the
correct insertion index.
Fixes #26186.
Added an additional test case to `test_submessages.py` for testing the
message object containing `submessages` metadata.
Prior to this commit we were never validating the `submessage` schema
in the `message` objects.
Fixes #25896.
By relocating helper methods into a mixin class, we can be more flexible
with managing transactions in test cases, without always forcing the
django.test.TestCase behavior of putting the test case into an
atomic transaction.
We include a check for side effects in ZulipTransactionTestCase. It only
checks for the set of row ids in all tables before and after each test.
It is not a comprehensive check for side effects, but should be
sufficient for the basics without much performance overhead.
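A rough sketch of that check, with hypothetical helper names:

    from django.apps import apps

    def get_row_ids_in_all_tables() -> dict[str, set[int]]:
        ids = {}
        for model in apps.get_models(include_auto_created=True):
            ids[model._meta.db_table] = set(model.objects.values_list("id", flat=True))
        return ids

    # setUp snapshots the ids; tearDown asserts the snapshot is unchanged.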
It replaces the "File not found." text with:
"This file does not exist or has been deleted."
At present, when a file is deleted, it results in a confusing
experience when looking at the "File not found." message.
To clarify that the situation is not a bug, the message
has been replaced with a better alternative.
Fixes part of Issue #23739.
This prep commit replaces the 'wildcard' keyword in the codebase
with 'stream_wildcard' at some places for better readability, as
we plan to introduce 'topic_wildcards' as a part of the
'@topic mention' project.
Currently, 'wildcards = ["all", "everyone", "stream"]', each of which
is an alias for mentioning everyone in the stream, hence better renamed
as 'stream_wildcards'.
Eventually, we will have:
'stream_wildcard' as an alias to mention everyone in the stream.
'topic_wildcard' as an alias to mention everyone in the topic.
'wildcard' refers to 'stream_wildcard' and 'topic_wildcard' as a whole.
The 'get_gcm_alert' and 'get_apns_alert_subtitle' functions
don't handle the case where the trigger is
'NotificationTriggers.FOLLOWED_TOPIC_WILDCARD_MENTION'.
This commit updates the functions to include
'NotificationTriggers.FOLLOWED_TOPIC_WILDCARD_MENTION'.
The emails sent for missed messages have a text at the bottom
explaining the reason why the email was sent.
This commit reorders the conditional statements in the email
template to align with the trigger priority order defined
in the 'get_email_notification_trigger'.
This commit fixes the incorrect calculation of the
'senders' list.
The effect of 'followed_topic_wildcard_mention'
wasn't considered earlier.
The bug was introduced in b052c8980e.
This commit uses 'NotificationTriggers' class attributes
instead of directly using loose strings.
This should have been ideally included in the commit
c3319a5231.
Combine nginx and Django middleware to stop putting misleading warnings
about `CSRF_TRUSTED_ORIGINS` in the logs when the issue is untrusted
proxies. This attempts to diagnose common proxy misconfigurations in
the error logs and suggest next steps to fix them.
See also #24599 and zulip/docker-zulip#403.
Having exactly 17 or 18 middlewares, on Python 3.11.0 and above,
causes Python to segfault when running tests with coverage; see
https://github.com/python/cpython/issues/106092
Work around this by adding one or two no-op middlewares if we would
hit those unlucky numbers. We only add them in testing, since
coverage is a requirement to trigger it, and there is no reason to
burden production with additional wrapping.
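A sketch of the test-settings workaround; the no-op middleware path is
an assumption:

    while len(MIDDLEWARE) in (17, 18):
        # Pad past the unlucky counts; see the CPython issue above.
        MIDDLEWARE += ("zerver.middleware.NoopMiddleware",)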
It takes about 31ms per page on my box, but 191
help pages adds up quickly. I am not sure how to
optimize this test, but it will be a good litmus
test for a future better markdown processor.
This did not speed up the tests as much as I expected,
but it certainly makes the code easier to read, and
Tim is pretty confident that the zephyr logic is
fairly stable, so it's sufficient to test it on a
subset of representative urls.
dbe930394f changed the
"missing string" from "Log in" to "xyz" for some
unknown reason. The current code makes no sense.
Also, even the original test code here had the common
pitfall of only testing one side of the condition.
Presumably if you are testing that a certain string
is missing in a landing-page scenario, then you also
want to check that it **does** exist in other
scenarios. Otherwise, the flag would have been
named something more generic. Of course, I am mostly
guessing due to lack of comments.
If there is some test logic here that we need to
resurrect, then we should just write a custom test
for the /hello page rather than crufting up
all our helpers.
This removes some confusing default boolean flags, and
it checks both sides of the do-you-want-to-allow-robots
condition, so it's more thorough.
For the two strange exceptions to the normal policy,
I now handle them together in the helper function with
a comment.
I also disentangle the logic to look for og tags from
the robot logic, and this should also lead to more
thorough testing.
The prior name was just strange. This test could really
use a better comment explaining its purpose.
Also, presumably these pages don't always get 404s, so
we should really have the test exercise both conditions.
This makes us correctly run landing page logic where we
didn't before, and, more importantly, lets us skip landing
page logic where we had been erroneously running it.
This speeds up my runs from 35s to 25s.
This commit updates the text on the email confirmation page to
make it clearer what's going on and why the user needs
to check their email.
Fixes #25900.
This commit adds code to include the can_mention_group_id field in
UserGroup objects passed with the responses of various endpoints,
including the "/register" endpoint, and also in the group object
sent with the user group creation event.
Fixes a part of #25927.
This commit adds backend code to check whether a user is allowed
to mention a user group while editing a message, as per the
can_mention_group setting of that group.
Fixes a part of #25927.
This commit adds backend code to check whether a user has permission
to mention a group while sending a message, as per the can_mention_group
setting of the group.
Fixes a part of #25927.
We now upstream the conversion of legacy tuples
into the callers of do_events_register. For the
codepath that builds the home view, this allows
for cleaner code in the caller. For the /register
endpoint, we have to do the conversion, but that
isn't super ugly, as that's an appropriate place
to deal with legacy formats and clean them up.
We do have to have do_events_register downgrade
the format back to tuples to pass them into
request_event_queue, because I don't want to
change any serialization formats. The conversion
is quite simple, and it has test coverage.
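The downgrade itself is a one-liner of this shape (assuming the
NarrowTerm fields described in the commits below):

    legacy_narrow = [[term.operator, term.operand] for term in narrow]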
We eliminate 220 zephyr-related checks that are all fairly
expensive.
On my machine this test went from 46s to 23s.
Note that we still get coverage of the zephyr codepath
from other tests.
(All the same code gets executed here, but in a slightly
different order.)
There is some code duplication between the two new
helper functions, but I didn't make the situation any
worse, and it's slightly non-trivial to consolidate
the logic. Hopefully the long term strategy is to remove
the zephyr checks or at least isolate a single test for
any specific zephyr quirks that we need to maintain.
This is a first step toward two goals:
* support dictionary-like narrows when registering events
* use readable dataclasses internally
This is gonna be a somewhat complicated exercise due to how
events get serialized, but fortunately this interim step
doesn't require any serious shims, so it improves the codebase
even if the long-term goals may take a while to get sorted
out.
The two places where we have to use a helper to convert narrows
from tuples to dataclasses will eventually rely on their callers
to do the conversion, but I don't want to re-work the entire
codepath yet.
Note that the new NarrowTerm dataclass makes it more explicit
that the internal functions currently either don't care about
negated flags or downright don't support them. This way mypy
protects us from assuming that we can just add negated support
at the outer edges.
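A sketch of the dataclass, deliberately without a negated field:

    from dataclasses import dataclass

    @dataclass
    class NarrowTerm:
        operator: str
        operand: str
        # No `negated` field yet; mypy will therefore reject any code
        # that quietly assumes negation support in the inner layers.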
OTOH I do make a tiny effort here to slightly restructure
narrow_filter in a way that paves the way for negation support.
The bigger goal by far, though, is to at least support the
dictionary format.
In 2484d870b4 I created tests
using a fixture called narrow.json. I believe my intention
was to eventually use the fixture for similar tests on the
frontend, but that never happened.
Almost seven years later, I think it's time to just use
straightforward code in Python to test build_narrow_filter.
In particular, we want to move to dataclasses, so that would
create an additional nuisance for fixture-based tests. The
fixture was already annoying in terms of being an extra moving
part, being hard to read, and not being type-safe.
In order to avoid typos, I mostly code-generated the new
Python code by instrumenting the old test:
narrow_filter = build_narrow_filter(narrow)
+ print("###\n")
+ print(f"narrow_filter = build_narrow_filter({narrow})\n")
for e in accept_events:
message = e["message"]
flags = e["flags"]
@@ -610,6 +612,8 @@ class NarrowLibraryTest(ZulipTestCase):
if flags is None:
flags = []
self.assertTrue(narrow_filter(message=message, flags=flags))
+ print(f"self.assertTrue(narrow_filter(message={message}, flags={flags},))")
+ print()
for e in reject_events:
message = e["message"]
flags = e["flags"]
@@ -618,6 +622,8 @@ class NarrowLibraryTest(ZulipTestCase):
if flags is None:
flags = []
self.assertFalse(narrow_filter(message=message, flags=flags))
+ print(f"self.assertFalse(narrow_filter(message={message}, flags={flags},))")
+ print()
I then basically pasted the output in and ran black to format it.
We no longer pass in a big opaque event to narrow_filter
(which is inside build_narrow_filter). We instead explicitly
pass in message and flags. This leads to a bit more type
safety, and it's also more flexible. There's no reason to
build an entire event just to see if a message belongs to
a narrow.
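The new, narrower interface in sketch form (mirroring the calls in the
updated tests):

    narrow_filter = build_narrow_filter(narrow)
    narrow_filter(message=message, flags=flags)  # True if the message matches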
The changes to the test work around the fact that the fixtures
are sloppy with types. I plan a subsequent commit to clean
up those tests significantly.
Subsequent commits will add "on_delete=models.RESTRICT"
relationships, which will result in the AlertWord
objects being deleted after Realm has been deleted from
the database.
In order to handle this, we update realm_alert_words_cache_key,
realm_alert_words_automaton_cache_key, and flush_realm_alert_words
functions to accept realm_id as parameter instead of realm
object, so that the code for flushing the cache works even
after the realm is deleted. This change is fine because
eventually only realm_id is used by these functions and there
is no need for the complete realm object.
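A minimal sketch of the new signature (the key format is an
assumption):

    def realm_alert_words_cache_key(realm_id: int) -> str:
        return f"realm_alert_words:{realm_id}"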
Subsequent commits will add "on_delete=models.RESTRICT"
relationships, which will result in the Attachment
objects being deleted after Realm has been deleted from
the database.
In order to handle this, we update
get_realm_used_upload_space_cache_key function to accept
realm_id as parameter instead of realm object, so that
the code for flushing the cache works even after the
realm is deleted. This change is fine because eventually
only realm_id is used by this function and there is no
need for the complete realm object.
Subsequent commits will add "on_delete=models.RESTRICT"
relationships, which will result in the UserProfile
objects being deleted after Realm has been deleted from
the database.
In order to handle this, we update bot_dicts_in_realm_cache_key
function to accept realm_id as parameter instead of realm
object, so that the code for flushing the cache works even
after the realm is deleted. This change is fine because
eventually only realm_id is used by this function and there is
no need for the complete realm object.
Subsequent commits will add "on_delete=models.RESTRICT"
relationships, which will result in the RealmEmoji
objects being deleted after Realm has been deleted from
the database.
In order to handle this, we update get_realm_emoji_dicts,
get_realm_emoji_cache_key, get_active_realm_emoji_cache_key,
get_realm_emoji_uncached and get_active_realm_emoji_uncached
functions to accept realm_id as parameter instead of realm
object, so that the code for flushing the cache works even
after the realm is deleted. This change is fine because
eventually only realm_id is used by these functions and
there is no need for the complete realm object.
Make the import of `Realm`, `Stream` and `UserGroup` objects be
done in a single transaction, to make the import process in general
more atomic.
This also removes the need to temporarily unset the Stream references
on the Realm object. Since Django creates foreign key constraints
with `DEFERRABLE INITIALLY DEFERRED`, an insertion of a Realm row can
reference not-yet-existing Stream rows as long as the row is created
before the transaction commits.
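A sketch of the overall shape, with hypothetical helper names:

    from django.db import transaction

    with transaction.atomic():
        realm = import_realm(data)       # may reference not-yet-created streams
        import_streams(data, realm)      # rows exist before the commit, so the
        import_user_groups(data, realm)  # deferred FK checks pass at commit time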
Discussion - https://chat.zulip.org/#narrow/stream/101-design/topic/New.20permissions.20model/near/1585274.
This commit changes the code in test_user_groups.py to use the
check_add_user_group function to create user groups, instead
of directly using the Django ORM, to make sure that settings
will be set to the correct defaults in further commits.
This commit adds a default_group_name field to the GroupPermissionSetting
type, which will be used to store the name of the default group for
that setting, which would in most cases be one of the role-based
system groups. This will be helpful when we have multiple
settings and need to set the defaults while creating
realms and streams.
For tests that use the dev server, like test-api and test-js-with-puppeteer,
we don't have consumers for the queues. As they eventually time out,
we get unnecessary error messages. This adds a new flag, disable_timeout,
to disable this behavior for the test cases.
This endpoint was previously marked as `intentionally_undocumented`,
but that was a mistake.
Removed `intentionally_undocumented` and added proper documentation
with a valid `python_example` for this endpoint.
Fixes: #24084
This verifies that updates of the user group name/description are
correctly done by doing additional queries. This also emphasizes
checking that the states before and after API calls are indeed different.
We extract the checks needed for user membership changes into a method,
verifying that the members of the user group match the expected
values exactly.
Adds testing coverage for validating the documented examples for
each event in the `api/get-events` endpoint documentation.
This will help us catch basic typos / mistakes when adding new
event examples. And if fields / objects are removed or modified
for existing events in the API, then failing to update the
examples for those changes will also be caught by this additional
test coverage.
Adding new fields / objects to existing event schemas without
updating the example will not be caught unless the new field
is marked as required in the documentation.
Updates the example for both of these events in the documentation
to be the current version. These were missed when the feature
level 35 updates were made to the API specification for these
events, see commit noted below.
Also, for completeness, adds Changes notes for feature level 35
and feature level 19, for these events.
The feature level 35 changes were made in commit 7ff3859136.
The feature level 19 changes were made in commit 00e60c0c91.
Updates the example for the realm_bot delete event so that it does
not have a full_name field.
This was a pre-existing error in the documentation when the remove
and delete events shared the same event documentation. They were
separated in the documentation in commit fae3f1ca53.
The difference between these two events was noted when they were
added to `event_schema.py` in commit 385050de20.
Updates the documented example for the update_message_flags remove
event so that the message ID that is the key for the object is
correctly shown as a string.
Also updates the description of these objects so that it is
rendered correctly in the documentation.
Removes the `sender_short_name` from the example for the message
event in `/get-events`.
Also, to make this complete, adds Changes notes for the feature
level 26 changes that were made to the message objects returned
in the message events for `/get-events` and in the messages
array for the `/get-messages` response.
The field was originally removed from message objects in
commit b375581f58.
Updates the main descriptions for the mute a user and unmute a
user endpoint documentation. Also, revises the `muted_user_id`
parameter description and changes note for feature level 188.
The original feature level changes were made in #26005.
This is a follow-up to 4c8915c8e4, for
the case when the `team:read` permission is missing, which causes the
`team.info` call itself to fail. The error message supplies
information about the provided and missing permissions -- but it also
still sends the `X-OAuth-Scopes` header which we normally read, so we
can use that as normal.
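Sketch of the fallback; auth_headers is assumed to carry the token:

    import requests

    response = requests.get("https://slack.com/api/team.info", headers=auth_headers)
    oauth_scopes = set(response.headers.get("X-OAuth-Scopes", "").split(","))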
Updates the main description for the `get-stream-topics` endpoint
so that it is clear that the topics for private streams with protected
history are limited to the topics / messages the user has access to.
And updates that documentation and the help center documentation for
bot permissions / abilities, to clarify that bots have the same
restriction and can only access messages / topics that are sent after
the bot (not the bot's owner) subscribed to the stream.
This commit creates separate events for issue labeled and
unlabeled notifications. This allows the end-users to choose
whether they want these notifications or not.
Fixes #25789.
The `tabbed_instructions` widget used for both language toggles in our
API documentation and app toggles in our Help Center documentation
misleadingly calls the identifier for the tab `language` in local
variables and its interface.
- Renames local variables `language` -> `tab_key`.
- Renames HTML data attributes `data-language` -> `data-tab-key`.
Fixes#24669.
Updates the `api/subscribe` and `api/update-stream` endpoint docs
to note that streams' permissions impact whether a user/admin can
subscribe users and/or update a stream's permissions settings.
Updates the `api/archive-stream` and `api/delete-topic` endpoint
docs to note that they are only available to org admins.
This is primarily to prevent impersonation, such as `zulipteam`. We
only enable these protections for CORPORATE_ENABLED, since `zulip` is
a reasonable test name for self-hosters.
streaming_content is an iterator. Consuming it within middleware
prevents it from being sent to the browser.
https://docs.djangoproject.com/en/4.2/ref/request-response/#streaminghttpresponse-objects
“The StreamingHttpResponse … has no content attribute. Instead, it has
a streaming_content attribute. This can be used in middleware to wrap
the response iterable, but should not be consumed.”
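A sketch of wrapping (not consuming) the iterable in middleware:

    def wrap_streaming_content(content):
        for chunk in content:
            # Inspect or transform each chunk here; never materialize
            # the whole body.
            yield chunk

    if response.streaming:
        response.streaming_content = wrap_streaming_content(response.streaming_content)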
Signed-off-by: Anders Kaseorg <anders@zulip.com>
django-stubs 4.2.1 gives transaction.on_commit a more accurate type
annotation, but this exposed that mypy can’t handle the lambda default
parameters that we use to recapture loop variables such as
for stream_id in public_stream_ids:
peer_user_ids = …
event = …
transaction.on_commit(
lambda event=event, peer_user_ids=peer_user_ids: send_event(
realm, event, peer_user_ids
)
)
https://github.com/python/mypy/issues/15459
A workaround that mypy accepts is
transaction.on_commit(
(
lambda event, peer_user_ids: lambda: send_event(
realm, event, peer_user_ids
)
)(event, peer_user_ids)
)
But that’s kind of ugly and potentially error-prone, so let’s make a
helper function for this very common pattern.
send_event_on_commit(realm, event, peer_user_ids)
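A sketch of the helper (the real signature may differ):

    def send_event_on_commit(realm, event, user_ids):
        # The call itself creates fresh bindings for realm/event/user_ids,
        # so the lambda's closure can no longer be clobbered by later
        # loop iterations.
        transaction.on_commit(lambda: send_event(realm, event, user_ids))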
Signed-off-by: Anders Kaseorg <anders@zulip.com>
9d97af6ebb addressed the one major source of inconsistent data which
would be solved by simply re-attempting the ScheduledEmail row. Every
other instance that we have seen since then has been a corrupt or
modified database in some way, which does not self-resolve. This
results in an endless stream of emails to the administrator, and no
forward progress.
Drop this to a warning, and make it remove the offending row. This
ensures we make forward progress.
This commit makes it possible for users to control the
audible desktop notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting through the UI yet.
This commit makes it possible for users to control the
visual desktop notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting through the UI yet.
This commit makes it possible for users to control the wildcard
mention notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting
through the UI yet.
This commit makes it possible for users to control
the push notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting
through the UI yet.
This commit makes it possible for users to control
the email notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting
through the UI yet.
Add five new fields to the UserBaseSettings class for
the "followed topic notifications" feature, similar to
stream notifications. However, this commit only implements
email notifications.
THUMBNAIL_IMAGES was previously set to true because there were tests for
new thumbnailing functionality. The feature never became stable enough to
remain in the codebase, but the setting was left enabled. The setting also
doesn't reflect how production deployments are configured, and it has been
decided that we should drop it from test_extra_settings altogether.
Co-authored-by: Joseph Ho <josephho678@gmail.com>
Failing to remove all of the rules which were added causes action at a
distance with other tests. The two methods were also only used by
test code, so their presence in zerver.lib.rate_limiter was clearly
misplaced.
This fixes one instance of a mis-balanced add/remove, which caused
tests to start failing if run non-parallel and one more anonymous
request was added within a rate-limit-enabled block.
The user group dependency graph should always be a DAG.
This commit adds code to make sure we keep the graph a DAG
while adding subgroups to a user group.
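A minimal sketch of the cycle check; the accessor names are
assumptions:

    def would_create_cycle(supergroup, new_subgroups) -> bool:
        # Adding the edges is safe only if supergroup is not reachable
        # from any of the new subgroups.
        stack = list(new_subgroups)
        seen = set()
        while stack:
            group = stack.pop()
            if group.id == supergroup.id:
                return True
            if group.id not in seen:
                seen.add(group.id)
                stack.extend(group.direct_subgroups.all())
        return False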
Fixes #25913.