The initial followup_day1 email confirms that the new user account
has been successfully created and should be sent to the user
independently of an organization's setting for send_welcome_emails.
Here we split the followup_day1 email out of enqueue_welcome_emails
into its own function, and create a helper function for setting
the shared welcome email sender information.
The followup_day1 email is still a scheduled email so that the initial
account creation and log-in process for the user remains unchanged.
Fixes #25268.
The followup_day2 email is scheduled with a delay as a welcome email
and is therefore more likely to exist as a scheduled email in these
deactivation cases.
Updates comment to not include the number of emails generated so
that it doesn't need to be updated every time a new email is added.
The current count in the comment is already out-of-date.
Because the third party might not be expecting a 400 from our
webhooks, we now use a 200 status code for unknown events instead,
while reporting the error to Sentry. Because it is no longer an error
response, the response type should now be "success".
Fixes #24721.
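Illustratively, the resulting pattern looks roughly like this (event types and names here are placeholders, not the actual webhook framework):

```
import logging

SUPPORTED_EVENT_TYPES = {"issue_opened", "issue_closed"}  # placeholder set

def webhook_response_for(event_type: str) -> tuple[int, str]:
    # Unknown events: report to Sentry (modeled here with logging),
    # but still acknowledge with 200/"success", since the third party
    # may treat a 400 as a broken endpoint and keep retrying.
    if event_type not in SUPPORTED_EVENT_TYPES:
        logging.error("Unsupported webhook event type: %s", event_type)
    return (200, "success")
```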
This commit removes "@" from the name of role-based system groups,
since we have added a restriction on having user group names
starting with "@" in the previous commit, as they look odd in
mention syntax.
We also add a migration in this commit to update the name of
role-based system groups in existing realms to remove "@"
from the name. This migration also updates the names of
non-system user groups by removing the invalid prefixes
from their names, and if there is already a group with that
name, we instead name the group "group:{group_id}".
Fixes #26148.
We do not allow user group names to start with "@", "role:",
"user:", "stream:" or "channel:".
Group names starting with "@" look odd in mentions and
"role:", "user:" and "stream:" prefixes are reserved for
system groups which will be used in the new groups-based
permission model. We do not allow the "channel:" prefix for
now, just to be safe in case we later use it instead of the
"stream:" prefix for stream-based groups.
Fixes part of #26148.
Previously we had a database-level restriction on the length of
user group names. Now we add the same restriction at the API
level as well, so we can return a better error response.
We remove the cache functionality for the
get_realm_stream function, and we also change it to
return a thin Stream object (instead of calling
select_related with no arguments).
The main goal here is to remove code complexity, as we
have been prone to at least one caching validation bug
related to how Realm and UserGroup interact. That
particular bug was more theoretical than practical in
terms of its impact, to be clear.
Even if we were to be perfectly disciplined about only
caching thin stream objects and always making sure to
delete cache entries when stream data changed, we would
still be prone to ugly situations like having
transactions get rolled back before we delete the cache
entry. The do_deactivate_stream codepath is a perfect example of
where we have to consider the best time to unset the
cache. If you unset it too early, then you are prone to
races where somebody else churns the cache right before
you update the database. If you set it too late, then
you can have an invalid entry after a rollback or
deadlock situation. If you just eliminate the cache as
a moving part, that whole debate is moot.
As the lack of test changes here indicates, we rarely
fetch streams by name any more in critical sections of
our code.
The one place where we fetch by name is in loading the
home page, but that is **only** when you specify a
stream name. And, of course, that only causes about an
extra millisecond of time.
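For reference, the thin fetch amounts to something like this (a sketch; error handling elided):

```
from zerver.models import Stream

def get_realm_stream(stream_name: str, realm_id: int) -> Stream:
    # One simple indexed query; no cache entry to invalidate, and no
    # select_related(), so callers that need stream.realm pay for that
    # lookup explicitly instead of via a hidden multi-way join.
    return Stream.objects.get(name__iexact=stream_name.strip(), realm_id=realm_id)
```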
This changes bulk_get_streams so that it just uses the
database all the time. Also, we avoid calling
select_related(), so that we just get back thin and
tidy Stream objects with simple queries.
About not caching any more:
It's actually pretty rare that we fetch streams by name
in the main application. It's usually API requests that
send in stream names to find more info about streams.
It also turns out that for large queries (>= ~30 rows
for my measurements) it's more efficient to hit the
database than memcached. The database is super fast at
scale; it's just the startup cost of having Django
construct the query, and then having the database do
query planning or whatever, that slows us down. I don't
know the exact bottleneck, but you can clearly measure
that one-row queries are slow (on the order of a full
millisecond or so) but the marginal cost of additional
rows is minimal assuming you have a decent index (20
microseconds per row on my droplet).
All the query-count changes in the tests revolve around
unsubscribing somebody from a stream, and that's a
particularly odd use case for bulk_get_streams, since
you generally unsubscribe from a single stream at a
time. If there are some use cases where you do want to
unsubscribe from multiple streams, we should move
toward passing in stream ids, at least from the
application. And even if we don't do that, our cost for
most queries is a couple milliseconds.
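A sketch of the database-only bulk fetch (the case-insensitive matching is simplified relative to the real query):

```
from functools import reduce
from operator import or_

from django.db.models import Q

from zerver.models import Stream

def bulk_get_streams(realm_id: int, stream_names: set[str]) -> dict[str, Stream]:
    # Always hit the database: one simple query, no select_related(),
    # no memcached round trips, and thin Stream objects come back.
    if not stream_names:
        return {}
    clause = reduce(or_, (Q(name__iexact=name) for name in stream_names))
    rows = Stream.objects.filter(clause, realm_id=realm_id)
    return {stream.name.lower(): stream for stream in rows}
```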
We want to avoid Django going back to the database to
get a realm object that the caller already has.
It's actually currently the case that we often
pre-fetch realm objects when we get stream objects
using get_stream (using a call to select_related() with
no arguments), but that is an expensive operation that
we want to avoid going forward.
This commit prepares us to just fetch slim objects.
This commit creates separate events for issue milestoned and
demilestoned notifications. This allows the end-users to choose
whether they want these notifications or not.
Fixes #25793.
This adds audit log entries when any group-based setting of a user group
is updated. We store both the old and new values in extra_data, along
with the name of that setting. Entries populated during user group creation
are hardcoded to track "can_mention_group".
Potentially we can adjust "set_defaults_for_group_settings" so that it
populates realm audit logs with it, but that is out of scope for this change.
We use an atomic transaction so that the audit logs are committed
together with the updates.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This adds audit log entries when the name or description of a user group
is updated. We store both the old and new values in extra_data. We wrap
the functions inside an atomic transaction so that the audit logs and
the updates are committed together.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This is mostly the same as tracking subgroup changes, except that now
modified_user_group is the subgroup.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
It's worth noting that instead of adding another field to the
RealmAuditLog model, we store the modified subgroup ids in extra_data as
a JSON encoded dict with the key "subgroup_ids". We don't create audit
log entries for supergroup changes at this point.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This also adds audit log entries during user creation and role change,
because we modify system group memberships there.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We also create RealmAuditLog entries for the initial memberships that
get added along with the creation of a UserGroup. System user groups are
not created with members so no audit logs are populated for that.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This helps reduce the impact on busy uwsgi processes in case there are
slow timeouts when reaching Sentry's servers. The p99 is less than 300ms,
and p99.9 per day peaks at around 1s, so this will not affect more
than .1% of requests in normal operation.
This is not a complete solution (see #26229); it is merely stop-gap
mitigation.
Various cleanups:
* clean up comments
* improve names for constants and variables
* express first ORM query as a single statement
* use set differences to simplify logic
* avoid all the reversing churn
* avoid early-exit idiom since this function is so small
Note that it's plausible that we should just combine the two
queries and let the database exclude the already-used ids,
but that felt a little risky for now. As I mentioned on
Zulip, I think the one-week window has dubious value, but
I am biased by having wasted time chasing down a test
flake related to the time window.
Basically, I eliminate the use of select_related() in a query
that still makes a single round trip. We have good test
enforcement that Django never needs to lazily fetch
objects off the Stream object. (It used to be common
to fetch stream.realm a while back, but we upgraded
bulk_add_subscription, in particular, a while back.)
We extract code from process_new_human_user with
no modifications.
This has all the best outcomes of extracting a function:
* better profile info
* easier to test for query counts (signup gets real noisy)
* simplifies a long, messy function
It has no real drawbacks, since the helper function doesn't need
to pass back any intermediate state to the parent for the rest
of what the parent does.
When you profile test_signup and test_invite, with a decent
sample size, the set_up_streams_for_new_human_user function
does about 20% of the work for process_new_human_user, which
is a lot considering that most tests don't create a ton of
pre-registered or default streams.
At least as measured by test_events.py, which has over 1000
calls to fetch initial data for page loads, this should
be about a 10% improvement in how much time the server
spends fetching data.
We mostly avoid a select_related() query that did this nastiness:
INNER JOIN "zerver_realm" ON ("zerver_stream"."realm_id" = "zerver_realm"."id")
INNER JOIN "zerver_usergroup" ON ("zerver_stream"."can_remove_subscribers_group_id" = "zerver_usergroup"."id")
INNER JOIN "zerver_realm" T4 ON ("zerver_usergroup"."realm_id" = T4."id")
INNER JOIN "zerver_usergroup" T5 ON ("zerver_usergroup"."can_mention_group_id" = T5."id")
INNER JOIN "zerver_realm" T6 ON (T5."realm_id" = T6."id")
INNER JOIN "zerver_usergroup" T7 ON (T5."can_mention_group_id" = T7."id")
INNER JOIN "zerver_realm" T8 ON (T7."realm_id" = T8."id")
INNER JOIN "zerver_usergroup" T9 ON (T7."can_mention_group_id" = T9."id")
INNER JOIN "zerver_realm" T10 ON (T9."realm_id" = T10."id")
INNER JOIN "zerver_usergroup" T11 ON (T9."can_mention_group_id" = T11."id")
WHERE "zerver_stream"."id" IN (SELECT U0."stream_id" FROM "zerver_defaultstream" U0 WHERE U0."realm_id" = 2
Future commits will address the codepath for creating users.
I created zerver/lib/default_streams.py, so that various
views and events.py don't have to awkwardly reach into
an "actions" file.
I copied over two functions verbatim from actions/default_streams.py:
get_default_streams_for_realm
streams_to_dicts_sorted
The latter only remains as an internal detail in the new library.
I also created two new helpers:
get_default_stream_ids_for_realm:
This is both faster and easier to use in all the places
where we only need to get a set of default stream ids.
get_default_streams_for_realm_as_dicts:
This just wraps the prior calls to
streams_to_dicts_sorted(get_default_streams_for_realm(...)),
and it doesn't yet address the slowness of the underlying
code.
All the "real" code should be functionally the same.
In a few tests I now use this wrapper instead of
calling get_default_streams_for_realm, just to get
slightly deeper coverage.
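The id-only helper is roughly this simple; a sketch assuming the DefaultStream model:

```
from zerver.models import DefaultStream

def get_default_stream_ids_for_realm(realm_id: int) -> set[int]:
    # values_list avoids hydrating Stream objects entirely, which is
    # all the callers that only need a set of ids ever wanted.
    return set(
        DefaultStream.objects.filter(realm_id=realm_id).values_list(
            "stream_id", flat=True
        )
    )
```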
Updates find_proper_insertion_index to check whether the inline image
classes match at least one of the classes in the element's
attrib["class"], so that cases where an inline preview image has
multiple classes, like YouTube video previews, will have the
correct insertion index.
Fixes #26186.
Added an additional test case to `test_submessages.py` for testing the
message object containing `submessages` metadata.
Prior to this commit, we were never validating the `submessage` schema
in the `message` objects.
Fixes #25896.
By relocating helper methods into a mixin class, we can be more flexible
with managing transactions in test cases, without always forcing the
django.test.TestCase behavior of putting the test case into an
atomic transaction.
We include a check for side effects in ZulipTransactionTestCase. It only
checks for the set of row ids in all tables before and after each test.
It is not a comprehensive check for side effects, but should be
sufficient for the basics without much performance overhead.
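Concretely, the check can be sketched as a per-table primary-key snapshot that gets diffed before and after each test:

```
from django.apps import apps

def get_row_ids_in_all_tables() -> dict[str, set[int]]:
    # A cheap, partial side-effect detector: it catches leaked or
    # deleted rows, but not in-place mutations of existing rows.
    tables = {}
    for model in apps.get_models(include_auto_created=True):
        table_name = model._meta.db_table
        tables[table_name] = set(model._default_manager.values_list("pk", flat=True))
    return tables
```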
It replaces the "File not found." text with:
"This file does not exist or has been deleted."
At present, when a file is deleted, it results in a confusing
experience when looking at the "File not found." message.
In order to clarify that the situation is not a bug, the message
has been replaced with a better alternative.
Fixes part of Issue #23739.
This prep commit replaces the 'wildcard' keyword in the codebase
with 'stream_wildcard' in some places for better readability, as
we plan to introduce 'topic_wildcards' as a part of the
'@topic mention' project.
Currently, 'wildcards = ["all", "everyone", "stream"]', which are
aliases to mention everyone in the stream, hence they are better
named 'stream_wildcards'.
Eventually, we will have:
'stream_wildcard' as an alias to mention everyone in the stream.
'topic_wildcard' as an alias to mention everyone in the topic.
'wildcard' refers to 'stream_wildcard' and 'topic_wildcard' as a whole.
The 'get_gcm_alert' and 'get_apns_alert_subtitle' functions
don't include the case when the trigger is
'NotificationTriggers.FOLLOWED_TOPIC_WILDCARD_MENTION'.
This commit updates the functions to include
'NotificationTriggers.FOLLOWED_TOPIC_WILDCARD_MENTION'.
The emails sent for missed messages have a text at the bottom
explaining the reason why the email was sent.
This commit reorders the conditional statements in the email
template to align with the trigger priority order defined
in the 'get_email_notification_trigger'.
This commit fixes the incorrect calculation of the
'senders' list.
The effect of 'followed_topic_wildcard_mention'
wasn't considered earlier.
The bug was introduced in b052c8980e.
This commit uses 'NotificationTriggers' class attributes
instead of directly using loose strings.
This should have been ideally included in the commit
c3319a5231.
Combine nginx and Django middleware to stop putting misleading warnings
about `CSRF_TRUSTED_ORIGINS` when the issue is untrusted proxies.
This attempts to, in the error logs, diagnose and suggest next steps
to fix common proxy misconfigurations.
See also #24599 and zulip/docker-zulip#403.
Having exactly 17 or 18 middlewares, on Python 3.11.0 and above,
causes python to segfault when running tests with coverage; see
https://github.com/python/cpython/issues/106092
Work around this by adding one or two no-op middlewares if we would
hit those unlucky numbers. We only add them in testing, since
coverage is a requirement to trigger it, and there is no reason to
burden production with additional wrapping.
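A sketch of the padding, as it might appear in the test settings module (the dotted path for the no-op class is an assumption):

```
class NoopMiddleware:
    # Deliberately does nothing; exists only to pad the middleware
    # count away from the crash-triggering values.
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        return self.get_response(request)

while len(MIDDLEWARE) in (17, 18):  # counts that trigger cpython#106092
    MIDDLEWARE += ("zproject.test_extra_settings.NoopMiddleware",)
```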
It takes about 31ms per page on my box, but 191
help pages adds up quickly. I am not sure how to
optimize this test, but it will be a good litmus
test for a future better markdown processor.
This did not speed up the tests as much as I expected,
but it certainly makes the code easier to read, and
Tim is pretty confident that the zephyr logic is
fairly stable, so it's sufficient to test it on a
subset of representative urls.
dbe930394f changed the
"missing string" from "Log in" to "xyz" for some
unknown reason. The current code makes no sense.
Also, even the original test code here had the common
pitfall of only testing one side of the condition.
Presumably if you are testing that a certain string
is missing in a landing-page scenario, then you also
want to check that it **does** exist in other
scenarios. Otherwise, the flag would have been
named something more generic. Of course, I am mostly
guessing due to lack of comments.
If there is some test logic here that we need to
resurrect, then we should just write a custom test
for the /hello page rather than crufting up
all our helpers.
This removes some confusing default boolean flags, and
it checks both sides of the do-you-want-to-allow-robots
condition, so it's more thorough.
For the two strange exceptions to the normal policy,
I now handle them together in the helper function with
a comment.
I also disentangle the logic to look for og tags from
the robot logic, and this should also lead to more
thorough testing.
The prior name was just strange. This test could really
use a better comment explaining its purpose.
Also, presumably these pages don't always get 404s, so
we should really have the test exercise both conditions.
This makes us correctly run landing page logic where we
didn't before, and, more importantly, lets us skip landing
page logic where we had been erroneously running it.
This speeds up my runs from 35s to 25s.
This commit updates the text on email confirmation page to
make it more clear what's going on and why the user needs
to check their email.
Fixes #25900.
This commit adds code to include the can_mention_group_id field in
UserGroup objects passed with the responses of various endpoints,
including the "/register" endpoint, and also in the group object
sent with the user group creation event.
Fixes a part of #25927.
This commit adds backend code to check whether a user is allowed
to mention a user group while editing a message as per
can_mention_group setting of that group.
Fixes a part of #25927.
This commit adds backend code to check whether a user has permission
to mention a group while sending a message, as per the
can_mention_group setting of the group.
Fixes a part of #25927.
We now upstream the conversion of legacy tuples
into the callers of do_events_register. For the
codepath that builds the home view, this allows
for cleaner code in the caller. For the /register
endpoint, we have to do the conversion, but that
isn't super ugly, as that's an appropriate place
to deal with legacy formats and clean them up.
We do have to have do_events_register downgrade
the format back to tuples to pass them into
request_event_queue, because I don't want to
change any serialization formats. The conversion
is quite simple, and it has test coverage.
We eliminate 220 zephyr-related checks that are all fairly
expensive.
On my machine this test went from 46s to 23s.
Note that we still get coverage of the zephyr codepath
from other tests.
(All the same code gets executed here, but in a slightly
different order.)
There is some code duplication between the two new
helper functions, but I didn't make the situation any
worse, and it's slightly non-trivial to consolidate
the logic. Hopefully the long term strategy is to remove
the zephyr checks or at least isolate a single test for
any specific zephyr quirks that we need to maintain.
This is a first step toward two goals:
* support dictionary-like narrows when registering events
* use readable dataclasses internally
This is gonna be a somewhat complicated exercise due to how
events get serialized, but fortunately this interim step
doesn't require any serious shims, so it improves the codebase
even if the long-term goals may take a while to get sorted
out.
The two places where we have to use a helper to convert narrows
from tuples to dataclasses will eventually rely on their callers
to do the conversion, but I don't want to re-work the entire
codepath yet.
Note that the new NarrowTerm dataclass makes it more explicit
that the internal functions currently either don't care about
negated flags or downright don't support them. This way mypy
protects us from assuming that we can just add negated support
at the outer edges.
OTOH I do make a tiny effort here to slightly restructure
narrow_filter in a way that paves the way for negation support.
The bigger goal by far, though, is to at least support the
dictionary format.
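A sketch of the interim dataclass and tuple-conversion helper described here (field names assumed):

```
from dataclasses import dataclass
from typing import Sequence

@dataclass
class NarrowTerm:
    # No negated field yet: mypy will then flag any internal code that
    # pretends to handle negated terms before we actually support them.
    operator: str
    operand: str

def narrow_dataclasses_from_tuples(tups: Sequence[Sequence[str]]) -> list[NarrowTerm]:
    # Interim helper for the two call sites that still receive the
    # legacy tuple format.
    return [NarrowTerm(operator=tup[0], operand=tup[1]) for tup in tups]
```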
In 2484d870b4 I created tests
using a fixture called narrow.json. I believe my intention
was to eventually use the fixture for similar tests on the
frontend, but that never happened.
Almost seven years later, I think it's time to just use
straightforward code in Python to test build_narrow_filter.
In particular, we want to move to dataclasses, so that would
create an additional nuisance for fixture-based tests. The
fixture was already annoying in terms of being an extra moving
part, being hard to read, and not being type-safe.
In order to avoid typos, I mostly code-generated the new
Python code by instrumenting the old test:
narrow_filter = build_narrow_filter(narrow)
+ print("###\n")
+ print(f"narrow_filter = build_narrow_filter({narrow})\n")
for e in accept_events:
message = e["message"]
flags = e["flags"]
@@ -610,6 +612,8 @@ class NarrowLibraryTest(ZulipTestCase):
if flags is None:
flags = []
self.assertTrue(narrow_filter(message=message, flags=flags))
+ print(f"self.assertTrue(narrow_filter(message={message}, flags={flags},))")
+ print()
for e in reject_events:
message = e["message"]
flags = e["flags"]
@@ -618,6 +622,8 @@ class NarrowLibraryTest(ZulipTestCase):
if flags is None:
flags = []
self.assertFalse(narrow_filter(message=message, flags=flags))
+ print(f"self.assertFalse(narrow_filter(message={message}, flags={flags},))")
+ print()
I then basically pasted the output in and ran black to format it.
We no longer pass in a big opaque event to narrow_filter
(which is inside build_narrow_filter). We instead explicitly
pass in message and flags. This leads to a bit more type
safety, and it's also more flexible. There's no reason to
build an entire event just to see if a message belongs to
a narrow.
The changes to the test work around the fact that the fixtures
are sloppy with types. I plan a subsequent commit to clean
up those tests significantly.
Subsequent commits will add "on_delete=models.RESTRICT"
relationships, which will result in the AlertWord
objects being deleted after Realm has been deleted from
the database.
In order to handle this, we update realm_alert_words_cache_key,
realm_alert_words_automaton_cache_key, and flush_realm_alert_words
functions to accept realm_id as parameter instead of realm
object, so that the code for flushing the cache works even
after the realm is deleted. This change is fine because
eventually only realm_id is used by these functions and there
is no need for the complete realm object.
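A sketch of what the id-based key functions look like (the key format string is an assumption):

```
def realm_alert_words_cache_key(realm_id: int) -> str:
    # Only the id goes into the key, so the cache can be flushed even
    # after the Realm row itself has been deleted.
    return f"realm_alert_words:{realm_id}"
```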
Subsequent commits will add "on_delete=models.RESTRICT"
relationships, which will result in the Attachment
objects being deleted after Realm has been deleted from
the database.
In order to handle this, we update
get_realm_used_upload_space_cache_key function to accept
realm_id as parameter instead of realm object, so that
the code for flushing the cache works even after the
realm is deleted. This change is fine because eventually
only realm_id is used by this function and there is no
need for the complete realm object.
Subsequent commits will add "on_delete=models.RESTRICT"
relationships, which will result in the UserProfile
objects being deleted after Realm has been deleted from
the database.
In order to handle this, we update bot_dicts_in_realm_cache_key
function to accept realm_id as parameter instead of realm
object, so that the code for flushing the cache works even
after the realm is deleted. This change is fine because
eventually only realm_id is used by this function and there is
no need for the complete realm object.
Subsequent commits will add "on_delete=models.RESTRICT"
relationships, which will result in the RealmEmoji
objects being deleted after Realm has been deleted from
the database.
In order to handle this, we update get_realm_emoji_dicts,
get_realm_emoji_cache_key, get_active_realm_emoji_cache_key,
get_realm_emoji_uncached and get_active_realm_emoji_uncached
functions to accept realm_id as parameter instead of realm
object, so that the code for flushing the cache works even
after the realm is deleted. This change is fine because
eventually only realm_id is used by these functions and
there is no need for the complete realm object.
Make the import of `Realm`, `Stream` and `UserGroup` objects happen
in a single transaction, to make the import process in general
more atomic.
This also removes the need to temporarily unset the Stream references
on the Realm object. Since Django creates foreign key constraints
with `DEFERRABLE INITIALLY DEFERRED`, an insertion of a Realm row can
reference not-yet-existing Stream rows as long as the row is created
before the transaction commits.
Discussion - https://chat.zulip.org/#narrow/stream/101-design/topic/New.20permissions.20model/near/1585274.
This commit changes the code in test_user_groups.py to use
check_add_user_group function to create user groups instead
of directly using django ORM to make sure that settings
would be set to the correct defaults in further commits.
This commit adds default_group_name field to GroupPermissionSetting
type which will be used to store the name of the default group for
that setting which would in most cases be one of the role-based
system groups. This will be helpful when we would have multiple
settings and we would need to set the defaults while creating
realm and streams.
For tests that use the dev server, like test-api, test-js-with-puppeteer,
we don't have the consumers for the queues. As they eventually time out,
we get unnecessary error messages. This adds a new flag, disable_timeout,
to disable this behavior for the test cases.
This endpoint was previously marked as `intentionally_undocumented`,
but that was a mistake.
Removed `intentionally_undocumented` and added proper documentation
with a valid `python_example` for this endpoint.
Fixes: #24084
This verifies that updates of the user group name/description are
correctly done by doing additional queries. This also emphasizes
checking that the states before and after API calls are indeed different.
We extract the checks needed for user membership changes into a method,
verifying that the members of the user group are matching the expected
values exactly.
Adds testing coverage for validating the documented examples for
each event in the `api/get-events` endpoint documentation.
This will help us catch basic typos / mistakes when adding new
event examples. And if fields / objects are removed or modified
for existing events in the API, then failing to update the
examples for those changes will also be caught by this additional
test coverage.
Adding new fields / objects to existing event schemas without
updating the example will not be caught unless the new field
is marked as required in the documentation.
Updates the example for both of these events in the documentation
to be the current version. These were missed when the feature
level 35 updates were made to the API specification for these
events, see commit noted below.
Also, for completeness, adds Changes notes for feature level 35
and feature level 19, for these events.
The feature level 35 changes were made in commit 7ff3859136.
The feature level 19 changes were made in commit 00e60c0c91.
Updates the example for the realm_bot delete event so that it does
not have a full_name field.
This was a pre-existing error in the documentation when the remove
and delete events shared the same event documentation. They were
separated in the documentation in commit fae3f1ca53.
The difference between these two events was noted when they were
added to `event_schema.py` in commit 385050de20.
Updates the documented example for the update_message_flags remove
event so that the message ID that is the key for the object is
correctly shown as a string.
Also updates the description of these objects so that it is
rendered correctly in the documentation.
Removes the `sender_short_name` from the example for the message
event in `/get-events`.
Also, to make this complete, adds Changes notes for the feature
level 26 changes that were made to the message objects returned
in the message events for `/get-events` and in the messages
array for the `/get-messages` response.
The field was originally removed from message objects in
commit b375581f58.
Updates the main descriptions for the mute a user and unmute a
user endpoint documentation. Also, revises the `muted_user_id`
parameter description and changes note for feature level 188.
The original feature level changes were made in #26005.
This is a follow-up to 4c8915c8e4, for
the case when the `team:read` permission is missing, which causes the
`team.info` call itself to fail. The error message supplies
information about the provided and missing permissions -- but it also
still sends the `X-OAuth-Scopes` header which we normally read, so we
can use that as normal.
Updates the main description for the `get-stream-topics` endpoint
so that it is clear that the topics for private streams with protected
history are limited to the topics / messages the user has access to.
And updates that documentation and the help center documentation for
bot permissions / abilities, to clarify that bots have the same
restriction and can only access messages / topics that are sent after
the bot (not the bot's owner) subscribed to the stream.
This commit creates separate events for issue labeled and
unlabeled notifications. This allows the end-users to choose
whether they want these notifications or not.
Fixes #25789.
The `tabbed_instructions` widget used for both language toggles in our
API documentation and app toggles in our Help Center documentation
misleadingly calls the identifier for the tab `language` in local
variables and its interface.
- Renames local variables `language` -> `tab_key`.
- Renames HTML data attributes `data-language` -> `data-tab-key`.
Fixes #24669.
Updates the `api/subscribe` and `api/update-stream` endpoint docs
to note that streams' permissions impact whether a user/admin can
subscribe users and/or update a stream's permissions settings.
Updates the `api/archive-stream` and `api/delete-topic` endpoint
docs to note that they are only available to org admins.
This is primarily to prevent impersonation, such as `zulipteam`. We
only enable these protections for CORPORATE_ENABLED, since `zulip` is
a reasonable test name for self-hosters.
streaming_content is an iterator. Consuming it within middleware
prevents it from being sent to the browser.
https://docs.djangoproject.com/en/4.2/ref/request-response/#streaminghttpresponse-objects
“The StreamingHttpResponse … has no content attribute. Instead, it has
a streaming_content attribute. This can be used in middleware to wrap
the response iterable, but should not be consumed.”
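A sketch of the safe pattern inside a middleware's response handling (response is the in-flight StreamingHttpResponse):

```
def wrap_streaming_content(streaming_content):
    # Pass chunks through lazily; middleware may inspect each chunk,
    # but must never consume the whole iterator.
    for chunk in streaming_content:
        yield chunk

if response.streaming:
    response.streaming_content = wrap_streaming_content(response.streaming_content)
```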
Signed-off-by: Anders Kaseorg <anders@zulip.com>
django-stubs 4.2.1 gives transaction.on_commit a more accurate type
annotation, but this exposed that mypy can’t handle the lambda default
parameters that we use to recapture loop variables such as
for stream_id in public_stream_ids:
peer_user_ids = …
event = …
transaction.on_commit(
lambda event=event, peer_user_ids=peer_user_ids: send_event(
realm, event, peer_user_ids
)
)
https://github.com/python/mypy/issues/15459
A workaround that mypy accepts is
transaction.on_commit(
(
lambda event, peer_user_ids: lambda: send_event(
realm, event, peer_user_ids
)
)(event, peer_user_ids)
)
But that’s kind of ugly and potentially error-prone, so let’s make a
helper function for this very common pattern.
send_event_on_commit(realm, event, peer_user_ids)
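A minimal sketch of such a helper, assuming the send_event(realm, event, user_ids) signature used above:

```
from django.db import transaction

def send_event_on_commit(realm, event, user_ids) -> None:
    # The values are bound as arguments to this function call, so the
    # zero-argument lambda closes over this call's locals rather than
    # over loop variables that keep mutating.
    transaction.on_commit(lambda: send_event(realm, event, user_ids))
```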
Signed-off-by: Anders Kaseorg <anders@zulip.com>
9d97af6ebb addressed the one major source of inconsistent data which
would be solved by simply re-attempting the ScheduledEmail row. Every
other instance that we have seen since then has been a corrupt or
modified database in some way, which does not self-resolve. This
results in an endless stream of emails to the administrator, and no
forward progress.
Drop this to a warning, and make it remove the offending row. This
ensures we make forward progress.
This commit makes it possible for users to control the
audible desktop notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting through the UI yet.
This commit makes it possible for users to control the
visual desktop notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting through the UI yet.
This commit makes it possible for users to control the wildcard
mention notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting
through the UI yet.
This commit makes it possible for users to control
the push notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting
through the UI yet.
This commit makes it possible for users to control
the email notifications for messages sent to followed topics
via a global notification setting.
There is no support for configuring this setting
through the UI yet.
Add five new fields to the UserBaseSettings class for
the "followed topic notifications" feature, similar to
stream notifications. But this commit consists only of
the implementation of email notifications.
THUMBNAIL_IMAGES was previously set to True, as there were tests for new
thumbnail functionality. The feature was never stable enough to remain in
the codebase, and the setting was left enabled. This setting also doesn't
reflect how production deployments are configured, and it has been decided
that we should drop the setting from test_extra_settings altogether.
Co-authored-by: Joseph Ho <josephho678@gmail.com>
Failing to remove all of the rules which were added causes action at a
distance with other tests. The two methods were also only used by
test code, making their existence in zerver.lib.rate_limiter clearly
misplaced.
This fixes one instance of a mis-balanced add/remove, which caused
tests to start failing if run non-parallel and one more anonymous
request was added within a rate-limit-enabled block.
The user group dependency graph should always be a DAG.
This commit adds code to make sure we keep the graph a DAG
while adding subgroups to a user group.
Fixes #25913.
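The invariant can be enforced with a straightforward reachability check; a sketch, with get_direct_subgroup_ids standing in for however the stored graph is traversed:

```
from collections import deque
from typing import Callable, Iterable

def would_make_cycle(
    supergroup_id: int,
    new_subgroup_ids: Iterable[int],
    get_direct_subgroup_ids: Callable[[int], Iterable[int]],
) -> bool:
    # Adding an edge supergroup -> subgroup creates a cycle exactly
    # when the supergroup is already reachable from that subgroup.
    seen = set(new_subgroup_ids)
    queue = deque(seen)
    while queue:
        group_id = queue.popleft()
        if group_id == supergroup_id:
            return True
        for child in get_direct_subgroup_ids(group_id):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return False
```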
We want to make sure that the system groups, once created, will always
have the GroupGroupMemberships fully set up.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Note that we use the DjangoJSONEncoder so that we have built-in
support for serializing Decimal and datetime.
During this intermediate state, the migration that creates
extra_data_json field has been run. We prepare for running the backfilling
migration that populates extra_data_json from extra_data.
This change implements double-write, which is important to keep the
state of extra data consistent. For most extra_data usage, this is
handled by the overridden `save` method on `AbstractRealmAuditLog`,
where we generate extra_data_json using either orjson.loads or
ast.literal_eval.
While backfilling ensures that old realm audit log entries have
extra_data_json populated, double-write ensures that any new entries
generated will also have extra_data_json set, so that we can then
safely rename extra_data_json to extra_data while ensuring the
non-nullable invariant.
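A condensed sketch of that double-write in the overridden save (field definitions simplified):

```
import ast

import orjson
from django.core.serializers.json import DjangoJSONEncoder
from django.db import models

class AbstractRealmAuditLog(models.Model):
    extra_data = models.TextField(null=True)
    extra_data_json = models.JSONField(default=dict, encoder=DjangoJSONEncoder)

    class Meta:
        abstract = True

    def save(self, *args, **kwargs):
        # Derive extra_data_json from whichever legacy representation
        # extra_data holds, so every new row has both fields populated.
        if self.extra_data is not None:
            try:
                self.extra_data_json = orjson.loads(self.extra_data)
            except orjson.JSONDecodeError:
                # Older entries were written with str(dict).
                self.extra_data_json = ast.literal_eval(self.extra_data)
        super().save(*args, **kwargs)
```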
For completeness, we additionally set RealmAuditLog.NEW_VALUE for
the USER_FULL_NAME_CHANGED event. This cannot be handled with the
overridden `save`.
This addresses: https://github.com/zulip/zulip/pull/23116#discussion_r1040277795
Note that extra_data_json at this point is not used yet. So the test
cases do not need to switch to testing extra_data_json. This is later
done after we rename extra_data_json to extra_data.
Double-write for the remote server audit logs is special, because we only
get the dumped bytes from an external source. Luckily, none of the
payload carries extra_data that is not generated using orjson.dumps for
audit logs of event types in SYNC_BILLING_EVENTS. This can be verified
by looking at:
`git grep -A 6 -E "event_type=.*(USER_CREATED|USER_ACTIVATED|USER_DEACTIVATED|USER_REACTIVATED|USER_ROLE_CHANGED|REALM_DEACTIVATED|REALM_REACTIVATED)"`
Therefore, we just need to populate extra_data_json doing an
orjson.loads call after a None-check.
Co-authored-by: Zixuan James Li <p359101898@gmail.com>
This in-progress feature was started in 2018 and hasn't
been worked on much since. It's already in a broken state,
which makes it hard to iterate on the existing search bar
since it's hard to know how those changes will affect search
pills.
We do still want to add search pills eventually, and when
we work on that, we can refer to this diff to re-add the
changes.
An implicit coercion from an untyped dict to the TypedDict was hiding
a type error: CapturedQuery.sql was really str, not bytes. We should
always prefer dataclass over TypedDict to prevent such errors.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This adds support for accepting extra_data as a dict from remote
servers' RealmAuditLog entries, so that it is forward-compatible with
servers that have migrated to use JSONField for RealmAuditLog just in
case. This prepares us for migrating zilencer's audit log models to use
JSONField for extra_data.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This prepares for the audit log migration which requires us to populate
a JSONField from the extra_data field. "data" is not representative of
the actual extra_data field for RealmAuditLog entries of event types
in SYNC_BILLING_EVENTS.
We intentionally leave the test cases unchanged without bothering to
verify if the extra_data arrives as-is to keep this change minimal.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
It turns out that for some deployments, there exists a second,
duplicate, foreign key constraint for user_profile_id. The logic below
would try to rename both to the same name, which would fail on the
second:
```
psycopg2.errors.DuplicateObject: constraint "zerver_userpresenceo_user_profile_id_d75366d6_fk_zerver_us" for relation "zerver_userpresence" already exists
```
Eliminate the duplicate constraint, rather than attempting to rename
it. Also add a comment, in case of future reuse of this pattern, which
caveats that this approach will not work in the presence of
explicitly-named indexes. UserPresence happens to not have any, so
this technique is safe in this instance.
Co-authored-by: Alex Vandiver <alexmv@zulip.com>
Before this commit our docs mentioned `string[]` data type for
`submessages` field on the `message` object. This commit changes the
type to `object[]` and correctly mentions all fields of the `submessage`
object.
This code clearly meant to return host; returning realm.host is a
mistake. realm.host is not accessible in a migration, due to being a
@property-decorated method. The code constructs the host var value just
above this line.
Revises the API changelog entry for feature level 161 to document
the changes to `DELETE /users/me/subscriptions` and to explain
more clearly what the new `can_remove_subscribers_group_id`
parameter does.
Updates the feature level 161 changes notes and related descriptions
to include links and also more clearly explain the updates.
Also, updates the `GET /user_groups` example to better reflect what
is returned for system groups since this is now referenced in the
`can_remove_subscribers_group_id` parameter description.
The original API feature level 161 API documentation changes were
made in commit c3759814be and commit 73f11853ec.
Updates the scheduled_message_id parameter for deleting scheduled
messages to use the to_non_negative_int converter function for
validation, which is used in other endpoints/views with an ID in
the request path.
This commit adds the missing 'UNMUTED' visibility policy
to the documentation for 'api/get-events' and 'api/register-queue'.
It replaces INHERIT with NONE for a clearer name
in the 'api/update-user-topic' documentation.
Other smaller changes in wording to improve readability.
Twitter removed their v1 API. We take care to keep the existing cached
results around for now, and to not poison that cache, since we might
be able to replace this with something that can still use the existing
cache.
Part of splitting creating and editing scheduled messages.
Final commit. Should be merged with previous commits in series.
Updates the API documentation for the new endpoint for editing
scheduled messages.
Part of splitting creating and editing scheduled messages.
Should be merged with final commit in series. Breaks tests.
Splits out editing an existing scheduled message into a new
view function and updated `edit_scheduled_message` function.
Part of splitting creating and editing scheduled messages.
Should be merged with final commit in series. Breaks tests.
Removes `scheduled_message_id` parameter from the create scheduled
message path.
Prep commit for splitting create/edit endpoint for scheduled
messages.
Because `test-api` runs the tests in alphabetical order based on
the `operationId`, we need two scheduled messages in the test database.
The first for the curl example delete (delete-scheduled-message) and
the second for the curl example update (update-scheduled-message).
Adds API changelog feature level 1 and associated Changes notes
for when the `stream_id` parameter in the `PATCH /messages/{message_id}`
endpoint was added, and for when the `prev_stream` field was added to
edit history information for messages.
We're adding these to the Zulip 3.0 feature level 1 because
commit 843345dfee that introduced this field and this parameter
to the server / backend code was merged before the commit that added
the API feature level tracking, commit e3b90a5ec8, at level 1.
Updates the descriptions of the `avatar_url` field in message and
user objects to be clear that the current user must have access
to the other user's real email address in order for the value to
ever be `null`.
Also adds a bullet point to the API changelog feature level 163
entry about this change.
Clarifies additional areas of the API documentation where a user's
email is mentioned / used where it could be useful to clarify
that the email in question is the "Zulip API email".
This commit removes realm_community_topic_editing_limit_seconds
field from register response since topic edit limit is now
controlled by move_messages_within_streams_limit_seconds
setting.
We also remove DEFAULT_COMMUNITY_TOPIC_EDITING_LIMIT_SECONDS
constant since it is no longer used.
In the register response properties deprecated at feature level 89,
update the descriptions to link to the client_capabilities parameter
when referenced.
Also, moves the enter_send property to be in the same section of the
register response as other properties deprecated at this feature level.
These descriptions were originally added to these properties in
commit e6f828a8e2.
As the relevant comment elaborates - what happens next in the test is
simulating the step that happens in the desktop app. Thus a new session
needs to be used. Otherwise, the old session created normally in the
browser pollutes the state and can give falsely passing tests.
This should be happening for all social auth tests using this, not just
in that one SAML test, thus moving it inside the helper method.
This is a useful improvement in general for making correct
LogoutRequests to Idps and a necessary one to make SP-initiated logout
fully work properly in the desktop application. During desktop auth
flow, the user goes through the browser, where they log in through their
IdP. This gives them a logged in browser session at the IdP. However,
SAML SP-initiated logout is fully conducted within the desktop
application. This means that proper information needs to be given to
the IdP in the LogoutRequest to let it associate the LogoutRequest with
that logged in session that was established in the browser. SessionIndex
is exactly the tool for that in the SAML spec.
This gives more flexibility on a server with multiple organizations and
SAML IdPs. Such a server can have some organizations handled by IdPs
with SLO set up, and some without it set up. In such a scenario, having
a generic True/False server-wide setting is insufficient and instead
being able to specify the IdPs/orgs for SLO is needed.
Closes #20084
This is the flow that this implements:
1. A logged-in user clicks "Logout".
2. If they didn't auth via SAML, just do normal logout. Otherwise:
3. Form a LogoutRequest and redirect the user to
https://idp.example.com/slo-endpoint?SAMLRequest=<LogoutRequest here>
4. The IdP validates the LogoutRequest, terminates its own user session
and redirects the user to
https://thezuliporg.example.com/complete/saml/?SAMLRequest=<LogoutResponse>
with the appropriate LogoutResponse. In case of failure, the
LogoutResponse is expected to express that.
5. Zulip validates the LogoutResponse and if the response is a success
response, it executes the regular Zulip logout and the full flow is
finished.
Expands the main description for the `/update-message` documentation
to include a list of the realm settings in the API that are relevant
to when users can update a message's content, topic or stream.
Adjusts the descriptions of realm_linkifiers (and deprecated
realm_filters) events and register response fields so that the
description of the current API is complete without the feature
level 176 **Changes** notes.
Adds examples of the regex pattern and old URL string format to
the deprecated `realm_filters` event and register response field.
The examples are in the prose description since the events are
no longer sent and therefore no longer tested.
Revises API changelog entry for missing endpoint method and to
clarify the overall text.
Updates Changes notes for feature level 176 to not have repetitive
text, so that the updates are clearer and more concise.
The original commit with the changes related to this API changelog
entry is commit 268f858f39.
This commit updates the API to check the permission to subscribe other
users while inviting. The API will error if the user passes the
"stream_ids" parameter (even when it contains only default streams)
and the calling user does not have permission to subscribe others to
streams.
For users who do not have permission to subscribe others, the
invitee will be subscribed to default streams at the time of
accepting the invite.
There is no change for multiuse invites, since only admins are allowed
to send them, and admins always have the permission to subscribe
others to streams.
Since 74dd21c8fa in Zulip Server 2.1.0, if:
- ZulipLDAPAuthBackend and an external authentication backend (any aside
from ZulipLDAPAuthBackend and EmailAuthBackend) are the only ones
enabled in AUTHENTICATION_BACKENDS in /etc/zulip/settings.py
- The organization permissions don't require invitations to join
...then an attacker can create a new account in the organization with
an arbitrary email address in their control that's not in the
organization's LDAP directory.
The impact is limited to installations which have the specific
combination of authentication backends described above, in addition to
having the "Invitations are required for joining this organization
organization" permission disabled.
This argument was added with the default incorrectly set to `True` in
bb0eb76bf3 - despite
`maybe_send_to_registration` only ever being called in production code
in a single place, with `password_required=False` explicitly. And then
it just got carried forward through refactors.
`maybe_send_to_registration` was/is also called twice in tests, falling
back to the default, but the `password_required` value is irrelevant to
the tests - and if anything letting it use the `True` has been wrong,
due to not matching how this function is actually used.
This prevents `get_user_profile_by_api_key` from doing a sequential
scan.
Doing this requires moving the generation of initial api_key values
into the column definition, so that even bare calls to
`UserProfile.objects.create` (e.g. from tests) appropriately
generate a random initial value.
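A sketch of that arrangement (the key-generation helper and format are assumptions):

```
import secrets

from django.db import models

def generate_api_key() -> str:
    return secrets.token_hex(16)  # 32 random hex characters (assumed format)

class UserProfile(models.Model):
    # default= runs on every creation path, including bare
    # UserProfile.objects.create(...) calls in tests; unique=True adds
    # the index that lets get_user_profile_by_api_key avoid a
    # sequential scan.
    api_key = models.CharField(max_length=32, default=generate_api_key, unique=True)
```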
Adds an API changelog note to 2.1 for the addition of
realm_default_external_accounts to the `/register-queue` response.
Also adds a Changes note to the field in the endpoint's response
API documentation.
The original commit that added it to that endpoint's response was
commit d7ee2aced1.
The default for Javascript reporting is that Sentry sets the IP
address of the user to the IP address that the report was observed to
come from[^1]. Since all reports come through the Zulip server, this
results in all reports being "from" one IP address, thus undercounting
the number of affected unauthenticated users, and making it difficult
to correlate Sentry reports with server logs.
Consume the Sentry Envelope format[^2] to inject the submitting
client's observed IP address, when possible. This ensures that Sentry
reports contain the same IP address that Zulip's server logs do.
[^1]: https://docs.sentry.io/platforms/python/guides/logging/enriching-events/identify-user/
[^2]: https://develop.sentry.dev/sdk/envelopes/
Updates the descriptions and examples for there only being two key
values: "website" and "aggregated".
Also, clarifies that email keys are the Zulip display email.
And removes any descriptive text that says presence objects have
information about the clients the user is logged into.
Deleting a message can race with sending a push notification for it.
b47535d8bb handled the case where the Message row has gone away --
but in such cases, it is also possible for `access_message` to
succeed, but for the save of `user_message.flags` to fail, because the
UserMessage row has been deleted by then.
Take a lock on the Message row over the accesses of, and updates to,
the relevant UserMessage row. This guarantees that the
message's (non-)existence is consistent across that transaction.
Partial fix for #16502.
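A sketch of the locking pattern described above (model names as in Zulip; the flag constant and helper are illustrative):

```
from django.db import transaction

from zerver.models import Message, UserMessage

ACTIVE_PUSH_FLAG = 1 << 1  # illustrative bitmask, not the real value

def update_push_flag_safely(message_id: int, user_profile_id: int) -> None:
    with transaction.atomic():
        # Locking the Message row makes its (non-)existence consistent
        # for the whole transaction: a concurrent delete must wait.
        message = Message.objects.select_for_update().filter(id=message_id).first()
        if message is None:
            return  # message (and its UserMessage rows) are gone
        um = UserMessage.objects.filter(
            user_profile_id=user_profile_id, message_id=message_id
        ).first()
        if um is None:
            return
        um.flags = int(um.flags) & ~ACTIVE_PUSH_FLAG
        um.save(update_fields=["flags"])
```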
As of commit 38f6807af1, we accept only stream and user IDs for
the recipient information for scheduled messages, which means we
can simplify the type for `message_to` in `check_schedule_message`.
In commit 38f6807af1, we updated the `POST /scheduled_messages`
endpoint to only accept user IDs for direct messages. The endpoint
already only accepted a stream ID for stream messages.
But the API documentation was not updated for the errors returned
when either a stream or user with the specified ID does not exist.
Updates the API documentation for the correct error responses.
Realm exports may OOM on deployments with low memory; to ensure
forward progress, log the start time in the RealmAuditLog entry, and
key off of the existence of that to prevent re-attempting an export
which was already tried once.
We previously hard-coded 6 threads for the realm export; in low-memory
environments, spawning 6 threads for an export can lead to an OOM,
which kills the process and leaves a partial export on disk -- which
is then tried again, since the export was never completed. This leads
to excessive disk consumption and brief repeated outages of all other
workers, until the failing export job is manually de-queued somehow.
Lower the export to only use one thread if it is already running in a
multi-threaded environment. Note that this does not guarantee forward
progress; it merely makes it more likely that exports will succeed in
low-memory deployments.
This makes it less likely we will accidentally fail to include a class
if the subclassing of QueueProcessingWorker changes, and lets mypy
more accurately understand the typing.
We now allow users to change email address visibility setting
on the "Terms of service" page during first login. This page is
not shown for users creating account using normal registration
process, but is useful for imported users and users created
through API, LDAP, SCIM and management commands.
We now set tos_version to "-1" for imported users and the ones
created using API or using other methods like LDAP, SCIM and
management commands. This value will help us to allow users to
change email address visibility setting during first login.
For cases of `name=value` in descriptions in the API documentation,
update to either use JSON style of `"name": value` when correct or
revise the descriptive text.
Creates a custom linter rule for `zerver/openapi/zulip.yaml` to
only allow lowercase versions of "true", "false" and "null".
Updates existing documentation for new rules.
With the private messages -> direct messages migration, we should
rename the "Starting a new private thread" help center article.
- Renames article to "Starting a new direct message"
- Updates relevant section in /help/getting-started-with-zulip
- Fixes typo in /help/send-group-dm
- Updates file names and adds URL redirect.
Fixes #25506.
Backfill subscription realm audit log SUBSCRIPTION_CREATED events for
users which are currently subscribed but don't have any subscription
events, presumably due to some historical bug. This is important
because those rows are necessary when reactivating a user who is
currently soft-deactivated.
For each stream, we find the subscribed users who have no
subscription-related realm audit log entries, and create a
`backfill=True` subscription audit log entry which is the latest it
could have been, based on UserMessage rows. We then optionally insert
a `DEACTIVATION` if the current subscription is not active.
Earlier, when a user who is not allowed to add subscribers to a
stream (because of the realm-level setting "Who can add users to
streams") subscribed other users while creating a new stream, the
new stream was created but no one was subscribed to it.
To fix this issue, this commit makes changes in the API used
for adding subscriptions. Now the stream is created only when the
user has permission to add other users.
With a rewrite of the test by Tim Abbott.
The immediate application of this will be for SAML SP-initiated logout,
where information about which IdP was used for authenticating the
session needs to be accessed. Aside of that, this seems like generally
valuable session information to keep that other features may benefit
from in the future.
This is nicer than .pop()ing specified keys - e.g. we no longer will
have to update this chunk of code whenever adding a new key to
ExternalAuthDataDict.
Adds the `failed` boolean from the ScheduledMessage to the API dict
returned by scheduled message events and register response, and by
fetching the user's scheduled messages.
`failed` will only be true when the server has tried to send the
scheduled message and failed due to an error.
In the case of a user editing a scheduled message that the server
had failed to send at the scheduled time due to an error, we want
to update the `failed` and `failure_message` fields as the intent
is for the server to retry to send the scheduled message based on
the updated information provided by the user.
In the case that there is an error when sending a scheduled message,
we now send a message from the notification bot to the user who
scheduled the message about the failure/error.
The notification message is not sent if the error when sending the
scheduled message was due to the realm or sender being deactivated.
This commit adds a new test to check how the visibility policy updates
when moving messages to a topic that didn't exist previously.
This test also helps us add coverage for the code which just
skips setting visibility_policy if there is no need to update the
value because both previous and new value of visibility policy
is INHERIT. The "actions/message_edit.py" file has 100% coverage
now and thus is removed from "not_yet_fully_covered" list.
The code for updating visibility policy values on moving messages
had two bugs.
- There was a typo in the elif condition where "user_profile" was being
used instead of "user_profile_with_policy".
This commit fixes the typo.
- It was assumed that there would be no UserTopic rows for the target
  topic if the target topic didn't exist. But there can be a case
  where some messages were sent to that topic, the user muted
  the topic, and then the messages in that topic were deleted. In
  such a case there can be UserTopic rows for a stream-topic pair
  that does not exist.
This commit fixes the code to handle such cases as well and set
the visibility policy of new topic to what was set for the original
topic. This change simplifies the condition to just check whether
new_visibility_policy is equal to target_topic_visibility_policy
and skip if so, and update the visibility policy otherwise.
Due to this change, we now do not try to mute the already-muted
topic if the topic is moved to a topic which didn't exist
previously, and thus we modify the existing test to not expect
any INFO logs.
This commit adds tests to cover the case of message editing
not allowed due to allow_message_editing set to False and
the case when there is no limit set when moving all messages
in a topic.
The "actions/message_edit.py" file does not have 100% coverage
still and it will be addressed in the next commit.
We do not pass "email_address_visibility" to do_create_realm
anymore. It was passed before to set the setting for realms in
the development database, but that has changed since we made
email_address_visibility a user-level setting instead of a
realm-level setting; it is now set on the RealmUserDefault
table.
Adds test coverage for the error sent for editing a scheduled
message that was successfully sent.
`zerver/actions/scheduled_messages.py` now has 100% test coverage
again.
We were missing a few checks for raw_unread_msgs being present before
trying to parse and update it.
The test only covers 2/3 of the cases, but I wasn't convinced it was
worth adding another test just for the corner case of removing a
message flag; this seems fairly unlikely to regress.
We now allow users to invite without specifying any stream to join.
In such cases, the user would join the default streams, if any, during
the process of account creation after accepting the invite.
It is also fine if there are no default streams and the user isn't
subscribed to any stream initially.
We do not add user to the default streams if the streams list passed
while sending the invite (both email and multi-use) was empty since
invite explicitly selected to not subscribe the user to default
streams.
Previously, it seemed possible for the scheduled messages API to try
to send infinite copies of a message if we had the very poor luck of a
persistent failure happening after a message was sent.
The failure_message field supports being able to display what happened
in the scheduled messages modal, though that's not exposed to the API
yet.
The previous logic would attempt to send a large number of unrelated
messages in a single transaction, which is just asking for trouble in
the event that one of the attempts fails.
For scheduled stream messages, we already limited the `to`
parameter to be the stream ID, but here we return a JsonableError
in the case of a ValueError when the passed value is not an integer.
For scheduled direct messages, we limit the list for the `to`
parameter to be user IDs. Previously, we accepted emails like
we do when sending messages.
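A sketch of the two parsers (error message strings are assumptions):

```
from zerver.lib.exceptions import JsonableError

def extract_stream_id(req_to: str) -> int:
    # Scheduled stream messages: `to` must be a single stream ID.
    try:
        return int(req_to)
    except ValueError:
        raise JsonableError("Invalid data type for stream ID")

def extract_direct_message_recipient_ids(req_to: list) -> list[int]:
    # Scheduled direct messages: `to` must be a list of user IDs;
    # emails are no longer accepted here.
    if not all(isinstance(user_id, int) for user_id in req_to):
        raise JsonableError("Recipient list may only contain user IDs")
    return sorted(set(req_to))
```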