This commit adds a create_web_public_stream_policy
field to the Realm table, which controls which roles
can create web-public streams; by default its
value is set to POLICY_OWNERS_ONLY.
This commit enforces the invite_only argument to be passed as a
keyword argument in _test_user_settings_for_creating_streams. This
will help improve readability, especially when we add the
is_web_public argument in further commits.
This commit adds tests for POLICY_EVERYONE and POLICY_NOBODY
in the check_has_permission_policies test. The original code
used these values, but they were not covered by the tests.
We send three events when changing the delivery email of a user - one
for updating the delivery_email field of the user, one for the avatar
URL change, and one for changing the email field if
email_address_visibility is set to 'EMAIL_ADDRESS_VISIBILITY_EVERYONE'.
There is already a test for the delivery_email and avatar_url events with
the visibility setting set to 'EMAIL_ADDRESS_VISIBILITY_ADMINS_ONLY',
but no test verifying the email update event sent when the email
address is public, so this commit adds a test checking the schema
of the event for updating the email field.
When email_address_visibility is changed and either the old value
or the updated value is EMAIL_ADDRESS_VISIBILITY_EVERYONE, then the
email field of all users is updated and we also send the corresponding
event to clients. But the apply_event code did not update the data on
receiving the event, so this commit fixes the code to correctly
handle the event in apply_event.
(We also use this event when just changing a user's email address).
This commit also adds the tests and openapi schema for the event.
We use the lists defined in models.py, like Realm.COMMON_POLICY_TYPES,
Realm.COMMON_MESSAGE_POLICY_TYPES, etc., in do_set_realm_property_test
instead of defining the list there (e.g. [4, 3, 2, 1]). We do the
same thing in do_set_realm_property_test in test_realm.py.
We skip email_address_visibility values in this commit because it
requires some changes in the openapi schema as well.
Since the calls to the translation function `_()` are made outside
of the `send_message_moved_breadcrumbs` function, these strings are
translated outside of the `with override_language` block, leading to
translated strings even when we don't intend them to be translated.
We now use gettext_lazy with appropriate testing to avoid this.
Zulip attempts to validate that the regular expressions that admins
enter for linkifiers are well-formatted, and only contain a specific
subset of regex grammar. The process of checking these
properties (via a regex!) can cause denial-of-service via
backtracking.
Furthermore, this validation itself does not prevent the creation of
linkifiers which themselves cause denial-of-service when they are
executed. As the validator accepts literally anything inside of a
`(?P<word>...)` block, any quadratic backtracking expression can be
hidden therein.
Switch user-provided linkifier patterns to be matched in the Markdown
processor by the `re2` library, which is guaranteed constant-time.
This somewhat limits the possible features of the regular
expression (notably, look-ahead and -behind, and back-references);
however, these features had never been advertised as working in the
context of linkifiers.
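As a rough illustration (not code from this commit), here is how a
simple linkifier-style pattern can be compiled with the re2 bindings
instead of the stdlib re module; the pattern itself is a made-up example:
```python3
import re2

# re2 rejects backtracking-prone constructs at compile time and matches
# in linear time; named groups like (?P<id>...) are still supported.
pattern = re2.compile(r"#(?P<id>[0-9]+)")
match = pattern.search("see #123")
assert match is not None
assert match.group("id") == "123"
```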
A migration removes any existing linkifiers which would not function
under re2, after printing them for posterity during the upgrade; they
are unlikely to be common, and are impossible to fix automatically.
The denial-of-service in the linkifier validator was discovered by
@erik-krogh and @yoff, as GHSL-2021-118.
This removes a false-positive ReDoS, since the input is always
checked-in code. It also incidentally refactors to make the regexes
be more explicit about the values they expect, and removes unnecessary
capturing groups.
It removes an optional parenthesized status code for fixtures,
unnecessary since 981e4f8946, as well as
optional key-value language options, unnecessary since
a2be9a0e2d.
Thank you to @erik-krogh and @yoff for bringing this to our attention.
This fixes the issue where 'None' would appear in the rendered
html in case of a missing tab display_name. Now,
'test-help-documentation' will fail in case of any tab display_name
being missing.
In case of a tab_section with no tabs, currently a single tab with
the name 'null_tab' gets added. Added the display name 'None' for
'null_tab', to keep in line with the existing behaviour.
Fixes #19822
This makes our onboarding guide for education organizations much
simpler, since new organizations will start with these settings
correctly configured.
Fixes #19682
Users wanted a feature where they could specify
which users can create public streams and which users can
create private streams.
This splits stream creation code into two parts,
public and private stream creation.
Fixes #17009.
This commit replaces the 'allow_message_deleting' boolean setting
with an integer setting 'delete_own_message_policy'. We now have a
separate dropdown for deciding which user roles can delete
messages sent by themselves, and the time-limit setting dropdown
is separate.
This new setting has two options - everyone and admins only. Other
options including moderators will be added further.
We also remove the "Never" option from the original time-limit
dropdown, as admins are always allowed to delete messages. This
"Never" option represented the case of only admins being allowed to
delete, but this state is now represented by setting the dropdown
to "admins only"; we also disable the time-limit dropdown in
this case, as admins are allowed to delete irrespective of the limit.
Note, this setting is only for deleting messages sent by the
deleting user themselves; only admins are allowed to delete
messages sent by others, as before.
We make zero an invalid value for message_content_delete_limit_seconds.
To handle the case of "allow deleting messages at any time", the API-level
value of message_content_delete_limit_seconds is "anytime" and the
DB-level value is "None". We also use these values for the message
retention setting, so it helps maintain consistency.
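A hedged sketch of the value mapping described above (the helper name
is illustrative, not the actual code):
```python3
from typing import Optional

def parse_message_content_delete_limit(value: str) -> Optional[int]:
    if value == "anytime":
        # Stored as None/NULL in the DB: messages may be deleted at any time.
        return None
    seconds = int(value)
    if seconds <= 0:
        # Zero (and negative values) are now invalid.
        raise ValueError("Bad value for message_content_delete_limit_seconds")
    return seconds
```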
This commit does not remove the 'enable_login_emails' field from the
RealmUserDefault table, but it is just not used and cannot be changed
from the UI or API, similar to the 'enable_marketing_emails' setting.
This is a somewhat subtle function, that deserves a few comments
explaining subtle details of its logic, and there's no good reason to
have multiple copies of that logic that are slightly inconsistent.
Because the main changes here are just checking for invariant
failures, the behavioral change here should be limited to ensuring
deactivated streams are not considered available even if they were
tagged as web public streams before deactivation.
This fixes a problem where we could not import zerver.lib.streams from
zerver.lib.message, which would otherwise be reasonable, because the
former implicitly imported many modules due to this issue.
Requests to the root subdomain weren't getting request_notes.realm set
even if a realm exists on the root subdomain - which is actually a
common scenario, because simply having one organization on the root
subdomain is the simplest and most common setup for self-hosted deployments.
This reverts commit cd93d0967f.
This check_or is redundant with check_union; it gives a misleading
error message for the non-matching case; and it has no type safety.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
In maybe_send_resolve_topic_notifications, since the calls to the
translation function `_()` are made outside of the `override_language`
block, the strings are not translated correctly.
This commit refactors the function to make sure that the translation
happens in the right block of code.
Fixes #19730.
Apparently, our Slack-compatible outgoing webhook format didn't
exactly match Slack's, especially in the types used for values. Fix
this by using a much more consistent format, where we preserve their
pattern of prefixing IDs with letters.
This fixes a bug where Zulip's team_id could be the empty string,
which tripped up using GitLab's slash commands with Zulip.
Fixes #19588.
This commit removes the existing default_twenty_four_hour_time field in the
Realm table, which was used to set the twenty_four_hour_time setting of
new users on joining; we now use the twenty_four_hour_time field of the
RealmUserDefault table for the same purpose.
With some tweaks by tabbott to clarify the documentation.
These values are currently either a string already or a List[int]. We
should do the conversion in
do_update_user_custom_profile_data_if_changed properly: if the value is
already a string, it can be used directly - if it's not, orjson.dumps is
a more future-proof way of converting than str(). Using orjson.dumps
here also allows us to change the converter of the USER type
CustomProfileField to orjson.loads, which is nicer to have than
ast.literal_eval.
While orjson.dumps() and str() give the same output when
given the special case of List[int],
ast.literal_eval was previously used due to orjson.loads not being
a good inverse function to str in general. That gets straightened out
now.
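A minimal sketch of the conversion described above (the function name is
illustrative, not the exact diff):
```python3
from typing import List, Union

import orjson

def to_field_value_string(value: Union[str, List[int]]) -> str:
    if isinstance(value, str):
        # Already a string; use it directly.
        return value
    # e.g. a USER field's List[int]; orjson.loads() is then a clean inverse,
    # unlike ast.literal_eval on str(value).
    return orjson.dumps(value).decode()
```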
None of the existing custom profile field types have the value as an
integer like declared in many places - nor is it a string like currently
declared in types.py. The correct type is Union[str, List[int]]. Rather
than tracking this in so many places throughout the codebase, we add a
new ProfileDataElementValue type and insert it where appropriate.
The old assignment is incorrect - field_value.value is a TextField() and
should always be a string. This didn't strictly break anything, because
Django converts the value to a string when .save()ing to the db, but
field_value.value persists as a non-string for the rest of this
codepath. After fixing this, the small code block in
notify_user_update_custom_profile_data handling conversion of
field_value.value to a string becomes redundant.
We're assured that we're not breaking event format by the test
test_custom_profile_field_data_events in test_events.py.
Send update event to client after a stream is made web public.
This has been documented in the API documentation since feature level
73; previously the value was always false.
We allow clients to make existing streams web public via the API.
This feature is still disabled via settings in production
environments, because we may have additional policy rules or UI
warnings we wish to add to this sort of conversion.
Users can now create web-public streams via the /subscribe API.
When a web-public stream present in the API request does not
exist, it will now be created with the is_web_public
parameter applied. The parameter would have been ignored without
this commit.
The new error message is clearer about why; "User cannot create
stream with this settings." was bad English, and in any case removing
an unnecessary string is always an improvement for translators.
This new setting both serves as a guard to allow us to merge API
support for web public streams to main before we're ready for this
feature to be available on Zulip Cloud, and also long term will
protect self-hosted servers from accidentally enabling web-public
streams (which could be a scary possibility for the administrators of
a corporate Zulip server).
Recently, we discovered that our settings_tab/relative Markdown
directives didn't work when they were in a macro that was included
in another Markdown file. Note that without this commit, the
/help/create-your-organization-profile page is broken. This commit
changes the respective priorities of these two extensions such that
these directives are rendered *after* the macro is included in
another file.
Thanks to Alya Abbott for reporting this bug!
All of our custom Markdown extensions have priorities that govern
the order in which the preprocessors will be run. It is more
convenient to have these all in one file so that you can easily
discern the order at first glance.
Thanks to Alya Abbott for reporting the bug that led to this
refactoring!
This is a follow-up to #19388.
We will in the future allow patch requests to change the visibility
of an existing topic, so `last_updated` is a better name for this field.
This commit does not affect the API or events in any way, but only the
database.
Fixes #17456.
The main tricky part has to do with what values the attribute should
have. LDAP defines a Boolean as
Boolean = "TRUE" / "FALSE"
so ideally we'd always see exactly those values. However,
although the issue is now marked as resolved, the discussion in
https://pagure.io/freeipa/issue/1259 shows how this may not always be
respected - meaning it makes sense for us to be more liberal in
interpreting these values.
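A minimal sketch, assuming we simply normalize case, of interpreting such
an attribute liberally (the helper name is illustrative):
```python3
from typing import Union

def parse_ldap_boolean(value: Union[bytes, str]) -> bool:
    # Accept "TRUE"/"FALSE" in any case (and as bytes), rather than only
    # the exact uppercase values from the LDAP spec.
    if isinstance(value, bytes):
        value = value.decode()
    return value.strip().lower() == "true"
```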
The test now uses submit_reg_form_for_user, meaning a blank
full_name is posted to /accounts/register/ rather than the
parameter being excluded.
Fixes part of #7564
I had to pass stop_after_reg_form=True, as the call to get_user in
verify_signup fails. I am not sure whether this is the expected
behavior. Also this causes the test to use submit_reg_form_for_user,
meaning a blank password is posted to /accounts/register/ rather than
no password.
Fixes part of #7564
get_user_by_delivery_email should be used, given that the email variable
is the realm email address that the account is being created with, not
the .email field which can be a dummy address based on org settings.
Previously, we redirected to /new when the user clicked on "buy
standard" from the root domain; we now redirect to the /upgrade
page instead. The /upgrade page asks the user to enter the subdomain
of their organization and then redirects them to the /upgrade
page of their organization.
This better matches the title of the page and more generally our
conventions around naming /help/ articles. We include a redirect
because this is referenced from Welcome Bot messages, and we
definitely don't want those links to break.
This parallels fe25517295, but for mobile notifications. It also
adds a test, which verifies that such content does not crash either
mobile or email notifications.
fe25517295 adjusted the email_notifications codepath to use
`lxml.html.fragment_fromstring` method when parsing
`rendered_content`, but left the tests using a helper which called
`fromstring`.
Switching the tests to match the code as run reveals a bug -- using
`drop_tree` on all `message_inline_image` classes now _does_ remove
all of a top-level image-URL-only message. Previously, such messages
were "safe" from the block that calls `drop_tree` only by dint of
`drop_tree` being a silent no-op for the root element. When parsed
using `fragment_fromstring`, they are no longer the root, and as such
an empty message results.
Reorder relative_to_full_url to check for only one `message_inline_image`
within the top `<div>`, and only run the `drop_tree` path in the
alternate case. Tests must be adjusted for their output now including
one more layer of `<div>`.
The previous commit introduced logging of attempts for username+password
backends. For completeness, we should log, in the same format,
successful attempts via social auth backends.
Our convention is to always have authenticate() called with a request
object. We need to be consistent with that in tests too, to avoid test
failures resulting from breaking that assumption.
We modify assert_login_failure to call client.login() in the same way as
the other similar helpers - with a properly initialized HttpRequest
instance.
Now, when we add a custom animated emoji to the realm,
we also save a still image of it (the 1st frame of the GIF), so
we can avoid showing an animated emoji every time.
create_confirmation_link has validity time as an optional argument,
because it has reasonable defaults. Thus it's a better API for
do_send_confirmation_email to make this optional as well, allowing
relying on create_confirmation_link's defaults.
This extends the invite api endpoints to handle an extra
argument, expiration duration, which states the number of
days before the invitation link expires.
For prereg users, expiration info is attached to the event
object to pass it to the invite queue processor in order to
create and send the confirmation link.
In case of multiuse invites, confirmation links are
created directly inside do_create_multiuse_invite_link().
For filtering valid user invites, the expiration info stored in the
Confirmation object is used, which is accessed by a prereg
user using reverse generic relations.
Fixes #16359.
The API for changing the batching period was added in
5db4fe8652.
This is a follow up to that commit. We also update the timestamps for
existing scheduled email notifications entries so that the effect of
changing the setting is immediate.
Part of #15280
SOCIAL_AUTH_SUBDOMAIN was potentially very confusing when opened by a
user, as it had various Login/Signup buttons as if there was a realm on
it. Instead, we want to display a more informative page to the user
telling them they shouldn't even be there. If possible, we just redirect
them to the realm they most likely came from.
To make this possible, we have to exclude the subdomain from
ROOT_SUBDOMAIN_ALIASES - so that we can give it special behavior.
These hostnames only have MX records for Mailgun and Front, and will
not work as a Zulip organization.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit moves check_settings_values to user_settings.py
from validator.py such that we can import the functions at
the top without any issue of cyclic imports.
We do not allow mentioning system user groups for now
because this can lead to circumventing the wildcard
mention restrictions. It will be enabled once we add
a setting to control that.
This is implemented by just ignoring it as one of the
mentioned user groups even if the message content
includes the mention syntax for it; the message
is sent normally.
We still keep the for_mention parameter for accessing
user group while sending email and push notifications
as mentioning system user groups will be allowed in
future.
This commit also removes the test for email notifications
for system user groups as we are not allowing mentioning
them.
This commit is only for backend change as we already
exclude the system groups from mention typeaheads and
other UI.
The name of the new realm created as a tombstone after renaming
a realm's subdomain is the constant 'placeholder-realm'.
This would confuse the user when shown the deactivation notice
asking them to join the realm at a new subdomain.
This PR replaces it with the original realm name to avoid confusion.
Fixes: #19677
This commit modifies the copy_user_settings code such that instead
of a source user profile, we can have two types of sources - a user
profile or the RealmUserDefault table of the realm - and we set the
settings from RealmUserDefault only if there is no user profile
as a source.
We also rename copy_user_settings to copy_default_settings for
clarity.
This commit adds do_set_realm_user_default_setting which
will be used to change the realm-level defaults of settings
for new users.
We also add a new event type "realm_user_settings_defaults"
for these settings and a "realm_user_settings_default" object
in '/register' response containing all the realm-level default
settings.
Because we create all realms with do_create_realm (including in the
test suite), we just need to change that function, add a migration for
existing realms, and ensure the data import code path correctly
creates these objects.
Note that the import code path will create a RealmUserDefault row with
default values if it is not present in the import data, which is
important for importing data from other tools like Slack.
This commit changes the type of the enable_marketing_emails parameter
of create_user to Optional[bool].
The value of this parameter will be None in certain cases when the user
registers through SSO with 'TERMS_OF_SERVICE=False', where there is
no registration form and thus no value of enable_marketing_emails.
We set the enable_marketing_emails setting after copying user
settings, to override the value selected in the registration form.
This change is also necessary because the enable_marketing_emails
field is present in RealmUserDefault to avoid copying code,
but we do not actually use this value; instead we want
the setting to be set according to the value in the registration
form.
We set this setting only for non-bot users, since we generally
do not set any settings for bots.
We extract the checks for default_language, notification_sound,
and email_notifications_batching_period_seconds setting values
in json_change_settings to a new function check_settings_values.
This prevented migration 0345
(517c2ed39d / #19696) from applying on
systems that were created after the refactoring that resulted in the
system bot realm potentially having null as its name.
(We've already confirmed that normal realms, created via
`do_create_realm`, shouldn't be able to have this unusual state).
This check was copied from upstream python-markdown's "safe mode"
before they removed that feature. The upstream history is that they
introduced this check in
2db5d1c8e4,
which was not a complete security check, and then added the
immediately following check (with an allowlist of schemes) in
0b4ffbb60e.
Their first, incomplete check provides no security benefit and makes
the code hard to reason about, so we remove it.
The 'update_global_notifications' type event is sent only for
existing settings and will not be sent for new settings, so we
should use notification_settings_legacy dict to check the type
of setting value in check_update_global_notifications instead
of notification_settings_types dict.
We still used notification_setting_types in the copy_user_settings
function of create_user.py and in a test in test_event_system.py.
It is not required to do so, since we have added all settings to
property_types already and we loop over property_types in both
these places, which includes all settings.
This was likely initially created with null=True in
5c5ffd6ea3 just because we didn't have a
plan for backfilling this field, but I verified that Zulip Cloud has
no realms without a name set, and that's the place most likely to have
any form of super-legacy nameless realms.
So we can clean up this aspect of the data model without a special
migration to do something with existing realms with name=None (which I
suspect would have resulted in a 500 anyway).
We already test all the notification settings in
test_toggling_boolean_display_settings (which is
now renamed to test_toggling_boolean_user_settings),
as all settings are now moved to property_types and
we are merging other parts also to consider all the
settings under one category.
This commit adds a separate test for invalid values of user settings
and removes the existing code for it in test_change_user_setting.
This change will enable us to merge the tests for notification
settings into this, because email_notifications_batching_period_seconds
has different invalid values than other integer settings, and we do the
same for realm settings also.
This commit adds `demo_organization_scheduled_deletion_date` to
the `realm` section of the `/register` response so that it is
available to clients when enabled.
This is a part of #19523.
This fixes a regression where one could end up deactivating all owners
of a realm when trying to synchronize LDAP with the `is_realm_admin`
flag configured in `AUTH_LDAP_USER_FLAGS_BY_GROUP`.
With tweaks by tabbott to add is_moderator as well.
Fixes #18677.
Since 84742a0, all settings are sent in the `user_settings` dictionary;
previously they were sent inline with other fields in the /register
response.
In order to simplify the process of adding new personal settings, we
want to transition to a world where new settings only need to consider
the `property_types` object, and code that needs to reference the
legacy behavior interacts with an object with `legacy` in its name.
This way, contributors working on new settings don't need to think
about the legacy code paths at all.
See https://chat.zulip.org/#narrow/stream/378-api-design/topic/user.20settings.20response.20in.20.2Fregister
to understand this better.
The commit:
1. Adds the new field as nullable.
2. Adds code that'll create new Confirmation with the field set
correctly.
3. For verifying the validity of a Confirmation object, this still uses the old
logic in get_object_from_key() to keep things functioning until we
backfill the old objects in the next step.
Thus this commit is deployable. Next we'll have a commit to run a
backfill migration.
An integer or no argument is supposed to be passed.
These weren't caught by mypy because booleans are integers in python,
see https://github.com/python/mypy/issues/1757
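A quick illustration of the gap (the helper here is hypothetical, not
code from this commit):
```python3
def shorten(text: str, maxlen: int = 80) -> str:
    return text[:maxlen]

# mypy accepts this call, because bool is a subclass of int; at runtime
# it truncates to a single character instead of failing.
shorten("hello world", True)
```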
This will be used to check if the narrow being requested by
spectator requires authentication without requesting the server.
Having this check locally makes this process look snappy to
the user and doesn't result in 404s in the browser log.
aioapns already has a retry loop. By default it retries forever on
ConnectionError and ConnectionClosed, so our own retry loop would
never be reached. Remove our retry loop, and configure aioapns to
retry APNS_MAX_RETRIES times on ConnectionError like the previous
version did. It still retries forever on ConnectionClosed; that’s not
configurable but probably fine.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This utilizes the generic `BaseNotes` we added for multipurpose
patching. With this migration as an example, we can further support
more types of notes to replace the monkey-patching approach we have used
throughout the codebase for type safety.
The motive of adding `BaseNotes` was to support monkey-patching
temporary attributes to objects (such as `.trigger` on `Message`) when
working on the django-stubs migration in #18777.
This way, we no longer have to manually keep the upload path code in
sync with the upload path code in zerver/lib/upload.py.
This was originally suggested in
https://github.com/zulip/zulip/pull/19478#issuecomment-911479530.
This change fixes a bug when importing into a server using the local
file uploads backend, where the `import_realm.py` copy wasn't using
our standard 256-directory approach to avoid putting too many files in
a single directory.
de04f0ad67 changed how notification recipients were calculated, in
a manner that caused them to be sent when they should not have been.
ac70a2d2e1 was supposed to resolve this, but appears to have been
insufficient, as all three of these cases have been observed to still
happen.
Add safety checks immediately before notification, until the
underlying logic error can be sussed out.
This information can be gleaned from the stacktrace, but making it
explicit in the stringification makes it much easier to differentiate
types of errors at a glance, particularly in Sentry.
We move the emojiset_choices method from the UserProfile class to the
UserBaseSettings class because the emojiset setting exists in the
UserBaseSettings class and this would be used for realm-level
settings as well, along with the existing user-level settings.
We rename the user group in the example for 'GET /user_groups'
with is_system_group=True, to be 'Moderators' as is_system_group
will be set to True for role-based user groups only.
The default is kept as no retries. Since retries with exponential
backoff are a good thing to make easy, the int form defaults to
setting a backoff_factor.
Unfortunately, urllib3 retry backoff does not implement jitter.
Switching this to use the `backoff` library[1] rather than urllib3's
native Retry is left as future extension.
[1] https://pypi.org/project/backoff/
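For illustration, a hedged sketch (not the helper added here) of what an
integer retry count with a backoff_factor looks like when mounted on a
requests session:
```python3
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# backoff_factor makes urllib3 sleep with exponential backoff between retries.
retry = Retry(total=3, backoff_factor=0.5)
session.mount("https://", HTTPAdapter(max_retries=retry))
session.mount("http://", HTTPAdapter(max_retries=retry))
```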
This adds the X-Smokescreen-Role header to proxy connections, to track
usage from various codepaths, and enforces a timeout. Timeouts were
kept consistent with their previous values, or set to 5s if they had
none previously.
This commit removes some unnecessary checks for `self.md.zulip_message`,
which were put there historically, as earlier we used to add the additional
properties like mentions_user_ids, alert_words, etc. to the Message dict
only. These were later moved to the MessageRenderingResult class in commit
75cea329b, but the checks weren't removed.
This is important because while rendering the messages imported from
other chat tools (like Rocket.Chat), the Message dict is not passed to
the markdown processor, due to which the checks for `self.md.zulip_message`
fail and hence things like user mentions and stream/topic mentions are
not rendered properly in the imported messages.
The `user_activity_interval` worker calls:
```python3
last = UserActivityInterval.objects.filter(user_profile=user_profile).order_by("-end")[0]
```
Which results in a query like:
```sql
SELECT "zerver_useractivityinterval"."id", "zerver_useractivityinterval"."user_profile_id", "zerver_useractivityinterval"."start", "zerver_useractivityinterval"."end" FROM "zerver_useractivityinterval" WHERE "zerver_useractivityinterval"."user_profile_id" = 12345 ORDER BY "zerver_useractivityinterval"."end" DESC LIMIT 1
```
For users which have at least one matching row, this results in a
query plan like:
```
Limit (cost=0.56..711.38 rows=1 width=24) (actual time=0.078..0.078 rows=1 loops=1)
-> Index Scan Backward using zerver_useractivityinterval_7f021a14 on zerver_useractivityinterval (cost=0.56..1031399.46 rows=1451 width=24) (actual time=0.077..0.078 rows=1 loops=1)
Filter: (user_profile_id = 12345)
Rows Removed by Filter: 98
Planning Time: 0.059 ms
Execution Time: 0.088 ms
```
But for users that have just been created, with no matching rows, this
is considerably more expensive:
```
Limit (cost=0.56..711.38 rows=1 width=24) (actual time=10798.146..10798.146 rows=0 loops=1)
-> Index Scan Backward using zerver_useractivityinterval_7f021a14 on zerver_useractivityinterval (cost=0.56..1031399.46 rows=1451 width=24) (actual time=10798.145..10798.145 rows=0 loops=1)
Filter: (user_profile_id = 12345)
Rows Removed by Filter: (count of every single row in the table, redacted)
Planning Time: 0.053 ms
Execution Time: 10798.158 ms
```
Regular vacuuming can force the use of the index on `user_profile_id`
as long as there are few enough users, which is fast -- however, at
some point, the query planner decides that it is insufficiently specific,
and always chooses the effectively-whole-table scan.
Add an index on `(user_profile_id, end)`, which is expected to be
sufficiently specific that it is used even with large numbers of user
profiles.
Ref #19250.
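A hedged sketch of what the accompanying migration might look like (the
migration dependency and index name here are illustrative, not the
actual ones in this commit):
```python3
from django.db import migrations, models

class Migration(migrations.Migration):
    dependencies = [("zerver", "XXXX_previous_migration")]

    operations = [
        migrations.AddIndex(
            model_name="useractivityinterval",
            index=models.Index(
                fields=["user_profile", "end"],
                name="zerver_uai_user_profile_end",
            ),
        ),
    ]
```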
The transforms called from `build_message_payload` use
`lxml.html.fromstring` to parse (and stringify, and re-parse) the HTML
generated by Markdown. However, this function fails if it is passed
an empty document. "empty" is broader than just the empty string; it
also includes any document made entirely out of control characters,
spaces, unpaired surrogates, U+FFFE, or U+FFFF, and so forth. These
documents would fail to parse, and raise a ParserError.
Using `lxml.html.fragment_fromstring` handles these cases, but does so by
wrapping the contents in a <div> every time it is called. As such,
replacing each `fromstring` with `fragment_fromstring` would nest
another layer of `<div>`.
Instead of each of the helper functions re-parsing, modifying, and
stringifying the HTML, parse it once with `fragment_fromstring` and
pass around the parsed document to each helper, which modifies it
in-place. This adds one outer `<div>`, requiring minor changes to
tests and the prepend-sender functions.
The modification to add the sender is left using BeautifulSoup, as
that sort of transform is much less readable, and more fiddly, in raw
lxml.
Partial fix for #19559.
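A small illustration (not the helper itself) of the parsing difference
this change relies on:
```python3
import lxml.html
from lxml.etree import ParserError

try:
    lxml.html.fromstring("")  # "empty" documents raise
except ParserError:
    pass  # this is the crash being fixed

# fragment_fromstring with create_parent tolerates empty content, at the
# cost of wrapping everything in an extra <div>.
frag = lxml.html.fragment_fromstring("", create_parent=True)
print(lxml.html.tostring(frag))  # b'<div></div>'
```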
This is a roundabout way to appease a semgrep complaint about
‘error_msg = error_msg % (string_id,)’ while also improving the code.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
9ac55a8cf6 introduced support for
batch updates to stories. However, that commit didn't skip label
removals, as we already do in non-batch story payloads. This led
to an exception for batch story update payloads where labels were
removed but none were added.
maybe_send_batched_emails handles batches of emails from different
users at once; as it processes each user's batch, it enqueues messages
onto the `email_senders` queue. If `handle_missedmessage_emails`
raises an exception when processing a single user's email, no events
are marked as handled -- including those that were already handled and
enqueued onto `email_senders`. This results in an increasing number
of users being sent repeated emails about the same missed messages.
Catch and log any exceptions when handling an individual user's
events. This guarantees forward progress, and that notifications are
sent at-most-once, not at-least-once.
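A hedged sketch (names approximate, not the exact worker code) of the
per-user exception handling described above:
```python3
import logging
from typing import Any, Callable, Dict, List

logger = logging.getLogger(__name__)

def process_batch(
    events_by_recipient: Dict[int, List[Dict[str, Any]]],
    handle_missedmessage_emails: Callable[[int, List[Dict[str, Any]]], None],
) -> None:
    for user_profile_id, events in events_by_recipient.items():
        try:
            handle_missedmessage_emails(user_profile_id, events)
        except Exception:
            # Log and continue, so one user's failure cannot cause already
            # handled events to be re-sent to every other user in the batch.
            logger.exception(
                "Failed to handle missedmessage emails for user %s",
                user_profile_id,
            )
```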
This commit indicates that the realm_message_retention_days field can have
a special value, similar to its stream counterpart, and also explains how
the special value changed over different server versions.
With an extension from tabbott to double-enter the changelog entry.
Related discussion: https://chat.zulip.org/#narrow/stream/378-api-design/topic/realm_message_retention_days
The ability to use multiple ports has been removed a long time ago.
And the "optional" note in the help message is in fact incorrect
since `addrport` being `None` is not supported.
We do not allow any user to edit the system user groups (including
renaming, deleting, adding or removing members, etc.) from the
API. These user groups will change only by the code when a new
user is added or role of a user is changed.
This is implemented by rejecting access_user_group_by_id always,
except the case when it is used to get the user group for sending
email and push notifications, as we would need to send notifications
to the mentioned user group.
We make the description parameter in create_user_group keyword-only
to improve readability. We would also make the is_system_group
parameter, which will be added in the future, keyword-only.
Tuples cannot be deserialized from JSON.
While we do use these validators for other things, like event
dictionaries, we have migrated the API away from using those. The
last use was removed in 4f3d5f2d87
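A one-line illustration of the limitation:
```python3
import json

# JSON has no tuple type, so a tuple round-trips through JSON as a list.
assert json.loads(json.dumps((1, 2))) == [1, 2]
assert json.loads(json.dumps((1, 2))) != (1, 2)
```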
Signed-off-by: Anders Kaseorg <anders@zulip.com>
These changes are all independent of each other; I just didn’t feel
like making dozens of commits for them.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The auth attempt rate limit is quite low (on purpose), so this can be a
common scenario where a user asks their admin to reset the limit instead
of waiting. We should provide a tool for administrators to handle such
requests without fiddling around with code in manage.py shell.
Calling `email.save()` is only needed if we altered `email.address`;
it is unnecessary if we called `email.users.add(...)` which will have
done its own INSERT.
This fixes two bugs: the most obvious is that there is a race where a
ScheduledEmail object could be observed in the window between creation
and when users are added; this is a momentary instance when the object
has no users, but one that will resolve itself.
The more subtle is that .save() will, if no records were found to be
updated, _re-create_ the object as it exists in memory, using an
INSERT[1]. Thus, there is a race with `deliver_scheduled_emails`
between when the users are added, and when `email.save()` runs:
1. Web request creates ScheduledEmail object
2. Web request creates ScheduledEmailUsers object
3. deliver_scheduled_emails locks the former, preventing updates.
4. deliver_scheduled_emails deletes both objects, commits, releasing lock
5. Web request calls `email.save()`; UPDATE finds no rows, so it
re-creates the ScheduledEmail object.
6. Future deliver_scheduled_emails runs find a ScheduledEmail with no
attending ScheduledEmailUsers objects
Wrapping the logical creation of both of these in a single transaction
avoids both of these races.
[1] https://docs.djangoproject.com/en/3.2/ref/models/instances/#how-django-knows-to-update-vs-insert
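A hedged sketch (field handling approximate) of the single-transaction
creation described above:
```python3
from typing import Any, Dict, List

from django.db import transaction

from zerver.models import ScheduledEmail, UserProfile

def create_scheduled_email(
    email_fields: Dict[str, Any], user_profiles: List[UserProfile]
) -> ScheduledEmail:
    with transaction.atomic():
        email = ScheduledEmail.objects.create(**email_fields)
        # users.add() issues its own INSERTs; no trailing email.save() that
        # could race with deliver_scheduled_emails and re-create the row.
        email.users.add(*user_profiles)
    return email
```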
Only clear_scheduled_emails previously took a lock on the users before
removing them; make deliver_scheduled_emails do so as well, by using
prefetch_related to ensure that the table appears in the SELECT. This
is not necessary for correctness, since all accesses of
ScheduledEmailUser first access the ScheduledEmail and lock it; it is
merely for consistency.
Since SELECT ... FOR UPDATE takes an UPDATE lock on all tables
mentioned in the SELECT, merely doing the prefetch is sufficient to
lock both tables; no `on=(...)` is needed to `select_for_update`.
This also does not address the pre-existing potential deadlock from
these two use cases, where both try to lock the same ScheduledEmail
rows in opposite orders.
No codepath except tests passes in more than one user_profile -- and
doing so is what makes the deduplication necessary.
Simplify the API by making it only take one user_profile id.
This fixes a bug where email notifications were sent for wildcard
mentions even if the `enable_offline_email_notifications` setting was
turned off.
This was because the `notification_data` class incorrectly considered
`wildcard_mentions_notify` as an independent setting, instead of a wrapper
around `enable_offline_email_notifications` and `enable_offline_push_notifications`.
Also add a test for this case.
While the STREAM_LINK_REGEX and STREAM_TOPIC_LINK_REGEX
identifies the stream and topic mentions in the content
correctly (tested by printing out the matches), the
stream/topic mentions are still not linked to the
corresponding streams/topics for imported messages, as
a `zulip_message` instance is required for linking these
mentions to actual streams/topics (see `StreamPattern`
class in `markdown/__init__.py`) which is not provided
while processing the markdown for imported messages.
This commit updates both the stream-level and realm-level message
retention setting to use 'unlimited' instead of 'forever' to set
message retention setting to "retain messages forever".
We incorrectly include many realm settings in the data section of
'realm/update_dict' schema. It should only contain the settings
related to message edit, realm icon, realm logo and authentication
methods, and not other settings, because all the other settings send
'realm/update' events and not 'realm/update_dict' events.
This commit only removes 'message_retention_days' and others will
be removed separately.
Closes #19287
This endpoint allows submitting multiple addresses so we need to "weigh"
the rate limit more heavily the more emails are submitted. Clearly e.g.
a request triggering emails to 2 addresses should weigh twice as much as
a request doing that for just 1 address.
Previously, the output would make it look like we sent an actual email
to the first user in the dry_run output, which is very confusing.
The `dry_run` code path already prints all the accounts that would
have been emailed at the end, so there's no reason to have this line
before the dry_run check.
Additionally, we move this after the `get_connection` check because
failures at that stage shouldn't result in logging an attempt to send
an email.
This way we can stop reading as soon as we get to the body. Also,
send an Accept header, check that the request was actually successful,
use lxml.etree.iterparse instead of a broken hand-rolled state
machine, and support XHTML, all for negative 28 lines of code.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
While it should be an invariant that message.rendered_content is never
None for a row saved to the database, it is possible for that
invariant to be violated, likely including due to bugs in previous
versions of data import/export tools.
While it'd be ideal for such messages to be rendered to fix the
invariant, it doesn't make sense for this has_link migration to crash
because of such a corrupted row, so we apply a policy similar to the one
we already have for rendered_content="".
We rework the landing page for companies in the same way we've
recently revamped the landing pages for other use cases.
This implementation unfortunately duplicates a lot of content from
/plans; we should clean that up at some point.
This reverts commit 1965584eec.
This syntax has a bad interaction with table syntax and needs to be
rethought.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Slack bot emails generated by us can be duplicates for two bots.
If such a case occurs, append a counter to the email to make it
unique.
For maintaining the counter of duplicate emails and the final
email assigned to each bot, a class-based approach is used with
static variables and static (class) methods. This keeps all the
data related to Slack bot emails in the same place and easily
accessible from anywhere inside the module (without defining any
class object and passing it around).
Fixes: #16793
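A hedged sketch (not the exact implementation) of the class-based
counter described above:
```python3
from typing import Dict

class SlackBotEmail:
    duplicate_email_count: Dict[str, int] = {}
    assigned_email: Dict[str, str] = {}

    @classmethod
    def get_email(cls, bot_id: str, base_email: str) -> str:
        if bot_id in cls.assigned_email:
            return cls.assigned_email[bot_id]
        if base_email in cls.duplicate_email_count:
            # Duplicate: append a counter to keep the address unique.
            cls.duplicate_email_count[base_email] += 1
            count = cls.duplicate_email_count[base_email]
            local_part, domain = base_email.split("@")
            email = f"{local_part}-{count}@{domain}"
        else:
            cls.duplicate_email_count[base_email] = 1
            email = base_email
        cls.assigned_email[bot_id] = email
        return email
```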
These checks suffer from a couple notable problems:
- They are only enabled on staging hosts -- where they should never
be run. Since ef6d0ec5ca, these supervisor processes are only
run on one host, and never on the staging host.
- They run as the `nagios` user, which does not have appropriate
permissions, and thus the checks always fail. Specifically,
`nagios` does not have permissions to run `supervisorctl`, since
the socket is owned by the `zulip` user, and mode 0700; and the
`nagios` user does not have permission to access Zulip secrets to
run `./manage.py print_email_delivery_backlog`.
Rather than rewrite these checks to run on a cron as zulip, and check
those file contents as the nagios user, drop these checks -- they can
be rewritten at a later point, or replaced with Prometheus alerting,
and currently serve only to cause always-failing Nagios checks, which
normalizes alert failures.
Leave the files installed if they currently exist, rather than
cluttering puppet with `ensure => absent`; they do no harm if they are
left installed.
This is more efficient than get_lexer_by_name, since we don’t need to
instantiate the class just to get its name.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The BlockingChannel annotations in TornadoQueueClient were flat-out
wrong. BlockingChannel and Channel have no common base classes.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This is effectively a step closer to what was proposed in
https://github.com/zulip/zulip/pull/18678#discussion_r644490540 when
this code was written in #18678.
If the Customer object has neither of a Stripe id, nor any historical
plans, then there's no real billing association contained in the
existence of the Customer object, and it's safe to delete.
This fixes a regression in de04f0ad67.
We'll do a proper test in a follow-up commit; this is a quick fix to
make sure master works.
The emails will bounce, but it'll create all sorts of infrastructure
headaches.
We added "user_settings" object containing all the user settings in
previous commit. This commit modifies the code to send the existing
setting fields in the top-level object only if user_settings_object
client_capabilities field is False.
This commit adds "user_settings_object" field to
client_capabilities which will be used to determine
if the client needs 'update_display_settings' and
'update_global_notifications' event.
We send a event with type 'user_settings' on updating user's display
and notification settings.
The old event types - 'update_global_notifications' and
'update_display_settings', are still supported for backwards
compatibility.
We do not require separate tests for checking events when changing
"enable_drafts_synchronization" as we already do this in the display
settings test because this setting is included in property_types.
Return zulip_merge_base alongside zulip_version
in the `/register`, `/event` and `/server_settings`
endpoints, so that the value can be used by other
clients.
These were added at some point in the past, but were not complete, and
it makes sense to document the current feature level as and when they
become available, since clients should not use the drafts endpoints on
older feature levels.
The main reason why this is needed is because this seems to be
convention and because we can't easily test event creation without
doing this.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
This commit allows importing the following from Rocket.Chat:
* All users
* All public/private channels
* All teams and their public/private channels
* All discussion rooms as topics in their parent channel
* All the messages in all the channels
* All private conversations
* Reactions to messages (except for custom emojis)
* Mentions in messages (except @all, @here mentions)
Previously, we checked for the `enable_offline_email_notifications` and
`enable_offline_push_notifications` settings (which determine whether the
user will receive notifications for PMs and mentions) just before sending
notifications. This had a few problems:
1. We do not have access to all the user settings in the notification
handlers (`handle_missedmessage_emails` and `handle_push_notifications`),
and therefore, we cannot correctly determine whether the notification should
be sent. Checks like the following, which existed previously, would, for
example, incorrectly not send notifications even when stream email
notifications are enabled:
```
if not receives_offline_email_notifications(user_profile):
return
```
With this commit, we simply do not enqueue notifications if the "offline"
settings are disabled, which fixes that bug.
Additionally, this also fixes a bug with the "online push notifications"
feature: if someone were to:
* turn off notifications for PMs and mentions (`enable_offline_push_notifications`)
* turn on stream push notifications (`enable_stream_push_notifications`)
* turn on "online push" (`enable_online_push_notifications`)
then, they would still receive notifications for PMs when online.
This isn't how the "online push enabled" feature is supposed to work;
it should only act as a wrapper around the other notification settings.
The buggy code was this in `handle_push_notifications`:
```
if not (
receives_offline_push_notifications(user_profile)
or receives_online_push_notifications(user_profile)
):
return
// send notifications
```
This commit removes that code, and extends our `notification_data.py` logic
to cover this case, along with tests.
2. The name for these settings is slightly misleading. They essentially
talk about "what to send notifications for" (PMs and mentions), and not
"when to send notifications" (offline). This commit improves this condition
by restricting the use of this term only to the database field, and using
clearer names everywhere else. This distinction will be important to have
non-confusing code when we implement multiple options for notifications
in the future as a dropdown (never/when offline/when offline or online, etc).
3. We should ideally re-check all notification settings just before the
notifications are sent. This is especially important for email notifications,
which may be sent after a long time after the message was sent. We will
in the future add code to thoroughly re-check settings before sending
notifications in a clean manner, but temporarily not re-checking isn't
a terrible scenario either.
Part of #19272
We still keep referring to this model with "MutedTopic" to reduce the
diff size of this commit. The alias will be removed in the next commit.
This commit skips on renaming the `date_muted` field to something more
general. That will be done in further commits, along with the code and
API changes.
In this commit:
* We update the `UserStatus` model to accept
`AbstractReaction` as a base class, so we can get all the
fields needed to store a status emoji.
* We update the user status endpoint
(`users/me/status`) to accept status emoji fields.
* We update the user status event to add status emoji
fields.
Co-authored-by: Yash Rathore <33805964+YashRE42@users.noreply.github.com>
This commit adds can_add_custom_emoji
helper to check whether the user can
add custom emoji or not.
This function will be used further when
add_custom_emoji_policy will be extended
to include all COMMON_POLICY_VALUES.
This commit replaces boolean field add_emoji_by_admins_only with an
integer field add_custom_emoji_policy as we would also add full members
and moderators option for this setting in further commits.
This removes a bunch of non-functional duplicate JavaScript, HTML, and
CSS that was interfering with maintenance on the functional originals,
because it was never clear how to update the duplicates or how to
check that you’d updated the duplicates correctly.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit moves "enter_sends" setting to property_types dict.
With this change, changing enter_sends setting also sends an
event of type "update_display_settings" and thus enables us
to live-update the UI.
We incorrectly include many realm settings in the data section of
'realm/update_dict' schema. It should only contain the settings
related to message edit, realm icon, realm logo and authentication
methods, and not other settings, because all the other settings send
'realm/update' events and not 'realm/update_dict' events.
This commit only removes 'invite_to_realm_policy' and others will
be removed separately.
Instead of directly changing the `POST` attribute of a request, we
utilize the `HostRequestMock` initializer to produce requests with
different post data.
The decorators require the decorated function to be a valid view
function. This changes the way the mocked view functions and requests
are implemented such that we can invoke view functions without future
type errors.
`export_realm` accepts an HttpRequest as the first argument,
while `self.client_post` conflicts with it. Though the argument is
unused in `export_realm`, we keep it to be compliant with the
view function type.
As we only return the actual decorator as-is only if `function` is
`None`, we can use `@overload` to accurately annotate the return type
for the decorator.
When calling some functions or assigning values to certain attributes,
the arguments/right operand do not match the exact type that the
functions/attributes expect, and thus we fix that by converting types
beforehand.
Of the two other logging mocks left in this file, one checks
a logging call isn't made and another makes sure errors
aren't allowed by raising an exception as a side_effect
to the logger.
Cross realm bots will soon stop being a thing. This param is responsible
for displaying "System Bot" in the user info popover - so this rename is the
right way to handle the situation.
We will likely want to rename the `cross_realm_bots` section as well,
but that is a more involved API migration.
The code didn't account for existence of SOCIAL_AUTH_SUBDOMAIN. So the
redirects would happen to endpoints on the SOCIAL_AUTH_SUBDOMAIN, which
is incorrect. The redirects should happen to the realm from which the
user came.
This fixes a batch of mypy errors of the following format:
'Item "None" of "Optional[Something]" has no attribute "abc"
Since we have already been recklessly using these attributes
in the tests, adding assertions beforehand is justified, presuming
that they oughtn't to be None.
If a user doesn't have enable_drafts_synchronization set to True, then
don't let them access the drafts API. This will help protect us
against client bugs accidentally sending drafts to the server when the
feature is disabled.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
This field will control whether or not a user wants to sync their
drafts between different clients. Defaults to enabled.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
We allow a maximum value of one week to make sure there aren't a huge
number of rows in the table for any user (this could happen if stream
notifications are enabled).
This commit also fixes a small error in the user_settings test.
We only have one query which will change database state in this function,
and we already have a lock on the process itself, so there's no need for
a transaction.
This was added in ebb4eab0f9.
These modern landing pages cover use cases previously not detailed on
our website. Technically, we had a /for/research page before, but it
wasn't finished or linked everywhere.
Removed the "function-url-quotes" stylelint rule,
since I need to use quotes in url() to use an
SVG as a list bullet point. There are spacing issues
when using it as an image. Also, using quotes in url()
is actually the recommended way to do it; otherwise
there could be issues with escaping.
There might be good reasons to have other external authentication
methods such as SAML configured, while none of them is actually available.
This happens, for example, when you have enabled SAML so that Zulip is
able to generate the metadata in XML format, but you haven't
configured an IdP yet. This commit makes sure that the phrase _OR_ is
only shown on the login/account page when there are actually other
authentication methods available. When they are just configured, but
not available yet, the page looks as if no external
authentication methods are configured.
We achieve this by deleting any_social_backend_enabled, which was very
similar to page_params.external_authentication_methods, which
correctly has one entry per configured SAML IdP.
This is necessary to break the uncollectable reference cycle created
by our ‘request_notes.saved_response = json_response(…)’, Django’s
‘response._resource_closers.append(request.close)’, and Python’s
https://bugs.python.org/issue44680.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This prevents a memory leak caused by the `SimpleLazyObject` instance of
`UserProfile` that creates a reference loop with the request object
via `ZulipRequestNotes`.
This API change removes unnecessary complexity from a client that
wants to change a user's personal settings, and also saves developers
from needing to make decisions about what sort of setting something is
at the API level.
We preserve the old settings endpoints as mapping to the same function
as the new one for backwards-compatibility. We delete the
documentation for the old endpoints, though the documentation for the
merged /settings endpoint mentions how to use the old endpoints when
needed.
We migrate all backend tests to the new endpoints, except for
individual tests for each legacy endpoint to verify they still work.
Co-authored-by: sahil839 <sahilbatra839@gmail.com>
This prevents a memory leak arising from Python’s inability to collect
a reference cycle from a WeakKeyDictionary value to its key
(https://bugs.python.org/issue44680).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This makes several changes:
* Fixes a bug where the help text explaining our policies was not displayed.
* No help text was defined for many organization types.
* Copy-edits the help text somewhat.
* Offers all of the organization type options.
* Removes the 100% coverage requirement because it's annoying to test
the e.currentTarget click handler.
Most of the Markdown Preprocessors followed a common
template, and the `run` and `init` code was duplicated
multiple times for different preprocessors.
This commit adds a base class from which the preprocessors
following the pattern can inherit, and can override the
`render` and `generate_text` functions to execute the code.
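A hedged sketch (names illustrative, not the exact classes added here)
of the template-method pattern described above:
```python3
import re
from typing import List

from markdown.preprocessors import Preprocessor

class BaseTemplatePreprocessor(Preprocessor):
    # Subclasses override the pattern and the generate_text() hook.
    REGEXP = re.compile(r"^\s*\{directive\}\s*$")

    def run(self, lines: List[str]) -> List[str]:
        output: List[str] = []
        for line in lines:
            match = self.REGEXP.search(line)
            if match:
                output.extend(self.generate_text(match))
            else:
                output.append(line)
        return output

    def generate_text(self, match: "re.Match[str]") -> List[str]:
        raise NotImplementedError
```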
Sometime in the deep past, Zulip's GET /users/me/subscriptions
endpoint started returning subscribers. We noticed this and made it
optional via the include_subscribers parameter in
1af72a2745, however, we didn't notice
that they were being returned as emails rather than user IDs.
We migrated the core /register code paths to use subscriber IDs years
ago; this change completes that for the endpoints we forgot about.
The documentation allowed this error because we apparently had no
tests for this code path that used the actual API.
We remove the "full_name" and "account_email" fields from the response
of the 'PATCH /settings' endpoint. These fields were part of the response
to indicate that the parameters not present in the response were
ignored.
We can remove these fields as 'ignored_parameters_unsupported' now
specifies which parameters were ignored and not supported by the
endpoint.
We add "ignored_parameters_unsupported" field to the response object
of 'PATCH /settings' endpoint. This will contain the parameters
passed to the endpoint which are not changed by the endpoint and are
ignored.
This will help in removing the other fields like "full_name" from the
response, which were essentially present to specify that only these
fields were updated by the endpoint and the rest were ignored.
We will also change other endpoints to follow this in future.
We migrated the main method in the API bindings project to
get_subscriptions some time ago, and apparently neglected to change
the API documentation as well.
This is more robust towards rerunning failed tests (which ran
partially and added some events to a queue before failing).
The tearDown code was added in 571f8b8664.
We are starting to run into situations where this data could be
quite useful for making future decisions, so it makes sense to store it
in the database, not just in an email.
Moving forward we are hoping to collect data on org types from our
users, so it makes sense to display the org type on the "Counts"
tab of our /activity page.
We incorrectly include many realm settings in the data section of
'realm/update_dict' schema. It should only contain the settings
related to message edit, realm icon, realm logo and authentication
methods and not other settings, because all the other settings send
'realm/update' event and not 'realm/update_dict' event.
This commit only removes 'add_emoji_by_admins_only' and others will
be removed separately.
Previously, non-admin emoji authors were allowed to
delete the emoji only if add_emoji_by_admins_only
was false. But, as the add_emoji_by_admins_only setting
controls who can add emojis, not who can delete them, it
should not affect the behavior of deleting emojis,
and users should always be allowed to delete the
emojis which they added themselves.
This commit adds moderators and full members options for
user_group_edit_policy by using COMMON_POLICY_TYPES.
Moderators are not required to be members of a user group in
order to edit or remove the user group if they are allowed
to do so according to user_group_edit_policy.
But full members need to be members of a user group to edit
or remove the user group.
There is no need for an error message which specifies the
roles having permission to edit user groups; we can simply
have the error message be "Insufficient permission", as we already
show the roles having permission clearly in the UI.
This concludes the HttpRequest migration to eliminate arbitrary
attributes (except private ones that belong to Django) attached
to the request object during runtime, migrating them to a
separate data structure dedicated to the purpose of adding
information (so-called notes) to an HttpRequest.
This migrates a mocked Request class and mocked requests built
with namedtuple in test_decorators and test_mirror_users to use the
refactored HostRequestMock.
Since weakref cannot be used with namedtuple, this old way of mocking a
request object had to be migrated to HostRequestMock. Only after
this change can we extract the client from the request object and store it
via ZulipRequestNotes.
This includes the migration of fields that require trivial changes
to be migrated to be stored with ZulipRequestNotes.
Specifically _requestor_for_logs, _set_language, _query, error_format,
placeholder_open_graph_description, saved_response, which were all
previously set on the HttpRequest object at some point. This migration
allows them to be typed.
We will no longer use the HttpRequest to store the rate limit data.
Using ZulipRequestNotes, we can access rate_limit and ratelimits_applied
with type hint support. We also avoid the need to explicitly initialize
ratelimits_applied by giving it a default value.
We create a class called ZulipRequestNotes as a new home to all the
additional attributes that we add to the Django HttpRequest object.
This allows mypy to do the typecheck and also enforces type safety.
Most of the attributes are added in the middleware, and thus it is
generally safe to assert that they are not None in a code path that
goes through the middleware. The caller is obligated to do the type
check manually otherwise.
This also resolves some cyclic dependencies that zerver.lib.request
has with zerver.lib.rate_limiter and zerver.tornado.handlers.
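For illustration, a simplified sketch of the idea (field names and the
lookup helper are illustrative, not the exact implementation):
```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional
from weakref import WeakKeyDictionary

from django.http import HttpRequest


@dataclass
class ZulipRequestNotes:
    # A typed home for the data we used to stick directly onto HttpRequest.
    log_data: Optional[Dict[str, object]] = None
    rate_limit: Optional[str] = None
    ratelimits_applied: List[object] = field(default_factory=list)
    query: Optional[str] = None
    error_format: Optional[str] = None


request_notes_map: "WeakKeyDictionary[HttpRequest, ZulipRequestNotes]" = WeakKeyDictionary()


def get_request_notes(request: HttpRequest) -> ZulipRequestNotes:
    # Each request lazily gets exactly one notes object; mypy can now
    # typecheck every access to these fields.
    if request not in request_notes_map:
        request_notes_map[request] = ZulipRequestNotes()
    return request_notes_map[request]
```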
Previously, we stored up to 2 minutes worth of email events in memory
before processing them. So, if the server were to go down we would lose
those events.
To fix this, we store the events in the database.
This is a prep change for allowing users to set a custom grace period for
email notifications, since the bug noted above would be aggravated by
longer grace periods.
This will be used to store the missedmessage events received
during the waiting time for email notifications (which is currently
2 minutes, hardcoded).
The change in `test_retention` is because we've set `on_delete=CASCADE`
for the message field of this table.
The new query is like so:
```
DELETE FROM "zerver_missedmessageemailentry"
WHERE "zerver_missedmessageemailentry"."message_id" IN (
1545, 1546, 1547, 1548, 1549, 1550, 1551, 1552, 1553
)
```
This reduces loose strings in the codebase, and allows us to not worry
about the exact naming (`stream_email_enabled` or `stream_emails_enabled`?)
and tense (`mentioned` or `mention`?).
Ideally this new class should have been in `lib/notification_data.py`,
which is our file for things like this. But, the next commit requires
using this data in `models.py`, and importing from `notification_data.py`
to `models.py` causes recursive imports.
The operationId is directly used in URLs of API doc pages
to find the OpenAPI data to render. However, this is dash-
separated in the URLs, and having underscore_separated IDs
in OpenAPI data doesn't allow direct comparison of the two.
This commit changes all OperationIDs from underscore_separated
to dash-separated.
Work around Fatal1ty/aioapns#15, by silencing error-level logging from
the aioapns logger. We deal with the results of failed
send_notification calls by examining the `result.description` and
handling them; the extra logging message merely clutters the Sentry
logs.
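A minimal sketch of the silencing approach, using the standard logging
module (the exact level and where this lives in our logging config may
differ):
```python
import logging

# Drop error-level records emitted by aioapns; we already inspect
# result.description ourselves and handle failed sends there.
logging.getLogger("aioapns").setLevel(logging.CRITICAL)
```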
Previously, one needed to specify all the HTTP status
codes that we want to render along with the operation,
but the primary use case just needs the responses for
all the status codes, not just one.
This commit modifies the Markdown extension to render
all the responses of all status codes of a specified
operation in a loop.
The `# nocoverage` was unnecessary except for the compatibility code,
so add a test for that code and remove the `# nocoverage`.
The `message_id` -> `message_ids` conversion was done in
9869153ae8.
Previously, even non-admins had the option to override built-in
emojis in the `Settings Emoji` UI.
This commit essentially limits that functionality, allowing only
realm administrators to override built-in emojis with their custom
emojis, by adding an authorization check in the backend.
It also adds relevant tests in `test_realm_emoji` which test
the cases where an admin and a non-admin try to override
a built-in emoji.
Fixes #18860.
We added this function in 8e1a7cfb52
in order to make things more readable in examples which hard-code user
IDs. The point is to validate that the id indeed refers to the user that
the person writing the example expects, while providing information to
readers of the code so they don't have to do db queries to figure out
the user. As mentioned in the commit referred to above, this is
particularly useful when some db changes cause renumbering of user ids -
because then all these ids have to be adjusted and it's nice to know the
intended user.
Zulip identifies users by realm+delivery_email which means that the
Django changepassword command doesn't work well -
since it looks only at the .email field.
Thus we fork its code to our own change_password command.
We use subs as a common variable name for a collection of stream
data structures used in settings, in a lot of modules. So this
rename clears up a bunch of related shadowed variables.
This function had a confusing name, which could result in someone
using it unintentionally when they meant do_reactivate_user.
We also add docstrings for both functions.
We don't want this rate limit to affect legitimate users so it being hit
should be abnormal - thus worth logging so that we can spot if we're
rate limiting legitimate users and can know to increase the limit.
If the user is logged in, we'll stick to rate limiting by the
UserProfile. In case of requests without authentication, we'll apply the
same limits but to the IP address.
This option of specifying a different domain isn't used anywhere as of
now and we don't have a concrete way it could be used in the near
future. It's also getting in the way of how we want to do rate limiting
by IP, for which we'll want to apply a new domain 'api_by_ip'. That's
incompatible with how this decorator wants to determine the domain based
on the argument it receives when called to decorate a view function.
If in the future we want to have more granular control over API domains,
this can be refactored to be more general, but as of now it's just
imposing restrictions on how we can write the rate limiting code inside
it.
We add a new class UserBaseSettings and will be moving some of
the user settings to this class from UserProfile and UserProfile
will inherit it.
This is a prep commit for adding RealmUserDefault table which will
be used to set the realm-wide default for user settings like night
mode, etc. Adding UserBaseSettings will help us in avoiding copy
the same fields in RealmUserDefault.
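Roughly, the plan looks like this in Django terms (a sketch with
illustrative fields; code like this would live in zerver/models.py rather
than as a standalone module):
```python
from django.db import models


class UserBaseSettings(models.Model):
    # Settings that need both a per-user value and a realm-level default.
    # Field names here are illustrative.
    enter_sends = models.BooleanField(default=False)
    color_scheme = models.PositiveSmallIntegerField(default=1)

    class Meta:
        abstract = True


class UserProfile(UserBaseSettings):
    ...  # existing per-user fields stay here


class RealmUserDefault(UserBaseSettings):
    # Realm-wide defaults reuse the same columns without copy-pasting them.
    realm = models.OneToOneField("Realm", on_delete=models.CASCADE)
```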
We remove timezone setting from UserProfile.property_types
so that we can directly use UserProfile.property_types for
implementation of realm-default values of various user
settings.
This commit removes the redundant `compute_show_invites` function
which computes the `show_invites` page parameter in `lib/users.py`.
This is because commit 13399833b0 removed
the `show_invites` context variable passed in index.html.
Hence, the `show_invites` page_params key no
longer needs to be computed in the backend, as it can be replaced with
`settings_data.user_can_invite_others_to_realm()` in the frontend.
This commit also removes the `test_compute*` tests in
`test_home` concerned with the `show_invites` page parameter,
as they are no longer required.
* `stream_name`: This field is actually redundant. The email/push
notification handlers don't use that field from the dict, and they
query for the message anyway, so we're safe in deleting this field,
even if in the future we end up needing the stream name.
* `timestamp`: This is totally unused by the email/push notification
handlers, and isn't sent to push clients either.
* `type`: This is used only by the push notifications handler, since only
push notifications can be revoked, so we move it to be set only in that code path.
The code to also notify for wildcard mentions was added in
0ed0bb6828.
But that showed the same text for both the cases. This commit fixes
that.
This is more of a change for correctness. The mobile app currently does
not rely on this text for notifications, but constructs the text by
itself from the data in the payload.
This also fixes the "stream_push_notify" case to consistently show
a `#` before the stream name.
Running notify_server_error directly from the logging handler can lead
to database queries running in a random context. Among the many
potential problems that could cause, one actual problem is a
SynchronousOnlyOperation exception when running in an asyncio event
loop.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This fixes a bug introduced in 95b46549e1
which made the worker simply log a warning about the timeout and then
continue consume()ing the event that should have also been interrupted.
The idea here is to introduce an exception which can be used to
interrupt the consume() process without triggering the regular handling
of exceptions that happens in _handle_consume_exception.
Since do_create_realm also creates general and core team streams,
we rename "general" to "Verona" right after the realm is created, mostly
because we don't really want two additional streams, and this will
probably make things easier to review.
There are puppeteer test changes because we have a new "core team"
stream in tests as well as there is a new default notification stream
"Verona". Because of this tests in message-basics for example have
to be changed since the newly added core team affects the order in
which we navigate through the streams using arrow keys.
The extra await for selector was added in subscriptions test to make
the tests wait. Without the await, the tests were passing occasionally
and failing at other times.
Fixes #6967.
The previous logic was incorrect and was not flushing the stream from
cache after deletion.
```
stream = get_realm_stream("Verona", realm.id)
stream.delete()
get_realm_stream("Verona", realm.id)
```
In the above example, the last line of code would have returned
the stream from cache instead of throwing a Stream.DoesNotExist
error. This is fixed in the commit.
I have verified that this commit indeed fixes the issue by checking
that calling get_realm_stream again after deleting the stream
results in a Stream.DoesNotExist error.
This commit migrates the `navbar.html` Django template
to handlebars by creating a new file as `navbar.hbs`
within `/static/templates` which is then rendered
using `ui_init` module.
As a part of migration, we also remove the `search_pills_enabled`
and `embedded` parameters from the context attribute as they
are no longer needed now.
Fixes part of #18792.
Throwing an exception is excessive in case of this worker, as it's
expected for it to time out sometimes if the urls take too long to
process.
With a test added by tabbott.
This allows specific queue workers to override the default behavior and
implement their own response to the timer expiring. We will want to use
this for embed_links queue at least.
The list of possible values of all settings was re-defined in
do_test_realm_update_api and we can instead use the list defined
in models.py which is used to validate values in views/realm.py.
There is no problem with the order of values, as we always initialize
with the first value of the list.
This has also added some more values to test for a couple of
settings as a result.
We change the test_change_realm_default_language to test only invalid
value and rename it to test_invalid_realm_default_language, because
we already test whether the value is changed correctly or not in
do_test_realm_update_api.
This commit removes test_change_bot_creation_policy which is used
to test changing bot_creation_policy using 'PATCH /realm' endpoint
as we already do this in do_test_realm_update_api and invalid value
is also tested in test_invalid_integer_attribute_values.
This commit removes test_change_email_address_visibility which is used
to test changing email_address_visibility using 'PATCH /realm' endpoint
as we already do this in do_test_realm_update_api and invalid value is
also tested in test_invalid_integer_attribute_values.
This commit removes test_change_invite_to_stream_policy which is used
to test changing invite_to_stream_policy using 'PATCH /realm' endpoint
as we already do this in do_test_realm_update_api and invalid value
is also tested in test_invalid_integer_attribute_values.
This commit removes test_change_invite_to_realm_policy which is used
to test changing invite_to_realm_policy using 'PATCH /realm' endpoint
as we already do this in do_test_realm_update_api and invalid value
is also tested in test_invalid_integer_attribute_values.
This commit removes test_change_move_messages_between_streams_policy
which is used to test changing move_messages_between_streams_policy
using 'PATCH /realm' endpoint as we already do this in
do_test_realm_update_api and invalid value is also tested in
test_invalid_integer_attribute_values.
This commit removes test_user_group_edit_policy which is used
to test changing user_group_edit_policy using 'PATCH /realm'
endpoint as we already do this in do_test_realm_update_api and
invalid value is also tested in test_invalid_integer_attribute_values.
This commit removes test_private_message_policy which is used to
test changing private_message_policy using 'PATCH /realm' endpoint
as we already do this in do_test_realm_update_api and invalid
value is also tested in test_invalid_integer_attribute_values.
This commit removes test_change_wildcard_mention_policy which is
used to test changing wildcard_mention_policy using 'PATCH /realm'
endpoint as we already do this in do_test_realm_update_api and
invalid value is also tested in test_invalid_integer_attribute_values.
This commit removes test_change_stream_creation_policy which is
used to test changing create_stream_policy using 'PATCH /realm'
endpoint as we already do this in do_test_realm_update_api and
invalid value is also tested in test_invalid_integer_attribute_values.
Currently, the "Home" link at the top takes one to the doc root,
i.e., /help or /api. This is a little misleading since "Home"
seems to be more synonymous with the Zulip homepage.
This commit adds a proper backlink to the top logo that takes you to
the homepage and renames "Home" to be more specific. The text after
"|" will now take you to the doc root instead (/help or /api). Note
that this allows us to link the /help and /api pages from the
homepage while ensuring that backlinks allow the visitor to get back
to the homepage.
Currently, there is no provision of rendering
additional imports automatically, and those examples
were hardcoded in the templates.
This commit adds an openapi parameter
to store the additional imports required for the endpoint.
Further, it changes code to replace `import zulip`
with the modified imports.
The reason for this bug is the different stripping
processes in the backend and frontend, i.e., the frontend
checks if the message's `raw_content` has changed to
decide whether the `content` of the message should be sent in
the request to the backend or not. So, it removes the
leading newline ('\n') from the message's `raw_content`
when checking it, which is what causes the "Error saving edit:
You don't have permission to edit this message" error.
This commit fixes it by removing the leading newline
when cleaning message content.
The bug was explained by @punchagan and its solution
by @timabbott.
This change allows check_webhook to raise an error when a message is
sent and vice versa. This is useful when a payload is not expected to
produce any output messages.
In addition to event filtering, we add support for registering supported
events for a webhook integration using the webhook_view decorator.
The event types are stored directly on the view function as a function
attribute, and can later be accessed given the module path and the view
function name (which are already specified in integrations.py).
Note that the WebhookTestCase doesn't know the name of the view function
and the module of the webhook. WEBHOOK_DIR_NAME needs to be overridden
if we want exceptions to be raised when one of our test functions triggers
an unspecified event, but this practice is not enforced.
all_event_types does not need to be given even if event filters are used
in the webhook. But if a list of event types is given, it will be possible
for us to include it in the documentation while ensuring that all the
tested events are included (but not vice versa at the current stage, as
we do not yet require all the events included in the list to be tested).
This guarantees that we can always access the list of all the tested
events of a webhook. This feature will later be plumbed to macros to
display all event types dynamically in doc.md.
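A rough sketch of how the function-attribute storage can work (decorator
internals and the attribute name shown here are illustrative):
```python
from typing import Callable, List, Optional


def webhook_view(
    webhook_client_name: str, all_event_types: Optional[List[str]] = None
) -> Callable[[Callable[..., object]], Callable[..., object]]:
    # Remember which events the integration supports on the view function
    # itself, so tests and documentation code can look it up by import path.
    def wrapper(view_func: Callable[..., object]) -> Callable[..., object]:
        view_func._all_event_types = all_event_types  # type: ignore[attr-defined]
        return view_func

    return wrapper


@webhook_view("Example", all_event_types=["push", "pull_request"])
def api_example_webhook(request: object) -> None:
    ...


# Later, e.g. from documentation or test code:
supported_events = api_example_webhook._all_event_types
```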
This commit migrates the `right_sidebar.html` Django template
to handlebars by creating a new file as `right_sidebar.hbs`
which is then rendered using `ui_init` module.
It also removes the tests in `test_home` due to the template
migration, since these elements aren't rendered on the backend
anymore.
We also remove `test_compute_show_invites_and_add_streams*`.
Fixes part of #18792.
This commit migrates the `left_sidebar.html` Django template
to handlebars by creating a new file as `left_sidebar.hbs`
which is then rendered using `ui_init` module.
These are the minor changes introduced by virtue of template
migration -
- The `compute_show_invites_and_add_streams` function now
only concerns itself with the invite_to_realm_policy.
- Renamed the `compute_show_invites_and_add_streams` function
to `compute_show_invites` due to the above change.
- Fixes relevant `test_home.py` tests due to the above
changes.
Fixes part of #18792.
We will later use this data to include text like:
`<sender> mentioned @<user_group>` instead of the current
`<sender> mentioned you` when someone mentions a user group
the current user is a part of in email/push notification.
Part of #13080.
We will use this later to display which user group was mentioned
in push and email notifications.
`mentioned_user_group_ids` is kept as a List (not Set) to ensure proper
test coverage of the function, since it depends on the order of iteration,
and we cannot change the order of iteration for a set (which we'll need
to do for proper testing).
Part of #13080.
Commit 81d7dd1fda broke this nearly
eight years ago, so probably nobody cares except the ever-watchful eye
of mypy.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The absence of __init__.py was preventing mypy from following any of
the zerver.openapi imports. These errors were being silenced by
ignore_missing_imports.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
An organization with at most 5 users that is behind on payments isn't
worth spending time on investigating the situation.
For larger organizations, we likely want somewhat different logic that
at least does not void invoices.
get_public_upload_root_url and construct_public_upload_url_base were
both doing basically the same thing in the same way. We deduplicate this,
making them share the same code, using the urljoin-based approach from
get_public_upload_root_url.
Using a format string is not a great idea, as it doesn't handle the case
of the URL already having parts that will be interpreted as format
string metacharacters. On the downside, this approach negatively affects
performance:
```
...: s = time.time()
...: for i in range(0, 250):
...: r = u.get_public_upload_url("foo")
...: print(time.time()-s)
0.020366191864013672
```
up from 0.001 before this change.
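A minimal illustration of the urljoin-based construction (URLs are
placeholders):
```python
from urllib.parse import urljoin

# urljoin joins the path segments without ever treating '%' in the base
# URL as a format-string metacharacter, unlike "%s"-style formatting.
public_upload_root = urljoin("https://uploads.example.com/", "public/")
url = urljoin(public_upload_root, "some/key%20with%20percent")
print(url)  # https://uploads.example.com/public/some/key%20with%20percent
```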
As we have changed the tab selector above from "Settings" to "Personal
settings", we can simply change "Your bots" to "Bots" as "Bots" is
clear enough given the personal settings context.
We also need to update the API documentation for bots accordingly.
Some response descriptions weren't migrated into
OpenAPI data, which was done for other API pages.
This commit migrates the response descriptions for
error-handling page.
This commit fixes the documentation of settings, as we have
split the "Your account" section into two new sections -
"Profile" and "Account & privacy".
This commit also fixes a comment in the test for settings
documentation in test_middleware.py.
JsonableError has two major benefits over json_error:
* It can be raised from anywhere in the codebase, rather than
being a return value, which is much more convenient for refactoring,
as one doesn't potentially need to change error handling style when
extracting a bit of view code to a function.
* It is guaranteed to contain the `code` property, which is helpful
for API consistency.
Various stragglers are not updated because JsonableError requires
subclassing in order to specify custom data or HTTP status codes.
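A sketch of what the first benefit looks like at a call site (the helper
below is hypothetical, and the import path may differ by version):
```python
from zerver.lib.exceptions import JsonableError


def check_widget_name(widget_name: str) -> None:
    # Because JsonableError is an exception, a helper extracted out of a view
    # can simply raise it; with json_error, the helper would instead have to
    # return an HttpResponse that every caller remembers to propagate.
    if not widget_name.isidentifier():
        raise JsonableError(f"Invalid widget name: {widget_name}")
```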
The previous string was bold, potentially confusing, and doesn't
explain clearly what's happening. We replace this with a string that's
more or less copied from what we do in email notifications with the
similar setting enabled.
When a user has disabled message content in mobile push notifications,
we send a fixed string (currently "REDACTED") as the content of the
notification. Previously, this string was not tagged for translation;
we fix that here.
Additionally, because mobile push notifications are generated in a
queue worker, they do not have the user's language set by the Django
middleware. Our email notifications solve that problem using
`override_language`; we do the same here.
We choose to do override_language in get_message_payload_apns and
get_message_payload_gcm, rather than the caller, in order to be
consistent with tests.
Tested end-to-end by tabbott by setting a translation for "REDACTED"
manually in German.
Fixes #18713.
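Roughly, the fix looks like the following sketch, shown with Django's
translation override directly (the helper name and placeholder string are
illustrative; the Zulip `override_language` helper wraps the same
mechanism):
```python
from django.utils.translation import gettext as _
from django.utils.translation import override as override_language


def get_redacted_content(recipient_language: str) -> str:
    # Queue workers don't go through the locale middleware, so we activate
    # the recipient's language explicitly before translating the placeholder.
    with override_language(recipient_language):
        return _("REDACTED")
```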
We incorrectly include many realm settings in the data section of
'realm/update_dict' schema. It should only contain the settings
related to message edit, realm icon, realm logo and authentication
methods and not other settings, because all the other settings send
'realm/update' event and not 'realm/update_dict' event.
This commit only removes 'allow_message_deleting' and others will
be removed separately.
This helper will be used to check whether
the user is allowed to edit user groups or
not. Currently it is not used, but will
be used in next commit where we will
refactor the user_group_edit_policy to use
COMMON_POLICY_TYPES.
We modify check_send_webhook_message to make it accept three new
parameters: only_events and exclude_events that are retrieved using REQ,
and complete_event_type, which is passed by the incoming webhook view
that is filtered according to the former two parameters.
Part of #18525.
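The filtering logic itself is simple; a sketch of the idea, without the
REQ plumbing (wildcard matching and error handling omitted):
```python
from typing import List, Optional


def should_send_message(
    complete_event_type: Optional[str],
    only_events: Optional[List[str]],
    exclude_events: Optional[List[str]],
) -> bool:
    # Skip sending when the payload's concrete event type is excluded,
    # or when an allow-list is given and the event is not on it.
    if complete_event_type is None:
        return True
    if only_events is not None and complete_event_type not in only_events:
        return False
    if exclude_events is not None and complete_event_type in exclude_events:
        return False
    return True
```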
Since FIXTURE_DIR_NAME is the name of the folder that contains the view
and tests modules of the webhook and another folder called "fixtures" that
store the fixtures, it is more appropriate to call it WEBHOOK_DIR_NAME,
especially when we want to refer to the view module using this variable.
* Move content on moving topics between streams to a dedicated
article. We advertise it as "move content" to hint that one can move
messages or split topics, and link to it.
* This deletes change-the-topic-of-a-message, because the same content
is already covered in rename-a-topic.
* This commit mostly just moves content between articles. Most of that
content was redundant with the first few paragraphs of the surviving
"rename a topic" article. The former "This is useful for" sentence
was adapted to the remaining article.
* This commit also adds a redirect for the removed article, and
updates related links.
This commit fixes the bug of always showing
the day-mode realm logo when the color scheme
setting is set to automatic but the OS setting
is dark theme. This is because we cannot check
the OS setting on the backend, and we need to set
the logo URL accordingly in the frontend only.
So, we remove the logo URL computation from the
backend completely and instead compute it in
the frontend only.
Fixes #18778.
This commit ports the `search_operators.html` file from
./templates to handlebars, essentially creating a new file
as `search_operators.hbs` within /static/templates which is
then rendered using info_overlays.js.
As part of this migration, we rewrote the way internationalization was
done, since the previous implementation incorrectly did not support
languages with a different word order than English.
We also now consistently use periods at the end of the descriptions.
Co-authored-by: Tim Abbott <tabbott@zulip.com>
Fixes #18504.
This commit ports the `keyboard_shortcuts.html` file from
using the Django template to handlebars, essentially creating
a new file as `keyboard_shortcuts.hbs` within /static/templates
which is then rendered using info_overlays.js.
Fixes part of #18792.
This command was part of the complex migration to introduce the
`unread_msgs` data structure as the source of truth for unreads.
Effectively, it's a migration to remove anomalies that we ran several
times before turning it into the final 0104_fix_unreads.py migration.
Fixes part of #18898.
This command is part of a statsd infrastructure that we stopped
supporting years ago. Its only purpose for some time has been to
provide sample code for how the restart script might trigger a
notification to a graphing system, which doesn't justify maintaining
it.
Fixes part of #18898.
This command was part of early prototyping of the digests feature, and
in particular its purpose is better served via the organization-level
setting to control digest emails for the organization.
Fixes part of #18898.
This command was written to allow generating multiuse invite links
before the "Invite a user" UI supported them. It no longer has a
purpose and can be safely deleted.
Fixes part of #18898.
This command predates there being a normal UI for inviting users to
Zulip. It no longer has a role for which it's a better way to do
things. (Especially with upcoming API documentation for the endpoint).
Fixes part of #18898.
This command was introduced in 2013 via
6d6c3364dc as part of implementing
marking messages as read in a separate process for performance reasons.
We fixed the performance issues and removed that pipeline years ago,
but forgot to delete this.
Fixes part of #18898.
This adds a new class called MessageRenderingResult to contain the
additional properties we added to the Message object (like alert_words)
as well as the rendered content to ensure typesafe reference. No
behavioral change is made except changes in typing.
This is a preparatory change for adding django-stubs to the backend.
Related: #18777
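Conceptually, the new class is a small container along these lines (a
sketch; the exact fields differ):
```python
from dataclasses import dataclass, field
from typing import Set


@dataclass
class MessageRenderingResult:
    # Carries the rendered content plus the side-channel data the Markdown
    # processor used to stash directly onto the Message object.
    rendered_content: str
    mentions_wildcard: bool = False
    mentions_user_ids: Set[int] = field(default_factory=set)
    mentions_user_group_ids: Set[int] = field(default_factory=set)
    alert_words: Set[str] = field(default_factory=set)
    links_for_preview: Set[str] = field(default_factory=set)
```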
Add a new `verify_signup` helper function, which currently implements
enough functionality to be used by `test_signup_existing_email`.
This is the first step towards #7564.
This is a prep commit in preparation of splitting
create_stream_policy into create_private_stream_policy
and create_public_stream_policy.
This extracts it in a way to make it possible to easily test
different stream policies in the upcoming stream policy split.
This is a prep commit in preparation of splitting
create_stream_policy into create_private_stream_policy
and create_public_stream_policy.
This extracts it in a way to make it possible to easily test
different stream policies in the upcoming stream policy split.
This is a prep commit in preparation of splitting
create_stream_policy into create_private_stream_policy
and create_public_stream_policy.
This extracts it in a way to make it possible to easily test
different stream policies in the upcoming stream policy split.
test_create_stream_policy_setting (in class StreamAdminTest) and
test_user_settings_for_creating_streams (in class SubscriptionAPITest)
test essentially the same thing.
So, remove one of them.
Removing test_create_stream_policy_setting makes sense,
since class StreamAdminTest tests things admins can do, whereas
non-admin users can create streams.
test_invite_to_stream_by_invite_period_threshold (in class StreamAdminTest)
and test_user_settings_for_subscribing_other_users
(in class SubscriptionAPITest) test essentially the same thing.
So, remove one of them.
Removing test_invite_to_stream_by_invite_period_threshold makes sense,
since class StreamAdminTest tests things admins can do, whereas
non-admin users can invite other users.
This was used to test can_create_stream property of a guest user.
There are better ways to test it, which are already implemented in
test_can_create_streams.
This PR adds a basic .md template that is followed by a lot of /api
pages. Since we have recently done the migration work to ensure that
our REST API documentation pages for individual endpoints are almost
all identical files following a common pattern, we can now get the
payoff of deleting them all in favor of a shared template.
This removes 2000 lines of somewhat finicky configuration from the
codebase, and thus should save significant effort when documenting new
API endpoints in the future.
The markdown files for endpoints or other pages which deviate from the
standard template remain, and the docs are instead generated from
those files using the existing system.
The returned values of the get_path function will be
expanded soon, and defining a dataclass makes
the code cleaner for returning and using the fields.
As a part of goal of moving towards a common template,
the hardcoded python tabs need to be removed to ensure
that endpoints which don't have python examples can be
covered by the common template as well.
This commit also modifies the markdown extension for python
examples to render empty string in case the examples don't
exist, which would allow it to be called whether the endpoint
has python examples or not.
Currently, the message that no parameters are accepted by
the endpoint is displayed if there are no parameters in
OpenAPI data, but it is possible that information is
encoded in x-parameter-description (example in upload-file
endpoint), and we want to display that information rather
than the message.
Added an if condition to check the same.
We use the "does not accept any parameters" language in the common
template that we'll be migrating to shortly, so we remove this
variance (And adjust its test).
This removes some complexity from the event_queue module.
To avoid code duplication, we reduce the `is_notifiable` methods to
internally just call the `trigger` methods and check their return value.
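A sketch of the shape of that deduplication (names follow the pattern
described above; details are illustrative):
```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class UserMessageNotificationsData:
    mentioned: bool = False
    stream_push_notify: bool = False
    online_push_enabled: bool = False
    sender_is_muted: bool = False

    def get_push_notification_trigger(self, idle: bool) -> Optional[str]:
        # The "when to notify" decision lives in exactly one place.
        if self.sender_is_muted:
            return None
        if self.mentioned:
            return "mentioned"
        if self.stream_push_notify and (idle or self.online_push_enabled):
            return "stream_push_notify"
        return None

    def is_push_notifiable(self, idle: bool) -> bool:
        # The boolean check is now just "did any trigger fire?".
        return self.get_push_notification_trigger(idle) is not None
```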
* Modify `maybe_enqueue_notifications` to take in an instance of the
dataclass introduced in 951b49c048.
* The `check_notify` tests tested the "when to notify" logic in a way
which involved `maybe_enqueue_notifications`. To simplify things, we've
earlier extracted this logic in 8182632d7e.
So, we just kill off the `check_notify` test, and keep only those parts
which verify the queueing and return value behavior of that function.
* We retain the missedmessage_hook and message_edit_notifications
tests since they are more integration-style.
* There's a slightly subtle change with the missedmessage_hook tests.
Before this commit, we short-circuited the hook if the sender was muted
(5a642cea11).
With this commit, we delegate the check to our dataclass methods.
So, `maybe_enqueue_notifications` will be called even if the sender was
muted, and the test needs to be updated.
* In our test helper `get_maybe_enqueue_notifications_parameters` which
generates default values for testing `maybe_enqueue_notifications` calls,
we keep `message_id`, `sender_id`, and `user_id` as required arguments,
so that the tests are super-clear and avoid accidental false positives.
* Because `do_update_embedded_data` also sends `update_message` events,
we deal with that case with some hacky code for now. See the comment
there.
This mostly completes the extraction of the "when to notify" logic into
our new `notification_data` module.
The AUTHENTICATION LINE variable needs to be set after each
line is executed, but in the current code, it wasn't being
set for endpoints whose files were removed in favour of
the pages being generated directly from OpenAPI data.
Moved the block that sets AUTHENTICATION LINE into the loop
which executes each command, which fixes the bug.
While importing a realm, the stream dictionaries in data['zerver_stream']
already contain the field `rendered_description`, which is set to
`""`. This led the code to assume that the stream's rendered_description
was already set, and as a result, it was not setting the rendered_description
field for any stream.
This is a prep commit for adding realm-level default for various
user settings. We add the language, in which the invite email will
be sent, to the dict added to queue itself to avoid making queries
in a loop when sending multiple emails from queue.
We also handle the case for old events in the queue.
We removed the use of the email_body field in 47fcb27e39, but it was
still passed in events from do_resend_user_invite_email and
in tests. So this commit removes the email_body field from
these places.
As a goal of a common template, there
is a need for a tool to auto-generate
a general description for all parameters
directly from OpenAPI data.
This description is to be stored in the
x-parameter-description field, and this
commit adds a markdown extension to process it.
As a goal of a common template, there
is a need for a tool to auto-generate
a general description for all responses
directly from OpenAPI data.
This description is to be stored in the
x-response-description field, and this
commit adds a markdown extension to process it.
We already have this data in the `flags` for each user, so no need to
send this set/list in the event dictionary.
The `flags` in the event dict represent the after-message-update state,
so we can't avoid sending `prior_mention_user_ids`.
This is much faster than calling generate_presigned_url each time.
```
In [3]: t = time.time()
...: for i in range(250):
...: x = u.get_public_upload_url("foo")
...: print(time.time()-t)
0.0010945796966552734
```
Fixes #18915.
This was very slow, causing performance issues. After investigating,
generate_presigned_url is the cheap part of this, but the
session.client() call is expensive - so that's what we should cache.
Before the change:
```
In [4]: t = time.time()
...: for i in range(250):
...: x = u.get_public_upload_url("foo")
...: print(time.time()-t)
6.408717393875122
```
After:
```
In [4]: t = time.time()
...: for i in range(250):
...: x = u.get_public_upload_url("foo")
...: print(time.time()-t)
0.48990607261657715
```
This is not good enough to avoid doing something ugly like replacing
generate_presigned_url with some manual URL manipulation, but it's a
helpful structure that we may find useful with further refactoring.
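The shape of the fix is to construct the client once and reuse it for
every URL; a sketch with boto3 (bucket name and credential wiring are
placeholders, not our actual configuration):
```python
import boto3
from botocore.client import Config

# Creating the S3 client is the expensive part, so do it once at module
# scope; generate_presigned_url itself is cheap and purely local.
_s3_client = boto3.client("s3", config=Config(signature_version="s3v4"))


def get_public_upload_url(key: str, bucket: str = "example-avatars-bucket") -> str:
    return _s3_client.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=3600,
    )
```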
Previously, it was possible for an unusual series of topic-edit
actions to result in Notification Bot reporting that a topic was
marked as resolved that had already been marked as resolved, etc.
A buggy client might send a message_edit request to change the topic
field, sending the current topic as the new value. Previously, we
would treat that as a normal request to edit the topic; now we act as
though the API request had not requested a topic change. In the
common case that only the topic was in the edit request, this now
results in an error that should help client implementations identify
their bug.
This fixes a bad interaction with the "unresolve topic" logic, which
assumed that upstream logic had verified that the topic was actually
changing.
We incorrectly include many realm settings in the data section of
'realm/update_dict' schema. It should only contain the settings
related to message edit, realm icon, realm logo and authentication
methods and not other settings, because all the other settings send
'realm/update' event and not 'realm/update_dict' event.
This commit only removes 'default_twenty_four_hour_time' and
'default_language', others will be removed separately.
Now, the markdown extension of curl_examples generates
all examples of all possible configurations with
their descriptions, and so we need to separate
executable curl commands from the rest of the raw
HTML.
This commit simply changes the indentation of the
block and replaces the command being tested
with each element of the commands array. This
was split for an easier review.
Now, the markdown extension of curl_examples generates
all examples of all possible configurations with
their descriptions, and so we need to separate
executable curl commands from the rest of the raw
HTML.
This commit adds a commands variable to store
all the curl commands in HTML using regex.
Now, include and exclude configuration
are fetched from openapi data, and only
one type can be encoded for every example.
This removes the need for the assertion to
test if both include and exclude are present,
since only one can be present at a time.
This commit adds support for using the
x-curl-examples-parameters parameter in OpenAPI
data to fetch curl examples configuration. This
also contains any descriptions necessary for each
example, and generates all possible
curl examples directly.
A follow-up commit is needed to modify the templates
accordingly.
As a goal of moving towards a common template,
we need to fetch curl examples' configurations
directly from openapi data instead of having them
hardcoded in templates. This commit introduces
x-curl-examples-parameters to store the configs
for the same.
* Have the `get_active_presence_idle_user_ids` function look at all the
user data, not just `private_message` and `mentioned`.
* Fix a couple of incorrect `missedmessage_hook` tests, which did not
catch the earlier behaviour.
* Add some comments to the tests for this function for clarity.
* Add a helper to create `UserMessageNotificationsData` objects from the
user ID lists. This will later help us deduplicate code in the event_queue
logic.
This fixes a bug which earlier existed, that if a user turned on stream
notifications, and received a message in that stream which did not mention
them, they wouldn't be in the `presence_idle_users` list, and hence would
never get notifications for that message.
Note that, after this commit, users might still not get notifications in
the above scenarios in some cases, because the downstream logic in the
notification queue consumers sometimes erroneously skips sending
notifications for stream messages.
Since `flags` here could be iterated through multiple times
(to check for push/email notifiability), we use `Collection`.
Inspired by 871e73ab8f.
The other change here in the `event_queue` code is prep for using
the `UserMessageNotificationsData` class there.
We will later consistently use these functions to check for notifiable
messages in the message send and event_queue code.
We have these functions accept the `sender_id` so that we can avoid the
`private_message = message["type"] == "private" and user_id != sender_id`
wizardry.
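For example, a helper can take the sender and recipient IDs and do the
comparison internally (a sketch; the real helpers also consult
mention/stream flags):
```python
from typing import Any, Dict


def is_private_message_notifiable(message: Dict[str, Any], user_id: int, sender_id: int) -> bool:
    # Accepting sender_id lets the helper decide "is this a PM that should
    # notify this user?" without callers repeating the comparison.
    return message["type"] == "private" and user_id != sender_id
```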
Before this commit, we used to pre-calculate flags for user data and send
it to Tornado, like so:
```
{
"id": 10,
"flags": ["mentioned"],
"mentioned": true,
"online_push_enabled": false,
"stream_push_notify": false,
"stream_email_notify": false,
"wildcard_mention_notify": false,
"sender_is_muted": false,
}
```
This has the benefit of simplifying the logic in the event_queue code a bit.
However, because we sent such an object for each user receiving the event,
the string keys (like "stream_email_notify") get duplicated in the JSON
blob that is sent to Tornado.
For 1000 users, this data may take up to ~190KB of space, which can
cause performance degradation in large organisations.
Hence, as an alternative, we send just the list of user_ids fitting
each notification criteria, and then calculate the flags in Tornado.
This brings down the space to ~60KB for 1000 users.
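Schematically, the payload changes from one flags-style dict per recipient
to a handful of shared ID lists (a sketch; key names are illustrative):
```python
# Before: one dict per recipient, with every string key repeated per user.
users_before = [
    {"id": 10, "flags": ["mentioned"], "mentioned": True, "stream_push_notify": False},
    {"id": 11, "flags": [], "mentioned": False, "stream_push_notify": True},
]

# After: plain recipient entries plus shared lists of user IDs per criterion;
# Tornado rebuilds the per-user booleans from these lists.
users_after = [{"id": 10, "flags": ["mentioned"]}, {"id": 11, "flags": []}]
event_extra = {
    "online_push_user_ids": [],
    "stream_push_user_ids": [11],
    "stream_email_user_ids": [],
    "wildcard_mention_user_ids": [],
    "muted_sender_user_ids": [],
}
```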
This commit reverts parts of the following commits:
- 2179275
- 40cd6b5
We will, in the future, add helpers to create `UserMessageNotificationsData`
objects from these lists, so as to avoid code duplication.
This is separate from the next commit for ease of testing.
To verify that the compatibility code works correctly, all message send
and event_queue tests from our test suite should pass on just this commit.
We now encode resolved topics with just:
U+2714 HEAVY CHECK MARK, SPACE
Previously, the encoding was unintentionally this:
U+2714 HEAVY CHECK MARK, U+FE0F VARIATION SELECTOR-16, SPACE
The language_list_dbl_col parameter in the page_params
is used only by the web client frontend. The value is
calculated in the backend and then passed as a page_param,
which is unnecessary considering that the whole process
benefits the frontend only. Hence, move the entire
calculation code to the frontend.
Fixes part of #18673.
default_language_name was a part of page_params, which is actually
redundant considering that we already have language_list and
default_language available to the frontend, which can be used to
compute the default_language_name; this prevents the backend
from having to send an additional parameter.
Fixes part of #18673.
This is a follow-up for 98f8d94b25.
For cases when url_format_string is like https://example.com/%%(foo)s/%(bar)s,
group_match_regex should only detect `bar` as the intended
parameter, and not `foo`.
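An illustrative way to express "match %(name)s but not %%(name)s" (not
necessarily the exact regex we use):
```python
import re

# Negative lookbehind: a parameter marker preceded by another '%' is an
# escaped literal and should not be captured.
group_match_regex = re.compile(r"(?<!%)%\((\w+)\)s")

url_format_string = "https://example.com/%%(foo)s/%(bar)s"
print(group_match_regex.findall(url_format_string))  # ['bar']
```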