django-scim2 doesn't order the rows when fetching them in response to a
query using the filter syntax. We ensure that ORDER BY id is always
appended to the SQL queries.
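As a rough illustration (the helper name below is hypothetical, not
django-scim2's actual API), the idea is simply to append a deterministic
ordering to whatever queryset the SCIM filter produces:

```python
from django.contrib.auth import get_user_model

def scim_filter_users(filter_kwargs):
    # Whatever filter the SCIM query translates to, always append
    # ORDER BY id so paginated responses are deterministic.
    UserModel = get_user_model()
    return UserModel.objects.filter(**filter_kwargs).order_by("id")
```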
This commit switches the BigBlueButton integration
to use SHA256 instead of SHA1, as BigBlueButton supports
it and Scalelite now does, too.
Fixes #19966.
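For reference, a BigBlueButton API checksum is a hash of the call name,
the query string (without the checksum parameter), and the shared
secret, so switching the digest is essentially a one-line change
(sketch below; helper name illustrative):

```python
import hashlib

def bbb_checksum(api_call_name: str, query_string: str, shared_secret: str) -> str:
    # Same construction as the old SHA1 checksum, but using SHA256.
    return hashlib.sha256(
        (api_call_name + query_string + shared_secret).encode()
    ).hexdigest()
```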
This commit changes web_public_streams_enabled to return False if
realm.enable_spectator_access is False, so that creating web-public
streams is not allowed when enable_spectator_access is False.
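A minimal sketch of the intended check (assuming a server-level
WEB_PUBLIC_STREAMS_ENABLED setting; not necessarily the exact code):

```python
from django.conf import settings

def web_public_streams_enabled(realm) -> bool:
    if not settings.WEB_PUBLIC_STREAMS_ENABLED:
        return False  # feature disabled server-wide
    if not realm.enable_spectator_access:
        return False  # spectator access disabled for this realm
    return True
```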
Special characters, including `\r`, `\n`, and more esoteric codepoints
like non-characters, can negatively affect rendering and UI behaviour.
Check for, and prevent making new messages with, characters in the
Unicode categories of `Cc` (control characters), `Cs` (surrogates),
and `Cn` (unassigned, non-characters).
Fixes #20128.
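A hedged sketch of such a check using the standard library (a real
implementation still has to allow the ordinary whitespace that message
bodies legitimately contain):

```python
import unicodedata

DISALLOWED_CATEGORIES = {"Cc", "Cs", "Cn"}  # control chars, surrogates, unassigned

def contains_disallowed_characters(content: str) -> bool:
    # Note: "\n" and "\r" are themselves in category Cc, so a real check
    # must special-case the whitespace messages are allowed to contain.
    return any(unicodedata.category(ch) in DISALLOWED_CATEGORIES for ch in content)
```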
This commit replaces "dark mode" and "light mode" with "dark theme"
and "light theme" in the message returned and shown in a little
popup in the UI, when color scheme settings are changed through
slash commands.
Since spectators can't access personal profile settings or view
profiles of other users, we don't send realm custom profile field
data or users' profile data to spectators.
Fixes #20301.
Enable spectator access for the test `zulip` realm in the development
setup.
Add option in `do_create_realm` to configure
`enable_spectator_access` field of `Realm`.
We restrict access to messages from web-public streams if
anonymous login is disabled via `enable_spectator_access`.
Display of the `Anonymous login` button is now controlled by
the value of `enable_spectator_access`.
Admins can toggle `enable_spectator_access` via org settings in UI.
The user ID is a very useful piece of information that the mobile
client should have access to, instead of only getting the email. This
makes it much simpler to implement clients that are robust to
changes in email address.
TOR users are legitimate users of the system; however, that system can
also be used for abuse -- specifically, by evading IP-based
rate-limiting.
For the purposes of IP-based rate-limiting, add a
RATE_LIMIT_TOR_TOGETHER flag, defaulting to false, which lumps all
requests from TOR exit nodes into the same bucket. This may allow a
TOR user to deny other TOR users access to the find-my-account and
new-realm endpoints, but this is a low cost for cutting off a
significant potential abuse vector.
If enabled, the list of TOR exit nodes is fetched from their public
endpoint once per hour, via a cron job, and cached on disk. Django
processes load this data from disk, and cache it in memcached.
Requests are spared from the burden of checking disk on failure via a
circuit breaker, which trips if there are two failures in a row, and
only begins trying again after 10 minutes.
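A sketch of how the circuit-breaker side of this might look with the
`circuitbreaker` library (the file path and function name here are
illustrative, not the actual implementation):

```python
import json

from circuitbreaker import circuit

TOR_EXIT_NODE_CACHE = "/var/lib/zulip/tor-exit-nodes.json"  # illustrative path

@circuit(failure_threshold=2, recovery_timeout=60 * 10)
def get_tor_exit_nodes() -> set:
    # If this raises twice in a row, the circuit opens and callers fail
    # fast for the next 10 minutes instead of hitting the disk on every
    # request.
    with open(TOR_EXIT_NODE_CACHE) as f:
        return set(json.load(f))
```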
This commit ensures that all markdown fixtures have unique
test names by rewriting the names of some of them and adding
a test in `test_markdown.py`.
Earlier, duplicate keys were over-writing each other's values in
`load_markdown_tests` in `test_markdown.py`.
This resolves the issues reported in #20108, a major chunk of which
were due to the incomplete support for importing livechat
streams/messages in the tool. So, it's best not to import any livechat
streams/messages for now, until complete support for importing them is
developed.
The decorator form is clearer by being more explicit; additionally,
the api_by_user rate limit is currently used in only one place, and
makes it difficult to test per-user rate limits that are more specific.
Both `create_realm_by_ip` and `find_account_by_ip` send emails to
arbitrary email addresses, and as such can be used to spam users.
Lump their IP rate limits into the same bucket; most legitimate users
will likely not be using both of these endpoints at similar times.
The rate is set at 5 in 30 minutes, the more quickly-restrictive of
the two previous rates.
The existing test did not verify that the rate limit only applied to
127.0.0.1, and that other IPs were unaffected. For safety, add an
explicit test of this.
The only use case of rate_limit_rule which does not clear the
RateLimitedIPAddr history is test_hit_ratelimits_as_remote_server,
which is not made any worse by clearing out the IP history for a
non-existent `api_by_remote_server` domain.
Since `prefers_web_public_view` key in session is only
relevant to users without an account, this key should no longer
be present in the user's session object.
Fixes #19907
For realm export, the following changes have been made:
- `./manage.py export --upload` would delete `.tar.gz` and unpacked dir
- `./manage.py export` would only delete `unpacked dir`
We have also removed `--delete-after-upload`, since that behavior is
now the default.
Fixes #20081
If the realm is web-public, spectators can now view avatars of other
users.
There is a special exception we had to introduce in the REST model to
allow `/avatar`-type URLs for `anonymous` access, because they
don't have the /api/v1 prefix.
Fixes #19838.
This commit adds functionality to import messages from the
Discussions having direct channels as their parent. As we don't
have topics in the PMs, the messages are imported in interleaved
form in the imported direct channels/PMs.
This was completely unsupported earlier and would have resulted in
an error.
This commit updates the error message returned when the maximum
invite limit for the day is reached. We update the error returned by
the API to only mention that the limit is reached, and add the
suggestion to use a multi-use link or contact support to the message
shown in the webapp.
We always use delivery_email to generate gravatar_url, but in
test_admin_api_hide_emails we were passing email to get_gravatar_url
and matching it against the avatar_url field of the fetched user object.
The tests were passing because email_address_is_realm_public
was using the old realm object, and thus the email field was
incorrectly set to delivery_email even when email_address_visibility
was set to EMAIL_ADDRESS_VISIBILITY_ADMINS.
This commit fixes the test to pass delivery_email to get_gravatar_url.
Not proxying these requests through camo is a security concern.
Furthermore, on the desktop client, any embed image which is hosted on
a server with an expired or otherwise invalid certificate will trigger
a blocking modal window with no clear source and a confusing error
message; see zulip/zulip-desktop#1119.
Rewrite all `message_embed_image` URLs through camo, if it is enabled.
Supporting URL percent-encoded bytes is possible using `%%20`, but this
is not necessarily very understandable to end-users, even those that
understand percent encoding.
Allow `%20` in linkifier URL format strings, and transform them into
`%%20` in the pattern just before they are applied in markdown
translation. Care must be taken here, such that already-escaped `%`s
are not escaped an extra time.
We do this before rendering, and not before storage, as
a simplification; the JS-side linkifier at present only understands
`%(foo)s` and thus needs no changes, and to avoid an un-escaping pass
before showing in the admin UI.
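A minimal sketch of the escaping pass described above (the helper name
is hypothetical); it leaves `%(name)s` substitutions and already-escaped
`%%` alone, and doubles any other `%`:

```python
import re

def escape_extra_percents(url_format: str) -> str:
    # "%%" and "%(name)s" pass through unchanged; a bare "%" (as in
    # "%20") is doubled so it survives Python %-formatting.
    return re.sub(
        r"%%|%\([^()]*\)s|%",
        lambda m: "%%" if m.group() == "%" else m.group(),
        url_format,
    )

# escape_extra_percents("https://example.com/%(id)s/a%20b") % {"id": "42"}
# -> "https://example.com/42/a%20b"
```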
User-supplied custom realm filters have had some sort of regex-based
validation of the URL format string since their introduction in
d7e1e4a2c0 -- and this has always been
in addition to the URLValidator. The URLValidator is the one which
does the security-relevant work of validating that the schema is
reasonable, and that the overall shape of the URL is well-formed. The
regex has served primarily to arbitrarily limit the characters that can
appear in the URL, in the mistaken name of safety.
Adjust the regex, such that its only purpose is to verify that the
usages of `%` characters in the URL are reasonable, and leave the URL
validation to the URLValidator, which can do a far better job. This
includes broadening the support to include `%%` as an escape
character; this is likely such a niche case as to be unnecessary, but
costs little.
Fixes #16013.
Removes the `/day` and `/night` options from the typeahead menu while
still allowing the commands to be used. Typing `/day` and `/night`
will now suggest `/light` and `/dark`, respectively. Also changes the
`Dark mode` and `Light mode` popups that appear after using the
corresponding command.
Fixes #18318.
This makes logging more consistent between FCM and APNs codepaths, and
makes clear which user-ids are for local users, and which are opaque
integers namespaced from some remote zulip server.
Being able to determine how many distinct users are getting push
notifications per remote host is useful, as is the distribution of
device counts. This parallels the log line in
handle_push_notification for push notifications from local realms,
handled via the event queue.
We should use a more selective query for UserGroupMembership
objects in tests for checking adding and removing members.
This is done by checking the membership counts for the
particular user group only.
This will help in keeping the tests more understandable
after we add members to the role-based system groups,
since that would create a lot of membership objects.
We make the UserGroup queries in user group creation and
deletion tests more selective by filtering for the user groups
which belong to the realm being tested, and not the ones included in
the lear realm, etc.
This will help us to keep the tests more understandable
when the counts of UserGroup increases due to addition of
system groups. There is no need to consider system groups
of other realms in these tests.
It is confusing to have the plan type constants not be namespaced
by the thing they represent. We already have a namespacing
convention in place for constants, so we should use it for
Realm.plan_type as well.
* Remove unnecessary json_validator for full_name parameter.
* Update frontend to pass the right parameter.
* Update documentation and note the change.
Fixes#18409.
`rendered_content` in historical messages may be empty; examining the
history of them may thus require diff'ing two empty strings, which
itself produces an empty string.
Use `lxml.html.fragment_fromstring` to be able to successfully parse
these, rather than returning a 500.
Part of #19559.
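For illustration, the difference between the two lxml entry points on
an empty document:

```python
import lxml.html

# lxml.html.fromstring("") raises ParserError("Document is empty"),
# which is what produced the 500s; fragment_fromstring with
# create_parent=True instead returns an (empty) wrapping <div> element
# that can be diffed safely.
element = lxml.html.fragment_fromstring("", create_parent=True)
print(lxml.html.tostring(element))  # b'<div></div>'
```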
As detailed in the comments, the default behavior is undesirable for us
because we can't really predict all possibilities of exceptions that may
be raised - and thus putting str(e) in the http response is potentially
insecure as it may leak some unexpected sensitive information that was
in the exception.
As a hypothetical example - KeyError resulting from some buggy
some_dict[secret_string] call would leak information. Though of course
we aim to never write code like that.
We pass allow_realm_admin as True to access_stream_by_id for the
`GET /users/{user_id}/subscriptions/{stream_id}` endpoint,
because we want to allow non-subscribed admins to get
subscription status in private streams.
Fixes #19077.
This is a prep commit for new permissions model in
which a user group would be able to have a subgroup.
This commit renames get_memberships_of_users to
get_direct_memberships_of_users to specify that
the function is used only to fetch the direct
memberships and not memberships of subgroups of
the direct group.
Extracted this commit from #19866.
Co-authored-by: Anders Kaseorg <anders@zulip.com>
This is a prep commit for new permissions model in which a user
group would be able to have a subgroup.
This commit renames get_user_groups to get_direct_user_groups
to specify that the function is used only to fetch the direct
groups that the user is part of, and not subgroups of the direct
group.
Extracted this commit from #19866.
Co-authored-by: Anders Kaseorg <anders@zulip.com>
From 430c5cb, in `fetch_initial_state_data`,
we only include legacy settings in the top level of
`state` and the newer ones are stored in `state['user_settings']`.
That should've had a corresponding change in apply_event().
Also, fixed a test related to this logic.
From 430c5cb, we do not send events for settings added after
introduction of `user_settings` event.
This test wasn't considering that and started failing on
adding new user_settings.
This commit removes _test_user_settings_for_adding_streams
and its callers for testing public and private streams
because it uses excessive mocking and we also test the same
thing in _test_user_settings_for_creating_streams without
mocking, so this test doesn't add anything.
For users who are not logged in and for those who don't have
'prefers_web_public_view' set in session, we redirect them
to the default login page where they can choose to login
as spectator or authenticated user.
This commit adds can_create_web_public_streams helper
in models.py which will be used to validate whether
user is allowed to create a web-public stream or not.
This commit also adds the checks for Realm.POLICY_OWNERS_ONLY
in check_has_permission_policies.
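A hedged sketch of the helper, covering only the default
POLICY_OWNERS_ONLY case mentioned above (the real models.py helper
presumably handles the other policy values as well):

```python
def can_create_web_public_streams(user_profile) -> bool:
    realm = user_profile.realm
    if not realm.web_public_streams_enabled():
        return False
    # Only the default policy is sketched here; other values of
    # create_web_public_stream_policy would be checked analogously.
    if realm.create_web_public_stream_policy == realm.POLICY_OWNERS_ONLY:
        return user_profile.is_realm_owner  # assuming the usual owner flag
    return False
```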
This commit adds create_web_public_stream_policy
field to Realm table which controls the roles that
can create web-public streams and by default its
value is set to POLICY_OWNERS_ONLY.
This commit enforces that the invite_only argument be passed as a
keyword argument in _test_user_settings_for_creating_streams. This
will help improve readability, especially when we add the
is_web_public argument in further commits.
We send three events when changing delivery email of a user - one
for updating the delivery_email field of user, one for avatar url
change, and one for changing email field if email_address_visibility
is set to 'EMAIL_ADDRESS_VISIBILITY_EVERYONE'.
There is already a test for delivery_email and avatar_url event with
the visibility setting set to 'EMAIL_ADDRESS_VISIBILITY_ADMINS_ONLY',
but no test for verifying the email update event sent when email
address is public, so this commit adds a test for checking the schema
of event for updating email field.
When email_address_visibility is changed and either the old value
or the updated value is EMAIL_ADDRESS_VISIBILITY_EVERYONE then
email field of all users is updated and we also send the corresponding
event to clients. But apply_event code did not update the data on
receiving the event, so this commit fixes the code to correctly
handle the event in apply_event.
(We also use this event when just changing a user's email address).
This commit also adds the tests and openapi schema for the event.
We use the lists defined in models.py, like Realm.COMMON_POLICY_TYPES,
Realm.COMMON_MESSAGE_POLICY_TYPES, etc., in do_set_realm_property_test
instead of defining the list there (e.g. [4, 3, 2, 1]). We do the
same thing in do_set_realm_property_test in test_realm.py.
We skip email_address_visibility values in this commit because that
requires some changes in the openapi schema as well.
Since the calls to the translation function `_()` are made outside
of the `send_message_moved_breadcrumbs` function, these strings are
translated outside of the `with override_language` block, leading to
strings being translated even when we don't intend them to be.
We now use gettext_lazy, with appropriate testing, to avoid this.
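For context, `gettext_lazy` defers translation until the string is
actually rendered, so it picks up whatever language is active inside
the `with override_language` block (the message text below is
illustrative, and Django's own `override` stands in for Zulip's
wrapper):

```python
from django.utils.translation import gettext_lazy, override

MOVED_NOTICE = gettext_lazy("This topic was moved here from another stream.")

with override("de"):
    # The lazy proxy is only resolved to a concrete translation here,
    # under the language activated by the override block.
    rendered = str(MOVED_NOTICE)
```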
Zulip attempts to validate that the regular expressions that admins
enter for linkifiers are well-formatted, and only contain a specific
subset of regex grammar. The process of checking these
properties (via a regex!) can cause denial-of-service via
backtracking.
Furthermore, this validation itself does not prevent the creation of
linkifiers which themselves cause denial-of-service when they are
executed. As the validator accepts literally anything inside of a
`(?P<word>...)` block, any quadratic backtracking expression can be
hidden therein.
Switch user-provided linkifier patterns to be matched in the Markdown
processor by the `re2` library, which is guaranteed constant-time.
This somewhat limits the possible features of the regular
expression (notably, look-head and -behind, and back-references);
however, these features had never been advertised as working in the
context of linkifiers.
A migration removes any existing linkifiers which would not function
under re2, after printing them for posterity during the upgrade; they
are unlikely to be common, and are impossible to fix automatically.
The denial-of-service in the linkifier validator was discovered by
@erik-krogh and @yoff, as GHSL-2021-118.
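A sketch of validating a pattern via the `re2` Python binding's
`re`-like API (error handling simplified):

```python
import re2

def compile_linkifier_pattern(pattern: str):
    # re2 rejects back-references and look-around at compile time, and
    # guarantees linear-time matching for everything it does accept.
    try:
        return re2.compile(pattern)
    except re2.error:
        raise ValueError("This pattern is not supported by re2.")
```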
This removes a false-positive ReDoS, since the input is always
checked-in code. It also incidentally refactors the regexes to
be more explicit about the values they expect, and removes unnecessary
capturing groups.
It removes an optional parenthesized status code for fixtures,
unnecessary since 981e4f8946, as well as
optional key-value language options, unnecessary since
a2be9a0e2d.
Thank you to @erik-krogh and @yoff for bringing this to our attention.
This fixes the issue where 'None' would appear in the rendered
HTML in case of a missing tab display_name. Now,
'test-help-documentation' will fail if any tab display_name
is missing.
In case of a tab_section with no tabs, currently a single tab with
the name 'null_tab' gets added. Added the display name 'None' for
'null_tab' to keep in line with the existing behaviour.
Fixes #19822
This makes our onboarding guide for education organizations much
simpler, since new organizations will start with these settings
correctly configured.
Fixes #19682
Users wanted a feature where they could specify
which users can create public streams and which users can
create private streams.
This splits stream creation code into two parts,
public and private stream creation.
Fixes #17009.
This commit replaces 'allow_message_deleting' boolean setting
with an integer setting 'delete_own_message_policy'. We now have a
separate dropdown for deciding which user roles can delete
messages sent by themselves, and the time-limit setting dropdown
is separate from it.
This new setting has two options - everyone and admins only. Other
options including moderators will be added further.
We also remove the "Never" option from the original time-limit
dropdown, as admins are always allowed to delete messages. This
"Never" option represented the case of only admins being allowed to
delete; that state is now represented by setting the new dropdown
to "admins only", and we also disable the time-limit dropdown in
this case, as admins are allowed to delete irrespective of the limit.
Note, this setting is only for deleting messages sent by the
deleting user themselves, and only admins are allowed to delete
messages sent by others as before.
We make zero an invalid value for message_content_delete_limit_seconds.
To handle the "Allow to delete message any time" case, the API-level
value of message_content_delete_limit_seconds is "anytime" and the
DB-level value is None. We also use these values for the message
retention setting, so this helps maintain consistency.
This commit does not remove the 'enable_login_emails' field from
RealmUserDefault table but it is just not used and cannot be
changed from UI or API similar to 'enable_marketing_emails' setting.
Requests to the root subdomain weren't getting request_notes.realm set
even if a realm exists on the root subdomain - which is actually a
common scenario, because having a single organization on the root
subdomain is the simplest and most common setup for self-hosted
deployments.
This reverts commit cd93d0967f.
This check_or is redundant with check_union; it gives a misleading
error message for the non-matching case; and it has no type safety.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Apparently, our Slack-compatible outgoing webhook format didn't
exactly match Slack's, especially in the types used for values. Fix
this by using a much more consistent format, where we preserve their
pattern of prefixing IDs with letters.
This fixes a bug where Zulip's team_id could be the empty string,
which tripped up using GitLab's slash commands with Zulip.
Fixes #19588.
This commit removes the existing default_twenty_four_hour_time field in
Realm table which was used to set the twenty_four_hour_time setting of
new user on joining and instead we now use the twenty_four_hour_time
field of RealmUserDefault table for the same.
With some tweaks by tabbott to clarify the documentation.
None of the existing custom profile field types have the value as an
integer as declared in many places - nor is it a string as currently
declared in types.py. The correct type is Union[str, List[int]]. Rather
than tracking this in so many places throughout the codebase, we add a
new ProfileDataElementValue type and insert it where appropriate.
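The alias itself is small; per the description above, it is essentially:

```python
from typing import List, Union

ProfileDataElementValue = Union[str, List[int]]
```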
Send update event to client after a stream is made web public.
This has been documented in the API documentation since feature level
73; previously the value was always false.
We allow clients to make existing streams web public via the API.
This feature is still disabled via settings in production
environments, because we may have additional policy rules or UI
warnings we wish to add to this sort of conversion.
Users can now create web-public streams via the /subscribe API.
So, when a web-public stream present in the API request does not
exist, it will now be created if the is_web_public
parameter is specified. The parameter would have been ignored without
this commit.
The new error message is clearer about why; "User cannot create
stream with this settings." was bad English, and in any case removing
an unnecessary string is always an improvement for translators.
This new setting both serves as a guard to allow us to merge API
support for web public streams to main before we're ready for this
feature to be available on Zulip Cloud, and also long term will
protect self-hosted servers from accidentally enabling web-public
streams (which could be a scary possibility for the administrators of
a corporate Zulip server).
This is a follow-up to #19388.
We will in the future allow PATCH requests to change the visibility
of an existing topic, so `last_updated` is a better name for this field.
This commit does not affect the API or events in any way, but only the
database.
Fixes #17456.
The main tricky part has to do with what values the attribute should
have. LDAP defines a Boolean as
Boolean = "TRUE" / "FALSE"
so ideally we'd always see exactly those values. However,
although the issue is now marked as resolved, the discussion in
https://pagure.io/freeipa/issue/1259 shows how this may not always be
respected - meaning it makes sense for us to be more liberal in
interpreting these values.
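A hedged sketch of a liberal interpretation (the helper name and the
accepted spellings are illustrative):

```python
def parse_ldap_boolean(value) -> bool:
    # The ABNF only allows "TRUE" / "FALSE", but as the freeipa
    # discussion shows, other spellings occur in the wild.
    if isinstance(value, bool):
        return value
    if isinstance(value, bytes):
        value = value.decode()
    return str(value).strip().lower() in ("true", "yes", "on", "1")
```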
The test now uses submit_reg_form_for_user, meaning a blank
full_name is posted to /accounts/register/ rather than the
parameter being excluded.
Fixes part of #7564
I had to pass stop_after_reg_form=True, as the call to get_user in
verify_signup fails. I am not sure whether this is the expected
behavior. Also this causes the test to use submit_reg_form_for_user,
meaning a blank password is posted to /accounts/register/ rather than
no password.
Fixes part of #7564
get_user_by_delivery_email should be used, given that the email variable
is the realm email address that the account is being created with, not
the .email field which can be a dummy address based on org settings.
Previously, we redirected to /new when the user clicked on "buy
standard" from the root domain. Instead, we now redirect to the
/upgrade page. The /upgrade page asks the user to enter the subdomain
of their organization and then redirects them to the /upgrade
page of their organization.
This parallels fe25517295, but for mobile notifications. It also
adds a test, which verifies that such content does not crash either
mobile or email notifications.
fe25517295 adjusted the email_notifications codepath to use
`lxml.html.fragment_fromstring` method when parsing
`rendered_content`, but left the tests using a helper which called
`fromstring`.
Switching the tests to match the code as run reveals a bug -- using
`drop_tree` on all `message_inline_image` classes now _does_ remove
all of a top-level image-URL-only message. Previously, such messages
were "safe" from the block that calls `drop_tree` only by dint of
`drop_tree` being a silent no-op for the root element. When parsed
using `fragment_fromstring`, they are no longer the root, and as such
an empty message results.
Reorder relative_to_full_url to check for only one `message_inline_image`
within the top `<div>`, and only run the `drop_tree` path in the
alternate case. Tests must be adjusted for their output now including
one more layer of `<div>`.
The previous commit introduced logging of attempts for username+password
backends. For completeness, we should log, in the same format,
successful attempts via social auth backends.
Now, when we add a custom animated emoji to the realm,
we also save a still image of it (the first frame of the GIF), so
we can avoid showing an animated emoji every time.
This extends the invite api endpoints to handle an extra
argument, expiration duration, which states the number of
days before the invitation link expires.
For prereg users, expiration info is attached to event
object to pass it to invite queue processor in order to
create and send confirmation link.
In the case of multiuse invites, confirmation links are
created directly inside do_create_multiuse_invite_link().
For filtering valid user invites, the expiration info stored in the
Confirmation object is used; it is accessed from a prereg
user via reverse generic relations.
Fixes #16359.
The API for changing the batching period was added in
5db4fe8652.
This is a follow up to that commit. We also update the timestamps for
existing scheduled email notifications entries so that the effect of
changing the setting is immediate.
Part of #15280
SOCIAL_AUTH_SUBDOMAIN was potentially very confusing when opened by a
user, as it had various Login/Signup buttons as if there was a realm on
it. Instead, we want to display a more informative page to the user
telling them they shouldn't even be there. If possible, we just redirect
them to the realm they most likely came from.
To make this possible, we have to exclude the subdomain from
ROOT_SUBDOMAIN_ALIASES - so that we can give it special behavior.
We do not allow mentioning system user groups for now
because this can lead to circumventing the wildcard
mention restrictions. It will be enabled once we add
a setting to control that.
This is implemented by simply ignoring such a group as one of the
mentioned user groups, even if the message content
includes the mention syntax for it; the message
is sent normally.
We still keep the for_mention parameter for accessing
user group while sending email and push notifications
as mentioning system user groups will be allowed in
future.
This commit also removes the test for email notifications
for system user groups as we are not allowing mentioning
them.
This commit is only a backend change, as we already
exclude the system groups from mention typeaheads and
other UI.
This commit modifies the copy_user_settings code such that instead
of a source user profile, we can have two types of sources - a user
profile or the RealmUserDefault table of the realm - and then set the
settings from RealmUserDefault only if there is no user profile
as a source.
We also rename copy_user_settings to copy_default_settings for
clarity.
This commit adds do_set_realm_user_default_setting which
will be used to change the realm-level defaults of settings
for new users.
We also add a new event type "realm_user_settings_defaults"
for these settings and a "realm_user_settings_default" object
in '/register' response containing all the realm-level default
settings.
Because we create all realms with do_create_realm (including in the
test suite), we just need to change that function, add a migration for
existing realms, and ensure the data import code path correctly
creates these objects.
Note that the import code path will create a RealmUserDefault row with
default values if it is not present in the import data, which is
important for importing data from other tools like Slack.
We set the enable_marketing_emails setting after copying user
settings to override the value selected in registration form.
This change is also necessary because the enable_marketing_emails
field is present in RealmUserDefault (to avoid copying code),
but we do not actually use this value; instead we want
the setting to be set according to the value in the registration
form.
We set this setting only for non-bot users since we generally
do not set any settings for bots.
We still used notification_setting_types in copy_user_settings
function of create_user.py and in a test in test_event_system.py.
It is not required to do so, since we have already added all settings
to property_types, and we loop over property_types at both
these places, which includes all settings.
We already test all the notification settings in
test_toggling_boolean_display_settings (which is
now renamed to test_toggling_boolean_user_settings),
as all settings are now moved to property_types and
we are also merging the other parts to consider all the
settings under one category.
This commit adds a separate test for invalid values of user settings
and removes the existing code for it in test_change_user_setting.
This change will enable us to merge the tests for notification
settings into this one, because email_notification_batching_period_settings
has different invalid values than other integer settings, and we do the
same for realm settings also.
This commit adds `demo_organization_scheduled_deletion_date` to
the `realm` section of the `/register` response so that it is
available to clients when enabled.
This is a part of #19523.
This fixes a regression where one could end up deactivating all owners
of a realm when trying to synchronize LDAP with the `is_realm_admin`
flag configured in `AUTH_LDAP_USER_FLAGS_BY_GROUP`.
With tweaks by tabbott to add is_moderator as well.
Fixes #18677.
Since 84742a0, all settings which were previously sent inline with
other fields in the /register response are sent in the
`user_settings` dictionary.
In order to simplify the process of adding new personal settings, we
want to transition to a world where new settings only need to consider
the `property_types` object, and code that needs to reference the
legacy behavior interacts with an object with `legacy` in its name.
This way, contributors working on new settings don't need to think
about the legacy code paths at all.
See https://chat.zulip.org/#narrow/stream/378-api-design/topic/user.20settings.20response.20in.20.2Fregister
to understand this better.
The commit:
1. Adds the new field as nullable.
2. Adds code that'll create new Confirmation with the field set
correctly.
3. For verifying the validity of a Confirmation object, this still uses
the old logic in get_object_from_key() to keep things functioning until
we backfill the old objects in the next step.
Thus this commit is deployable. Next we'll have a commit to run a
backfill migration.
An integer or no argument is supposed to be passed.
These weren't caught by mypy because booleans are integers in Python;
see https://github.com/python/mypy/issues/1757
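A small illustration of the mypy gap (unrelated to Zulip's code, just
the underlying typing behavior):

```python
def set_timeout(seconds: int = 30) -> None:
    ...

# bool is a subclass of int in Python, so mypy accepts this call even
# though passing True here is almost certainly a bug:
set_timeout(True)
```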
aioapns already has a retry loop. By default it retries forever on
ConnectionError and ConnectionClosed, so our own retry loop would
never be reached. Remove our retry loop, and configure aioapns to
retry APNS_MAX_RETRIES times on ConnectionError like the previous
version did. It still retries forever on ConnectionClosed; that’s not
configurable but probably fine.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This utilizes the generic `BaseNotes` we added for multipurpose
patching. With this migration as an example, we can further support
more types of notes to replace the monkey-patching approach we have used
throughout the codebase for type safety.
This information can be gleaned from the stacktrace, but making it
explicit in the stringification makes it much easier to differentiate
types of errors at a glance, particularly in Sentry.
The default is kept as no retries. Since retries with exponential
backoff are a good thing to make easy, the int form defaults to
setting a backoff_factor.
Unfortunately, urllib3 retry backoff does not implement jitter.
Switching this to use the `backoff` library[1] rather than urllib3's
native Retry is left as future extension.
[1] https://pypi.org/project/backoff/
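A sketch of the int-to-Retry translation described above (names are
hypothetical, not the actual outgoing-session code):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util import Retry

def session_with_retries(max_retries: int, backoff_factor: float = 0.5) -> requests.Session:
    # An integer retry count becomes a urllib3 Retry with exponential
    # backoff (0.5s, 1s, 2s, ...); note urllib3 adds no jitter.
    retry = Retry(total=max_retries, backoff_factor=backoff_factor)
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    session.mount("http://", HTTPAdapter(max_retries=retry))
    return session
```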
This adds the X-Smokescreen-Role header to proxy connections, to track
usage from various codepaths, and enforces a timeout. Timeouts were
kept consistent with their previous values, or set to 5s if they had
none previously.
The transforms called from `build_message_payload` use
`lxml.html.fromstring` to parse (and stringify, and re-parse) the HTML
generated by Markdown. However, this function fails if it is passed
an empty document. "empty" is broader than just the empty string; it
also includes any document made entirely out of control characters,
spaces, unpaired surrogates, U+FFFE, or U+FFFF, and so forth. These
documents would fail to parse, and raise a ParserError.
Using `lxml.html.fragment_fromstring` handles these cases, but does so
by wrapping the contents in a <div> every time it is called. As such,
replacing each `fromstring` with `fragment_fromstring` would nest
another layer of `<div>`.
Instead of each of the helper functions re-parsing, modifying, and
stringifying the HTML, parse it once with `fragment_fromstring` and
pass around the parsed document to each helper, which modifies it
in-place. This adds one outer `<div>`, requiring minor changes to
tests and the prepend-sender functions.
The modification to add the sender is left using BeautifulSoup, as
that sort of transform is much less readable, and more fiddly, in raw
lxml.
Partial fix for #19559.
maybe_send_batched_emails handles batches of emails from different
users at once; as it processes each user's batch, it enqueues messages
onto the `email_senders` queue. If `handle_missedmessage_emails`
raises an exception when processing a single user's email, no events
are marked as handled -- including those that were already handled and
enqueued onto `email_senders`. This results in an increasing number
of users being sent repeated emails about the same missed messages.
Catch and log any exceptions when handling an individual user's
events. This guarantees forward progress, and that notifications are
sent at-most-once, not at-least-once.
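The guard is conceptually just a try/except around each user's batch
(simplified sketch; the grouping variable is illustrative):

```python
import logging

for user_profile_id, events in events_by_user.items():  # illustrative grouping
    try:
        handle_missedmessage_emails(user_profile_id, events)
    except Exception:
        # Log and move on, so one user's failure can't cause earlier,
        # already-enqueued batches to be re-sent on the next run.
        logging.exception("Error handling batched emails for user %s", user_profile_id)
```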
We do not allow any user to edit the system user groups (including
renaming, deleting, adding or removing members, etc.) from the
API. These user groups will change only by the code when a new
user is added or role of a user is changed.
This is implemented by always rejecting access_user_group_by_id,
except in the case when it is used to get the user group for sending
email and push notifications, as we would need to send notifications
to the mentioned user group.
Tuples cannot be deserialized from JSON.
While we do use these validators for other things, like event
dictionaries, we have migrated the API away from using those. The
last use was removed in 4f3d5f2d87
Signed-off-by: Anders Kaseorg <anders@zulip.com>
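For the record, the underlying limitation is easy to demonstrate:

```python
import json

# JSON has no tuple type; round-tripping a tuple yields a list, so a
# tuple validator can never match data parsed from a request body.
assert json.loads(json.dumps((1, 2))) == [1, 2]
```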