This information can be gleaned from the stacktrace, but making it
explicit in the stringification makes it much easier to differentiate
types of errors at a glance, particularly in Sentry.
We move the emojiset_choices method from the UserProfile class to the
UserBaseSettings class, because the emojiset field exists in the
UserBaseSettings class and this method will be used for realm-level
settings as well, along with the existing user-level settings.
We rename the user group with is_system_group=True in the example for
'GET /user_groups' to 'Moderators', since is_system_group will be set
to True only for role-based user groups.
The default is kept as no retries. Since retries with exponential
backoff are a good thing to make easy, the int form defaults to
setting a backoff_factor.
Unfortunately, urllib3 retry backoff does not implement jitter.
Switching this to use the `backoff` library[1] rather than urllib3's
native Retry is left as a future extension.
[1] https://pypi.org/project/backoff/
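As a hedged sketch (the helper name below is illustrative, not the real session class), the int-to-backoff mapping described above looks roughly like this:
```python3
# Hedged sketch: turn an integer retry count into urllib3's Retry with
# exponential backoff. urllib3 sleeps roughly backoff_factor * 2**(n-1)
# seconds before the n-th retry, and (as noted above) adds no jitter.
from requests import Session
from requests.adapters import HTTPAdapter
from urllib3.util import Retry

def session_with_retries(max_retries: int = 0) -> Session:
    session = Session()
    if max_retries:
        # A positive int also turns on exponential backoff via backoff_factor.
        retry = Retry(total=max_retries, backoff_factor=1)
        adapter = HTTPAdapter(max_retries=retry)
        session.mount("http://", adapter)
        session.mount("https://", adapter)
    # With max_retries == 0 (the default), the session performs no retries.
    return session
```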
This adds the X-Smokescreen-Role header to proxy connections, to track
usage from various codepaths, and enforces a timeout. Timeouts were
kept consistent with their previous values, or set to 5s if they had
none previously.
This commit removes some unnecessary checks for `self.md.zerver_message`,
which were put there historically, as earlier we used to add the additional
properties like mentions_user_ids, alert_words, etc. to the Message dict
only. These were later moved to the MessageRenderingResult class in commit
75cea329b, but the checks weren't removed.
This is important because, while rendering the messages imported from
other chat tools (like Rocket.Chat), the Message dict is not passed to
the markdown processor, due to which the checks for `self.md.zerver_message`
fail and hence things like user mentions and stream/topic mentions are not
rendered properly in the imported messages.
The `user_activity_interval` worker calls:
```python3
last = UserActivityInterval.objects.filter(user_profile=user_profile).order_by("-end")[0]
```
Which results in a query like:
```sql
SELECT "zerver_useractivityinterval"."id",
       "zerver_useractivityinterval"."user_profile_id",
       "zerver_useractivityinterval"."start",
       "zerver_useractivityinterval"."end"
FROM "zerver_useractivityinterval"
WHERE "zerver_useractivityinterval"."user_profile_id" = 12345
ORDER BY "zerver_useractivityinterval"."end" DESC
LIMIT 1
```
For users which have at least one matching row, this results in a
query plan like:
```
Limit (cost=0.56..711.38 rows=1 width=24) (actual time=0.078..0.078 rows=1 loops=1)
  -> Index Scan Backward using zerver_useractivityinterval_7f021a14 on zerver_useractivityinterval (cost=0.56..1031399.46 rows=1451 width=24) (actual time=0.077..0.078 rows=1 loops=1)
        Filter: (user_profile_id = 12345)
        Rows Removed by Filter: 98
Planning Time: 0.059 ms
Execution Time: 0.088 ms
```
But for users that have just been created, with no matching rows, this
is considerably more expensive:
```
Limit (cost=0.56..711.38 rows=1 width=24) (actual time=10798.146..10798.146 rows=0 loops=1)
  -> Index Scan Backward using zerver_useractivityinterval_7f021a14 on zerver_useractivityinterval (cost=0.56..1031399.46 rows=1451 width=24) (actual time=10798.145..10798.145 rows=0 loops=1)
        Filter: (user_profile_id = 12345)
        Rows Removed by Filter: (count of every single row in the table, redacted)
Planning Time: 0.053 ms
Execution Time: 10798.158 ms
```
Regular vacuuming can force the use of the index on `user_profile_id`
as long as there are few enough users, which is fast -- however, at
some point, the query planner decides that index is insufficiently
specific and always chooses the effectively-whole-table scan.
Add an index on `(user_profile_id, end)`, which is expected to be
sufficiently specific that it is used even with large numbers of user
profiles.
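A minimal sketch of such a composite index, expressed with Django model Meta options (field names follow the query above; the actual migration may differ in details):
```python3
# Hedged sketch: a composite index on (user_profile_id, end) lets the
# "latest interval for this user" query walk the index directly, even
# when the user has no rows at all.
from django.db import models

class UserActivityInterval(models.Model):
    user_profile = models.ForeignKey("UserProfile", on_delete=models.CASCADE)
    start = models.DateTimeField()
    end = models.DateTimeField(db_index=True)

    class Meta:
        indexes = [
            models.Index(fields=["user_profile", "end"]),
        ]
```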
Ref #19250.
The transforms called from `build_message_payload` use
`lxml.html.fromstring` to parse (and stringify, and re-parse) the HTML
generated by Markdown. However, this function fails if it is passed
an empty document. "empty" is broader than just the empty string; it
also includes any document made entirely out of control characters,
spaces, unpaired surrogates, U+FFFE, or U+FFFF, and so forth. These
documents would fail to parse, and raise a ParserError.
Using `lxml.html.fragment_fromstring` handles these cases, but does so by
wrapping the contents in a <div> every time it is called. As such,
replacing each `fromstring` with `fragment_fromstring` would nest
another layer of `<div>`.
Instead of each of the helper functions re-parsing, modifying, and
stringifying the HTML, parse it once with `fragment_fromstring` and
pass around the parsed document to each helper, which modifies it
in-place. This adds one outer `<div>`, requiring minor changes to
tests and the prepend-sender functions.
The modification to add the sender is left using BeautifulSoup, as
that sort of transform is much less readable, and more fiddly, in raw
lxml.
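Roughly, the parse-once flow described above looks like the sketch below; the helpers are simplified stand-ins for the real transforms, and `create_parent=True` is what adds the single outer `<div>`:
```python3
# Hedged sketch: parse once (tolerating "empty" documents), let transforms
# mutate the tree in place, and stringify a single time at the end.
import lxml.html
from lxml.html import fragment_fromstring

def relative_to_full_url(fragment: lxml.html.HtmlElement, base_url: str) -> None:
    # Example in-place transform: absolutize relative links.
    for link in fragment.findall(".//a"):
        href = link.get("href", "")
        if href.startswith("/"):
            link.set("href", base_url + href)

def build_message_payload(rendered_content: str, base_url: str) -> str:
    # create_parent=True wraps everything in one outer <div>, which is also
    # what makes empty/control-character-only input parse without ParserError.
    fragment = fragment_fromstring(rendered_content, create_parent=True)
    relative_to_full_url(fragment, base_url)
    return lxml.html.tostring(fragment, encoding="unicode")
```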
Partial fix for #19559.
This is a roundabout way to appease a semgrep complaint about
‘error_msg = error_msg % (string_id,)’ while also improving the code.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
9ac55a8cf6 introduced support for
batch updates to stories. However, that commit didn't skip label
removals, as we already do in non-batch story payloads. This led
to an exception for batch story update payloads where labels were
removed but none were added.
maybe_send_batched_emails handles batches of emails from different
users at once; as it processes each user's batch, it enqueues messages
onto the `email_senders` queue. If `handle_missedmessage_emails`
raises an exception when processing a single user's email, no events
are marked as handled -- including those that were already handled and
enqueued onto `email_senders`. This results in an increasing number
of users being sent repeated emails about the same missed messages.
Catch and log any exceptions when handling an individual user's
events. This guarantees forward progress, and that notifications are
sent at-most-once, not at-least-once.
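A hedged sketch of the per-user isolation this describes (the real worker tracks more state; names mirror the prose):
```python3
# Hedged sketch: process each user's batch independently, so one failure
# cannot cause already-handled events to be retried and re-sent later.
import logging

from zerver.lib.email_notifications import handle_missedmessage_emails

logger = logging.getLogger(__name__)

def maybe_send_batched_emails(events_by_user: dict) -> None:
    for user_profile_id, events in events_by_user.items():
        try:
            handle_missedmessage_emails(user_profile_id, events)
        except Exception:
            # Log and continue; the batch is still marked as handled, which
            # keeps delivery at-most-once instead of at-least-once.
            logger.exception(
                "Failed to send missedmessage emails to %s", user_profile_id
            )
```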
This commit indicates that the realm_message_retention_days field can have
a special value, similar to its stream counterpart, and also explains how
the special value changed over different server versions.
With an extension from tabbott to double-enter the changelog entry.
Related discussion: https://chat.zulip.org/#narrow/stream/378-api-design/topic/realm_message_retention_days
The ability to use multiple ports was removed a long time ago.
And the "optional" note in the help message is in fact incorrect
since `addrport` being `None` is not supported.
We do not allow any user to edit the system user groups (including
renaming, deleting, adding or removing members, etc.) from the
API. These user groups will be changed only by the code, when a new
user is added or the role of a user is changed.
This is implemented by always rejecting access_user_group_by_id,
except when it is used to get the user group for sending email and
push notifications, as we need to send notifications to the
mentioned user group.
We make the description parameter in create_user_group keyword-only
to improve readability. The is_system_group parameter, which will be
added in a future commit, will also be keyword-only.
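Concretely, keyword-only parameters just go after a bare `*` in the signature; a simplified sketch (the real function has more parameters):
```python3
# Hedged sketch: description must now be passed by keyword; is_system_group
# (mentioned above as a future addition) would sit in the same keyword-only slot.
def create_user_group(name, members, realm, *, description=""):
    ...

# Call sites now spell the keyword out, which reads better:
# create_user_group("Moderators", members, realm, description="Realm moderators")
```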
Tuples cannot be deserialized from JSON.
While we do use these validators for other things, like event
dictionaries, we have migrated the API away from using those. The
last use was removed in 4f3d5f2d87.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
These changes are all independent of each other; I just didn’t feel
like making dozens of commits for them.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The auth attempt rate limit is quite low (on purpose), so this can be a
common scenario where a user asks their admin to reset the limit instead
of waiting. We should provide a tool for administrators to handle such
requests without fiddling around with code in manage.py shell.
Calling `email.save()` is only needed if we altered `email.address`;
it is unnecessary if we called `email.users.add(...)` which will have
done its own INSERT.
This fixes two bugs: the most obvious is that there is a race where a
ScheduledEmail object could be observed in the window between creation
and when users are added; this is a momentary instance when the object
has no users, but one that will resolve itself.
The more subtle is that .save() will, if no records were found to be
updated, _re-create_ the object as it exists in memory, using an
INSERT[1]. Thus, there is a race with `deliver_scheduled_emails`
between when the users are added, and when `email.save()` runs:
1. Web request creates ScheduledEmail object
2. Web request creates ScheduledEmailUsers object
3. deliver_scheduled_emails locks the former, preventing updates.
4. deliver_scheduled_emails deletes both objects, commits, releasing lock
5. Web request calls `email.save()`; UPDATE finds no rows, so it
re-creates the ScheduledEmail object.
6. Future deliver_scheduled_emails runs find a ScheduledEmail with no
attending ScheduledEmailUsers objects
Wrapping the logical creation of both of these in a single transaction
avoids both of these races.
[1] https://docs.djangoproject.com/en/3.2/ref/models/instances/#how-django-knows-to-update-vs-insert
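A minimal sketch of the fix, assuming Django's transaction.atomic wraps both the ScheduledEmail insert and the related-user rows (field names are illustrative):
```python3
# Hedged sketch: the ScheduledEmail row and its users/address become visible
# to deliver_scheduled_emails together, so it can never see (and delete) a
# half-created object, and no later .save() can resurrect a deleted one.
from django.db import transaction

from zerver.models import ScheduledEmail

def schedule_email(user_profiles, address, scheduled_timestamp, **email_fields):
    with transaction.atomic():
        email = ScheduledEmail.objects.create(
            scheduled_timestamp=scheduled_timestamp, **email_fields
        )
        if address is not None:
            email.address = address
            email.save(update_fields=["address"])
        else:
            # users.add() does its own INSERTs; no email.save() needed here.
            email.users.add(*user_profiles)
```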
Only clear_scheduled_emails previously took a lock on the users before
removing them; make deliver_scheduled_emails do so as well, by using
prefetch_related to ensure that the table appears in the SELECT. This
is not necessary for correctness, since all accesses of
ScheduledEmailUser first access the ScheduledEmail and lock it; it is
merely for consistency.
Since SELECT ... FOR UPDATE takes an UPDATE lock on all tables
mentioned in the SELECT, merely doing the prefetch is sufficient to
lock both tables; no `of=(...)` is needed for `select_for_update`.
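Concretely, the worker's locking query described above can be written as (a sketch; the real code does more filtering and batching):
```python3
# Hedged sketch: prefetch_related("users") brings the users table into the
# query; per the reasoning above, this is enough to lock both tables.
from django.utils.timezone import now

from zerver.models import ScheduledEmail

due_emails = (
    ScheduledEmail.objects.filter(scheduled_timestamp__lte=now())
    .prefetch_related("users")
    .select_for_update()
)
```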
This also does not address the pre-existing potential deadlock from
these two use cases, where both try to lock the same ScheduledEmail
rows in opposite orders.
No codepath except tests passes in more than one user_profile -- and
doing so is what makes the deduplication necessary.
Simplify the API by making it only take one user_profile id.
This fixes a bug where email notifications were sent for wildcard
mentions even if the `enable_offline_email_notifications` setting was
turned off.
This was because the `notification_data` class incorrectly considered
`wildcard_mentions_notify` as an independent setting, instead of a wrapper
around `enable_offline_email_notifications` and `enable_offline_push_notifications`.
Also add a test for this case.
While the STREAM_LINK_REGEX and STREAM_TOPIC_LINK_REGEX
identify the stream and topic mentions in the content
correctly (tested by printing out the matches), the
stream/topic mentions are still not linked to the
corresponding streams/topics for imported messages, as
a `zulip_message` instance is required for linking these
mentions to actual streams/topics (see `StreamPattern`
class in `markdown/__init__.py`) which is not provided
while processing the markdown for imported messages.
This commit updates both the stream-level and realm-level message
retention setting to use 'unlimited' instead of 'forever' to set
message retention setting to "retain messages forever".
We incorrectly include many realm settings in the data section of
'realm/update_dict' schema. It should only contain the settings
related to message edit, realm icon, realm logo and authentication
methods and not other settings, because all the other settings send
'realm/update' event and not 'realm/update_dict' event.
This commit only removes 'message_retention_days' and others will
be removed separately.
Closes #19287
This endpoint allows submitting multiple addresses so we need to "weigh"
the rate limit more heavily the more emails are submitted. Clearly e.g.
a request triggering emails to 2 addresses should weigh twice as much as
a request doing that for just 1 address.
Previously, the output would make it look like we sent an actual email
to the first user in the dry_run output, which is very confusing.
The `dry_run` code path already prints all the accounts that would
have been emailed at the end, so there's no reason to have this line
before the dry_run check.
Additionally, we move this after the `get_connection` check, because
failures at that stage shouldn't result in logging an attempt to send
an email.
This way we can stop reading as soon as we get to the body. Also,
send an Accept header, check that the request was actually successful,
use lxml.etree.iterparse instead of a broken hand-rolled state
machine, and support XHTML, all for negative 28 lines of code.
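A rough sketch of that streaming approach, under the assumption that the goal is pulling something like the page title out of `<head>` without downloading the body (helper name and header values are illustrative):
```python3
# Hedged sketch: stream the response and stop the incremental HTML parse as
# soon as we leave <head>, instead of using a hand-rolled state machine.
from typing import Optional

import lxml.etree
import requests

def fetch_page_title(url: str) -> Optional[str]:
    response = requests.get(
        url,
        headers={"Accept": "text/html, application/xhtml+xml"},
        stream=True,
        timeout=5,
    )
    response.raise_for_status()  # check that the request actually succeeded
    for _event, element in lxml.etree.iterparse(response.raw, events=("end",), html=True):
        if element.tag == "title":
            return element.text
        if element.tag == "head":
            break  # no point reading the body
    return None
```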
Signed-off-by: Anders Kaseorg <anders@zulip.com>
While it should be an invariant that message.rendered_content is never
None for a row saved to the database, it is possible for that
invariant to be violated, likely including due to bugs in previous
versions of data import/export tools.
While it'd be ideal for such messages to be rendered to fix the
invariant, it doesn't make sense for this has_link migration to crash
because of such a corrupted row, so we apply the same policy we
already have for rendered_content="".
We rework the landing page for companies in the same way we've
recently revamped the landing pages for other use cases.
This implementation unfortunately duplicates a lot of content from
/plans; we should clean that up at some point.
This reverts commit 1965584eec.
This syntax has a bad interaction with table syntax and needs to be
rethought.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Slack bot emails generated by us can be duplicates for two bots.
If such a case occurs, append a counter to the email to make it
unique.
For maintaining the counter of duplicate emails and the final
email assigned to each bot, a class-based approach is used, with
static variables and static (class) methods. This keeps all the
data related to Slack bot emails in the same place and easily
accessible from anywhere inside the module (without defining any
class object and passing it around).
Fixes: #16793
These checks suffer from a couple notable problems:
- They are only enabled on staging hosts -- where they should never
be run. Since ef6d0ec5ca, these supervisor processes are only
run on one host, and never on the staging host.
- They run as the `nagios` user, which does not have appropriate
permissions, and thus the checks always fail. Specifically,
`nagios` does not have permissions to run `supervisorctl`, since
the socket is owned by the `zulip` user, and mode 0700; and the
`nagios` user does not have permission to access Zulip secrets to
run `./manage.py print_email_delivery_backlog`.
Rather than rewrite these checks to run on a cron as zulip, and check
those file contents as the nagios user, drop these checks -- they can
be rewritten at a later point, or replaced with Prometheus alerting,
and currently serve only to cause always-failing Nagios checks, which
normalizes alert failures.
Leave the files installed if they currently exist, rather than
cluttering puppet with `ensure => absent`; they do no harm if they are
left installed.
This is more efficient than get_lexer_by_name, since we don’t need to
instantiate the class just to get its name.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The BlockingChannel annotations in TornadoQueueClient were flat-out
wrong. BlockingChannel and Channel have no common base classes.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This is effectively a step closer to what was proposed in
https://github.com/zulip/zulip/pull/18678#discussion_r644490540 when
this code was written in #18678.
If the Customer object has neither a Stripe id nor any historical
plans, then there's no real billing association contained in the
existence of the Customer object, and it's safe to delete.
This fixes a regression in de04f0ad67.
We'll do a proper test in a follow-up commit; this is a quick fix to
make sure master works.
The emails will bounce, but it'll create all sorts of infrastructure
headaches.
We added a "user_settings" object containing all the user settings in
the previous commit. This commit modifies the code to send the existing
setting fields in the top-level object only if the user_settings_object
client_capabilities field is False.
This commit adds "user_settings_object" field to
client_capabilities which will be used to determine
if the client needs 'update_display_settings' and
'update_global_notifications' events.
We send an event with type 'user_settings' when updating a user's
display and notification settings.
The old event types - 'update_global_notifications' and
'update_display_settings', are still supported for backwards
compatibility.
We do not require separate tests for checking events when changing
"enable_drafts_synchronization" as we already do this in the display
settings test because this setting is included in property_types.
Return zulip_merge_base alongside zulip_version
in `/register`, `/event` and `/server_settings`
endpoints so that the value can be used by other
clients.
These were added at some point in the past, but were not complete, and
it makes sense to document the current feature level as and when they
become available, since clients should not use the drafts endpoints on
older feature levels.
The main reason why this is needed is that this seems to be the
convention, and because we can't easily test event creation without
doing this.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
This commit allows importing the following from Rocket.Chat:
* All users
* All public/private channels
* All teams and their public/private channels
* All discussion rooms as topics in their parent channel
* All the messages in all the channels
* All private conversations
* Reactions on messages (except for custom emojis)
* Mentions in messages (except @all and @here mentions)
Previously, we checked for the `enable_offline_email_notifications` and
`enable_offline_push_notifications` settings (which determine whether the
user will receive notifications for PMs and mentions) just before sending
notifications. This has a few problems:
1. We do not have access to all the user settings in the notification
handlers (`handle_missedmessage_emails` and `handle_push_notifications`),
and therefore, we cannot correctly determine whether the notification should
be sent. Checks like the following, which existed previously, would,
for example, incorrectly skip notifications even when stream email
notifications are enabled:
```
if not receives_offline_email_notifications(user_profile):
    return
```
With this commit, we simply do not enqueue notifications if the "offline"
settings are disabled, which fixes that bug.
Additionally, this also fixes a bug with the "online push notifications"
feature, which was, if someone were to:
* turn off notifications for PMs and mentions (`enable_offline_push_notifications`)
* turn on stream push notifications (`enable_stream_push_notifications`)
* turn on "online push" (`enable_online_push_notifications`)
then, they would still receive notifications for PMs when online.
This isn't how the "online push enabled" feature is supposed to work;
it should only act as a wrapper around the other notification settings.
The buggy code was this in `handle_push_notifications`:
```
if not (
    receives_offline_push_notifications(user_profile)
    or receives_online_push_notifications(user_profile)
):
    return
# send notifications
```
This commit removes that code, and extends our `notification_data.py` logic
to cover this case, along with tests.
2. The name for these settings is slightly misleading. They essentially
talk about "what to send notifications for" (PMs and mentions), and not
"when to send notifications" (offline). This commit improves this condition
by restricting the use of this term only to the database field, and using
clearer names everywhere else. This distinction will be important to have
non-confusing code when we implement multiple options for notifications
in the future as a dropdown (never / when offline / when offline or online, etc.).
3. We should ideally re-check all notification settings just before the
notifications are sent. This is especially important for email notifications,
which may be sent a long time after the message was sent. We will
in the future add code to thoroughly re-check settings before sending
notifications in a clean manner, but temporarily not re-checking isn't
a terrible scenario either.
Part of #19272
We still keep referring to this model as "MutedTopic" to reduce the
diff size of this commit. The alias will be removed in the next commit.
This commit skips renaming the `date_muted` field to something more
general. That will be done in further commits, along with the code and
API changes.
In this commit:
* We update the `UserStatus` model to accept
`AbstractReaction` as a base class so that we get all the
fields needed to store a status emoji.
* We update the user status endpoint
(`users/me/status`) to accept status emoji fields.
* We update the user status event to add status emoji
fields.
Co-authored-by: Yash Rathore <33805964+YashRE42@users.noreply.github.com>
This commit adds a can_add_custom_emoji
helper to check whether the user can
add custom emoji or not.
This function will be used later when
add_custom_emoji_policy is extended
to include all COMMON_POLICY_VALUES.
This commit replaces boolean field add_emoji_by_admins_only with an
integer field add_custom_emoji_policy as we would also add full members
and moderators option for this setting in further commits.
This removes a bunch of non-functional duplicate JavaScript, HTML, and
CSS that was interfering with maintenance on the functional originals,
because it was never clear how to update the duplicates or how to
check that you’d updated the duplicates correctly.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit moves "enter_sends" setting to property_types dict.
With this change, changing enter_sends setting also sends an
event of type "update_display_settings" and thus enables us
to live-update the UI.
We incorrectly include many realm settings in the data section of
'realm/update_dict' schema. It should only contain the settings
related to message edit, realm icon, realm logo and authentication
methods and not other settings, because all the other settings send
'realm/update' event and not 'realm/update_dict' event.
This commit only removes 'invite_to_realm_policy' and others will
be removed separately.
Instead of directly changing the `POST` attribute of a request, we
utilize the `HostRequestMock` initializer to produce requests with
different post data.
The decorators require the decorated function to be a valid view
function. This changes the way the mocked view functions and requests
are implemented such that we can invoke view functions without future
type errors.
`export_realm` accepts an HttpRequest as the first argument,
while `self.client_post` conflicts with it. Though the argument is
unused in `export_realm`, we keep it to be compliant with the
view function type.
As we only return the actual decorator as-is only if `function` is
`None`, we can use `@overload` to accurately annotate the return type
for the decorator.
When calling some functions or assigning values to certain attributes,
the arguments/right operand do not match the exact type that the
functions/attributes expect, and thus we fix that by converting types
beforehand.
Of the two other logging mocks left in this file, one checks
a logging call isn't made and another makes sure errors
aren't allowed by raising an exception as a side_effect
to the logger.
Cross realm bots will soon stop being a thing. This param is responsible
for displaying "System Bot" in the user info popover - so this rename is the
right way to handle the situation.
We will likely want to rename the `cross_realm_bots` section as well,
but that is a more involved API migration.
The code didn't account for existence of SOCIAL_AUTH_SUBDOMAIN. So the
redirects would happen to endpoints on the SOCIAL_AUTH_SUBDOMAIN, which
is incorrect. The redirects should happen to the realm from which the
user came.
This fixes a batch of mypy errors of the following format:
'Item "None" of "Optional[Something]" has no attribute "abc"
Since we have already been recklessly using these attributes
in the tests, adding assertions beforehand is justified presuming
that they oughtn't to be None.
If a user doesn't have enable_drafts_synchronization set to True, then
don't let them access the drafts API. This will help protect us
against client bugs accidentally sending drafts to the server when the
feature is disabled.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
This field will control whether or not a user wants to sync their
drafts between different clients. Defaults to enabled.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
We allow a maximum value of one week to make sure there aren't a huge
number of rows in the table for any user (this could happen if stream
notifications are enabled).
This commit also fixes a small error in the user_settings test.
We only have one query which will change database state in this function,
and we already have a lock on the process itself, so there's no need for
a transaction.
This was added in ebb4eab0f9.
These modern landing pages cover use cases previously not detailed on
our website. Technically, we had a /for/research page before, but it
wasn't finished or linked everywhere.
Removed the "function-url-quotes" stylelint rule
since I need to use quotes in url() to use an
SVG as a list bullet point. There are spacing issues
when using it as an image. Also, using quotes in url()
is actually the recommended way to do it; otherwise
there could be issues with escaping.
There might be good reasons to have other external authentication
methods such as SAML configured, but none of them is available.
This happens, for example, when you have enabled SAML so that Zulip is
able to generate the metadata in XML format, but you haven't
configured an IdP yet. This commit makes sure that the phrase _OR_ is
only shown on the login/account page when there are actually other
authentication methods available. When they are just configured, but
not available yet, the page looks as if no external
authentication methods are configured.
We achieve this by deleting any_social_backend_enabled, which was very
similar to page_params.external_authentication_methods, which
correctly has one entry per configured SAML IdP.
This is necessary to break the uncollectable reference cycle created
by our ‘request_notes.saved_response = json_response(…)’, Django’s
‘response._resource_closers.append(request.close)’, and Python’s
https://bugs.python.org/issue44680.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This prevents a memory leak caused by the `SimpleLazyObject` instance of
`UserProfile` that creates a reference loop with the request object
via `ZulipRequestNotes`.
This API change removes unnecessary complexity from a client that
wants to change a user's personal settings, and also saves developers
from needing to make decisions about what sort of setting something is
at the API level.
We preserve the old settings endpoints as mapping to the same function
as the new one for backwards-compatibility. We delete the
documentation for the old endpoints, though the documentation for the
merged /settings endpoint mentions how to use the old endpoints when
needed.
We migrate all backend tests to the new endpoints, except for
individual tests for each legacy endpoint to verify they still work.
Co-authored-by: sahil839 <sahilbatra839@gmail.com>
This prevents a memory leak arising from Python’s inability to collect
a reference cycle from a WeakKeyDictionary value to its key
(https://bugs.python.org/issue44680).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This makes several changes:
* Fixes a bug where the help text explaining our policies was not displayed.
* No help text was defined for many organization types.
* Copy-edits the help text somewhat.
* Offers all of the organization type options.
* Removes the 100% coverage requirement because it's annoying to test
the e.currentTarget click handler.
Most of the Markdown Preprocessors followed a common
template, and the `run` and `init` code was duplicated
multiple times for different preprocessors.
This commit adds a base class from which the preprocessors
following the pattern can inherit, and can override the
`render` and `generate_text` functions to execute the code.
Sometime in the deep past, Zulip's GET /users/me/subscriptions
endpoint started returning subscribers. We noticed this and made it
optional via the include_subscribers parameter in
1af72a2745, however, we didn't notice
that they were being returned as emails rather than user IDs.
We migrated the core /register code paths to use subscriber IDs years
ago; this change completes that for the endpoints we forgot about.
The documentation allowed this error because we apparently had no
tests for this code path that used the actual API.
We remove the "full_name" and "account_email" fields from the response
of the 'PATCH /settings' endpoint. These fields were part of the response
to indicate that the parameters not present in the response were ignored.
We can remove these fields as 'ignored_parameters_unsupported' now
specifies which parameters were ignored and not supported by the
endpoint.
We add "ignored_parameters_unsupported" field to the response object
of 'PATCH /settings' endpoint. This will contain the parameters
passed to the endpoint which are not changed by the endpoint and are
ignored.
This will help in removing other fields like "full_name" from the
response, which were essentially present to specify that only these
fields were updated by the endpoint and the rest were ignored.
We will also change other endpoints to follow this in future.
We migrated the main method in the API bindings project to
get_subscriptions some time ago, and apparently neglected to change
the API documentation as well.
This is more robust towards rerunning failed tests (which ran
partially and added some events to a queue before failing).
The tearDown code was added in 571f8b8664.
We are starting to run into situations where this data could be
quite useful for making future decisions, so it makes sense to store
it in the database, not just in an email.
Moving forward we are hoping to collect data on org types from our
users, so it makes sense to display the org type on the "Counts"
tab of our /activity page.
We incorrectly include many realm settings in the data section of
'realm/update_dict' schema. It should only contain the settings
related to message edit, realm icon, realm logo and authentication
methods and not other settings, because all the other settings send
'realm/update' event and not 'realm/update_dict' event.
This commit only removes 'add_emoji_by_admins_only' and others will
be removed separately.
Previously, non-admin emoji authors were allowed to
delete an emoji only if add_emoji_by_admins_only
was false. But, as the add_emoji_by_admins_only setting
is about who can add emoji and not who can delete them,
it should not affect the behavior of deleting emoji,
and users should always be allowed to delete the
emoji which they added themselves.
This commit adds moderators and full members options for
user_group_edit_policy by using COMMON_POLICY_TYPES.
Moderators are not required to be a member of a user group in
order to edit or remove the user group, if they are allowed
to do so according to user_group_edit_policy.
But full members need to be a member of the user group to edit
or remove it.
There is no need to have an error message which specifies the
roles having permission to edit user groups; we can simply
have the error message be "Insufficient permission", as we already
show the roles having permission clearly in the UI.
This concludes the HttpRequest migration to eliminate arbitrary
attributes (except private ones that belong to Django) attached
to the request object at runtime, migrating them to a
separate data structure dedicated to the purpose of adding
information (so-called notes) to an HttpRequest.
This migrates some mocked Request classes and mocked requests achieved
with namedtuple in test_decorators and test_mirror_users to use the
refactored HostRequestMock.
Since weakref cannot be used with namedtuple, this old way of mocking a
request object should be migrated to using HostRequestMock. Only after
this change can we extract the client from the request object and store it
via ZulipRequestNotes.
This includes the migration of fields that require trivial changes
to be migrated to be stored with ZulipRequestNotes.
Specifically _requestor_for_logs, _set_language, _query, error_format,
placeholder_open_graph_description, and saved_response, which were all
previously set on the HttpRequest object at some point. This migration
allows them to be typed.
We will no longer use the HttpRequest to store the rate limit data.
Using ZulipRequestNotes, we can access rate_limit and ratelimits_applied
with type hints support. We also avoid having to initialize
ratelimits_applied by giving it a default value.
We create a class called ZulipRequestNotes as a new home to all the
additional attributes that we add to the Django HttpRequest object.
This allows mypy to do the typecheck and also enforces type safety.
Most of the attributes are added in the middleware, and thus it is
generally safe to assert that they are not None in a code path that
goes through the middleware. The caller is obligated to do the type
check manually otherwise.
This also resolves some cyclic dependencies that zerver.lib.request
have with zerver.lib.rate_limiter and zerver.tornado.handlers.
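A simplified sketch of the shape of this data structure (only a few illustrative fields shown; the real class has more, and the storage mechanism shown here is just one way to attach it to the request):
```python3
# Hedged sketch: a typed notes object replaces ad-hoc attributes on
# HttpRequest; fields default to None and are filled in by the middleware.
from dataclasses import dataclass, field
from typing import List, Optional

from django.http import HttpRequest

@dataclass
class ZulipRequestNotes:
    client_name: Optional[str] = None
    error_format: Optional[str] = None
    requestor_for_logs: Optional[str] = None
    rate_limit: Optional[str] = None
    ratelimits_applied: List[object] = field(default_factory=list)

def get_request_notes(request: HttpRequest) -> ZulipRequestNotes:
    # Lazily attach the notes exactly once; callers then get attribute access
    # that mypy can check, instead of getattr()/hasattr() on the request.
    if not hasattr(request, "_zulip_request_notes"):
        request._zulip_request_notes = ZulipRequestNotes()
    return request._zulip_request_notes
```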
Previously, we stored up to 2 minutes worth of email events in memory
before processing them. So, if the server were to go down we would lose
those events.
To fix this, we store the events in the database.
This is a prep change for allowing users to set a custom grace period for
email notifications, since the bug noted above would be aggravated by
longer grace periods.
This will be used to store the missedmessage events received
during the waiting time for email notifications (which is currently
2 minutes, hardcoded).
The change in `test_retention` is because we've set `on_delete=CASCADE`
for the message field in this table.
The new query is like so:
```sql
DELETE FROM "zerver_missedmessageemailentry"
WHERE "zerver_missedmessageemailentry"."message_id" IN (
1545, 1546, 1547, 1548, 1549, 1550, 1551, 1552, 1553
)
```
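A hedged sketch of what such a table could look like (field names beyond the message foreign key are illustrative):
```python3
# Hedged sketch: persist queued missed-message email events, so a worker
# restart cannot lose up to two minutes of pending notifications.
from django.db import models

class MissedMessageEmailEntry(models.Model):
    user_profile = models.ForeignKey("UserProfile", on_delete=models.CASCADE)
    # on_delete=CASCADE is what produces the DELETE query above when
    # message retention removes the underlying messages.
    message = models.ForeignKey("Message", on_delete=models.CASCADE)
    scheduled_timestamp = models.DateTimeField(db_index=True)
```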
This reduces loose strings in the codebase, and allows us to not worry
about the exact naming (`stream_email_enabled` or `stream_emails_enabled`?)
and tense (`mentioned` or `mention`?).
Ideally this new class should have been in `lib/notification_data.py`,
which is our file for things like this. But, the next commit requires
using this data in `models.py`, and importing from `notification_data.py`
to `models.py` causes recursive imports.
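The constants class this refers to might look roughly like the following (values shown are a plausible sketch, not necessarily the exact strings):
```python3
# Hedged sketch: one place to settle the naming and tense questions above,
# so callers compare against constants instead of loose strings.
class NotificationTriggers:
    PRIVATE_MESSAGE = "private_message"
    MENTION = "mentioned"
    WILDCARD_MENTION = "wildcard_mentioned"
    STREAM_PUSH = "stream_push_notify"
    STREAM_EMAIL = "stream_email_notify"
```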
The operationId is directly used in URLs of API doc pages
to find the OpenAPI data to render. However, this is dash-
separated in the URLs, and having underscore_separated IDs
in OpenAPI data doesn't allow direct comparison of the two.
This commit changes all OperationIDs from underscore_separated
to dash-separated.
Work around Fatal1ty/aioapns#15, by silencing error-level logging from
the aioapns logger. We deal with the results of failed
send_notification calls by examining the `result.description` and
handling them; the extra logging message merely clutters the Sentry
logs.
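The workaround itself is small; a sketch using the standard logging module:
```python3
# Hedged sketch: suppress aioapns's own error-level log lines; failed sends
# are already handled by inspecting result.description on each call.
import logging

logging.getLogger("aioapns").setLevel(logging.CRITICAL)
```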
Previously, one needed to specify all the HTTP status
codes that we want to render along with the operation,
but the primary use case just needs the responses of
all the status codes, not just one.
This commit modifies the Markdown extension to render
all the responses of all status codes of a specified
operation in a loop.
The `# nocoverage` was unnecessary apart from the compatibility code,
so add a test for that code and remove the `# nocoverage`.
The `message_id` -> `message_ids` conversion was done in
9869153ae8.
Previously, even non-admins had the option to override built-in
emojis in the `Settings Emoji` UI.
This commit essentially limits the ability to override emojis,
allowing only realm administrators to
override built-in emojis with their custom emojis, by adding an
authorization check in the backend.
It also adds relevant tests in `test_realm_emoji` which test
the cases where an admin and a non-admin try to override
a built-in emoji.
Fixes #18860.
We added this function in 8e1a7cfb52
in order to make things more readable in examples which hard-code user
ids. The point is to validate that the id indeed refers to the user that
the person writing the example expects, while providing information to
readers of the code so they don't have to do db queries to figure out
the user. As mentioned in the commit referred to above, this is
particularly useful when some db changes cause renumbering of user ids -
because then all these ids have to be adjusted and it's nice to know the
intended user.
Zulip identifies users by realm+delivery_email which means that the
Django changepassword command doesn't work well -
since it looks only at the .email field.
Thus we fork its code to our own change_password command.
We use subs as a common variable name for a collection of stream
data structures used in settings, in a lot of modules. So this
rename clears up a bunch of related shadowed variables.
This function had a confusing name, which could result in someone
using it unintentionally when they meant do_reactivate_user.
We also add docstrings for both functions.
We don't want this rate limit to affect legitimate users so it being hit
should be abnormal - thus worth logging so that we can spot if we're
rate limiting legitimate users and can know to increase the limit.
If the user is logged in, we'll stick to rate limiting by the
UserProfile. In case of requests without authentication, we'll apply the
same limits but to the IP address.
This option of specifying a different domain isn't used anywhere as of
now and we don't have a concrete way it could be used in the near
future. It's also getting in the way of how we want to do rate limiting
by IP, for which we'll want to apply a new domain 'api_by_ip'. That's
incompatible with how this decorator wants to determine the domain based
on the argument it receives when called to decorate a view function.
If in the future we want to have more granular control over API domains,
this can be refactored to be more general, but as of now it's just
imposing restrictions on how we can write the rate limiting code inside
it.
We add a new class UserBaseSettings; we will be moving some of
the user settings from UserProfile to this class, and UserProfile
will inherit from it.
This is a prep commit for adding the RealmUserDefault table, which will
be used to set the realm-wide defaults for user settings like night
mode, etc. Adding UserBaseSettings will help us avoid copying
the same fields into RealmUserDefault.
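Sketched out, the new hierarchy looks roughly like this (only a couple of illustrative settings shown):
```python3
# Hedged sketch: settings live on an abstract base model, so the planned
# RealmUserDefault table can reuse exactly the same columns as UserProfile.
from django.db import models

class UserBaseSettings(models.Model):
    color_scheme = models.PositiveSmallIntegerField(default=1)  # e.g. night mode
    enable_drafts_synchronization = models.BooleanField(default=True)

    class Meta:
        abstract = True

class UserProfile(UserBaseSettings):
    # ... all the non-settings fields stay here ...
    ...
```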
We remove timezone setting from UserProfile.property_types
so that we can directly use UserProfile.property_types for
implementation of realm-default values of various user
settings.
This commit removes the redundant `compute_show_invites` function
which computes the `show_invites` page parameter in `lib/users.py`.
This is because commit 13399833b0 removed
the `show_invites` context variable passed in index.html.
Hence, the `show_invites` page_params key no
longer needs to be computed in the backend, as it can be replaced with
`settings_data.user_can_invite_others_to_realm()` in the frontend.
This commit also removes the `test_compute*` tests in
`test_home` concerned with the `show_invites` page parameter,
as they are no longer required.
* `stream_name`: This field is actually redundant. The email/push
notifications handlers don't use that field from the dict, and they
query for the message anyway, so we're safe in deleting this field,
even if in the future we end up needing the stream name.
* `timestamp`: This is totally unused by the email/push notification
handlers, and aren't sent to push clients either.
* `type` is used only by the push notifications handler, since only
push notifications can be revoked, so we move that logic to only run there.
The code to also notify for wildcard mentions was added in
0ed0bb6828.
But that showed the same text for both the cases. This commit fixes
that.
This is more of a change for correctness. The mobile app currently does
not rely on this text for notifications, but constructs the text by
itself from the data in the payload.
This also fixes the "stream_push_notify" case to consistently show
a `#` before the stream name.
Running notify_server_error directly from the logging handler can lead
to database queries running in a random context. Among the many
potential problems that could cause, one actual problem is a
SynchronousOnlyOperation exception when running in an asyncio event
loop.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This fixes a bug introduced in 95b46549e1
which made the worker simply log a warning about the timeout and then
continue consume()ing the event that should have also been interrupted.
The idea here is to introduce an exception which can be used to
interrupt the consume() process without triggering the regular handling
of exceptions that happens in _handle_consume_exception.
Since do_create_realm also creates the general and core team streams,
we rename general to verona right after the realm is created. Mostly
because we don't really want two additional streams, and this should
make things easier to review.
There are puppeteer test changes because we have a new "core team"
stream in tests, as well as a new default notification stream
"Verona". Because of this, tests in message-basics, for example, have
to be changed, since the newly added core team stream affects the order
in which we navigate through the streams using arrow keys.
The extra await for selector was added in the subscriptions test to make
the tests wait. Without the await, the tests were passing occasionally
and failing at other times.
Fixes #6967
The previous logic was incorrect and was not flushing the stream from
cache after deletion.
```
stream = get_realm_stream("Verona", realm.id)
stream.delete()
get_realm_stream("Verona", realm.id)
```
In the above example, the last line of code would have returned
the stream from cache instead of throwing a Stream.DoesNotExist
error. This is fixed in the commit.
I have verified that this commit indeed fixes the issue by verifying
that calling get_realm_stream again after deleting the stream
results in a Stream.DoesNotExist error.
This commit migrates the `navbar.html` Django template
to handlebars by creating a new file, `navbar.hbs`,
within `/static/templates`, which is then rendered
using the `ui_init` module.
As part of the migration, we also remove the `search_pills_enabled`
and `embedded` parameters from the context attribute, as they
are no longer needed.
Fixes part of #18792.