This is a follow-up to #19388.
We will in the future allow patch requests to change the visibility
of an existing topic, so `last_updated` is a better name for this field.
This commit does not affect the API or events in any way, but only the
database.
Fixes #17456.
The main tricky part has to do with what values the attribute should
have. LDAP defines a Boolean as
Boolean = "TRUE" / "FALSE"
so ideally we'd always see exactly those values. However,
although the issue is now marked as resolved, the discussion in
https://pagure.io/freeipa/issue/1259 shows how this may not always be
respected - meaning it makes sense for us to be more liberal in
interpreting these values.
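A hedged sketch of such liberal parsing (illustrative only, not the
actual Zulip helper):
```python3
def ldap_boolean(value):
    # RFC 4517 specifies exactly "TRUE" / "FALSE", but per the FreeIPA
    # discussion above, other spellings show up in the wild, so we
    # compare case-insensitively after decoding and trimming.
    if isinstance(value, bytes):
        value = value.decode()
    value = value.strip().lower()
    if value == "true":
        return True
    if value == "false":
        return False
    raise ValueError(f"Invalid LDAP Boolean value: {value!r}")
```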
The test now uses submit_reg_form_for_user, meaning a blank
full_name is posted to /accounts/register/ rather than the
parameter being excluded.
Fixes part of #7564
I had to pass stop_after_reg_form=True, as the call to get_user in
verify_signup fails. I am not sure whether this is the expected
behavior. Also this causes the test to use submit_reg_form_for_user,
meaning a blank password is posted to /accounts/register/ rather than
no password.
Fixes part of #7564
get_user_by_delivery_email should be used, given that the email variable
is the realm email address that the account is being created with, not
the .email field which can be a dummy address based on org settings.
Previously, we redirected to /new when the user clicked "buy
standard" from the root domain. Instead, we now redirect to the
/upgrade page, which asks the user to enter the subdomain of their
organization and then redirects them to that organization's /upgrade
page.
This better matches the title of the page and more generally our
conventions around naming /help/ articles. We include a redirect
because this is referenced from Welcome Bot messages, and we
definitely don't want those links to break.
This parallels fe25517295, but for mobile notifications. It also
adds a test, which verifies that such content does not crash either
mobile or email notifications.
fe25517295 adjusted the email_notifications codepath to use
`lxml.html.fragment_fromstring` method when parsing
`rendered_content`, but left the tests using a helper which called
`fromstring`.
Switching the tests to match the code as run reveals a bug -- using
`drop_tree` on all `message_inline_image` classes now _does_ remove
all of a top-level image-URL-only message. Previously, such messages
were "safe" from the block that calls `drop_tree` only by dint of
`drop_tree` being a silent no-op for the root element. When parsed
using `fragment_fromstring`, they are no longer the root, and as such
an empty message results.
Reorder relative_to_full_url to check for only one `message_inline_image`
within the top `<div>`, and only run the `drop_tree` path in the
alternate case. Tests must be adjusted for their output now including
one more layer of `<div>`.
The previous commit introduced logging of attempts for username+password
backends. For completeness, we should log, in the same format,
successful attempts via social auth backends.
Our convention is to always have authenticate() called with a request
object. We need to be consistent with that in tests too, to avoid test
failures resulting from breaking that assumption.
We modify assert_login_failure to call client.login() in the same way as
the other similar helpers - with a properly initialized HttpRequest
instance.
Now, when we add a custom animated emoji to the realm, we also save
a still image of it (the first frame of the GIF), so that we can
avoid showing an animated emoji every time.
create_confirmation_link takes the validity time as an optional
argument, because it has reasonable defaults. Thus it's a better API
for do_send_confirmation_email to make this optional as well,
allowing it to rely on create_confirmation_link's defaults.
This extends the invite api endpoints to handle an extra
argument, expiration duration, which states the number of
days before the invitation link expires.
For prereg users, expiration info is attached to the event
object to pass it to the invite queue processor, in order to
create and send the confirmation link.
In case of multiuse invites, confirmation links are
created directly inside do_create_multiuse_invite_link().
For filtering valid user invites, the expiration info stored in the
Confirmation object is used, which a prereg user accesses via
reverse generic relations.
Fixes #16359.
The API for changing the batching period was added in
5db4fe8652.
This is a follow-up to that commit. We also update the timestamps for
existing scheduled email notifications entries so that the effect of
changing the setting is immediate.
Part of #15280
SOCIAL_AUTH_SUBDOMAIN was potentially very confusing when opened by a
user, as it had various Login/Signup buttons as if there was a realm on
it. Instead, we want to display a more informative page to the user
telling them they shouldn't even be there. If possible, we just redirect
them to the realm they most likely came from.
To make this possible, we have to exclude the subdomain from
ROOT_SUBDOMAIN_ALIASES - so that we can give it special behavior.
These hostnames only have MX records for Mailgun and Front, and will
not work as a Zulip organization.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit moves check_settings_values from validator.py to
user_settings.py, so that we can import the functions at the top of
the file without any cyclic-import issues.
We do not allow mentioning system user groups for now
because this can lead to circumventing the wildcard
mention restrictions. It will be enabled once we add
a setting to control that.
This is implemented by just ignoring a system user group as one of
the mentioned user groups, even if the message content includes the
mention syntax for it; the message is sent normally.
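A hedged sketch of that filtering step (identifiers are illustrative,
not the actual Zulip names):
```python3
def filter_mentionable_groups(mentioned_groups):
    # System groups are silently dropped from the mention list; the
    # mention syntax stays as plain text and the message sends normally.
    return [group for group in mentioned_groups if not group.is_system_group]
```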
We still keep the for_mention parameter for accessing the user group
when sending email and push notifications, as mentioning system user
groups will be allowed in the future.
This commit also removes the test for email notifications for system
user groups, since mentioning them is not allowed.
This commit contains only the backend change, as we already exclude
the system groups from mention typeaheads and other UI.
The name of the new realm created as a tombstone after renaming a
realm's subdomain is the constant 'placeholder-realm'. This would
confuse users shown the deactivation notice asking them to join the
realm at its new subdomain. This PR replaces it with the original
realm name to avoid confusion.
Fixes: #19677
This commit modifies the copy_user_settings code so that, instead of
just a source user profile, we can have two types of sources - a
user profile, or the realm's RealmUserDefault row - and we set the
settings from RealmUserDefault only if there is no user profile as a
source.
We also rename copy_user_settings to copy_default_settings for
clarity.
This commit adds do_set_realm_user_default_setting which
will be used to change the realm-level defaults of settings
for new users.
We also add a new event type "realm_user_settings_defaults"
for these settings and a "realm_user_settings_default" object
in '/register' response containing all the realm-level default
settings.
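For illustration, such an event would presumably look roughly like
this (fields other than the stated type are assumptions based on
similar Zulip events):
```python3
event = {
    "type": "realm_user_settings_defaults",
    "op": "update",
    "property": "emojiset",
    "value": "google",
}
```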
Because we create all realms with do_create_realm (including in the
test suite), we just need to change that function, add a migration for
existing realms, and ensure the data import code path correctly
creates these objects.
Note that the import code path will create a RealmUserDefault row with
default values if it is not present in the import data, which is
important for importing data from other tools like Slack.
This commit changes the type of enable_marketing_emails parameter of
create_user to Optional[bool].
The value of this parameter will be None in certain cases, such as
when a user registers through SSO with 'TERMS_OF_SERVICE=False',
where there is no registration form and thus no value for
enable_marketing_emails.
We set the enable_marketing_emails setting after copying user
settings, to override the copied value with the one selected in the
registration form. This is also necessary because the
enable_marketing_emails field is present in RealmUserDefault (to
avoid duplicating code), but we do not actually want to use that
value; instead, the setting should follow the value in the
registration form.
We set this setting only for non-bot users since we generally
do not set any settings for bots.
We extract the checks for default_language, notification_sound,
and email_notifications_batching_period_seconds setting values
in json_change_settings to a new function check_settings_values.
This prevented migration 0345
(517c2ed39d / #19696) from applying on
systems that were created after the refactoring that resulted in the
system bot realm potentially having null as its name.
(We've already confirmed that normal realms, created via
`do_create_realm`, shouldn't be able to have this unusual state).
This check was copied from upstream python-markdown's "safe mode"
before they removed that feature. The upstream history is that they
introduced this check in
2db5d1c8e4,
which was not a complete security check, and then added the
immediately following check (with an allowlist of schemes) in
0b4ffbb60e.
Their first, incomplete check provides no security benefit and makes
the code hard to reason about, so we remove it.
The 'update_global_notifications' type event is sent only for
existing settings and will not be sent for new settings, so we
should use the notification_settings_legacy dict to check the type
of a setting's value in check_update_global_notifications, instead
of the notification_settings_types dict.
We still used notification_setting_types in the copy_user_settings
function of create_user.py and in a test in test_event_system.py.
This is not required, since all settings have already been added to
property_types, and both of these places loop over property_types,
which includes all settings.
This was likely initially created with null=True in
5c5ffd6ea3 just because we didn't have a
plan for backfilling this field, but I verified that Zulip Cloud has
no realms without a name set, and that's the place most likely to have
any form of super-legacy nameless realms.
So we can clean up this aspect of the data model without a special
migration to do something with existing realms with name=None (which I
suspect would have resulted in a 500 anyway).
We already test all the notification settings in
test_toggling_boolean_display_settings (which is now renamed to
test_toggling_boolean_user_settings), as all settings have now moved
to property_types, and we are merging the other parts as well to
consider all the settings under one category.
This commit adds a separate test for invalid values of user settings
and removes the existing code for it from test_change_user_setting.
This change will enable us to merge the tests for notification
settings into this one, because
email_notifications_batching_period_seconds has different invalid
values than the other integer settings; we do the same for realm
settings also.
This commit adds `demo_organization_scheduled_deletion_date` to
the `realm` section of the `/register` response so that it is
available to clients when enabled.
This is a part of #19523.
This fixes a regression where one could end up deactivating all owners
of a realm when trying to synchronize LDAP with the `is_realm_admin`
flag configured in `AUTH_LDAP_USER_FLAGS_BY_GROUP`.
With tweaks by tabbott to add is_moderator as well.
Fixes #18677.
Since 84742a0, all settings that were previously sent inline with
other fields in the /register response are sent in the
`user_settings` dictionary.
In order to simplify the process of adding new personal settings, we
want to transition to a world where new settings only need to consider
the `property_types` object, and code that needs to reference the
legacy behavior interacts with an object with `legacy` in its name.
This way, contributors working on new settings don't need to think
about the legacy code paths at all.
See https://chat.zulip.org/#narrow/stream/378-api-design/topic/user.20settings.20response.20in.20.2Fregister
to understand this better.
The commit:
1. Adds the new field as nullable.
2. Adds code that'll create new Confirmations with the field set
correctly.
3. For verifying the validity of Confirmation objects, this still
uses the old logic in get_object_from_key() to keep things
functioning until we backfill the old objects in the next step.
Thus this commit is deployable. Next we'll have a commit to run a
backfill migration.
An integer or no argument is supposed to be passed.
These weren't caught by mypy because booleans are integers in
Python; see https://github.com/python/mypy/issues/1757
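Because bool is a subclass of int in Python, mypy accepts calls like
this without complaint:
```python3
def f(count: int = 0) -> None:
    ...

f(True)  # type-checks, even though an integer (or no argument) was intended
```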
This will be used to check if the narrow being requested by
spectator requires authentication without requesting the server.
Having this check locally makes this process look snappy to the user
and doesn't result in 404s in the browser log.
aioapns already has a retry loop. By default it retries forever on
ConnectionError and ConnectionClosed, so our own retry loop would
never be reached. Remove our retry loop, and configure aioapns to
retry APNS_MAX_RETRIES times on ConnectionError like the previous
version did. It still retries forever on ConnectionClosed; that’s not
configurable but probably fine.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This utilizes the generic `BaseNotes` we added for multipurpose
patching. With this migration as an example, we can further support
more types of notes to replace the monkey-patching approach we have used
throughout the codebase for type safety.
The motive of adding `BaseNotes` was to support monkey-patching
temporary attributes onto objects (such as `.trigger` on `Message`)
when working on the django-stubs migration in #18777.
This way, we no longer have to manually keep the upload path code in
sync with the upload path code in zerver/lib/upload.py.
This was originally suggested in
https://github.com/zulip/zulip/pull/19478#issuecomment-911479530.
This change fixes a bug when importing into a server using the local
file uploads backend, where the `import_realm.py` copy wasn't using
our standard 256-directory approach to avoid putting too many files in
a single directory.
de04f0ad67 changed how notification recipients were calculated, in
a manner that caused them to be sent when they should not have been.
ac70a2d2e1 was supposed to resolve this, but appears to have been
insufficient, as all three of these cases have been observed to still
happen.
Add safety checks immediately before notification, until the
underlying logic error can be sussed out.
This information can be gleaned from the stacktrace, but making it
explicit in the stringification makes it much easier to differentiate
types of errors at a glance, particularly in Sentry.
We move the emojiset_choices method from the UserProfile class to
the UserBaseSettings class, because the emojiset setting itself
lives in UserBaseSettings and this would be used for realm-level
settings as well, along with the existing user-level settings.
We rename the user group in the example for 'GET /user_groups'
with is_system_group=True, to be 'Moderators' as is_system_group
will be set to True for role-based user groups only.
The default is kept as no retries. Since retries with exponential
backoff are a good thing to make easy, the int form defaults to
setting a backoff_factor.
Unfortunately, urllib3 retry backoff does not implement jitter.
Switching this to use the `backoff` library[1] rather than urllib3's
native Retry is left as future extension.
[1] https://pypi.org/project/backoff/
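A minimal sketch of what the int form maps to (the helper name here
is an assumption):
```python3
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def session_with_retries(retries: int = 0) -> requests.Session:
    # The int form enables exponential backoff via backoff_factor;
    # urllib3 sleeps roughly backoff_factor * 2**n between attempts,
    # with no jitter.
    session = requests.Session()
    adapter = HTTPAdapter(max_retries=Retry(total=retries, backoff_factor=1))
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session
```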
This adds the X-Smokescreen-Role header to proxy connections, to track
usage from various codepaths, and enforces a timeout. Timeouts were
kept consistent with their previous values, or set to 5s if they had
none previously.
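For illustration, a proxied request would then look roughly like this
(the proxy address and role value are assumptions):
```python3
import requests

response = requests.get(
    "https://example.com/avatar.png",
    proxies={"https": "http://127.0.0.1:4750"},  # assumed Smokescreen address
    headers={"X-Smokescreen-Role": "email_mirror"},  # hypothetical role
    timeout=5,  # seconds; the fallback default described above
)
```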
This commit removes some unnecessary checks for
`self.md.zerver_message`, which were put there historically, as
earlier we used to add additional properties like mentions_user_ids,
alert_words, etc. to the Message dict only. These were later moved
to the MessageRenderingResult class in commit 75cea329b, but the
checks weren't removed.
This is important because, while rendering messages imported from
other chat tools (like Rocket.Chat), the Message dict is not passed
to the markdown processor, due to which the checks for
`self.md.zerver_message` fail, and hence things like user mentions
and stream/topic mentions are not rendered properly in imported
messages.
The `user_activity_interval` worker calls:
```python3
last = UserActivityInterval.objects.filter(user_profile=user_profile).order_by("-end")[0]
```
Which results in a query like:
```sql
SELECT "zerver_useractivityinterval"."id", "zerver_useractivityinterval"."user_profile_id", "zerver_useractivityinterval"."start", "zerver_useractivityinterval"."end" FROM "zerver_useractivityinterval" WHERE "zerver_useractivityinterval"."user_profile_id" = 12345 ORDER BY "zerver_useractivityinterval"."end" DESC LIMIT 1
```
For users which have at least one matching row, this results in a
query plan like:
```
Limit (cost=0.56..711.38 rows=1 width=24) (actual time=0.078..0.078 rows=1 loops=1)
-> Index Scan Backward using zerver_useractivityinterval_7f021a14 on zerver_useractivityinterval (cost=0.56..1031399.46 rows=1451 width=24) (actual time=0.077..0.078 rows=1 loops=1)
Filter: (user_profile_id = 12345)
Rows Removed by Filter: 98
Planning Time: 0.059 ms
Execution Time: 0.088 ms
```
But for users that have just been created, with no matching rows, this
is considerably more expensive:
```
Limit (cost=0.56..711.38 rows=1 width=24) (actual time=10798.146..10798.146 rows=0 loops=1)
-> Index Scan Backward using zerver_useractivityinterval_7f021a14 on zerver_useractivityinterval (cost=0.56..1031399.46 rows=1451 width=24) (actual time=10798.145..10798.145 rows=0 loops=1)
Filter: (user_profile_id = 12345)
Rows Removed by Filter: (count of every single row in the table, redacted)
Planning Time: 0.053 ms
Execution Time: 10798.158 ms
```
Regular vacuuming can force the use of the index on `user_profile_id`
as long as there are few enough users, which is fast -- however, at
some point, the query planner decides that index is insufficiently
specific, and always chooses the effectively-whole-table scan.
Add an index on `(user_profile_id, end)`, which is expected to be
sufficiently specific that it is used even with large numbers of user
profiles.
Ref #19250.
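A hedged sketch of the corresponding migration (the dependency and
index names are placeholders):
```python3
from django.db import migrations, models

class Migration(migrations.Migration):
    dependencies = [("zerver", "XXXX_previous_migration")]

    operations = [
        # Composite index so the planner can satisfy
        # "WHERE user_profile_id = ? ORDER BY end DESC LIMIT 1" directly.
        migrations.AddIndex(
            model_name="useractivityinterval",
            index=models.Index(fields=["user_profile", "end"],
                               name="zerver_uai_user_profile_end"),
        ),
    ]
```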
The transforms called from `build_message_payload` use
`lxml.html.fromstring` to parse (and stringify, and re-parse) the HTML
generated by Markdown. However, this function fails if it is passed
an empty document. "empty" is broader than just the empty string; it
also includes any document made entirely out of control characters,
spaces, unpaired surrogates, U+FFFE, or U+FFFF, and so forth. These
documents would fail to parse, and raise a ParserError.
Using `lxml.html.fragment_fromstring` handles these cases, but does
so by wrapping the contents in a <div> every time it is called. As
such, replacing each `fromstring` with `fragment_fromstring` would
nest another layer of `<div>`.
Instead of each of the helper functions re-parsing, modifying, and
stringifying the HTML, parse it once with `fragment_fromstring` and
pass around the parsed document to each helper, which modifies it
in-place. This adds one outer `<div>`, requiring minor changes to
tests and the prepend-sender functions.
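A minimal sketch of the difference (not the actual helper code):
```python3
import lxml.html

# lxml.html.fromstring("   ") raises lxml.etree.ParserError, since the
# document is "empty"; fragment_fromstring handles it, at the cost of
# one wrapping <div>:
fragment = lxml.html.fragment_fromstring("some <b>rendered</b> HTML",
                                         create_parent=True)
# Helpers now mutate `fragment` in place, and we stringify just once:
print(lxml.html.tostring(fragment))  # b'<div>some <b>rendered</b> HTML</div>'
```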
The modification to add the sender is left using BeautifulSoup, as
that sort of transform is much less readable, and more fiddly, in raw
lxml.
Partial fix for #19559.
This is a roundabout way to appease a semgrep complaint about
‘error_msg = error_msg % (string_id,)’ while also improving the code.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
9ac55a8cf6 introduced support for
batch updates to stories. However, that commit didn't skip label
removals, as we already do in non-batch story payloads. This led
to an exception for batch story update payloads where labels were
removed but none were added.
maybe_send_batched_emails handles batches of emails from different
users at once; as it processes each user's batch, it enqueues messages
onto the `email_senders` queue. If `handle_missedmessage_emails`
raises an exception when processing a single user's email, no events
are marked as handled -- including those that were already handled and
enqueued onto `email_senders`. This results in an increasing number
of users being sent repeated emails about the same missed messages.
Catch and log any exceptions when handling an individual user's
events. This guarantees forward progress, and that notifications are
sent at-most-once, not at-least-once.
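A hedged sketch of that guard (names mirror the description above,
not the exact code; handle_missedmessage_emails is the existing
per-user handler):
```python3
import logging

def process_batches(events_by_user):
    for user_profile_id, events in events_by_user.items():
        try:
            handle_missedmessage_emails(user_profile_id, events)
        except Exception:
            # One user's failure must not block marking the other
            # batches handled; at-most-once beats at-least-once here.
            logging.exception(
                "Failed to send missed-message emails to user %s",
                user_profile_id,
            )
```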
This commit indicates that the realm_message_retention_days field can have
a special value, similar to its stream counterpart, and also explains how
the special value changed over different server versions.
With an extension from tabbott to double-enter the changelog entry.
Related discussion: https://chat.zulip.org/#narrow/stream/378-api-design/topic/realm_message_retention_days
The ability to use multiple ports has been removed a long time ago.
And the "optional" note in the help message is in fact incorrect
since `addrport` being `None` is not supported.
We do not allow any user to edit the system user groups (including
renaming, deleting, adding or removing members, etc.) from the
API. These user groups will be changed only by the code, when a new
user is added or the role of a user is changed.
This is implemented by always rejecting access_user_group_by_id,
except when it is used to get the user group for sending email and
push notifications, as we would need to send notifications to the
mentioned user group.
We make the description parameter in create_user_group keyword-only
to improve readability. We would also keep the is_system_group
parameter, which will be added in the future, keyword-only.
Tuples cannot be deserialized from JSON.
While we do use these validators for other things, like event
dictionaries, we have migrated the API away from using those. The
last use was removed in 4f3d5f2d87.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
These changes are all independent of each other; I just didn’t feel
like making dozens of commits for them.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The auth attempt rate limit is quite low (on purpose), so this can be a
common scenario where a user asks their admin to reset the limit instead
of waiting. We should provide a tool for administrators to handle such
requests without fiddling around with code in manage.py shell.
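A hedged sketch of such a management command (the rate-limiter API
shown here is an assumption):
```python3
from django.core.management.base import BaseCommand

from zerver.lib.rate_limiter import RateLimitedUser  # assumed helper
from zerver.models import get_user_profile_by_email

class Command(BaseCommand):
    help = "Reset the authentication rate limit for a user."

    def add_arguments(self, parser):
        parser.add_argument("email", help="Email of the rate-limited user")

    def handle(self, *args, **options):
        user_profile = get_user_profile_by_email(options["email"])
        # Clearing the recorded attempts lifts the block immediately.
        RateLimitedUser(user_profile).clear_history()
```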
Calling `email.save()` is only needed if we altered `email.address`;
it is unnecessary if we called `email.users.add(...)` which will have
done its own INSERT.
This fixes two bugs: the most obvious is that there is a race where a
ScheduledEmail object could be observed in the window between creation
and when users are added; this is a momentary instance when the object
has no users, but one that will resolve itself.
The more subtle is that .save() will, if no records were found to be
updated, _re-create_ the object as it exists in memory, using an
INSERT[1]. Thus, there is a race with `deliver_scheduled_emails`
between when the users are added, and when `email.save()` runs:
1. Web request creates ScheduledEmail object
2. Web request creates ScheduledEmailUsers object
3. deliver_scheduled_emails locks the former, preventing updates.
4. deliver_scheduled_emails deletes both objects, commits, releasing lock
5. Web request calls `email.save()`; UPDATE finds no rows, so it
re-creates the ScheduledEmail object.
6. Future deliver_scheduled_emails runs find a ScheduledEmail with no
attending ScheduledEmailUsers objects
Wrapping the logical creation of both of these in a single transaction
avoids both of these races.
[1] https://docs.djangoproject.com/en/3.2/ref/models/instances/#how-django-knows-to-update-vs-insert
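A hedged sketch of the fix (model and field names simplified):
```python3
from django.db import transaction

with transaction.atomic():
    email = ScheduledEmail.objects.create(
        type=email_type, scheduled_timestamp=scheduled_timestamp
    )
    # .add() performs its own INSERTs, so no extra email.save() is
    # needed, and deliver_scheduled_emails can never observe the email
    # without its users.
    email.users.add(*user_profiles)
```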
Only clear_scheduled_emails previously took a lock on the users before
removing them; make deliver_scheduled_emails do so as well, by using
prefetch_related to ensure that the table appears in the SELECT. This
is not necessary for correctness, since all accesses of
ScheduledEmailUser first access the ScheduledEmail and lock it; it is
merely for consistency.
Since SELECT ... FOR UPDATE takes an UPDATE lock on all tables
mentioned in the SELECT, merely doing the prefetch is sufficient to
lock both tables; no `on=(...)` is needed to `select_for_update`.
This also does not address the pre-existing potential deadlock from
these two use cases, where both try to lock the same ScheduledEmail
rows in opposite orders.
No codepath except tests passes in more than one user_profile -- and
doing so is what makes the deduplication necessary.
Simplify the API by making it only take one user_profile id.
This fixes a bug where email notifications were sent for wildcard
mentions even if the `enable_offline_email_notifications` setting was
turned off.
This was because the `notification_data` class incorrectly considered
`wildcard_mentions_notify` as an independent setting, instead of a wrapper
around `enable_offline_email_notifications` and `enable_offline_push_notifications`.
Also add a test for this case.
While the STREAM_LINK_REGEX and STREAM_TOPIC_LINK_REGEX
identify the stream and topic mentions in the content
correctly (tested by printing out the matches), the
stream/topic mentions are still not linked to the
corresponding streams/topics for imported messages, as
a `zulip_message` instance is required for linking these
mentions to actual streams/topics (see `StreamPattern`
class in `markdown/__init__.py`) which is not provided
while processing the markdown for imported messages.
This commit updates both the stream-level and realm-level message
retention setting to use 'unlimited' instead of 'forever' to set
message retention setting to "retain messages forever".
We incorrectly include many realm settings in the data section of
the 'realm/update_dict' schema. It should only contain the settings
related to message editing, realm icon, realm logo, and
authentication methods, and not other settings, because all the
other settings send a 'realm/update' event and not a
'realm/update_dict' event.
This commit only removes 'message_retention_days'; the others will
be removed separately.
Closes #19287
This endpoint allows submitting multiple addresses so we need to "weigh"
the rate limit more heavily the more emails are submitted. Clearly e.g.
a request triggering emails to 2 addresses should weigh twice as much as
a request doing that for just 1 address.
Previously, the output would make it look like we sent an actual email
to the first user in the dry_run output, which is very confusing.
The `dry_run` code path already prints all the accounts that would
have been emailed at the end, so there's no reason to have this line
before the dry_run check.
Additionally, we move this after the `get_connection` check, because
failures at that stage shouldn't result in logging an attempt to send
an email.
This way we can stop reading as soon as we get to the body. Also,
send an Accept header, check that the request was actually successful,
use lxml.etree.iterparse instead of a broken hand-rolled state
machine, and support XHTML, all for negative 28 lines of code.
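A minimal sketch of the incremental parse (the fetching code around
it is omitted):
```python3
import lxml.etree

def read_title(fileobj):
    # Stream-parse the document; everything we need lives in <head>,
    # so we stop as soon as <body> starts.
    for event, element in lxml.etree.iterparse(
        fileobj, events=("start", "end"), html=True
    ):
        if event == "start" and element.tag == "body":
            return None
        if event == "end" and element.tag == "title":
            return element.text
    return None
```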
Signed-off-by: Anders Kaseorg <anders@zulip.com>
While it should be an invariant that message.rendered_content is never
None for a row saved to the database, it is possible for that
invariant to be violated, likely including due to bugs in previous
versions of data import/export tools.
While it'd be ideal for such messages to be rendered to fix the
invariant, it doesn't make sense for this has_link migration to crash
because of such a corrupted row, so we apply the same policy we
already have for rendered_content="".
We rework the landing page for companies in the same way we've
recently revamped the landing pages for other use cases.
This implementation unfortunately duplicates a lot of content from
/plans; we should clean that up at some point.
This reverts commit 1965584eec.
This syntax has a bad interaction with table syntax and needs to be
rethought.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Slack bot emails generated by us can be duplicates for two bots.
If such a case occurs, append a counter to the email to make it
unique.
For maintaining the counter of duplicate emails and the final
email assigned to each bot, a class-based approach is used, with
static variables and static (class) methods. This keeps all the
data related to Slack bot emails in one place, easily accessible
from anywhere inside the module (without defining any class object
and passing it around).
Fixes: #16793
These checks suffer from a couple notable problems:
- They are only enabled on staging hosts -- where they should never
be run. Since ef6d0ec5ca, these supervisor processes are only
run on one host, and never on the staging host.
- They run as the `nagios` user, which does not have appropriate
permissions, and thus the checks always fail. Specifically,
`nagios` does not have permissions to run `supervisorctl`, since
the socket is owned by the `zulip` user, and mode 0700; and the
`nagios` user does not have permission to access Zulip secrets to
run `./manage.py print_email_delivery_backlog`.
Rather than rewrite these checks to run on a cron as zulip, and check
those file contents as the nagios user, drop these checks -- they can
be rewritten at a later point, or replaced with Prometheus alerting,
and currently serve only to cause always-failing Nagios checks, which
normalizes alert failures.
Leave the files installed if they currently exist, rather than
cluttering puppet with `ensure => absent`; they do no harm if they are
left installed.
This is more efficient than get_lexer_by_name, since we don’t need to
instantiate the class just to get its name.
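For example, assuming the replacement is Pygments'
find_lexer_class_by_name:
```python3
from pygments.lexers import find_lexer_class_by_name

# Returns the lexer class itself; no instance is constructed just to
# read its .name attribute.
lexer_class = find_lexer_class_by_name("python")
print(lexer_class.name)  # "Python"
```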
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The BlockingChannel annotations in TornadoQueueClient were flat-out
wrong. BlockingChannel and Channel have no common base classes.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This is effectively a step closer to what was proposed in
https://github.com/zulip/zulip/pull/18678#discussion_r644490540 when
this code was written in #18678.
If the Customer object has neither of a Stripe id, nor any historical
plans, then there's no real billing association contained in the
existence of the Customer object, and it's safe to delete.
This fixes a regression in de04f0ad67.
We'll do a proper test in a follow-up commit; this is a quick fix to
make sure master works.
The emails will bounce, but it'll create all sorts of infrastructure
headaches.
We added "user_settings" object containing all the user settings in
previous commit. This commit modifies the code to send the existing
setting fields in the top-level object only if user_settings_object
client_capabilities field is False.
This commit adds "user_settings_object" field to
client_capabilities which will be used to determine
if the client needs 'update_display_settings' and
'update_global_notifications' event.
We send an event with type 'user_settings' on updating a user's
display and notification settings.
The old event types - 'update_global_notifications' and
'update_display_settings', are still supported for backwards
compatibility.
We do not require separate tests for checking events when changing
"enable_drafts_synchronization" as we already do this in the display
settings test because this setting is included in property_types.
Return zulip_merge_base alongside zulip_version
in the `/register`, `/event` and `/server_settings`
endpoints, so that the value can be used by other
clients.
These were added at some point in the past, but were not complete, and
it makes sense to document the current feature level as and when they
become available, since clients should not use the drafts endpoints on
older feature levels.
The main reason this is needed is that it seems to be the
convention, and that we can't easily test event creation without
doing this.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
This commit allows importing the following from Rocket.Chat:
* All users
* All public/private channels
* All teams and their public/private channels
* All discussion rooms as topics in their parent channel
* All the messages in all the channels
* All private conversations
* Reactions on messages (except for custom emojis)
* Mentions in messages (except @all, @here mentions)
Previously, we checked for the `enable_offline_email_notifications` and
`enable_offline_push_notifications` settings (which determine whether the
user will receive notifications for PMs and mentions) just before sending
notifications. This has a few problems:
1. We do not have access to all the user settings in the notification
handlers (`handle_missedmessage_emails` and `handle_push_notifications`),
and therefore, we cannot correctly determine whether the notification should
be sent. Checks like the following, which existed previously, would,
for example, incorrectly not send notifications even when stream
email notifications are enabled:
```
if not receives_offline_email_notifications(user_profile):
    return
```
With this commit, we simply do not enqueue notifications if the "offline"
settings are disabled, which fixes that bug.
Additionally, this also fixes a bug with the "online push notifications"
feature, which was, if someone were to:
* turn off notifications for PMs and mentions (`enable_offline_push_notifications`)
* turn on stream push notifications (`enable_stream_push_notifications`)
* turn on "online push" (`enable_online_push_notifications`)
then, they would still receive notifications for PMs when online.
This isn't how the "online push enabled" feature is supposed to work;
it should only act as a wrapper around the other notification settings.
The buggy code was this in `handle_push_notifications`:
```
if not (
    receives_offline_push_notifications(user_profile)
    or receives_online_push_notifications(user_profile)
):
    return
# send notifications
```
This commit removes that code, and extends our `notification_data.py` logic
to cover this case, along with tests.
2. The name for these settings is slightly misleading. They essentially
describe "what to send notifications for" (PMs and mentions), and not
"when to send notifications" (offline). This commit improves this
by restricting the use of this term to the database fields only, and
using clearer names everywhere else. This distinction will be important
for having non-confusing code when we implement multiple notification
options as a dropdown in the future (never/when offline/when offline or
online, etc).
3. We should ideally re-check all notification settings just before the
notifications are sent. This is especially important for email notifications,
which may be sent a long time after the message was sent. We will
in the future add code to thoroughly re-check settings before sending
notifications in a clean manner, but temporarily not re-checking isn't
a terrible scenario either.