Earlier, we didn't soft-reactivate users for group mentions
at all because it wasn't easy to calculate group size.
Now, we soft-reactivate a user for a group mention if the mentioned
user group has fewer than 12 members.
We don't soft-reactivate users for every group mention because a user
group can be very large, which can lead to large backlogs in the
deferred-work queue.
Fixes part of #27586.
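A minimal sketch of the intended check; the helper names and the threshold constant here are illustrative stand-ins, not the actual Zulip functions:

```python
from typing import Callable, List

# Hypothetical constant; the commit describes "fewer than 12 members".
MAX_GROUP_SIZE_FOR_SOFT_REACTIVATION = 12


def maybe_soft_reactivate_for_group_mention(
    member_ids: List[int],
    soft_reactivate_user: Callable[[int], None],
) -> None:
    # Only soft-reactivate when the mentioned group is small; a very large
    # group would enqueue too much work in the deferred-work queue.
    if len(member_ids) < MAX_GROUP_SIZE_FOR_SOFT_REACTIVATION:
        for user_id in member_ids:
            soft_reactivate_user(user_id)
```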
Fixes #28403
Uses redis to remember the last time push notifications were known to
be working. This needs to work across processes, so it can't be done
just in memory.
As this is transient data that's fairly harmless to lose (and thus
doesn't require the persistence benefits of the database), and we're
keeping only a single "row" (so we don't need an entire new db table),
we settle on using redis instead of postgres. This is also consistent
with how we store other kinds of such transient data.
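A rough sketch of this pattern using the redis-py client; the key name and helper functions are assumptions for illustration, not the actual implementation:

```python
import time
from typing import Optional

import redis

# A single shared key acts as the one "row" of transient state.
PUSH_NOTIFICATIONS_LAST_WORKING_KEY = "last_push_notifications_working_timestamp"

client = redis.Redis()  # shared across processes, unlike in-memory state


def record_push_notifications_working() -> None:
    # Store the current UNIX timestamp; this value is harmless to lose.
    client.set(PUSH_NOTIFICATIONS_LAST_WORKING_KEY, str(time.time()))


def last_push_notifications_working() -> Optional[float]:
    value = client.get(PUSH_NOTIFICATIONS_LAST_WORKING_KEY)
    return None if value is None else float(value)
```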
Migrated views:
- `zilencer.views.register_remote_server`
- `zilencer.views.register_remote_push_device`
- `zilencer.views.unregister_remote_push_device`
- `zilencer.views.unregister_all_remote_push_devices`
- `zilencer.views.remote_server_notify_push`
To make sure the new checks for `remote_server_notify_push` match the
old ones, `RemoteServerNotificationPayload` is defined.
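A sketch of what such a payload model might look like, using Pydantic; the field names here are illustrative guesses rather than the actual definition:

```python
from typing import Optional

from pydantic import BaseModel


class RemoteServerNotificationPayload(BaseModel):
    # Illustrative fields only; the real model mirrors the checks the old
    # remote_server_notify_push view performed on the incoming payload.
    user_id: Optional[int] = None
    user_uuid: Optional[str] = None
    realm_uuid: Optional[str] = None
    gcm_payload: dict = {}
    apns_payload: dict = {}
    gcm_options: dict = {}
```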
Replace the long string for organizations that have notification
body/content disabled (settings.PUSH_NOTIFICATION_REDACT_CONTENT set
to true) with "New message".
This allows more of the limited space on the mobile device screen to
be used for additional messages rather than this verbose content.
Fixes #29152
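A simplified sketch of the redaction step; the helper function and the way the setting is consulted are illustrative, not the actual implementation:

```python
from typing import Dict


def redact_push_content(payload: Dict[str, str], redact_content: bool) -> Dict[str, str]:
    # When PUSH_NOTIFICATION_REDACT_CONTENT is enabled, replace the message
    # body with a short placeholder instead of a long notice, leaving more
    # room on the device screen for other notifications.
    if redact_content:
        payload = dict(payload)
        payload["content"] = "New message"
    return payload
```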
Adds a line to the top of the internal_billing_notice email with
the billing entity's display name.
Makes sure all internal_billing_notice email subjects also include
the billing entity's display name.
Makes small updates to the notice text for some cases.
When a server stops submitting info for a remote realm that it had
previously submitted, we mark that realm as locally deleted.
If such a realm has a paid plan attached to it, we should investigate.
This commit adds logic to send an email to sales@zulip.com for
investigation.
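Roughly the intended behavior, sketched with Django's send_mail; the helper name, condition, and subject text are assumptions:

```python
from django.core.mail import send_mail


def maybe_email_sales_about_deleted_realm(
    realm_name: str, realm_uuid: str, has_paid_plan: bool
) -> None:
    # A locally-deleted remote realm that still has a paid plan attached is
    # unexpected and worth a human look, so notify sales@zulip.com.
    if not has_paid_plan:
        return
    send_mail(
        subject=f"Locally deleted realm with paid plan: {realm_name}",
        message=f"Remote realm {realm_uuid} was marked locally deleted but has a paid plan.",
        from_email=None,  # falls back to DEFAULT_FROM_EMAIL
        recipient_list=["sales@zulip.com"],
    )
```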
LoggingCountStats with a daily duration that are stored directly on
the RealmCount table (not via aggregation in process_count_stat) can
be in a state, after the hourly cron job that updates analytics
counts, where the logged value will still be live-updated later,
because the end time for the stat is in the future.
As these logging counts are designed to be used on the self-hosted
installation for either debugging or rate limiting, sending these
partial/incomplete counts to the bouncer has low value.
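A sketch of the filtering idea: skip daily logging stats whose end_time is still in the future, since their values are still being live-updated. The row shape and names are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Set


@dataclass
class CountRow:
    property: str
    end_time: datetime
    value: int


def rows_safe_to_upload(rows: List[CountRow], logging_properties: Set[str]) -> List[CountRow]:
    now = datetime.now(timezone.utc)
    # A logged daily count whose end_time is in the future may still change,
    # so sending it to the bouncer would upload a partial value.
    return [
        row
        for row in rows
        if not (row.property in logging_properties and row.end_time > now)
    ]
```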
The previous logic incorrectly used the server-level number of users
even when a (presumably smaller) realm-level count was available.
Fixes a bug introduced in 2e1ed4431a.
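The fix amounts to preferring the realm-level count whenever one is available; a minimal illustration:

```python
from typing import Optional


def effective_user_count(server_user_count: int, realm_user_count: Optional[int]) -> int:
    # Use the (presumably smaller) realm-level count when it is available,
    # falling back to the server-level number only when it isn't.
    return realm_user_count if realm_user_count is not None else server_user_count
```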
RemoteRealm customer takes precedence over RemoteServer
in general. But if an inactive plan is associated with
RemoteRealm and an active plan with RemoteServer, the
ACTIVE plan takes precedence.
Co-authored-by: Prakhar Pratyush <prakhar@zulip.com>
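A sketch of the intended precedence rule; the plan objects and their is_active flag are simplified stand-ins for the real billing models:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Plan:
    name: str
    is_active: bool


def pick_plan(
    remote_realm_plan: Optional[Plan], remote_server_plan: Optional[Plan]
) -> Optional[Plan]:
    # The RemoteRealm customer normally wins, but an active RemoteServer plan
    # beats an inactive RemoteRealm plan.
    if remote_realm_plan is not None and remote_realm_plan.is_active:
        return remote_realm_plan
    if remote_server_plan is not None and remote_server_plan.is_active:
        return remote_server_plan
    return remote_realm_plan or remote_server_plan
```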
The problem was that earlier this was just an uncaught JsonableError,
leading to a full traceback getting spammed to the admins.
The prior commit introduced a clear .code for this error on the bouncer
side, meaning the self-hosted server can now detect it and handle it
nicely, by just calling logging.error about it and also taking the
opportunity to adjust the realm.push_notifications_... flags.
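A hedged sketch of the handling on the self-hosted side; the exception class, the error code string, and the flag update are placeholders rather than the actual names:

```python
import logging
from dataclasses import dataclass


@dataclass
class Realm:
    push_notifications_enabled: bool = True


class PushBouncerError(Exception):
    # Stand-in for the bouncer-side error that now carries a clear .code.
    def __init__(self, msg: str, code: str) -> None:
        super().__init__(msg)
        self.code = code


def handle_bouncer_error(error: PushBouncerError, realm: Realm) -> None:
    if error.code == "MISSING_REMOTE_REALM":  # illustrative code value
        # No traceback spam to the admins: just log and adjust the flags.
        logging.error("Push bouncer rejected the request: %s", error)
        realm.push_notifications_enabled = False
    else:
        raise error
```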
We return expected_end_timestamp as "None" for plans that will be
downgraded if the number of users is not more than
MAX_USERS_WITHOUT_PLAN, since they will be downgraded to the
self-managed plan and would have push notifications enabled.
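A small illustration of the rule; the MAX_USERS_WITHOUT_PLAN value and the return shape are assumptions:

```python
from datetime import datetime
from typing import Optional

MAX_USERS_WITHOUT_PLAN = 10  # illustrative value


def expected_end_timestamp_for_downgrade(
    user_count: int, plan_end: datetime
) -> Optional[int]:
    # Small servers will be downgraded to the self-managed plan, which keeps
    # push notifications enabled, so there is no meaningful "end" to report.
    if user_count <= MAX_USERS_WITHOUT_PLAN:
        return None
    return int(plan_end.timestamp())
```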
Requests to these endpoints are about a specified user, and therefore
also have a notion of the RemoteRealm for these requests. Until now
these endpoints weren't getting the realm_uuid value, because it wasn't
used - but now it is needed for updating .last_request_datetime on the
RemoteRealm.
For the RemoteRealm case, we can only set this in endpoints where the
remote server sends us the realm_uuid. So we're missing that for the
endpoints:
- remotes/push/unregister and remotes/push/unregister/all
- remotes/push/test_notification
This should be added in a follow-up commit.
This protects us from incorrectly handling situations where someone
tested an upgrade to 8.0 for a backup on a separate hostname, and
left the test system live while upgrading the main system, in a way
that results in duplicate RemoteRealm objects that are all marked as
locally deleted.
Further work is required to figure out how to avoid the original
duplication problem.
Old RemotePushDeviceTokens were created without this attribute. But when
processing a notification, if we have remote_realm, we can take the
opportunity to set this for all the registrations for this user.
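Conceptually, the backfill is a loop (or bulk update) over the user's registrations; this sketch uses placeholder types and field names rather than the real models:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class DeviceToken:
    # Older registrations were created before the remote_realm field existed.
    token: str
    remote_realm: Optional[str] = None


def backfill_remote_realm(registrations: List[DeviceToken], remote_realm: str) -> None:
    # While delivering a notification we already know the remote realm, so
    # fill it in on any of this user's registrations that still lack it.
    for registration in registrations:
        if registration.remote_realm is None:
            registration.remote_realm = remote_realm
```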
This moves the function which computes can_push and
expected_end_timestamp outside RemoteRealmBillingSession
because we might use this function for RemoteZulipServer as well,
and also renames it.
This fixes the exception case on the initial
`/api/v1/remotes/server/analytics/status` case. Other exceptions from
`send_to_push_bouncer` are allowed to escape.
Co-authored-by: Alex Vandiver <alexmv@zulip.com>
This ensures determinism in these tests' mock_send.assert_called_with
checks - it avoids producing test flakes due to a different order of
retrieval of these objects from the database.
- The server sends the list of registrations it believes to have with
the bouncer.
- The bouncer includes in the response the registrations that it doesn't
  actually have, and that the server should therefore delete.
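A sketch of the bouncer side of this exchange: it compares the tokens the server claims to have against its own records and reports the unknown ones back for deletion. The function and response key names are illustrative:

```python
from typing import Dict, List, Set


def find_stale_registrations(
    server_tokens: List[str], bouncer_tokens: Set[str]
) -> Dict[str, List[str]]:
    # Anything the server believes it has registered but the bouncer doesn't
    # actually know about should be deleted on the server side.
    stale = [token for token in server_tokens if token not in bouncer_tokens]
    return {"registrations_to_delete": stale}  # illustrative response key
```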
Given that most of the use cases for the realms-only code path would
really like to upload audit logs too, and the others would likely
produce a better user experience if they uploaded audit logs, we
should just have a single main code path here, i.e.
'send_analytics_to_push_bouncer'.
We still only upload usage statistics according to documented
option, and only from the analytics cron job.
The error handling takes place in 'send_analytics_to_push_bouncer'
itself.
When a self-hosted Zulip server does a data export and then an import
process into a different hosting environment (i.e. one not sharing the
RemoteZulipServer with the original), we'll have various things fail
where we look up the RemoteRealm by UUID and find it, but the
RemoteZulipServer it is associated with is the wrong one.
Right now, we ask the user to contact support via an error page, but
we might develop UI to help the user do the migration directly.