This commit adds 'durable=True' to the outermost transactions
of the following functions:
* do_create_multiuse_invite_link
* do_revoke_user_invite
* do_revoke_multi_use_invite
* sync_ldap_user_data
* do_reactivate_remote_server
* do_deactivate_remote_server
* bulk_handle_digest_email
* handle_customer_migration_from_server_to_realm
* add_reaction
* remove_reaction
* deactivate_user_group
This helps avoid creating unintended savepoints in the future.
This is part of our plan to explicitly mark all of the
transaction.atomic decorators with either 'savepoint=False' or
'durable=True' as required.
* 'savepoint=True' is used in special cases.
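For reference, a minimal sketch of the pattern, using Django's standard
decorator (the signature and body here are illustrative):

```python
from django.db import transaction

# durable=True asserts this is the outermost transaction: Django raises
# a RuntimeError if the function is invoked inside another atomic block,
# instead of silently creating a savepoint.
@transaction.atomic(durable=True)
def do_revoke_user_invite(confirmation_id: int) -> None:
    ...  # the function's database writes commit as a single unit
```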
Because Django does not support returning the inserted row-ids with a
`bulk_create(..., ignore_conflicts=True)`, we previously counted the
total rows before and after insertion. This is rather inefficient,
and can lead to database contention when many servers are reporting
statistics at once.
Switch to reaching into the private `_insert` method, which does
support what we need. While relying on a private method is poor form,
it is mildly preferable to attempting to re-implement all of its
complexities.
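For context, a simplified sketch of the counting approach this
replaces (model and variable names illustrative):

```python
# bulk_create(ignore_conflicts=True) returns objects without their ids,
# so the number of rows actually inserted was inferred by counting the
# table before and after - two extra queries, and a source of
# contention under concurrent inserts.
before = RemoteInstallationCount.objects.count()
RemoteInstallationCount.objects.bulk_create(rows, ignore_conflicts=True)
inserted = RemoteInstallationCount.objects.count() - before
```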
Migrated views:
- `zilencer.views.register_remote_server`
- `zilencer.views.register_remote_push_device`
- `zilencer.views.unregister_remote_push_device`
- `zilencer.views.unregister_all_remote_push_devices`
- `zilencer.views.remote_server_notify_push`
To make sure the checks for `remote_server_notify_push` match the old
behavior, the `RemoteServerNotificationPayload` class is defined.
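A rough sketch of what such a payload model can look like (the field
set here is an assumption, not the exact definition):

```python
from pydantic import BaseModel

class RemoteServerNotificationPayload(BaseModel):
    # Illustrative fields; the real model mirrors whatever parameters
    # the old checks accepted for this endpoint.
    user_id: int | None = None
    user_uuid: str | None = None
    realm_uuid: str | None = None
    gcm_payload: dict = {}
    apns_payload: dict = {}
    gcm_options: dict = {}
```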
The IntegrityError shows up in the database logs, which looks
unnecessarily concerning. Use `ON CONFLICT IGNORE` to mark this as
expected, especially since the return value is never used.
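At the ORM level, that can look like this sketch (assuming the write
goes through `bulk_create`):

```python
# ignore_conflicts=True makes Django emit ON CONFLICT DO NOTHING on
# PostgreSQL, so duplicate rows are skipped silently instead of logging
# an IntegrityError; the return value is unused anyway.
RemotePushDeviceToken.objects.bulk_create([token], ignore_conflicts=True)
```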
As explained in the comment, when we're moving the server plan to the
remote realm's Customer object, the realm Customer may not have
stripe_customer_id set and therefore that value needs to get moved from
the server Customer.
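A minimal sketch of that hand-off (variable names illustrative):

```python
# stripe_customer_id is unique, so clear it on the server's Customer
# before setting it on the realm's Customer.
if realm_customer.stripe_customer_id is None:
    stripe_customer_id = server_customer.stripe_customer_id
    server_customer.stripe_customer_id = None
    server_customer.save(update_fields=["stripe_customer_id"])
    realm_customer.stripe_customer_id = stripe_customer_id
    realm_customer.save(update_fields=["stripe_customer_id"])
```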
When a server stops submitting info for a remote realm that it
previously submitted, we mark that realm as locally deleted.
If such a realm has a paid plan attached to it, we should investigate.
This commit adds logic to send an email to sales@zulip.com for
investigation.
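A sketch of that logic, using plain Django mail for illustration (the
real code presumably goes through Zulip's own email machinery;
`has_paid_plan` and the `realm_locally_deleted` field are assumptions):

```python
from django.core.mail import send_mail

# Flag locally deleted realms that still have a paid plan for a human.
if remote_realm.realm_locally_deleted and has_paid_plan(remote_realm):
    send_mail(
        subject=f"Locally deleted realm with paid plan: {remote_realm.uuid}",
        message="This realm is marked locally deleted but still has a paid plan.",
        from_email=None,  # falls back to DEFAULT_FROM_EMAIL
        recipient_list=["sales@zulip.com"],
    )
```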
We no longer want to migrate Legacy plans from servers to realms,
since as of February 15th Legacy plans are no longer a thing in the
original sense.
Now they're just a tool to give users temporary extensions of access
to the push notification service when needed. And as such, it makes
no sense to migrate them like that.
The remaining code in this function is for migrating (any) plan from the
server object to the realm object, if the server has just a single
realm.
The logic for the case where there's only one realm, in which the
function tries to migrate the server's plan to it, had two main
unhandled edge cases that would throw exceptions:
1.
```
remote_realm = RemoteRealm.objects.get(
uuid=realm_uuids[0], plan_type=RemoteRealm.PLAN_TYPE_SELF_MANAGED
)
```
This could throw an exception if the RemoteRealm exists but has an
active plan (e.g. a Legacy plan). Then there'd be no object matching
the plan_type in the query, raising RemoteRealm.DoesNotExist.
2. If the RemoteRealm had a plan in the past (e.g. a Legacy plan)
   that's now expired, then it'd already have a Customer object,
   meaning that the attempt to move the server's customer to the realm:
   `server_plan.customer = remote_realm_customer`
   would trigger an IntegrityError, since a RemoteRealm can't have two
   Customer objects.
In simple cases, the situation in (2) can still easily be migrated by
moving the plan from the server's customer to the realm's customer, as
sketched below.
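A rough sketch of handling both edge cases (module paths, the
`get_customer_by_remote_realm` helper, and the exact field moves are
assumptions; the simple path where the realm has no Customer yet is
omitted):

```python
from corporate.models import CustomerPlan, get_customer_by_remote_realm
from zilencer.models import RemoteRealm

def migrate_server_plan_to_realm(server_plan: CustomerPlan, realm_uuid: str) -> None:
    # Edge case (1): guard the lookup instead of letting the exception
    # propagate when the realm already has an active plan.
    try:
        remote_realm = RemoteRealm.objects.get(
            uuid=realm_uuid, plan_type=RemoteRealm.PLAN_TYPE_SELF_MANAGED
        )
    except RemoteRealm.DoesNotExist:
        return

    remote_realm_customer = get_customer_by_remote_realm(remote_realm)
    if remote_realm_customer is not None:
        # Edge case (2): the realm already has a Customer (e.g. from an
        # expired Legacy plan), so move the plan onto that Customer
        # instead of trying to attach a second Customer to the realm.
        server_plan.customer = remote_realm_customer
        server_plan.save(update_fields=["customer"])
```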
Just like deactivated realms should be excluded, so should locally
deleted realms.
In particular, failure to exclude locally deleted realms breaks
handle_customer_migration_from_server_to_realms.
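That is, roughly this filter (field names assumed from the RemoteRealm
model):

```python
# Skip both deactivated and locally deleted realms.
realms = RemoteRealm.objects.filter(
    server=server, realm_deactivated=False, realm_locally_deleted=False
)
```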
This was a bug from 4715a058b0, where this
was just incorrectly called. get_realms_info_for_push_bouncer() is a
function meant to be called on a self-hosted server - and this
handle_... call happens on the bouncer. Therefore this returned all
zulipchat realms in production.
With the way handle_... is being called right now, there's no reason
for it to take an argument for passing a list of realms. It should
just fetch the relevant RemoteRealm entries itself, given the server
argument, as sketched below.
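Sketched as a signature change (a simplification, not the actual diff):

```python
def handle_customer_migration_from_server_to_realms(
    server: RemoteZulipServer,
) -> None:
    # The realms list is no longer a parameter; derive the relevant
    # RemoteRealm entries from the server argument itself.
    remote_realms = RemoteRealm.objects.filter(server=server)
    ...
```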
Requests to these endpoints are about a specified user, and therefore
also have a notion of the RemoteRealm for these requests. Until now
these endpoints weren't getting the realm_uuid value, because it wasn't
used - but now it is needed for updating .last_request_datetime on the
RemoteRealm.
For the RemoteRealm case, we can only set this in endpoints where the
remote server sends us the realm_uuid. So we're missing that for the
endpoints:
- remotes/push/unregister and remotes/push/unregister/all
- remotes/push/test_notification
This should be added in a follow-up commit.
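For reference, a sketch of the update that realm_uuid enables on the
bouncer (names illustrative):

```python
from django.utils.timezone import now as timezone_now

# With realm_uuid in the request, the bouncer can track per-realm activity.
remote_realm = RemoteRealm.objects.get(server=server, uuid=realm_uuid)
remote_realm.last_request_datetime = timezone_now()
remote_realm.save(update_fields=["last_request_datetime"])
```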
Earlier, the 'handle_customer_migration_from_server_to_realms'
function was called during the send analytics step.
This resulted in an error for customers having multiple Zulip
servers - one for testing, the others not - sharing a single
push bouncer registration.
When run on a test instance, the migration step caused customers to
have their legacy plan migrated to a test realm, resulting in them
losing their legacy plan.
This commit moves the migration step to run during the plan
management login step. This reduces the chances of losing a legacy
plan, as we expect customers to only verify that the 8.0 upgrade works
and not bother trying to log in to plan management from their test
instance.
This protects us from incorrectly handling situations where someone
tested an upgrade to 8.0 on a backup on a separate hostname, and
left the test system live while upgrading the main system, in a way
that results in duplicate RemoteRealm objects that are all marked as
locally deleted.
Further work is required to figure out how to avoid the original
duplication problem.
It seems most correct to answer the question of whether push
notifications are working specifically for the exact set of realms
that the server self-reported to us as existing.
Sending data on any additional realms that were not referenced in the
request (if that's somehow possible without them being locally
deleted) is likely to only be confusing.
And the client should reasonably be able to expect to get a response
covering exactly the realms it told us about.
Old RemotePushDeviceTokens were created without this attribute. But
when processing a notification, if we have remote_realm, we can take
the opportunity to set it for all of this user's registrations.
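A sketch of that backfill (field names assumed):

```python
# While handling this user's notification, attach remote_realm to any
# of their registrations that still lack it.
RemotePushDeviceToken.objects.filter(
    server=server, user_uuid=user_uuid, remote_realm__isnull=True
).update(remote_realm=remote_realm)
```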
This moves the function that computes can_push and
expected_end_timestamp outside of RemoteRealmBillingSession, since we
might use it for RemoteZulipServer as well, and also renames it.