We already calculate the correct `billed_licenses` early in the
function, so just use that to fix the bug where a legacy server
scheduled for upgrade doesn't respect the manual license count
set by the user.
As predicted in c741c527d7, it is
indeed possible for `get_handler_by_id` to error out because the
handler has been unset elsewhere.
Protect the callsites of `get_handler_by_id` so they gracefully
handle the handler having already gone away.
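A minimal sketch of the guarded-callsite pattern, assuming a dict-based
handler registry like the one in `zerver.tornado.handlers`; the
`_or_none` helper name here is illustrative:

```python
from typing import Optional


class AsyncDjangoHandler:
    pass


handlers: dict[int, AsyncDjangoHandler] = {}


def get_handler_by_id(handler_id: int) -> AsyncDjangoHandler:
    # Raises KeyError if the handler has been unset elsewhere.
    return handlers[handler_id]


def get_handler_by_id_or_none(handler_id: int) -> Optional[AsyncDjangoHandler]:
    # Tolerant lookup for callsites that may race with the handler
    # going away.
    return handlers.get(handler_id)
```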
I missed some places to check on the last pass:
* For automanaged licenses when the license updates.
* When the plan is changed.
* When migrating existing customers to the legacy plan.
This fixes a bug introduced in
6f93ab72c0 where deactivating a realm
would fail with an exception that sessions cannot be cleared inside
database transactions.
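One way to resolve this, sketched here assuming the deactivation runs
inside `transaction.atomic` and that a `delete_realm_user_sessions`
helper exists, is to defer the session deletion with Django's
`transaction.on_commit`:

```python
from django.db import transaction

from zerver.lib.sessions import delete_realm_user_sessions  # assumed location
from zerver.models import Realm


def do_deactivate_realm(realm: Realm) -> None:
    with transaction.atomic():
        realm.deactivated = True
        realm.save(update_fields=["deactivated"])
        # Session deletion touches the session backend, which cannot
        # happen mid-transaction; defer it until the commit succeeds.
        transaction.on_commit(lambda: delete_realm_user_sessions(realm))
```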
If the exception was caused by the channel closing, attempting to NAK
the events will just raise another error; it is also pointless, as the
server has already marked the pending events as NAK'd.
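An illustrative sketch using pika-style names; the consumer's
delivery-tag bookkeeping is simplified:

```python
from pika.channel import Channel


def nack_pending_events(channel: Channel, delivery_tags: list[int]) -> None:
    if channel.is_closed:
        # The server already marks unacknowledged deliveries on a closed
        # channel as NAK'd; calling basic_nack would just raise again.
        return
    for tag in delivery_tags:
        channel.basic_nack(delivery_tag=tag)
```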
4af00f61a8 claimed that `on_finish` and
`on_connection_close` were mutually exclusive. In cases where a
`DELETE` is called on the queue while a longpoll is in progress, this
can cause _both_ to happen:
- The `DELETE` pushes a `cleanup_queue` event, which triggers
`finish_handler` to begin pushing out an empty event response to the
longpoll connection.
- In the midst of that, in an `await`, the longpoll connection drops,
and `on_connection_close` clears the handler.
- The `await` resumes, calls `finish`, and attempts to clear the
handler.
The easiest solution is to make `clear_handler_by_id` tolerant of
multiple attempts to clear the same handler. Since these processes run
in parallel, parts of the code may hold a `handler_id` for which
`get_handler_by_id` errors when looking it up. We have not observed
this in testing, and I cannot currently prove it is impossible.
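A minimal sketch of the idempotent clear, again assuming a dict-based
registry:

```python
handlers: dict[int, object] = {}


def clear_handler_by_id(handler_id: int) -> None:
    # on_finish and on_connection_close can race to clear the same
    # handler; pop() with a default makes the losing call a no-op.
    handlers.pop(handler_id, None)
```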
This ensures determinism in these tests' `mock_send.assert_called_with`
checks, avoiding test flakes due to a different order of retrieval of
these objects from the database.
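A sketch of the fix, with `Message` standing in (hypothetically) for
whatever model the tests retrieve:

```python
def fetch_in_stable_order(realm_id: int) -> list["Message"]:
    # Without an explicit order_by, the database may return rows in any
    # order, so the arguments mock_send.assert_called_with checks would
    # vary between runs; ordering by id pins them down.
    return list(Message.objects.filter(realm_id=realm_id).order_by("id"))
```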
- The server sends the list of registrations it believes it has with
the bouncer.
- The bouncer includes in the response the registrations that it doesn't
actually have, which the server should therefore delete.
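An illustrative sketch of the exchange; the field and function names are
hypothetical, not the exact bouncer API:

```python
def build_registration_sync_payload(local_uuids: list[str]) -> dict[str, list[str]]:
    # Server side: claim the registrations we believe the bouncer holds.
    return {"realm_uuids": local_uuids}


def registrations_to_delete(claimed: list[str], known: set[str]) -> list[str]:
    # Bouncer side: report back the claimed registrations it does not
    # actually have, so the server deletes them locally.
    return [uuid for uuid in claimed if uuid not in known]
```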
This commit creates a RealmAuditLog entry with a new event_type
'RealmAuditLog.REALM_IMPORTED' after the realm is reactivated.
It contains user count data (using realm_user_count_by_role)
stored in extra_data.
This helps the billing system have accurate user count data if
someone tries to sign up just after doing an import.
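A sketch of the write, assuming Zulip's `RealmAuditLog` model and the
`realm_user_count_by_role` helper; the exact fields are illustrative:

```python
from django.utils.timezone import now as timezone_now

from zerver.lib.user_counts import realm_user_count_by_role  # assumed location
from zerver.models import Realm, RealmAuditLog


def log_realm_imported(realm: Realm) -> None:
    RealmAuditLog.objects.create(
        realm=realm,
        event_type=RealmAuditLog.REALM_IMPORTED,
        event_time=timezone_now(),
        # Per-role counts let billing price the realm accurately even if
        # someone signs up immediately after the import.
        extra_data={RealmAuditLog.ROLE_COUNT: realm_user_count_by_role(realm)},
    )
```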
This partially reverts 579bdc18f85ea8599c8cf1f53ddb02fd41d97993; it
assumed (based on its documentation) that `on_finish` was called for
all requests, even client-terminated ones. This is not accurate; it
is only called when the request calls `finish`, which only happens for
successful requests. This caused every client-closed connection to
leak a handler (ironically, exactly re-introducing the bug previously
fixed in 12a5a3a6e1).
This behaviour was obscured by the development environment's proxy;
see comment added in the previous commit.
Instead of placing the `clear_handler_by_id` call in
`ClientDescriptor.disconnect_handler`, we place it in
`AsyncDjangoHandler.on_connection_close`. This is more correct for
a few reasons:
- `on_connection_close` will be called if the client goes away during
a request without a client descriptor. If the garbage collection of
handlers runs inside the ClientDescriptor, we leak handlers for such
requests.
- `disconnect_handler` also runs when successfully sending an event,
which already calls `on_finish`. We avoid double-calling
`clear_handler_by_id` by doing it in two clearly exclusive cases,
`on_finish` and `on_connection_close`.
- It combines the creation and garbage-collection logic into one
file, decreasing the action-at-a-distance that can cause memory leaks.
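A sketch of the resulting placement, using Tornado's `RequestHandler`
hooks and the idempotent `clear_handler_by_id` from above:

```python
import tornado.web


def clear_handler_by_id(handler_id: int) -> None:
    ...  # idempotent removal from the handler registry


class AsyncDjangoHandler(tornado.web.RequestHandler):
    handler_id: int

    def on_finish(self) -> None:
        # Tornado calls this when we complete the request via finish().
        clear_handler_by_id(self.handler_id)

    def on_connection_close(self) -> None:
        # Tornado calls this when the client goes away mid-request,
        # including requests that never obtained a ClientDescriptor.
        clear_handler_by_id(self.handler_id)
```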
We call 'send_server_data_to_push_bouncer' just after registering the
server for push notifications.
This helps ensure the user count data is current when first logging
in after the RemoteRealm flow.
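A sketch of the ordering; the registration function name is
hypothetical:

```python
def register_server_with_push_bouncer() -> None:  # hypothetical name
    ...  # existing registration logic


def send_server_data_to_push_bouncer() -> None:
    ...  # uploads realm and user count data to the bouncer


def register_and_sync() -> None:
    register_server_with_push_bouncer()
    # Sync immediately so the first login through the RemoteRealm flow
    # sees current user counts rather than the last cron upload.
    send_server_data_to_push_bouncer()
```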
Actions that change the number of users now add a deferred_work
queue processor job to immediately update the billing service about
the change.
This helps avoid users seeing stale data for how many users they
have when trying to pay.
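A sketch of the enqueue, assuming Zulip's `queue_json_publish` helper;
the event type name is illustrative:

```python
from zerver.lib.queue import queue_json_publish  # assumed helper


def enqueue_billing_user_count_update(realm_id: int) -> None:
    # Enqueued by actions that change the number of users, so billing
    # learns of the change immediately instead of at the next cron run.
    queue_json_publish(
        "deferred_work",
        {"type": "push_bouncer_update_for_realm", "realm_id": realm_id},
    )
```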
This is a rename of the previous
enqueue_register_realm_with_push_bouncer_if_needed that is clearer
about the fact that this will also upload audit logs if available.
Given that most of the use cases for the realms-only code path would
really like to upload audit logs too, and the others would likely
produce a better user experience if they uploaded audit logs, we
should just have a single main code path here, i.e.
'send_analytics_to_push_bouncer'.
We still only upload usage statistics according to the documented
option, and only from the analytics cron job.
The error handling takes place in 'send_analytics_to_push_bouncer'
itself.
This is the only operation editing audit logs that is not already
inside a transaction, and having it use one will simplify an upcoming
interface that can then assume it always runs inside a transaction.
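A sketch of the shape after this change, using Django's
`transaction.atomic`; the operation shown is illustrative:

```python
from django.db import transaction

from zerver.models import RealmAuditLog


@transaction.atomic
def update_audit_log_entry(audit_log: RealmAuditLog, extra_data: dict) -> None:
    # Wrapping the edit in a transaction lets the upcoming interface
    # assume every audit-log modification is transactional.
    audit_log.extra_data = extra_data
    audit_log.save(update_fields=["extra_data"])
```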