This is a general link for logging into the billing system on behalf
of a server, but it is tied to the server's .contact_email and takes
the user straight to the /deactivate/ page via the next_page
mechanism.
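As a hedged illustration only (the endpoint path and helper name
below are hypothetical; just .contact_email, /deactivate/, and
next_page come from this change), such a link might be assembled like
this:

```python
from urllib.parse import urlencode

def build_deactivation_login_link(server) -> str:
    # Hypothetical helper: a generic billing-login link bound to the
    # server's .contact_email, with next_page sending the user straight
    # to the /deactivate/ page after login.
    query = urlencode({"email": server.contact_email, "next_page": "deactivate"})
    return f"/serverlogin/?{query}"
```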
This prep commit makes the 'sync_license_ledger_if_needed' function
a 'BillingSession' abstract method.
We'll override the method for RemoteServerBillingSession
in the next commit.
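A minimal sketch of the resulting structure, assuming Python's abc
module; the subclass body is just a placeholder for the next commit:

```python
from abc import ABC, abstractmethod

class BillingSession(ABC):
    @abstractmethod
    def sync_license_ledger_if_needed(self) -> None:
        """Each session type decides how to bring its LicenseLedger up to date."""

class RemoteServerBillingSession(BillingSession):
    def sync_license_ledger_if_needed(self) -> None:
        ...  # Implemented in the next commit.
```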
This prep commit extracts the code block that determines the last
license ledger for a customer plan with automanage_licenses set to
True into a new BillingSession method named
'get_last_ledger_for_automanaged_plan_if_exists'.
We'll be using this function while implementing the
'sync_license_ledger_if_needed' method for RemoteServerBillingSession.
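A hedged sketch of the extracted method, assuming Django ORM models
named CustomerPlan and LicenseLedger and a hypothetical
get_current_plan() accessor; only the method name comes from this
commit:

```python
from typing import Optional

class BillingSession:
    # ...existing methods...

    def get_last_ledger_for_automanaged_plan_if_exists(
        self,
    ) -> Optional["LicenseLedger"]:
        plan = self.get_current_plan()  # hypothetical accessor for the active plan
        if plan is None or not plan.automanage_licenses:
            return None
        # Latest ledger entry for the automanaged plan; None if none exists yet.
        return LicenseLedger.objects.filter(plan=plan).order_by("id").last()
```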
We need to update 'last_audit_log_update' before calling the
'sync_license_ledger_if_needed' method to avoid 'MissingDataError'
due to 'has_stale_audit_log' being True.
Also, we wrapped the code block that creates audit logs, updates
'last_audit_log_update', and syncs the LicenseLedger in an atomic
transaction. This lets callers rely on 'last_audit_log_update' to
assume 'RemoteRealmAuditLog' and 'LicenseLedger' are up to date.
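A hedged sketch of the reordering and the atomic block, assuming
Django's transaction.atomic; the function and variable names here are
illustrative, not the actual code:

```python
from django.db import transaction
from django.utils import timezone

def process_audit_log_events(remote_server, audit_log_rows) -> None:
    with transaction.atomic():
        RemoteRealmAuditLog.objects.bulk_create(audit_log_rows)
        # Update last_audit_log_update *before* syncing the ledger, so that
        # has_stale_audit_log is False and MissingDataError is avoided.
        remote_server.last_audit_log_update = timezone.now()
        remote_server.save(update_fields=["last_audit_log_update"])
        RemoteServerBillingSession(remote_server).sync_license_ledger_if_needed()
```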
We already calculate the correct `billed_licenses` early in the
function, so we just use that value to fix the bug where a legacy
server scheduled for an upgrade doesn't respect the manual license
count set by the user.
As foreshadowed in c741c527d7, it is indeed possible for
`get_handler_by_id` to error out because the handler has been unset
elsewhere. Protect the callsites of `get_handler_by_id` so they
gracefully handle the case where the handler has already gone away.
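A minimal sketch of the protection, assuming the handler registry is
a dict whose lookup raises KeyError; the caller name is illustrative:

```python
def finish_handler(handler_id: int) -> None:
    try:
        handler = get_handler_by_id(handler_id)
    except KeyError:
        # The handler was already unset elsewhere (e.g. the connection
        # closed concurrently), so there is nothing left to finish.
        return
    handler.finish()
```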
I missed some places to check on the last pass:
* For automanaged licenses, when the license count updates.
* When the plan is changed.
* When migrating existing customers to the legacy plan.
This fixes a bug introduced in
6f93ab72c0 where deactivating a realm
would fail with an exception, because sessions cannot be cleared
inside database transactions.
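One plausible shape for the fix, sketched with Django's
transaction.on_commit (the session-clearing helper name is
hypothetical), is to defer clearing sessions until the transaction
commits:

```python
from django.db import transaction

def do_deactivate_realm(realm) -> None:
    with transaction.atomic():
        realm.deactivated = True
        realm.save(update_fields=["deactivated"])
        # Sessions cannot be cleared inside a database transaction, so
        # schedule that work to run only after the transaction commits.
        transaction.on_commit(lambda: delete_realm_user_sessions(realm))
```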
If the exception occurred because the channel closed, attempting to
NAK the events will just raise another error; it is also pointless,
as the server already marked the pending events as NAK'd.
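A hedged sketch of the guard, assuming a pika consumer where a closed
channel raises pika.exceptions.ChannelClosed; the function name is
illustrative:

```python
import pika.exceptions

def maybe_nack_pending_events(channel, delivery_tag: int, exc: Exception) -> None:
    # Called from the consumer's exception handler (illustrative).
    if isinstance(exc, pika.exceptions.ChannelClosed):
        # The server already marked the pending events as NAK'd when the
        # channel closed; trying to NAK them now would just raise again.
        return
    channel.basic_nack(delivery_tag, multiple=True, requeue=True)
```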
4af00f61a8 claimed that `on_finish` and
`on_connection_close` were mutually exclusive. In cases where a
`DELETE` is called on the queue while a longpoll is in progress, this
can cause _both_ to happen:
- The `DELETE` pushes a `cleanup_queue` event, which triggers
`finish_handler` to begin pushing out an empty event response to the
longpoll connection.
- In the midst of that, in an `await`, the longpoll connection drops,
and `on_connection_close` clears the handler.
- The `await` resumes, calls `finish`, and attempts to clear the
handler.
The easiest solution is to make `clear_handler_by_id` tolerant of
multiple attempts to clear the same handler. Since these processes
run in parallel, parts of the code may hold a `handler_id` even
though `get_handler_by_id` errors when trying to look it up. We have
not observed this in testing, and I cannot currently prove it is
impossible.
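A minimal sketch of the tolerant version, assuming a dict-backed
handler registry:

```python
handlers: dict[int, object] = {}

def clear_handler_by_id(handler_id: int) -> None:
    # on_finish and on_connection_close can race, so the entry may
    # already have been removed; pop with a default instead of erroring.
    handlers.pop(handler_id, None)
```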
This ensures determinism in the tests that call
mock_send.assert_called_with, avoiding test flakes due to a different
order of retrieval of these objects from the database.
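A hedged illustration of the pattern (the model and variable names
are hypothetical): fetch the objects in an explicit order before
constructing the expected call:

```python
# Order the rows deterministically so the expected argument matches
# regardless of the database's default retrieval order.
expected = list(UserMessage.objects.filter(user_profile=hamlet).order_by("id"))
mock_send.assert_called_with(expected)
```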