- The server sends the list of registrations it believes it has with
the bouncer.
- The bouncer includes in the response the registrations that it
doesn't actually have, and that the server should therefore delete.
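A minimal sketch of the bouncer's side of this handshake, assuming the
request carries a flat list of realm UUID strings (the function and
field names are illustrative, not the actual wire format):
```
from zilencer.models import RemoteRealm

def compute_deleted_registrations(realm_uuids, remote_server):
    # Registrations the bouncer actually has for this server.
    known = {
        str(uuid)
        for uuid in RemoteRealm.objects.filter(
            server=remote_server
        ).values_list("uuid", flat=True)
    }
    # Anything the server believes it has, but the bouncer doesn't,
    # should be deleted on the server's side.
    return [uuid for uuid in realm_uuids if uuid not in known]
```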
When a self-hosted Zulip server does a data export and then import
process into a different hosting environment (i.e. one not sharing the
RemoteZulipServer with the original), we'll have various things fail
where we look up the RemoteRealm by UUID and find it, but the
RemoteZulipServer it is associated with is the wrong one.
Right now, we ask the user to contact support via an error page, but we
might develop UI to help the user do the migration directly.
The way the flow goes now is this:
1. The user initiates login via "Billing" in the gear menu.
2. That takes them to `/self-hosted-billing/` (possibly with a
`next_page` param if we use that for some gear menu options).
3. The server queries the bouncer to give the user a link with a signed
access token (see the sketch after this list).
4. The user is redirected to that link (on `selfhosting.zulipchat.com`).
Now we have two cases: either the user is logging in for the first
time, or they have already logged in in the past.
If this is the first time, we have:
5. The user is asked to fill in their email in a form that's shown,
pre-filled with the value provided inside the signed access token.
They POST this to the next endpoint.
6. The next endpoint sends a confirmation email to that address and asks
the user to go check their email.
7. The user clicks the link in their email and is taken to the
from_confirmation endpoint.
8. Their initial RemoteBillingUser is created, a new signed link like in
(3) is generated, and they're transparently taken back to (4),
where, now that they have a RemoteBillingUser, they're handled
just like a user who has already logged in before:
If the user has already logged in before, they go straight here:
9. "Confirm login" page - they're shown their information (email and
full_name) and can update their full name in the form if they want.
They also accept the ToS here if necessary. They POST this form back
to the endpoint and finally have a logged-in session.
10. They're redirected to billing (or `next_page`) now that they have
access.
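A rough sketch of the signed-link mechanics in steps (3)-(4), using
Django's signing machinery; the salt, URL pattern, token payload, and
expiry are assumptions, not the actual implementation:
```
from django.core import signing

SALT = "remote-billing-access"  # illustrative salt

def generate_billing_access_url(server_uuid: str, user_uuid: str, email: str) -> str:
    # Step (3): sign a short-lived token identifying the user.
    token = signing.dumps(
        {"server_uuid": server_uuid, "user_uuid": user_uuid, "email": email},
        salt=SALT,
    )
    # Step (4): the user is redirected to this link on the bouncer.
    return f"https://selfhosting.zulipchat.com/remote-billing-login/{token}/"

def verify_billing_access_token(token: str) -> dict:
    # max_age enforces expiry of the signed link; one hour is illustrative.
    return signing.loads(token, salt=SALT, max_age=3600)
```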
For the last form (with the Full Name and ToS consent fields), this
pretty shamelessly re-uses and directly renders the
corporate/remote_realm_billing_finalize_login_confirmation.html
template. That's probably good in terms of re-use, but it calls for a
clean-up commit to generalize the name of this template and the
classes/ids in the HTML.
When a remote server uploads statistics, we update the
LicenseLedger using the uploaded audit logs.
We iterate over the RemoteRealmAuditLog data for the relevant
realm, starting from the event_time of the last LicenseLedger entry
created for that customer, and update the ledger based on each event.
If the RemoteRealmAuditLog has stale data, it means the server
stopped uploading data, or never uploaded it. We raise MissingDataError
in such cases, when a user action requires calculating the license
count from stale data.
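The loop might look roughly like this sketch; the model fields, helper
names, and stale-data threshold are simplified assumptions:
```
from datetime import timedelta

from corporate.models import LicenseLedger
from zilencer.models import RemoteRealmAuditLog

class MissingDataError(Exception):
    pass

def update_license_ledger_from_audit_logs(plan, remote_realm, event_time):
    # Assumes an initial ledger entry exists for this plan.
    last_ledger = LicenseLedger.objects.filter(plan=plan).order_by("id").last()
    audit_logs = RemoteRealmAuditLog.objects.filter(
        remote_realm=remote_realm, event_time__gt=last_ledger.event_time
    ).order_by("event_time")
    for audit_log in audit_logs:
        licenses = license_count_from_audit_log(audit_log)  # hypothetical helper
        LicenseLedger.objects.create(
            plan=plan,
            event_time=audit_log.event_time,
            licenses=licenses,
            licenses_at_next_renewal=licenses,
        )
    # If no recent audit data exists, the server stopped (or never
    # started) uploading, and any license count computed would be stale.
    latest = (
        RemoteRealmAuditLog.objects.filter(remote_realm=remote_realm)
        .order_by("event_time")
        .last()
    )
    if latest is None or event_time - latest.event_time > timedelta(days=2):
        raise MissingDataError("License count would be based on stale data")
```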
We add a 'get_remote_realm_guest_and_non_guest_count'
function that queries 'RemoteRealmAuditLog' to get
the guest and non-guest counts for that remote_realm.
This function is used in 'RemoteRealmBillingSession'
to calculate the current count of billed licenses.
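A hedged sketch of what such a function could look like; the
extra_data layout and the guest role id are assumptions about the
audit log format:
```
from dataclasses import dataclass

from zilencer.models import RemoteRealmAuditLog

@dataclass
class GuestAndNonGuestCount:
    guest: int
    non_guest: int

def get_remote_realm_guest_and_non_guest_count(remote_realm) -> GuestAndNonGuestCount:
    latest = (
        RemoteRealmAuditLog.objects.filter(remote_realm=remote_realm)
        .order_by("event_time")
        .last()
    )
    if latest is None:
        raise MissingDataError("No audit log data for this remote realm")
    # Assumed layout: extra_data stores per-role human user counts.
    human_counts = latest.extra_data["role_count"]["humans"]
    guest = int(human_counts.get("600", 0))  # 600: Zulip's guest role id
    non_guest = sum(int(count) for count in human_counts.values()) - guest
    return GuestAndNonGuestCount(guest=guest, non_guest=non_guest)
```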
In cloud:
Sponsored organizations have plan_type=STANDARD_FREE, don't have
a CustomerPlan object and thus no tier value.
With self-hosting:
Sponsored organizations have a CustomerPlan object with tier
TIER_SELF_HOSTED_COMMUNITY and a plan_type of PLAN_TYPE_COMMUNITY.
As of c9b0602320 and
8b55d60f9e we create a default
registration for the dev env "RemoteZulipServer" (and also RemoteRealms
for the dev realms) in populate_db. However, these models weren't listed
in clear_database, meaning the state wasn't properly cleaned up at the
start of the command.
Most importantly, this would manifest in getting:
```
django.db.utils.IntegrityError: duplicate key value violates unique
constraint "zilencer_remotezulipserver_uuid_key"
DETAIL: Key (uuid)=(......) already exists
```
if you re-run populate_db.
1. When we get data and it includes realm info, we should automatically
link the new records with the appropriate RemoteRealm.
2. For old records, when we receive realm data, we have an opportunity
to update those old records to link them to the right RemoteRealm.
This logic doesn't need to always run, just after a remote server
upgrade, since that's when this shift in remote server behavior will
occur.
This creates a valid registration, for two reasons:
1. Avoid the need to run "manage.py register_server" in the dev env to
register, when wanting to test stuff with
`PUSH_NOTIFICATION_BOUNCER_URL = "http://localhost:9991"`.
2. Avoid breaking RemoteRealm syncing, due to duplicate registrations
(first set of registrations that gets set up with the dummy
RemoteZulipServer in populate_db, and the second that gets set up via
the regular syncing mechanism with the new RemoteZulipServer created
during register_server).
These names were picked when I still thought these endpoints would serve
both the RemoteRealm and RemoteZulipServer based flows. Now that it's
known these are RemoteRealm-only endpoints, the _server in the names no
longer makes sense.
This commit adds two columns, named 'Guest users' and
'Non guest users', to represent the count of such users.
We query 'RemoteRealmAuditLog' to get the data.
Also adds `SWITCH_PLAN_TIER_AT_PLAN_END` for `CustomerPlan`,
which will be used to mark the status of remote server legacy
plans that are scheduled for an upgrade.
This is a prep commit to return, for each remote realm, the 'uuid',
'can_push', and 'expected_end_timestamp'.
This data will be used in 'initialize_push_notifications'.
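The returned structure might look like this sketch; the dataclass name
and helper functions are assumptions based on the description:
```
from dataclasses import dataclass
from typing import Optional

@dataclass
class RemoteRealmDictValue:
    can_push: bool
    expected_end_timestamp: Optional[int]

def build_realms_info(remote_realms) -> dict:
    # Keyed by realm uuid, as initialize_push_notifications will consume it.
    return {
        str(remote_realm.uuid): RemoteRealmDictValue(
            can_push=has_push_access(remote_realm),         # hypothetical helper
            expected_end_timestamp=plan_end(remote_realm),  # hypothetical helper
        )
        for remote_realm in remote_realms
    }
```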
This consists of the following pieces:
1. Makes servers using the bouncer send realm_uuid in requests for token
registration. (Sidenote: realm_uuid is already sent in the "send
notification" codepath as of
48db4bf854)
2. This allows the bouncer to tie RemotePushDeviceToken to the
RemoteRealm with matching realm_uuid at registration time.
3. Introduce handling of some potential weird edge cases around the
realm_uuid and RemoteRealm objects in get_remote_realm_helper, as
sketched below.
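A sketch of the kind of edge-case handling meant in (3); the function
shape and log message are illustrative:
```
from zilencer.models import RemoteRealm

def get_remote_realm_helper(server, realm_uuid, logger):
    try:
        remote_realm = RemoteRealm.objects.get(uuid=realm_uuid)
    except RemoteRealm.DoesNotExist:
        # The server sent a realm_uuid we have no registration for.
        return None
    if remote_realm.server_id != server.id:
        # A RemoteRealm with this uuid exists, but belongs to a different
        # server -- e.g. after an export/import into a new host.
        logger.warning(
            "Realm %s is registered to a different server", realm_uuid
        )
        return None
    return remote_realm
```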
This default setup will be more realistic, matching the ordinary
conditions for a modern server.
Especially needed as we add bouncer code that will expect to have
RemoteRealm entries for realm_uuid values for which it receives
requests.
The recent #27818 naïvely added unique indexes, despite there being a
large number of existing violations. This makes the migration
impossible to deploy.
Update the migration to de-duplicate rows, dropping all but the
first-by-id of each unique set. This is equivalent to what
dd954749be does with `ignore_conflicts`. We update the migration,
rather than making a new one, as any server which has somehow
successfully applied the migration apparently did not need to
de-duplicate anything.
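The de-duplication could be expressed roughly like this, assuming the
unique set is (server_id, remote_id); the table name and migration
scaffolding are illustrative:
```
from django.db import migrations

# Keep only the first-by-id row of each duplicate set.
DEDUPE_SQL = """
DELETE FROM zilencer_remoterealmcount AS dup
 USING zilencer_remoterealmcount AS keeper
 WHERE dup.server_id = keeper.server_id
   AND dup.remote_id = keeper.remote_id
   AND dup.id > keeper.id;
"""

class Migration(migrations.Migration):
    dependencies = [("zilencer", "00XX_previous_migration")]
    operations = [
        migrations.RunSQL(DEDUPE_SQL),
        # ...followed by adding the unique indexes from #27818.
    ]
```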
Moves and generalizes `switch_realm_from_standard_to_plus_plan`
in stripe.py to be a more general function for changing a
CustomerPlan to a new and valid tier, `do_change_plan_to_new_tier`.
Adds a helper function with the previous function name to be used
for the support view and management command for changing a realm
from the Standard plan tier to the Plus plan tier.
This makes it possible for a self-hosted realm administrator to
directly access a logged-in page on the push notifications bouncer
service, enabling billing, support contacts, and other administrative
tasks for enterprise customers to be managed without manual setup.
This may happen if there are multiple servers with the same UUID
submitting data (e.g. if they were cloned after initial creation), or
if there is one server, but `./manage.py clear_analytics_tables` was
used to truncate the analytics tables.
In the case of `clear_analytics_tables`, the data submitted likely has
identical historical values with new remote `id` values; preserving
the originally-submitted contemporaneous data is the best option. For
the case of submissions from multiple servers, there is no completely
sensible outcome, so the best we can do is detect the case and move
on.
Since we have a lock on the RemoteZulipServer, we know that no other
inserts are happening, so counting before and after will return the
true number of rows inserted (which `bulk_create` cannot do in the
face of `ignore_conflicts`[^1]). We compare this to the expected
number of new inserted rows to detect dropped duplicates.
[^1]: See https://code.djangoproject.com/ticket/30138.
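In code, the technique is roughly this sketch (the function name is
illustrative; it assumes the caller already holds the row lock on the
RemoteZulipServer):
```
from django.db import transaction

def bulk_create_counting_duplicates(model, rows, server) -> int:
    # Safe only because the server row is locked: no concurrent inserts.
    with transaction.atomic():
        before = model.objects.filter(server=server).count()
        model.objects.bulk_create(rows, ignore_conflicts=True)
        after = model.objects.filter(server=server).count()
    # Rows silently dropped by ON CONFLICT as duplicates.
    return len(rows) - (after - before)
```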
This does not ensure that we do not mix data from multiple servers
sharing a UUID -- if one has more `RemoteRealmCount` rows,
and the other has more `RemoteInstallationCount` rows, the end result
will still be some rows from each server, across the two tables.
It does ensure that we will not alternate rows between two servers
if both requests are processed at the same time.
It also causes submissions to be all-or-nothing in the event of
integrity errors. This is not necessarily beneficial, as forward
progress is generally useful -- but the integrity errors are resolved
in the subsequent commit.
This applies f299f31340 but for the push bouncer receiving side.
This is particularly important as we start relying on the unique
constraints, via `ON CONFLICT ... DO NOTHING`, in subsequent commits.
Fixes: #12362.
Add the new model for recording basic information about Realms on remote
servers, to go with the other analytics data. Also adds the necessary
changes to the bouncer endpoint and the send_analytics_to_push_bouncer()
function to submit such Realm information.
This calculates the largest number of messages sent within a month for
the last 3 months. The query is targeted at the specific use case in
this function - finding the count for a specific server. For
calculating this in bulk for a large number of remote servers, an
adapted bulk query will be needed - rather than running this one in a
loop, which would likely be very inefficient.
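An illustrative version of such a query for one server; the analytics
property name and the 30-day month approximation are assumptions:
```
from datetime import timedelta

from django.db.models import Sum
from django.utils.timezone import now

from zilencer.models import RemoteInstallationCount

def max_monthly_messages(server) -> int:
    highest = 0
    for months_ago in range(3):
        window_end = now() - timedelta(days=30 * months_ago)
        window_start = window_end - timedelta(days=30)
        total = (
            RemoteInstallationCount.objects.filter(
                server=server,
                property="messages_sent:message_type:day",  # assumed property
                end_time__gt=window_start,
                end_time__lte=window_end,
            ).aggregate(total=Sum("value"))["total"]
            or 0
        )
        highest = max(highest, total)
    return highest
```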
This parameter appeared here on the function definition,
but because it lacked a `REQ` call it didn't actually connect
to any parameter passed in the HTTP request.
It doesn't make any sense on this endpoint anyway -- presumably
it was copy-pasted from its "register" counterpart -- so just cut it.
We'll need this information in order to properly direct APNs
notifications. Happily, the Zulip server always sends it when
registering an APNs token; and it appears it always has done so
since the commit:
cddee49e7 Add support infrastructure for push notification bouncer service.
back in 2016. So there's no compatibility issue from requiring it.
This missing `REQ` call has meant we just drop this parameter:
even though the remote Zulip server passes it (for all APNs tokens),
we never notice and never store it. Fix that.
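The fix boils down to Zulip's REQ pattern; this before/after sketch
uses illustrative endpoint and parameter names:
```
from typing import Optional

from zerver.lib.request import REQ, has_request_variables

# Before: without REQ, `ios_app_id` never reads from the HTTP request;
# it silently keeps its Python default, and the client's value is dropped.
@has_request_variables
def register_push_device(
    request, server, token: str = REQ(), ios_app_id: Optional[str] = None
):
    ...

# After: with REQ, the parameter is actually parsed from the request.
@has_request_variables
def register_push_device(
    request, server, token: str = REQ(), ios_app_id: Optional[str] = REQ(default=None)
):
    ...
```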
So that all child classes of BillingSession generate the same data
structure for customers that are created in Stripe, revise
`get_data_for_stripe_customer` to return a specific dataclass:
StripeCustomerData.
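The dataclass might look like this sketch; the exact fields are
assumptions based on what Stripe customer creation needs:
```
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class StripeCustomerData:
    description: str
    email: str
    metadata: Dict[str, Any] = field(default_factory=dict)
```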
So that `update_or_create_stripe_customer` can work for Customer
objects with either a realm or remote_server, we create an abstract
base class, BillingSession, and implement a child class for the
current implementation of Customer objects with a realm.
Refactoring `update_or_create_stripe_customer` also moves
`create_stripe_customer` and `replace_payment_method` to the
BillingSession class.
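Condensed, the structure is roughly this sketch, reusing the
StripeCustomerData dataclass from the previous sketch (method names are
simplified; stripe.Customer.create is the real Stripe API call):
```
from abc import ABC, abstractmethod

import stripe

class BillingSession(ABC):
    @abstractmethod
    def get_data_for_stripe_customer(self) -> StripeCustomerData: ...

    @abstractmethod
    def update_or_create_customer(self, stripe_customer_id: str): ...

    def create_stripe_customer(self):
        data = self.get_data_for_stripe_customer()
        stripe_customer = stripe.Customer.create(
            description=data.description, email=data.email, metadata=data.metadata
        )
        return self.update_or_create_customer(stripe_customer.id)

class RealmBillingSession(BillingSession):
    def __init__(self, realm) -> None:
        self.realm = realm
    # ...realm-specific implementations of the abstract methods...
```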
Earlier, the 'automatically_follow_topics_policy' and
'automatically_unmute_topics_in_muted_streams_policy' were set
to 'NEVER' for both the test and dev databases.
This commit fixes the incorrect behavior.
Now, we set it to 'NEVER' only for the test database.
We need the actual default value in our dev database.
We explicitly set it to NEVER for the test database, as it otherwise
skews other tests by generating extra events and db queries. We have
separate tests with different values to verify the intended behavior
related to these settings.
The bug was introduced in 58568a6.
Add an optional `automatic_new_visibility_policy` enum field
in the success response to indicate the new visibility policy
value due to the `automatically_follow_topics_policy` and
`automatically_unmute_topics_in_muted_streams_policy` user settings
during the send message action.
Only present if there is a change in the visibility policy.
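For example, a successful send-message response might include (values
illustrative; 3 corresponds to FOLLOWED in Zulip's visibility policy
codes):
```
{
    "result": "success",
    "msg": "",
    "id": 42,
    "automatic_new_visibility_policy": 3
}
```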
This commit adds two user settings, named
* `automatically_follow_topics_policy`
* `automatically_unmute_topics_in_muted_streams_policy`
The settings control the user's preference on which topics they
will automatically 'follow' or 'unmute in muted streams'.
The policies offer four options:
1. Topics I participate in
2. Topics I send a message to
3. Topics I start
4. Never (default)
There is no support for configuring the settings through the UI yet.
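A sketch of how the four options might map to constants (the names and
values here are illustrative, not the actual codes):
```
class AutomaticallyChangeVisibilityPolicy:
    PARTICIPATION = 1  # Topics I participate in
    SEND = 2           # Topics I send a message to
    INITIATION = 3     # Topics I start
    NEVER = 4          # Never (default)
```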
There is no reason to have an index on just `realm_id` or `remote_id`,
as those values mean nothing outside of the scope of a specific
`server_id`. Remove those never-used single-column indexes from the
two tables that have them.
By contrast, the pair of `server_id` and `remote_id` is quite useful
and specific -- it is a unique pair, and every POST of statistics from
a remote host requires looking up the highest `remote_id` for a given
`server_id`, which (without this index) is otherwise a quite large
scan.
Add a unique constraint, which (in PostgreSQL) is implemented as a
unique index.
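In Django terms, the change is roughly this sketch (model, field, and
migration names are illustrative):
```
from django.db import migrations, models

class Migration(migrations.Migration):
    dependencies = [("zilencer", "00XX_previous_migration")]
    operations = [
        # Drop the never-used single-column index on remote_id.
        migrations.AlterField(
            model_name="remoterealmcount",
            name="remote_id",
            field=models.IntegerField(null=True),
        ),
        # The useful pair; PostgreSQL implements this unique constraint
        # as a unique index, which also serves the max-remote_id lookup.
        migrations.AddConstraint(
            model_name="remoterealmcount",
            constraint=models.UniqueConstraint(
                fields=["server", "remote_id"],
                name="unique_remoterealmcount_server_remote_id",
            ),
        ),
    ]
```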