For the last form (with Full Name and ToS consent field), this pretty
shamelessly re-uses and directly renders the
corporate/remote_realm_billing_finalize_login_confirmation.html
template. That's probably good in terms of re-use, but calls for a
clean-up commit that will generalize the name of this template and the
classes/ids in the HTML.
When a remote server uploads statistics, we update the
LicenseLedger using the audit logs uploaded.
We iterate over the RemoteRealmAuditLog data for the concerned
realm starting from the event_time of the last LicenseLedger
created for that customer and update the ledger based on each event.
If the RemoteRealmAuditLog has stale data, it means the server
stopped uploading data or never uploaded it. We raise MissingDataError
in such cases, when a user action leads to calculating the license
count from stale data.
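A minimal sketch of that update loop; the helper name, staleness threshold, and exact model fields here are assumptions, not the real implementation:
```python
from datetime import datetime, timedelta

from zilencer.models import RemoteRealmAuditLog  # assumed import path


class MissingDataError(Exception):
    pass


STALE_AUDIT_DATA_THRESHOLD = timedelta(days=2)  # illustrative threshold


def update_ledger_from_audit_logs(
    remote_realm, last_ledger_event_time: datetime, now: datetime
) -> None:
    events = RemoteRealmAuditLog.objects.filter(
        remote_realm=remote_realm, event_time__gt=last_ledger_event_time
    ).order_by("event_time")
    if not events.exists() and now - last_ledger_event_time > STALE_AUDIT_DATA_THRESHOLD:
        # The remote server stopped (or never started) uploading audit data.
        raise MissingDataError("license count would be computed from stale data")
    for event in events:
        ...  # create a LicenseLedger entry reflecting this event's user counts
```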
We add a 'get_remote_realm_guest_and_non_guest_count'
function that queries 'RemoteRealmAuditLog' to get
the guest and non_guest count for that remote_realm.
This function is used in 'RemoteRealmBillingSession'
to calculate the current count of billed licenses.
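A hedged sketch of such a helper; the extra_data layout ("role_count" keyed by role) and the return type are assumptions, not the actual implementation:
```python
from dataclasses import dataclass

from zilencer.models import RemoteRealm, RemoteRealmAuditLog  # assumed import path


@dataclass
class GuestAndNonGuestCount:
    guest: int
    non_guest: int


def get_remote_realm_guest_and_non_guest_count(
    remote_realm: RemoteRealm,
) -> GuestAndNonGuestCount:
    # Use the most recent audit log row that carries per-role user counts.
    latest = (
        RemoteRealmAuditLog.objects.filter(remote_realm=remote_realm)
        .exclude(extra_data={})
        .order_by("event_time")
        .last()
    )
    role_count = latest.extra_data.get("role_count", {}) if latest else {}
    guest = role_count.get("guest", 0)
    non_guest = sum(count for role, count in role_count.items() if role != "guest")
    return GuestAndNonGuestCount(guest=guest, non_guest=non_guest)
```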
In cloud:
Sponsored organizations have plan_type=STANDARD_FREE, don't have
a CustomerPlan object and thus no tier value.
With self-hosting:
Sponsored organizations have a CustomerPlan object with tier
TIER_SELF_HOSTED_COMMUNITY and a plan_type of PLAN_TYPE_COMMUNITY.
As of c9b0602320 and
8b55d60f9e we create a default
registration for the dev env "RemoteZulipServer" (and also RemoteRealms
for the dev realms) in populate_db. However, these models weren't listed
in clear_database, meaning the state wasn't properly cleaned up at the
start of the command.
Most importantly, this would manifest as getting:
```
django.db.utils.IntegrityError: duplicate key value violates unique
constraint "zilencer_remotezulipserver_uuid_key"
DETAIL: Key (uuid)=(......) already exists
```
if you re-run populate_db.
1. When we get data and it includes realm info, we should automatically
link the new records with the appropriate RemoteRealm.
2. For old records, when we receive realm data, we have an opportunity
to update those old records to link them to the right RemoteRealm.
This logic doesn't need to always run, just after a remote server
upgrade, since that's when this shift in remote server behavior will
occur.
This creates a valid registration, for two reasons:
1. Avoid the need to run "manage.py register_server" in dev env to
register, when wanting to test stuff with
`PUSH_NOTIFICATION_BOUNCER_URL = "http://localhost:9991"`.
2. Avoid breaking RemoteRealm syncing, due to duplicate registrations
(first set of registrations that gets set up with the dummy
RemoteZulipServer in populate_db, and the second that gets set up via
the regular syncing mechanism with the new RemoteZulipServer created
during register_server).
These names were picked when I still thought these endpoints would serve
both the RemoteRealm and RemoteZulipServer based flows. Now that it's
known these are RemoteRealm-only endpoints, the _server in the names no
longer makes sense.
This commit adds two columns named 'Guest users' and
'Non guest users' to represent the count of such users.
We query 'RemoteRealmAuditLog' to get the data.
Also adds `SWITCH_PLAN_TIER_AT_PLAN_END` for `CustomerPlan`,
which will be used to mark the status of remote server legacy
plans that are scheduled for an upgrade.
This is a prep commit to return, for each remote realm, the 'uuid',
'can_push', and 'expected_end_timestamp'.
This data will be used in 'initialize_push_notifications'.
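An illustrative example of the shape of that per-realm data; the values are placeholders, only the field names come from the description above:
```python
# Placeholder values, for illustration only.
realms_info = [
    {
        "uuid": "11111111-2222-3333-4444-555555555555",
        "can_push": True,
        "expected_end_timestamp": 1735689600,
    },
]
```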
This consists of the following pieces:
1. Makes servers using the bouncer send realm_uuid in requests for token
registration. (Sidenote: realm_uuid is already sent in the "send
notification" codepath as of
48db4bf854)
2. This allows the bouncer to tie RemotePushDeviceToken to the
RemoteRealm with matching realm_uuid at registration time.
3. Introduce handling of some potential weird edge cases around the
realm_uuid and RemoteRealm objects in get_remote_realm_helper.
This default setup will be more realistic, matching the ordinary
conditions for a modern server.
Especially needed as we add bouncer code that will expect to have
RemoteRealm entries for realm_uuid values for which it receives
requests.
The recent #27818 naïvely added unique indexes, despite there being a
large number of existing violations. This makes the migration
impossible to deploy.
Update the migration to de-duplicate rows, dropping all but the
first-by-id of each unique set. This is equivalent to what
dd954749be does with `ignore_conflicts`. We update the migration,
rather than making a new one, as any server which has somehow
successfully applied the migration apparently did not need to
de-duplicate anything.
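A sketch of the kind of de-duplication step described here; the app label, model, and the columns of the unique set are illustrative:
```python
from django.db import migrations
from django.db.models import Min


def delete_duplicate_rows(apps, schema_editor) -> None:
    RemoteRealmCount = apps.get_model("zilencer", "RemoteRealmCount")
    # For each value of the unique set (columns are illustrative), keep only
    # the first-by-id row and drop the rest.
    keep_ids = (
        RemoteRealmCount.objects.values("server_id", "remote_id")
        .annotate(keep_id=Min("id"))
        .values_list("keep_id", flat=True)
    )
    RemoteRealmCount.objects.exclude(id__in=list(keep_ids)).delete()


class Migration(migrations.Migration):
    dependencies = [("zilencer", "0001_initial")]  # placeholder dependency

    operations = [migrations.RunPython(delete_duplicate_rows, elidable=True)]
```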
Moves and generalizes `switch_realm_from_standard_to_plus_plan`
in stripe.py to be a more general function for changing a
CustomerPlan to a new and valid tier, `do_change_plan_to_new_tier`.
Adds a helper function with the previous function name to be used
for the support view and management command for changing a realm
from the Standard plan tier to the Plus plan tier.
This makes it possible for a self-hosted realm administrator to
directly access a logged-in page on the push notifications bouncer
service, enabling billing, support contacts, and other administrative
tasks for enterprise customers to be managed without manual setup.
This may happen if there are multiple servers with the same UUID
submitting data (e.g. if they were cloned after initial creation), or
if there is one server, but `./manage.py clear_analytics_tables` was
used to truncate the analytics tables.
In the case of `clear_analytics_tables`, the data submitted likely has
identical historical values with new remote `id` values; preserving
the originally-submitted contemporaneous data is the best option. For
the case of submissions from multiple servers, there is no completely
sensible outcome, so the best we can do is detect the case and move
on.
Since we have a lock on the RemoteZulipServer, we know that no other
inserts are happening, so counting before and after will return the
true number of rows inserted (which `bulk_create` cannot do in the
face of `ignore_conflicts`[^1]). We compare this to the expected
number of new inserted rows to detect dropped duplicates.
[^1]: See https://code.djangoproject.com/ticket/30138.
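A sketch of the counting approach described above, assuming the caller already holds the row lock on the RemoteZulipServer:
```python
from typing import List, Type

from django.db import models


def bulk_insert_counting_duplicates(
    model: Type[models.Model], rows: List[models.Model]
) -> int:
    # Valid only while holding a lock on the RemoteZulipServer row, so no
    # concurrent inserts can change the table's count underneath us.
    before = model._default_manager.count()
    model._default_manager.bulk_create(rows, ignore_conflicts=True)
    inserted = model._default_manager.count() - before
    # Rows silently dropped by ON CONFLICT ... DO NOTHING.
    return len(rows) - inserted
```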
This does not ensure that we do not mix data from multiple servers
sharing a UUID -- if one has more `RemoteRealmCount` rows,
and the other has more `RemoteInstallationCount` rows, the end result
will still be some rows from each server, across the two tables.
It does ensure that we will not alternate rows between two servers
if both requests are processed at the same time.
It also causes submissions to be all-or-nothing in the event of
integrity errors. This is not necessarily beneficial, as forward
progress is generally useful -- but the integrity errors are resolved
in the subsequent commit.
This applies f299f31340 but for the push bouncer receiving side.
This is particularly important as we start relying on the unique
constraints, via `ON CONFLICT ... IGNORE`, in subsequent commits.
Fixes: #12362.
Add the new model for recording basic information about Realms on remote
servers, to go with the other analytics data. Also adds necessary changes
to the bouncer endpoint and the send_analytics_to_push_bouncer()
function to submit such Realm information.
This calculates the largest number of messages sent within a month over
the last 3 months. The query is targeted at the specific use case in
this function - finding the count for a specific server. For
calculating this in bulk for a large number of remote servers, an
adapted bulk query would be needed - rather than running this one in a
loop, which would likely be very inefficient.
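A hedged sketch of such a per-server query; the source table, property name, and the exact 3-month window are assumptions:
```python
from datetime import timedelta

from django.db.models import Sum
from django.db.models.functions import TruncMonth
from django.utils.timezone import now

from zilencer.models import RemoteInstallationCount, RemoteZulipServer  # assumed import path


def get_max_monthly_message_count(server: RemoteZulipServer) -> int:
    # Sum the daily message counts per calendar month and take the maximum.
    monthly_totals = (
        RemoteInstallationCount.objects.filter(
            server=server,
            property="messages_sent:message_type:day",  # assumed property name
            end_time__gte=now() - timedelta(days=93),
        )
        .annotate(month=TruncMonth("end_time"))
        .values("month")
        .annotate(total=Sum("value"))
        .values_list("total", flat=True)
    )
    return max(monthly_totals, default=0)
```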
This parameter appeared here on the function definition,
but because it lacked a `REQ` call it didn't actually connect
to any parameter passed in the HTTP request.
It doesn't make any sense on this endpoint anyway -- presumably
it was copy-pasted from its "register" counterpart -- so just cut it.
We'll need this information in order to properly direct APNs
notifications. Happily, the Zulip server always sends it when
registering an APNs token; and it appears it always has done so
since the commit:
cddee49e7 Add support infrastructure for push notification bouncer service.
back in 2016. So there's no compatibility issue from requiring it.
This missing `REQ` call has meant we just drop this parameter:
even though the remote Zulip server passes it (for all APNs tokens),
we never notice and never store it. Fix that.
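For illustration, this is roughly what connecting such a parameter with `REQ` looks like; the view and parameter names here are stand-ins, not the actual endpoint:
```python
from typing import Optional

from django.http import HttpRequest, HttpResponse

from zerver.lib.request import REQ, has_request_variables


@has_request_variables
def register_remote_push_device(
    request: HttpRequest,
    server,
    token: str = REQ(),
    # Without the REQ() call, this argument would never be populated from the
    # HTTP request, silently dropping whatever the remote Zulip server sent.
    ios_app_id: Optional[str] = REQ(default=None),
) -> HttpResponse:
    ...
```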
So that all child classes of BillingSession generate the same data
structure for customers that are created in Stripe, revise
`get_data_for_stripe_customer` to return a specific dataclass:
StripeCustomerData.
So that `update_or_create_stripe_customer` can work for Customer
objects with either a realm or remote_server, we create an abstract
base class, BillingSession, and implement a child class for the
current implementation of Customer objects with a realm.
Refactoring `update_or_create_stripe_customer` also moves
`create_stripe_customer` and `replace_payment_method` to the
BillingSession class.
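A hedged sketch of the resulting structure; the dataclass fields and method bodies are approximations of what the refactor describes:
```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any, Dict

from zerver.models import Realm  # assumed import path


@dataclass
class StripeCustomerData:
    description: str
    email: str
    metadata: Dict[str, Any] = field(default_factory=dict)


class BillingSession(ABC):
    @abstractmethod
    def get_data_for_stripe_customer(self) -> StripeCustomerData:
        ...

    def update_or_create_stripe_customer(self) -> None:
        data = self.get_data_for_stripe_customer()
        ...  # shared logic: create or update the Stripe customer from `data`


class RealmBillingSession(BillingSession):
    def __init__(self, realm: Realm) -> None:
        self.realm = realm

    def get_data_for_stripe_customer(self) -> StripeCustomerData:
        # Every child class returns the same dataclass, so the shared code in
        # BillingSession stays generic across realm/remote_server customers.
        return StripeCustomerData(
            description=f"{self.realm.string_id} (realm {self.realm.id})",
            email="billing@example.com",  # placeholder contact
            metadata={"realm_id": self.realm.id},
        )
```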
Earlier, the 'automatically_follow_topics_policy' and
'automatically_unmute_topics_in_muted_streams_policy' were set
to 'NEVER' for both the test and dev databases.
This commit fixes the incorrect behavior.
Now, we set it to 'NEVER' only for the test database.
We need the actual default value in our dev database.
We explicitly set it to NEVER for the test database because the default
value skews other tests by generating extra events and db queries. We
have separate tests with different values to test the intended behavior
related to these settings.
The bug was introduced in 58568a6.
Add an optional `automatic_new_visibility_policy` enum field
in the success response to indicate the new visibility policy
value due to the `automatically_follow_topics_policy` and
`automatically_unmute_topics_in_muted_streams_policy` user settings
during the send message action.
Only present if there is a change in the visibility policy.
This commit adds two user settings, named
* `automatically_follow_topics_policy`
* `automatically_unmute_topics_in_muted_streams_policy`
The settings control the user's preference on which topics they
will automatically 'follow' or 'unmute in muted streams'.
The policies offer four options:
1. Topics I participate in
2. Topics I send a message to
3. Topics I start
4. Never (default)
There is no support for configuring the settings through the UI yet.
There is no reason to have an index on just `realm_id` or `remote_id`,
as those values mean nothing outside of the scope of a specific
`server_id`. Remove those never-used single-column indexes from the
two tables that have them.
By contrast, the pair of `server_id` and `remote_id` is quite useful
and specific -- it is a unique pair, and every POST of statistics from
a remote host requires looking up the highest `remote_id` for a given
`server_id`, which (without this index) is otherwise a quite large
scan.
Add a unique constraint, which (in PostgreSQL) is implemented as a
unique index.
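A sketch of what the resulting schema change could look like in a Django migration; the model name, field type, and the dependency are placeholders:
```python
from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [("zilencer", "0001_initial")]  # placeholder dependency

    operations = [
        # Drop the never-used single-column index on remote_id.
        migrations.AlterField(
            model_name="remoterealmcount",
            name="remote_id",
            field=models.IntegerField(null=True, db_index=False),
        ),
        # Add the unique (server, remote_id) pair; PostgreSQL implements this
        # as a unique index, which also serves the "highest remote_id for a
        # given server_id" lookup done on every statistics POST.
        migrations.AddConstraint(
            model_name="remoterealmcount",
            constraint=models.UniqueConstraint(
                fields=["server", "remote_id"], name="unique_server_remote_id"
            ),
        ),
    ]
```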
_default_manager is the same as objects on most of our models. But
when a model class is stored in a variable, the type system doesn’t
know which model the variable is referring to, so it can’t know that
objects even exists (Django doesn’t add it if the user added a custom
manager of a different name). django-stubs used to incorrectly assume
it exists unconditionally, but it no longer does.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This migration applies under the assumption that extra_data_json has
been populated for all existing and coming audit log entries.
- This removes the manual conversions back and forth for extra_data
throughout the codebase including the orjson.loads(), orjson.dumps(),
and str() calls.
- The custom handler used for converting Decimal is removed since
DjangoJSONEncoder handles that for extra_data.
- We remove None-checks for extra_data because it is now no longer
nullable.
- Meanwhile, we want the bouncer to support processing RealmAuditLog entries for
remote servers before and after the JSONField migration on extra_data.
- Since extra_data should now always be a dict for newer remote
servers, which have been migrated, the test cases are updated to create
RealmAuditLog objects by passing a dict for extra_data before
sending over the analytics data. Note that while JSONField allows for
non-dict values, a proper remote server always passes a dict for
extra_data.
- We still test out the legacy extra_data format because not all
remote servers have migrated to use JSONField extra_data.
This verifies that support for extra_data being a string or None has not
been dropped.
Co-authored-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
The curl examples of reordering linkifiers require there to be some
linkifiers in the database to be reordered. This adjusts some test cases
so they do not assume that there is no linkifier in the test db.
This commit updates the select_related calls in queries to
get UserProfile objects in populate_db code to pass "realm"
as an argument to the select_related call.
Also, note that "realm" is the only non-null foreign key field
in the UserProfile object, so select_related() was only fetching the
realm object previously as well. But we should still pass "realm"
as an argument in the select_related call so that we can make sure that
only the required fields are selected in case we add more foreign
keys to UserProfile in the future.
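For example (a sketch, assuming the usual zerver.models import path):
```python
from zerver.models import UserProfile


def get_user_with_realm(user_id: int) -> UserProfile:
    # select_related("realm") fetches only the realm join; a bare
    # select_related() would also silently pull in any foreign keys added to
    # UserProfile in the future.
    return UserProfile.objects.select_related("realm").get(id=user_id)
```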
Translators benefit from the extra information in the field names, and
need the reordering freedom that isn’t available with multiple
positional fields.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The previous function was poorly named, asked for a
Realm object when realm_id sufficed, and returned a
tuple of strings that had different semantics.
I also avoid calling it multiple times in a couple
places, although it was probably rarely the case that
both invocations actually happened if upstream
validations were working.
Note that there is a TypedDict called EmojiInfo, so I
chose EmojiData here. Perhaps a better name would be
TinyEmojiData or something.
I also simplify the reaction tests with a verify
helper.
This migration is reasonably complex because of various anomalies in existing
data.
Note that there are cases where extra_data does not contain proper
JSON - for example, it may use single quotes. Thus we need to use
"ast.literal_eval" to cover those.
There is also a special case for "event_type == USER_FULL_NAME_CHANGED",
where extra_data is a plain str. This event_type is only used for
RealmAuditLog, so the zilencer migration script does not need to handle
it.
The migration does not handle "event_type == REALM_DISCOUNT_CHANGED"
because ast.literal_eval only allows Python literals. We expect the admin
to manually populate extra_data_json with the JSONified extra_data
beforehand.
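A hedged sketch of the per-row conversion this backfill performs; the constant value and dictionary key below are placeholders:
```python
import ast
from typing import Any, Dict

import orjson

USER_FULL_NAME_CHANGED = 124  # placeholder for RealmAuditLog.USER_FULL_NAME_CHANGED


def to_extra_data_json(event_type: int, extra_data: str) -> Dict[str, Any]:
    if event_type == USER_FULL_NAME_CHANGED:
        # extra_data here is just the new name, not a serialized dict.
        return {"new_value": extra_data}  # illustrative key
    try:
        return orjson.loads(extra_data)
    except orjson.JSONDecodeError:
        # Legacy rows were written with repr()-style single quotes, which are
        # not valid JSON but are valid Python literals.
        return ast.literal_eval(extra_data)
```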
This chunks the backfilling migration to reduce potential block time.
The migration for zilencer is mostly similar to the one for zerver, except that
the backfill helper is added in a wrapper and unrelated events are
removed.
**Logging and error recovery**
We print out a warning when the extra_data_json field of an entry
would have been overwritten by a value inconsistent with what we derived
from extra_data. Usually this only happens when the extra_data was
corrupted before this migration. This prevents data loss by backing up
possibly corrupted data in extra_data_json with the keys
"inconsistent_old_extra_data" and "inconsistent_old_extra_data_json".
More roundtrips to the database are needed for inconsistent data, which are
expected to be infrequent.
This also outputs messages when there are audit log entries with decimals,
indicating that such entries are not backfilled. Do note that audit log
entries with decimals are not populated with "inconsistent_old_extra_data_*"
in the JSONField, because they are not overwritten.
Audit log entries whose "extra_data_json" is marked as inconsistent are
skipped in the migration: when we have discovered anomalies in a
previous run, there is no need to overwrite them again and nest the
extra keys we added to them.
**Testing**
We create a migration test case utilizing the property of bulk_create
that it doesn't call our modified save method.
We extend ZulipTestCase to support verifying console output at the test
case level. The implementation is crude but the use case should be rare
enough that we don't need it to be too elaborate.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Note that we use the DjangoJSONEncoder so that we have built-in support
for serializing Decimal and datetime.
During this intermediate state, the migration that creates
extra_data_json field has been run. We prepare for running the backfilling
migration that populates extra_data_json from extra_data.
This change implements double-write, which is important to keep the
state of extra data consistent. For most extra_data usage, this is
handled by the overridden `save` method on `AbstractRealmAuditLog`, where
we generate extra_data_json using either orjson.loads or
ast.literal_eval.
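A simplified sketch of that double-write in the overridden save; the real model has many more fields, and the USER_FULL_NAME_CHANGED special case and error handling are omitted:
```python
import ast

import orjson
from django.core.serializers.json import DjangoJSONEncoder
from django.db import models


class AbstractRealmAuditLog(models.Model):
    extra_data = models.TextField(null=True)
    # DjangoJSONEncoder gives built-in handling of Decimal and datetime.
    extra_data_json = models.JSONField(default=dict, encoder=DjangoJSONEncoder)

    class Meta:
        abstract = True

    def save(self, *args, **kwargs) -> None:
        # Double-write: whatever lands in the legacy extra_data column is also
        # written to extra_data_json, so new rows never need backfilling.
        if self.extra_data is not None:
            try:
                self.extra_data_json = orjson.loads(self.extra_data)
            except orjson.JSONDecodeError:
                self.extra_data_json = ast.literal_eval(self.extra_data)
        super().save(*args, **kwargs)
```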
While backfilling ensures that old realm audit log entries have
extra_data_json populated, double-write ensures that any new entries
generated will also have extra_data_json set, so that we can then safely
rename extra_data_json to extra_data while ensuring the non-nullable
invariant.
For completeness, we additionally set RealmAuditLog.NEW_VALUE for
the USER_FULL_NAME_CHANGED event. This cannot be handled with the
overridden `save`.
This addresses: https://github.com/zulip/zulip/pull/23116#discussion_r1040277795
Note that extra_data_json at this point is not used yet. So the test
cases do not need to switch to testing extra_data_json. This is later
done after we rename extra_data_json to extra_data.
Double-write for the remote server audit logs is special, because we only
get the dumped bytes from an external source. Luckily, none of the
payload carries extra_data that is not generated using orjson.dumps for
audit logs of event types in SYNC_BILLING_EVENTS. This can be verified
by looking at:
`git grep -A 6 -E "event_type=.*(USER_CREATED|USER_ACTIVATED|USER_DEACTIVATED|USER_REACTIVATED|USER_ROLE_CHANGED|REALM_DEACTIVATED|REALM_REACTIVATED)"`
Therefore, we just need to populate extra_data_json by doing an
orjson.loads call after a None-check.
Co-authored-by: Zixuan James Li <p359101898@gmail.com>
This adds support for accepting extra_data as a dict from remote
servers' RealmAuditLog entries, so that it is forward-compatible with
servers that have migrated to use JSONField for RealmAuditLog. This
prepares us for migrating zilencer's audit log models to use JSONField
for extra_data.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Part of splitting creating and editing scheduled messages.
Should be merged with final commit in series. Breaks tests.
Removes `scheduled_message_id` parameter from the create scheduled
message path.
Prep commit for splitting create/edit endpoint for scheduled
messages.
Because `test-api` runs the tests in alphabetical order based on
the `operationId`, we need two scheduled messages in the test database.
The first for the curl example delete (delete-scheduled-message) and
the second for the curl example update (update-scheduled-message).
Prep commit for adding the scheduled-message endpoints to the API
documentation.
Adds a scheduled message for Iago in the test database so that it
can be deleted in the delete cURL example in the api-test suite.
This implements the core of the described rewrite of the backend data
model for UserPresence to one that supports much more efficient queries
and is more correct around the handling of multiple clients. The main
loss of functionality is that we no longer track
which Client sent presence data (so we will no longer be able to say
using UserPresence "the user was last online on their desktop 15
minutes ago, but was online with their phone 3 minutes ago"). If we
consider that information important for the occasional investigation
query, we can construct that answer via UserActivity
already. It's not worth making Presence much more expensive/complex
to support it.
For slim_presence clients, this sends the same data format we sent
before, albeit with less complexity involved in constructing it. Note
that we at present will always send both last_active_time and
last_connected_time; we may revisit that in the future.
This commit doesn't include the finalizing migration, which drops the
UserPresenceOld table.
The way to deploy is to start the backfill migration with the server
down and then start the server *without* the user_presence queue worker,
to let the migration finish without having new data interfering with it.
Once the migration is done, the queue worker can be started, letting
the presence data catch up to the current state as the queue worker
goes over the queued-up events and updates the UserPresence table.
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
Servers that had upgraded from a Zulip server version that did not yet
support the user_uuid field to one that did could end up with some
mobile devices having two push notifications registrations, one with a
user_id and the other with a user_uuid.
Fix this issue by sending both user_id and user_uuid, and clearing the
duplicate registrations.
This commit updates the pattern for dealing with tuples
returned by the delete() query.
The '(num_deleted, ignored) = ModelName.objects.filter().delete()'
pattern is preferred due to better readability.
We avoid the pattern '(num_deleted, _)' because Django uses _
for translation, which may lead to future bugs.
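For example (the model used here is just illustrative):
```python
from zerver.models import UserPresence


def delete_presence_rows(user_profile) -> int:
    # Preferred: name both elements of the tuple returned by delete().  We
    # avoid "(num_deleted, _)" because Django conventionally binds "_" to the
    # translation function, which this could shadow and cause subtle bugs.
    (num_deleted, ignored) = UserPresence.objects.filter(
        user_profile=user_profile
    ).delete()
    return num_deleted
```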
To avoid people calling "create_user_group" instead of
"check_add_user_group", we rename it to make its purpose clearer.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Since this function creates a new user group into the database,
it is more appropriate to have it not as a generic "lib" function
but as an "action".
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
The Django convention is for __repr__ to include the type and __str__
to omit it. In fact its default __repr__ implementation for models
automatically adds a type prefix to __str__, which has resulted in the
type being duplicated:
>>> UserProfile.objects.first()
<UserProfile: <UserProfile: emailgateway@zulip.com <Realm: zulipinternal 1>>>
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The post-delete signal on AlertWord clears the realm cache; when it is
called repeatedly, this results in re-fetching the realm object O(n)
times, where n scales by number of users in the database.
Disconnect this cache-clearing signal before removing the AlertWord
entries, and reconnect it afterwards. This is not thread-safe, but
this section is single-threaded. It is also probably unnecessary to
re-connect the signal, as the rest of `./manage.py populate_db` does not
delete AlertWord objects, but cleanliness dictates doing the
re-connection.
This drops the time to repeatedly run:
python3 ./manage.py populate_db --num-messages=0 --extra-users=1000
...from 47 seconds to 36 seconds.
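A sketch of the disconnect/reconnect pattern described above; the receiver name is an assumption:
```python
from django.db.models.signals import post_delete

from zerver.models import AlertWord
from zerver.models import flush_realm_alert_words  # assumed name of the cache-clearing receiver


def delete_all_alert_words(realm) -> None:
    # Disconnect the cache-flushing receiver so the bulk delete doesn't
    # re-fetch the realm once per deleted row; reconnect it afterwards.
    post_delete.disconnect(flush_realm_alert_words, sender=AlertWord)
    try:
        AlertWord.objects.filter(realm=realm).delete()
    finally:
        post_delete.connect(flush_realm_alert_words, sender=AlertWord)
```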
This commit updates the code to use the user-level email_address_visibility
setting instead of the realm-level one to set or update the value of the
UserProfile.email field and to send the emails to clients.
Major changes are -
- UserProfile.email field is set while creating the user according to
RealmUserDefault.email_address_visibility.
- UserProfile.email field is updated according to change in the setting.
- 'email_address_visibility' is added to person objects in user add event
and in avatar change event.
- client_gravatar can be different for different users when computing
avatar_url for messages and user objects since email available to clients
is dependent on user-level setting.
- For bots, email_address_visibility is set to EVERYONE while creating
them irrespective of realm-default value.
- Test changes are basically setting user-level setting instead of realm
setting and modifying the checks accordingly.
Black 23 enforces some slightly more specific rules about empty line
counts and redundant parenthesis removal, but the result is still
compatible with Black 22.
(This does not actually upgrade our Python environment to Black 23
yet.)
Signed-off-by: Anders Kaseorg <anders@zulip.com>
These were useful as a transitional workaround to ignore type errors
that only show up with django-stubs, while avoiding errors about
unused type: ignore comments without django-stubs. Now that the
django-stubs transition is complete, switch to type: ignore comments
so that mypy will tell us if they become unnecessary. Many already
have.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit adds the OPTIONAL .realm attribute to Message
(and ArchivedMessage), with the server changes for making new Messages
have this set. Old Messages still have to be migrated to backfill this,
before it can be non-nullable.
Appropriate test changes to correctly set .realm for Messages the tests
manually create are included here as well.
Previously, we typed the model fields manually with explicit,
approximate type annotations. This was because of the lack of types
for Django.
django-stubs provides more specific types for all these fields that are
incompatible with our previous approximate annotations. So now we can
remove the inline type annotations and rely on the types defined in the
stubs. This allows mypy to infer the types of the model fields for us.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Small follow-up to d86e4ac34d.
get_ makes it sound like it doesn't have side-effects, when these are
actually much like the django ORM .get_or_create function.
These are used for creating huddles and private messages (and some
UserPresence objects). It'd be really weird, and potentially create some
Messages that break our assumptions, for this to end up involving users
in multiple realms.
I believe currently this hasn't been happening, because when
this line runs, there are only users in "zulip" realm and system bots in
"zulipinternal" - but the query has been excluding bots already.
Still, this query should be explicit about grabbing users from a single
realm. This will also be helpful for the work adding the denormalized
Message.realm field - so that the realm of Message objects that get
manually created in generate_and_send_messages can be simply set to
"zulip" with confidence that it's correct.
In the presence of **kwargs, this is required by the Concatenate type
expected by default_never_cache_responses.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
In 5c49e4ba06, we neglected to include
the CSRF and caching decorators required for all API views in the new
remote_server_dispatch function.
I'm not sure why our automated tests didn't catch this, but this made
the remote server API endpoints nonfunctional in a production
environment.
This change incorporates should_rate_limit into rate_limit_user and
rate_limit_request_by_ip. Note that a slight behavior change is made for
other callers of rate_limit_request_by_ip, as we now check whether the
client is eligible to be exempted from rate limiting, which was previously
only done as a part of zerver.lib.rate_limiter.rate_limit.
Now we mock zerver.lib.rate_limiter.RateLimitedUser instead of
zerver.decorator.rate_limit_user in
zerver.tests.test_decorators.RateLimitTestCase, because rate_limit_user
will always be called but rate limiting only happens if the should_rate_limit
check passes;
we can continue to mock zerver.lib.rate_limiter.rate_limit_ip, because the
decorated view functions call rate_limit_request_by_ip, which calls
rate_limit_ip when the should_rate_limit check passes.
We need to mock zerver.decorator.rate_limit_user for SkipRateLimitingTest
now because rate_limit has been removed. We don't need to mock
RateLimitedUser in this case because we are only verifying that
the skip_rate_limiting flag works.
To ensure coverage in add_logging_data, a new test case is added to use
a web_public_view (which decorates the view function with
add_logging_data) with a new flag to check_rate_limit_public_or_user_views.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This allows us to avoid importing from zilencer conditionally in
zerver.lib.rate_limiter, as we make rate limiting self-contained now.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This refactors the test case alongside, since normal views accessed by
remote server do not get rate limited by remote server anymore.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
`remote_server_path` allows us to get rid of all the `validate_entity`
calls in `zilencer.views` and remove all the `Union` type annotations
in the signatures of the authenticated view functions.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This allows us to separate the zilencer paths from other JSON paths,
with explicit type annotation expecting `RemoteZulipServer` as the
second parameter of the handler using
authenticated_remote_server_view.
The test case is also updated to remove a test for a situation that no
longer occurs, since we don't perform subdomain checks on
remote servers.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Adds an API endpoint for accessing read receipts for other users, as
well as a modal UI for displaying that information.
Enables the previously merged privacy settings UI for managing whether
a user makes read receipts data available to other users.
Documentation is pending, and we'll likely want to link to the
documentation with help_settings_link once it is complete.
Fixes #3618.
Co-authored-by: Tim Abbott <tabbott@zulip.com>
This commit modifies bulk_create_users to add the users to the
respective system groups. Due to this change, bots in the development
environment are now also added to system groups.
Tests are changed accordingly, as more UserGroupMembership objects
are created.