Now that it is trivial to rename a stream in the UI, and given
that the command has been broken for 3 years without anyone noticing,
it is no longer worth maintaining.
Fixes #22244.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
The 0.1 second delay was sometimes not long enough to guarantee we hit
the async response path, resulting in a nondeterministic coverage
failure.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Commit 6fd1a558b7 (#21469) introduced an
await point where get_events_backend calls fetch_events in order to
switch threads. This opened the possibility that, in the window
between the connect_handler call in fetch_events and the old location
of this assignment in get_events_backend, an event could arrive,
causing ClientDescriptor.add_event to crash on missing
handler._request. Fix this by assigning handler._request earlier.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Removes the ":" which have accidentally ended up in the "Get a link
to a specific topic" and "Get a link to a specific stream" headings.
Renames the "Via browser's address bar" tab to "Web" so that it
stays consistent with other help center articles.
Fixes part of #22147.
Since `HttpResponse` is an inaccurate representation of the
monkey-patched response object returned by the Django test client, we
replace it with `_MonkeyPatchedWSGIResponse` as `TestHttpResponse`.
This replaces `HttpResponse` in zerver/tests, analytics/tests, corporate/tests,
zerver/lib/test_classes.py, and zerver/lib/test_helpers.py with
`TestHttpResponse`. Several files in zerver/tests are excluded
from this substitution.
This commit is auto-generated by a script, with manual adjustments on certain
files squashed into it.
This is a part of the django-stubs refactorings.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We have now decided to not continue with the stream administrator
concept as we are changing the permissions model to be based on
user groups as per #19525. So, this commit updates the error message
to "Must be an organization administrator".
94457732c1 changed this from:
```py
event_name = payload.get("event_name", payload.get("object_kind")).tame(check_string)
```
...to:
```py
event_name = payload.get("event_name", payload["object_kind"]).tame(check_string)
```
This causes a failure when `event_name` exists but `object_kind` does
not, since the default value is evaluated eagerly.
Switch to an `if` statement to make the fallback logic clearer.
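A minimal sketch of the lazy fallback, assuming the payload object supports
dict-style membership tests (the actual fix may differ in details):
```py
if "event_name" in payload:
    event_name = payload["event_name"].tame(check_string)
else:
    # Only evaluated when "event_name" is missing, so a missing
    # "object_kind" no longer breaks the common case.
    event_name = payload["object_kind"].tame(check_string)
```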
This function is oblivious to the existence of ArchivedAttachment, which
is incorrect. A file can be removed if and only if it is not referenced
by any Messages or ArchivedMessages.
Using http://localhost:9991 is incorrect - e.g. messages sent with file
URLs constructed this way cause do_claim_attachments to be called with an
empty list in potential_path_ids.
realm.host should be used in all these places, like in the other tests
in the file.
Adds a 2.1 release changelog entry for adding support for user
and stream IDs in search/narrow options. Also, adds a Changes
note in the narrow parameter in the OpenAPI `get-messages`
endpoint definition.
Both link to the api documentation for constructing a narrow,
where the 2.1 release update is already mentioned.
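For example, a narrow built with IDs rather than names might look like this
(the values are illustrative):
```py
narrow = [
    {"operator": "stream", "operand": 42},    # stream ID instead of stream name
    {"operator": "sender", "operand": 1234},  # user ID instead of email
]
```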
Fixes #9474.
Use `SimpleSuccess` response schema for all endpoints that were
already returning a success (200) response without any data beyond
the `result` and `msg` fields, which are standard for all
endpoint responses.
Prep commit for adding `ignored_parameters_unsupported` to
`json_success` responses.
Add none-checks, rename variables (to avoid redefinition of
the same variable with different types error), add necessary
type annotations.
This is a part of #18777.
Signed-off-by: Zixuan James Li <359101898@qq.com>
This commit changes the error message from "Invalid stream id"
to "Invalid stream ID" for cases where invalid stream IDs are
passed to API endpoints to make it consistent with other similar
error messages.
This applies a commonly-used, though non-RFC, header which suppresses
auto-replies to the message. There is a small chance that this will
result in bad filters thinking the messages *from Zulip* are
themselves auto-replies, but this seems a small risk.
Fixes: #13193.
Adds Changes notes for feature level 58 where support was added
for stream messages for the `/set-typing-status` endpoint
parameters.
Updates formatting for references to the `type`
parameter in the descriptions of other endpoint parameters.
Improves readability of and updates links in the endpoint's main
description.
Adds a changelog 2.0 entry for adding support for `stream_id`
parameter to the `mute-topic` endpoint. Also, adds Changes note
to the endpoint parameter description, and reorders/clarifies
that at least one (and only one) stream parameter must be provided
by the client and that the `stream_id` parameter is preferred.
Fixes #11136.
Adds `create_web_public_stream_policy` to the `get-events` API
documentation for the `realm op:update` event.
Also, fixes changelog entries for feature levels 103 and 104,
which are related to the API documentation changes or fix an
error in references to the undocumented endpoint `PATCH /realm`.
We remove the call to get_occupied_streams that got occupied
streams before unsubscribing, because we already know which
streams can become vacant, i.e. the ones from which users are
being unsubscribed. We can directly use the list of streams
from which users are being unsubscribed and get the vacant streams
by checking which of them are not in the result of get_occupied_streams
called after unsubscribing the users.
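Roughly, the new computation looks like this sketch (names are illustrative,
not the actual variables in the codebase):
```py
def newly_vacant_streams(unsubscribed_streams, occupied_after_unsubscribing):
    # Streams the users were removed from that no longer appear in the
    # occupied-streams query are the ones that just became vacant.
    occupied = set(occupied_after_unsubscribing)
    return [stream for stream in unsubscribed_streams if stream not in occupied]
```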
This commit renames existing_subgroups variable to existing_direct_subgroup_ids
in add_subgroups_to_group_backend and remove_subgroups_from_group_backend functions
for better readability.
54b6a83412 fixed the typo introduced in 49ad188449, but that does
not clean up existing installs which had the file with the wrong name
already.
Remove the file with the typo'd name, so two jobs do not race, and fix
the typo in the comment.
Django caches some information on HttpRequest objects, including the
headers dict, under the assumption that requests won’t be reused.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Previously, we were marking messages of all the streams passed
to bulk_remove_subscriptions as read even if the user was not subscribed
to some of them, and those streams would ideally not have
any unread messages. This code was added in 766511e519.
This commit changes the code to only mark messages of actually
unsubscribed streams as read.
Conceptually, we're clearly intending to check whether the user we're
mutating is the last realm owner. The preexisting code was safe
because we've already checked that the target user is an owner, and
thus if we're the last owner, we're the target user.
This commit adds the backend support by extending the
/json/bots/{bot_id}/ URL to accept the role field as a
parameter. This was previously already possible via
`/json/users/{user_id}`, so this change just simplifies client
implementation.
Fixes a few small inconsistencies / mistakes in the OpenAPI
documentation related to error documentation. Does not change
the rendered API documentation, which is likely why these were
not noticed sooner.
In very large communities, computing page_params can be quite
expensive. Because we've moved the homepage for communities with web
public streams enabled to be the Zulip app, and it's common for
automation to frequently poll the homepage of a Zulip organization,
we'd like to keep those homepages cheap (as the login pages are).
We address this by prototyping something we may end up wanting to do
anyway -- having the web application do a `POST /register` API call in
order to fetch most page_params, and merging those with the mostly
webapp configuration page_params that we leave in the / response for
convenience.
This exact implementation is messy in a few ways:
* We rely on the assumption that ui_init.initialize_everything happens
before all code that needs to inspect the page_params properties we
are fetching via /register. This is likely mostly true, but nothing
in the implementation enforces it.
* The bundle of ~25 keys that are in page_params ideally would be
considered individually, with some moved to the /register API
response and perhaps others eliminated or namespaced inside a
webapp_settings object.
* It's weird to have the spectator network sequence differ from that
of logged-in users, and this is potentially a maintainability risk.
* We might be able to arrange that the initial `/` response be
cacheable, now that we're no longer embedding our metadata inside
it. We've made no effort to do that as of yet.
Despite those issues, this commit solves an immediate problem and will
give us helpful experience with a model closer to the one we'll want
in order to happily support a web client that can be run locally
against a production Zulip server's data.
Co-authored-by: Anders Kaseorg <anders@zulip.com>
This is necessary for the mobile/terminal clients to build spectator
support down the line. We'll also be using it for the web application,
in an upcoming commit.
Previously, we were masking the realm_description raw Markdown with
rendered Markdown, which was a type error.
When we switch to calling /register explicitly in a few commits, this
results in a bug, since the raw Markdown ends up taking priority.
Fix this by just using a different name for this different concept.
This error message is for a very precise situation -- the pattern not
having the desired format. We should say that, rather than a generic
"Malformed".
Currently a user can create multiple options with the same text/label in
the select/"list of options" custom profile field type.
Fix this issue by extending the validator to throw an error if there
are duplicate choices in the "list of options" custom profile
field.
Tweaked by tabbott to use a simpler check.
Fixes: #21880
cfcbf58cd1 rightly removed the use of `user_ids` in
`render_markdown`, which in turn makes it unnecessary in
`render_incoming_message`.
Remove the unnecessary parameter from `render_incoming_message`.
We leave the fetching of links outside of the lock, as they could take
seconds, which is an unreasonable amount of time to hold a lock on the
message row. This may result in unnecessary work, in the case that
the message was since edited, but the unnecessary work is preferable
to blocking other work on the message row for the duration.
This commit changes the code to always pass delivery_email
field in the user's own object in 'realm_users'.
This commit also fixes the events sent by notify_created_user.
In the "realm_user/add" event sent when creating the user,
the delivery_email field was set according to the access
for the created user itself as the created user was passed as
acting_user to format_user_row. But now since we have changed
the code to always allow the user themselves to have access
to the email, this bug was caught in tests and we fix the person
object in the event to have delivery_email field based on whether
the user receiving the event has access to email or not.
This commit adds code to copy the realm-level defaults of
settings while creating users through bulk_create_users.
We do not directly call 'copy_default_settings' as it
calls ".save()" but here we want to bulk_create the objects
for efficiency.
We also add code to set the realm-level default of enter_sends to
True for the Zulip dev server, as done in 754b547e8, and thus
we remove the enter_sends argument from create_user_profile as
it is of no use now.
Adds `want_advertise_in_communities_directory` to the realm model
to track organizations that give permission to be listed on such
a site / directory on zulip.com.
Adds a checkbox to the organization profile admin for
organizations to give permission to be advertised in the
Zulip communities directory.
Adds a help center article about the Zulip communities directory
and uses a shared intro documentation file to create sections in
the articles on creating an organization profile and moderating
open organizations.
Co-authored-by: Alya Abbott <alya@zulip.com>
We want to avoid logging this kind of potentially sensitive information.
Instead, it's more useful to log ids of the matching accounts on
different subdomains.
Previously, this command would reliably fail:
```
tools/test-backend --skip-provision-check --parallel=3
zerver.tests.test_email_log.EmailLogTest.test_forward_address_details
zerver.tests.test_email_log.EmailLogTest.test_generate_and_clear_email_log
zerver.tests.test_example.TestDevelopmentEmailsLog
```
and now it reliably succeeds. :-)
After hours of fiddling/googling/hair-tearing, I found that
mocking away Django Connection.send_messages() was the best approach:
- We're testing Zulip and not Django.
- Mocking at this lower level exercises more of our code.
- EmailLogBackEnd._do_send_messages() helper method added to simplify mocking.
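For instance, a mock along these lines (a sketch, not the exact test code;
the import path is an assumption):
```py
from unittest import mock

from zproject.email_backends import EmailLogBackEnd  # import path assumed

with mock.patch.object(EmailLogBackEnd, "_do_send_messages", return_value=1):
    ...  # exercise the email-sending code without touching a real SMTP server
```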
Fixes #21925.
According to the documentation: “Pika does not have any notion of
threading in the code. If you want to use Pika with threading, make
sure you have a Pika connection per thread, created in that thread. It
is not safe to share one Pika connection across threads, with one
exception: you may call the connection method add_callback_threadsafe
from another thread to schedule a callback within an active pika
connection.”
https://pika.readthedocs.io/en/stable/faq.html
This also means that synchronous Django code running in Tornado will
use its own synchronous SimpleQueueClient rather than sharing the
asynchronous TornadoQueueClient, which is unfortunate but necessary as
they’re about to be on different threads.
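As an illustration of the one cross-thread interaction Pika permits (a
generic sketch, not Zulip's actual queue code):
```py
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))

def run_on_connection_thread() -> None:
    # Executed later on the thread that owns the connection.
    connection.channel().basic_publish(
        exchange="", routing_key="test", body=b"hello"
    )

# The only call that is safe to make from another thread:
connection.add_callback_threadsafe(run_on_connection_thread)
```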
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We previously forked tornado.autoreload to work around a problem where
it would crash if you introduce a syntax error and not recover if you
fix it (https://github.com/tornadoweb/tornado/issues/2398).
A much more maintainable workaround for that issue, at least in
current Tornado, is to use tornado.autoreload as the main module.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This matches the metadata that we store in the database, and maintains
the S3 metadata invariant of always having a `user_profile_id`
in the metadata.
This does not fix existing imports, which may still have missing
`user_profile_id`s.
The changes in the last few commits changed the semantics of the
organization default language to no longer be the primary source of
information for a user's language when creating a new account.
Here, we change the settings UI and /help/ documentation to reflect
this.
This commit reads the browser locale during user registration, and
sets it as the user's default language if it is supported by Zulip.
Otherwise, the user's language is set to the realm's default language.
This commit adds the get_browser_language_code function,
which returns None if there is no Accept-Language
header in the request, or if the Accept-Language header contains
only unsupported languages or all languages (meaning the
header has a value of '*'). Otherwise it returns the
language with the highest weight/quality-value.
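A rough sketch of that behavior (the real implementation differs;
is_supported_language and SUPPORTED_LANGUAGES are hypothetical stand-ins
for Zulip's supported-locale check):
```py
from typing import Optional

from django.http import HttpRequest

SUPPORTED_LANGUAGES = {"en", "de", "ja"}  # illustrative subset

def is_supported_language(lang: str) -> bool:
    return lang.split("-")[0].lower() in SUPPORTED_LANGUAGES

def get_browser_language_code(request: HttpRequest) -> Optional[str]:
    accept = request.headers.get("Accept-Language")
    if accept is None:
        return None
    candidates = []
    for entry in accept.split(","):
        lang, _, quality = entry.strip().partition(";q=")
        weight = float(quality) if quality else 1.0
        if lang != "*" and is_supported_language(lang):
            candidates.append((weight, lang))
    if not candidates:
        return None
    # Highest weight/quality-value wins.
    return max(candidates)[1]
```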
The slack_incoming webhook previously used has_request_variables to
extract the payload from the HttpRequest object first, before trying to
access HttpRequest.body again in view.py. This caused an error
when a request was sent without a payload - it is forbidden to
read from the request data stream twice.
Instead of relying on has_request_variables, this PR extracts the
payload depending on the content type directly in view.py to avoid
reading the request data stream twice.
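A hedged sketch of the content-type dispatch described here (not the exact
webhook code; the form field name is an assumption):
```py
from typing import Any

import orjson
from django.http import HttpRequest

def extract_payload(request: HttpRequest) -> Any:
    # Read the request exactly once, based on the declared content type.
    if request.content_type == "application/json":
        return orjson.loads(request.body)
    # Form-encoded requests: assume the JSON arrives in a "payload" field.
    return orjson.loads(request.POST["payload"])
```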
Fixes #19056.
To provide a smoother experience of accessing a web-public stream,
we don't ask the user to log in unless they directly request a
`/login` URL.
Fixes #21690.
This refactored `get_mentioned_user_group_name` from
`zerver/lib/email_notifications.py` to
`zerver/lib/notification_data.py` just after
`get_user_group_mentions_data` to indicate the logical
similarity between them.
Signed-off-by: Zixuan James Li <359101898@qq.com>
This commit renames the get_user_group_direct_members function to
get_user_group_direct_member_ids, as it returns a list of IDs,
and to avoid its name looking parallel to get_recursive_group_members,
which returns a QuerySet.
The default of a Stream is to be public - having
history_public_to_subscribers default to False is inconsistent with
that. The defaults on the model should generally be consistent.
history_public_to_subscribers wasn't explicitly set when creating
streams via build_stream, thus relying on the model's default of False.
This led to public streams being created with that value set to False,
which doesn't make sense.
We can solve this by inferring the correct value based on invite_only in
the build_stream function itself - rather than needing to add a flag
argument to it.
This commit also includes a migration to fix public streams with the
wrong history_public_to_subscribers value.
Fixes #21784.
As a consequence:
• Bump minimum supported Python version to 3.8.
• Move Vagrant environment to Ubuntu 20.04, which has Python 3.8.
• Move CI frontend tests to Ubuntu 20.04.
• Move production build test to Ubuntu 20.04.
• Move 3.4 upgrade test to Ubuntu 20.04.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Adds a drop-down menu for updating the organization type in the
`organization_profile_admin` page. Implements front end for
this setting to work / update like other organization profile,
notification and permissions settings.
One special note about this dropdown is that the listed options
should change once an organization has successfully set a type
other than 'unspecified' in the database. To accomplish this
the initial settings overlay build checks the realm_org_type
value in the page_params to select the correct options list,
and when the dropdown value is reset, either for update events
or for discarding changes, the page_params value is again used
to check for whether the 'unspecified' value should be present
as an option in the dropdown menu.
Adds basic node test for the `server_events_dispatch`.
Also adds a new help center documentation article for this
organization setting that is linked to in the UI.
Fixes #21692.
`org_type` already exists as a field in the Realm model and is
used when organizations are created / updated in Zulip Cloud,
via the `/analytics/support` view.
Extends the `PATCH /realm` view to be able to update `org_type` as
other realm / organization settings are updated, but using the
special log / action that was created for the analytics view.
Adds a field to the `realm op: update` / `realm op: update_dict`
events, which also means an event is now sent when and if the
`org_type` is updated via the analytics view. This is similar
to how updates to an organization's `plan_type` trigger events.
Adds `realm_org_type` as a realm setting fetched from the
`POST /register` endpoint.
This commit adds a 'GET /user_groups/{user_group_id}/members'
endpoint to get the members of a user group. The "direct_member_only"
parameter can be passed as True to the endpoint to get only the
direct members of the user group and not the members of its
subgroups.
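A hypothetical usage sketch (server URL, group ID, and credentials are
placeholders):
```py
import requests

response = requests.get(
    "https://example.zulipchat.com/api/v1/user_groups/12/members",
    params={"direct_member_only": "true"},
    auth=("bot-email@example.com", "BOT_API_KEY"),
)
print(response.json())
```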
This commit adds 'GET /user_groups/{id}/members/{id}' endpoint to check
whether a user is member of a group.
This commit also adds a for_read parameter to access_user_group_by_id,
which if passed as True will provide access to read the user group even
if it is a system group or if a non-admin acting user is not part of the
group.
This commit adds an is_user_in_group function
which can be used to check whether a user
is part of a user group or not. It also
supports a recursive parameter for including
the members of all the subgroups as well.
This commit also adds 'subgroups' field to the user_group present
in the event sent on creating a user group. We do not allow passing
the subgroups while creating a user group as of this commit, but added
the field in the event object to pass tests.
This commit changes the invite API to accept invitation
expiration time in minutes since we are going to add a
custom option in further commits which would allow a user
to set expiration time in minutes, hours and weeks as well.
When `update_message` events were updated to have a consistent
format for both normal message updates/edits and special
rendering preview updates, the logic used in the tornado event
queue processor to identify the special events for sending
notifications no longer applied.
Updates that logic to use the `rendering_only` flag (if present)
that was added to the `update_message` event format to identify
if the event processor should potentially send notifications to
users.
For upgrade compatibility, if the `rendering_only` flag is not present,
the logic falls back to the previous event structure and checks for the
absence of the `user_id` property, which indicated the special rendering
preview updates.
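Sketch of the compatibility check, with field names taken from the event
format described above (not the literal processor code):
```py
from typing import Any, Dict

def is_rendering_only_update(event: Dict[str, Any]) -> bool:
    # Newer servers set this flag explicitly on update_message events.
    if "rendering_only" in event:
        return bool(event["rendering_only"])
    # Older servers: rendering-only preview updates lacked a user_id field.
    return "user_id" not in event
```
Notifications would then only be considered when this returns False.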
Fixes #16022.
Added a setting to the bottom of Settings > Display settings > Theme section
to display the reacting users on a message when the number of reactions is
small.
This is a preparatory commit for #20980.
It doesn't make sense to run sync_ldap_user_data if user_profiles list
is empty. Otherwise this misleading exception gets raised:
```
raise Exception(
"LDAP sync would have deactivated all users. This is most likely due "
"to a misconfiguration of LDAP settings. Rolling back...\n"
"Use the --force option if the mass deactivation is intended."
)
```
With some work by tabbott to manage the type of user_profiles and
provide a special error message for the empty server case.
The relevant function is waiting to be merged in #21299 - but we have
already used it on Zulip Cloud, creating RealmAuditLog entries with the
number 107, and thus we should reserve it before another PR takes
it for another purpose, creating confusion in the logs.
Based on an audit, this closes out the last core instances in which
acting_user was not being passed explicitly when creating
RealmAuditLog instances.
There are some outstanding issues in the billing system, which we plan
to extract as a separate issue.
Fixes #14808.
The only purpose of this seems to be to not have to reset the cache;
fae59502ab added it without any explanation for why it is necessary.
Remove it, and explicitly flush the cache in the one place where it is
necessary.
This cache was added in da33b72848 to serve as a replacement for the
durable database cache, in development; the previous commit has
switched that to be the non-durable memcached backend.
The special-case for "in-memory" in development is mostly-unnecessary
in contrast to memcached -- `./tools/run-dev.py` flushes memcached on
every startup. This differs in behaviour slightly, in that if the
codepath is changed and `run-dev` restarts Django, the cache is not
cleared. This seems an unlikely occurrence, however, and the code
cleanup from its removal is worth it.
The choice to cache these in the database dates back to c93f1d4eda,
with the comment added in da33b72848 while working around the
durability of the "database" cache in local development.
The values were stored in a durable cache, as they needed to be
ensured to persist between when they were inserted in
`get_link_embed_data` and when they were used in
`render_incoming_message` via `link_embed_data_from_cache`.
However, database accesses are not fast compared to memcached, and we
wish to avoid the overhead of the database connection from the
`embed_links` worker. Specifically, making the connection may not be
thread-safe -- and in low-memory (and Docker) configurations, all
workers run as separate threads in a single process. This can lead to
stalled database connections in `embed_links` workers, and failed
previews.
Since the previous commit made the durability of the cache no longer
necessary, this will have minimal effect; at worst, posting the same
URL twice, on either side of an upgrade, will result in two preview
fetches of it.
The `get_link_embed_data` / `link_embed_data_from_cache` pair as
introduced in c93f1d4eda uses the cache
as a temporary store inside of the `embed_links` worker; this means
that it must be durable storage, or the worker will stall and re-fetch
the same links to preview them.
Switch to plumbing through the fetched URL embed data as a parameter
to the Markdown evaluation which uses them, rather than using the
cache as an intermediary. This frees up the cache to be merely a
non-durable cache.
As a side-effect, this removes get_cache_with_key, and
link_embed_data_from_cache which was its only callsite.
76deb30312 changed this to not just be the URL, but rather a
prefixed hash of the URL, but failed to update this location which
wrote to it. This meant that this pre-population step was writing to
the wrong keys in the durable cache, and thus ineffective.
Then, da33b72848 switched the cache to be in-memory, making this
write to the wrong keys in an in-process memory store. There is no
way to pre-fill this sort of cache, except at server start-up.
Finally, and most fundamentally, 8c0c9ca7a4 then disabled
`inline_url_embed_preview` by default, making the code entirely moot.
Remove the triply-unnecessary code.
`cachify` is essentially caching the return value of a function using only
the non-keyword-only arguments as the key.
The use case of the function in the backend can be sufficiently covered by
`functools.lru_cache` as an unbounded cache. There is no significant difference
apart from `cachify` overlooking keyword-only arguments, and
`functools.lru_cache` being conveniently typed.
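For illustration, the replacement pattern looks like this (a generic example,
not code from the diff):
```py
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache, keyed on the call arguments
def fibonacci(n: int) -> int:
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)
```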
Signed-off-by: Zixuan James Li <359101898@qq.com>
This demonstrates a way to resolve the long-standing issue
of typing higher-order identity functions without using
`cast` and in a type-safe manner for decorators in `cache.py`.
Signed-off-by: Zixuan James Li <359101898@qq.com>
do_deactivate_user can't be run in an atomic block due to concerns
around revoking sessions in a transaction. See
62ba8e455d for more details.
Without the change in this commit, the process of deactivating a user
via SCIM is broken.
Adds and updates changelog documentation for
`POST /users/me/status` feature level 86 addition
of new emoji parameters.
Makes description text for emoji `reaction_type` consistent
throughout API documentation and also adds better description
of the `unicode_emoji` namespace.
Redirects emoji field links in `user_status` event to go to
the parameters in `/update-status` endpoint, which was not a
documented endpoint when the event documentation was created.
It's natural that someone might try a wrong password 5 times, and then
go through a successful password reset; forcing such users to wait
half an hour before typing in the password they just changed the
account to seems unnecessarily punitive.
Clear the rate-limit upon successful password change.
Removes `token_kind` parameter being passed to
`remove_apns_device_token` and `remove_android_reg_id` code
paths / endpoints. Possibly missed in a refactor of this
function as the tests for adding these tokens do not pass
a `token_kind` parameter.
Removes `zulip_org_id` and `zulip_org_key` from code testing
`deactivate_remote_server`. These parameters are passed when
a remote server is added, so possibly a copy and paste error
when these tests were written / last refactored.
`update_realm_custom_profile_field` does not take `field_type`
as a parameter, so this removes it from any related tests.
Possibly these test parameters were missed in a refactor of this
endpoint / code.
`service_interface` is not a parameter of `add_bot_backend`, but
`interface_type` is, and that has the same default value as what
was being provided by the test, so updated for the parameter name
change, which was possibly missed in a previous code refactor.
`update_default_stream_group_info` was being passed `op` and
`group_name` in various tests, which are not implemented as
parameters for that endpoint / code path. So this removes those
from the existing tests. This is not a documented API endpoint,
so perhaps these were just overlooked when these tests were
written / last refactored.
If an API request specified a `client` parameter, we were
already prioritizing that value over parsing the UserAgent.
In order to have these parameters logged in the `RequestNotes`
as processed parameters instead of ignored parameters, we add
the `has_request_variables` decorator to `parse_client` and
then process the potential `client` parameter through the REQ
framework.
Co-authored-by: Tim Abbott <tabbott@zulip.com>
We remove the StackOverflow link because it is now so dated as to be
irrelevant -- it does not use `self.ident`, and cargo-cults the return
value of PyThreadState_SetAsyncExc.
As noted in the docstring for this function, the timeout is
best-effort only -- if the thread is blocked in a syscall, it will not
service the exception until it returns. It can also choose to catch
and ignore the TimeoutExpired; in either case it will still be running
even after the `timeout()` function returns.
Raising a bare TimeoutExpired is still somewhat accurate, but obscures
that the backend thread may still be running along merrily. Notice
such cases, and log a warning about them.
Having just thrown an exception into the thread, it is often useful to
know _what_ was the slow code that we interrupted. Raising a bare
TimeoutExpired here obscures that information, as any `exc_info` will
end there.
Examine the thread for any exception information, and use that to
re-raise. This exception information is not guaranteed to exist -- if
the thread didn't respond to the exception in time, or caught it, for
instance.
The quote in question originates in python/cpython@b8b6d0c2c6, when
the code was added. However, the code stopped having that comment,
and was no longer able to return anything but 1 or 0, starting in
python/cpython@4643c2fda1 -- Python 2.5.
Remove the block.
Reformats two events (`reaction op: add` and `reaction op: remove`)
to follow the general format of events in the OpenAPI that are
returned by the `/get-events` endpoint.
Removes unneeded reference to `EmojiBase` schema in `user_status`
return value for the `/register-queue` endpoint. Also, clarifies
the text about the `user_status` object and fields being returned.
Adds a non-endpoint specific page to the API documentation about
organization-level roles and permissions for users in order to
highlight important and useful information for clients and API
users.
Also, adds links to new documentation page in related areas
of the API documentation.
We were not setting the `historical` flag correctly for
messages fetched via `json_fetch_raw_message` when the user didn't
have any UserMessage row.
Extended the relevant tests to also check the fetched message flags.
Instead of using request.POST to get any potential `sender`
parameters for `send_message_backend`, moves it to the REQ
framework parameters as `req_sender`.
Also, updates `create_mirrored_message_users` to take specific
parameters instead of an HTTP request parameter, which then
accessed parameters via request.POST. And updates existing tests
for those changes.
4815f6e28b tried to de-duplicate bot
email addresses, but instead caused duplicates to crash:
```
Traceback (most recent call last):
File "./manage.py", line 157, in <module>
execute_from_command_line(sys.argv)
File "./manage.py", line 122, in execute_from_command_line
utility.execute()
File "/srv/zulip-venv-cache/56ac6adf406011a100282dd526d03537be84d23e/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/srv/zulip-venv-cache/56ac6adf406011a100282dd526d03537be84d23e/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/srv/zulip-venv-cache/56ac6adf406011a100282dd526d03537be84d23e/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/home/zulip/deployments/2022-03-16-22-25-42/zerver/management/commands/convert_slack_data.py", line 59, in handle
do_convert_data(path, output_dir, token, threads=num_threads)
File "/home/zulip/deployments/2022-03-16-22-25-42/zerver/data_import/slack.py", line 1320, in do_convert_data
) = slack_workspace_to_realm(
File "/home/zulip/deployments/2022-03-16-22-25-42/zerver/data_import/slack.py", line 141, in slack_workspace_to_realm
) = users_to_zerver_userprofile(slack_data_dir, user_list, realm_id, int(NOW), domain_name)
File "/home/zulip/deployments/2022-03-16-22-25-42/zerver/data_import/slack.py", line 248, in users_to_zerver_userprofile
email = get_user_email(user, domain_name)
File "/home/zulip/deployments/2022-03-16-22-25-42/zerver/data_import/slack.py", line 406, in get_user_email
return SlackBotEmail.get_email(user["profile"], domain_name)
File "/home/zulip/deployments/2022-03-16-22-25-42/zerver/data_import/slack.py", line 85, in get_email
email_prefix += cls.duplicate_email_count[email]
TypeError: can only concatenate str (not "int") to str
```
Fix the stringification, make it case-insensitive, append with a dash
for readability, and add tests for all of the above.
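A hedged sketch of the resulting de-duplication behavior (the real
SlackBotEmail implementation differs in its details):
```py
from collections import defaultdict
from typing import Dict

duplicate_email_count: Dict[str, int] = defaultdict(int)

def deduplicate_bot_email(email: str) -> str:
    email = email.lower()  # case-insensitive matching
    count = duplicate_email_count[email]
    duplicate_email_count[email] += 1
    if count == 0:
        return email
    prefix, _, domain = email.partition("@")
    return f"{prefix}-{count}@{domain}"  # append a dash and a counter
```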