This hardcoded documentation of which values are possible was destined
to end up inaccurate and out of date; meanwhile, we already have a part
of the API that provides this data in machine-readable format.
This commit updates the 'notify_reaction_update' function to use
the generic 'event_recipient_ids_for_action_on_messages' function.
This adds hardening so that if the invariant "no UserMessage
row corresponding to a message exists once the user loses access to the
message" is violated due to some bug, the user impact is minimal.
Earlier, submessages were not live-updated for users who joined
the stream after the message was sent.
This commit fixes that bug.
Also, now we use 'event_recipient_ids_for_action_on_messages'.
This adds hardening so that if the invariant "no UserMessage
row corresponding to a message exists once the user loses access to the
message" is violated due to some bug, the user impact is minimal.
Earlier, we were sending the 'delete_message' event to all active
subscribers of the stream.
We shouldn't send the event to users who don't have access
to the deleted message in a private stream with protected history.
This commit fixes that bug.
Also, now we use 'event_recipient_ids_for_action_on_messages'.
This adds hardening so that if the invariant "no UserMessage
row corresponding to a message exists once the user loses access to the
message" is violated due to some bug, the user impact is minimal.
Added a `result_` prefix to differentiate it from the upcoming
`message_ids` parameter to the API request. Also, this is the final
set of `message_ids` that we will fetch the messages for, so a
`result_` prefix makes sense here.
Earlier, we used to store the key data related to realm exports
in RealmAuditLog. This commit adds a separate RealmExport table to
store that data.
It includes the code to migrate the relevant existing data from
RealmAuditLog to RealmExport.
Fixes part of #31201.
This commit updates the code to store the realm export stats in
JSON format instead of plain text.
This will help in storing the stats as a JSONField in the RealmExport
table.
This prevents a deadlock between the thumbnailing worker and message
sending, as follows:
1. A user uploads an image, making Attachment and ImageAttachment
rows, as well as enqueuing a job in the thumbnailing queue.
2. Message sending starts a transaction, creates the Message row,
and calls `do_claim_attachments`, which edits the Attachment row
of the upload (implicitly locking it).
3. The thumbnailing worker starts a transaction, locks the
ImageAttachment row for its image, thumbnails it, and then
attempts to `select_for_update()` the message objects (joined to
the Attachments table) to find the ones which link to the
attachment in question. This query blocks, since "a locking
clause without a table list affects all tables used in the
statement"[^1] and the message-send request already has a write
lock on the Attachments row in question.
4. The message-send request attempts to re-fetch the ImageAttachment
row inside the transaction, which tries to pull a lock on it.
5. Deadlock, because the message-send request has the Attachment
lock, and waits for the ImageAttachment lock; the thumbnailing
worker has the ImageAttachment lock, and waits for the Attachment
lock.
We break this deadlock by limiting the
`update_message_rendered_content` `select_for_update` to only take
the lock on the Message table, and not also the Attachments table --
no changes will be made to the Attachments, so no lock is necessary
there. This allows the thumbnailing worker to successfully pull the
empty list of messages (since the message-send request has not
committed its transaction, and thus the Message row is not visible
yet), and release its ImageAttachment lock so that the message-send
request can proceed.
[^1]: https://www.postgresql.org/docs/current/sql-select.html#SQL-FOR-UPDATE-SHARE
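As an illustration, a minimal sketch of the narrower locking query,
assuming Django's `select_for_update(of=...)` support; the filter and
values here are illustrative, not the exact query used:

    from zerver.models import Message

    attachment_id = 42  # illustrative

    # Lock only the Message rows; FOR UPDATE OF "self" excludes the joined
    # Attachment table, so the thumbnailing worker no longer waits on the
    # message-send request's Attachment lock.
    messages = Message.objects.filter(
        attachment__id=attachment_id
    ).select_for_update(of=("self",))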
This better simulates the Slack API, which is important, since some
integrations check this response and decide whether the Slack endpoint
is working based on what they receive.
These files are necessary for the protocol to verify that the file
upload was completed successfully. Rather than delete them, we update
their StorageClass if it is non-STANDARD.
Because the main indexes on end_time either don't include realm_id or
do include subgroup, passing an explicit subgroup=None for
single-realm queries to read CountStats that don't use the subgroups
feature greatly improves the query plans.
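For illustration, a hedged sketch of the kind of single-realm read this
affects; the model and field names follow the analytics count tables,
but the exact filters are illustrative:

    from analytics.models import RealmCount

    # realm, stat, start, and end are supplied by the caller.
    # Passing subgroup=None explicitly (rather than omitting it) lets
    # Postgres use the indexes that include the subgroup column.
    rows = RealmCount.objects.filter(
        realm=realm,
        property=stat.property,
        subgroup=None,
        end_time__gt=start,
        end_time__lte=end,
    )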
We create an unnamed user group with just the group creator as its
member when trying to set the default. The pattern I've followed across
most of the acting_user additions is to just pass a user already
declared somewhere before the check_add_user_group call and see if the
test passes. If it does not, I look at what kind of user `acting_user`
needs to be set to.
This commit does not add the logic of using this setting to actually
check the permission on the backend. That will be done in a later
commit.
Only owners can modify this setting, but we will add that logic in a
later commit in order to keep changes in this commit minimal.
Adding the setting breaks the frontend, since the frontend tries to find
a dropdown widget for the setting automatically. To avoid this, we've
added a small temporary if statement to `settings_org.js`.
Although most lists where we insert this setting follow an unofficial
alphabetical order, `can_manage_all_groups` has been bunched together
with `can_create_groups`, since keeping these similar settings together
is nicer when reading any code related to creating/managing a
user group.
We will not remove `user_group_edit_policy` yet. That will be removed
once we have introduced a user group setting to manage edit permissions
to groups.
We might introduce a generic testing function similar to
do_test_changing_settings_by_owners_only later, but not right now, since
there is only one setting at the moment that needs this test.
This commit does not add the logic of using this setting to actually
check the permission on the backend. That will be done in a later
commit.
Adding the setting breaks the frontend, since the frontend tries to find
a dropdown widget for the setting automatically. To avoid this, we've
added a small temporary if statement to `settings_org.js`.
In docker-zulip installs, /etc/zulip/zulip.conf,
/etc/zulip/zulip-secrets.conf, and /home/zulip/uploads are all
symlinks into the `/data` directory which is mounted as a Docker
Volume. By default, `tar` does not dereference symlinks, leading to
backups that are missing these critical pieces.
Add `-h` to the `tar` invocation, to follow symlinks, so backups in
Docker have all of their pieces. Since none of the contents of the
backup intentionally use symlinks, this is safe.
Co-authored-by: Alex Vandiver <alexmv@zulip.com>
When a user group cannot be deactivated because it is used as a
subgroup or for a setting, the error response includes details
about the supergroups, streams, and user groups, as well as the
settings, for which it is used.
This commit adds an access_user_group_to_read_membership function
so that we can avoid calling get_user_group_by_id_in_realm with
"for_read=True" from view functions, which is better for security
since that function does not do any access checks.
Previously, if the user_group_edit_policy was set to allow
members or full members to manage the group, the user had
to be a direct member of the group being managed.
This commit updates the code to also allow members of the subgroups
to manage the group, since technically members of the subgroups are
members of the group.
This also improves the code to not fetch all the group members for
this check, and instead directly call is_user_in_group, which uses an
"exists" query.
This commit renames has_user_group_access function to
has_user_group_access_for_subgroup, since the function
is only used to check access for using a group as subgroup.
This commit refactors the code to check permission for
accessing user group in such a way that we can avoid
duplicate code in future when we will have different
settings controlling the permissions for editing group
details and settings, joining the group, adding others
to group, etc.
The use of the robust Address(...)-based address generation was added
in b945aa3443, but then these couple of
instances of the naive approach were added later.
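For reference, a minimal sketch of the robust pattern (the addresses
here are made up):

    from email.headerregistry import Address

    # Address handles quoting and encoding corner cases that naive
    # f-string interpolation misses.
    sender = str(Address(display_name="Zulip Notifications",
                         username="noreply", domain="example.com"))
    # -> 'Zulip Notifications <noreply@example.com>'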
Creates a URLRedirect for this help center article to go to the
new "Moving to Zulip" guide.
Updates the astro.config.mjs file for the changes to the help
center sidebar that have been made as part of the replacement
of this help center guide.
Fixes #31499.
Replaces links to "Getting your organization started with Zulip"
in onboarding emails and Welcome bot direct message for owners of
new organizations.
Revises text in those emails and messages to reflect the new
"Moving to Zulip" help center guide that is now used.
In order to only generate relative links for Zulip Cloud billing
specific gear menu options in relevant help center articles, we
pass down settings.CORPORATE_ENABLED to be set as a global variable
for zerver/lib/markdown/help_relative_links.py so that self-hosted
servers' help center documentation will not have these links.
It's nicer to have these indexes properly registered, rather than hidden
in RunSQL operations. Now that Django has had support for unique
functional indexes for a while, let's clean this up.
This lets slack conversions be done on development hosts, which have a
trailing :9991 on their EXTERNAL_HOST; otherwise, we generate fake
emails like `imported-slack-bot@host.name:9991` which fail to
validate.
The Content-Type, Content-Disposition, StorageClass, and general
metadata are not set according to our patterns by tusd; copy the file
to itself to update those properties.
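A hedged boto3 sketch of the copy-to-itself approach; the bucket, key,
and metadata values are illustrative:

    import boto3

    s3 = boto3.client("s3")
    key = "user_uploads/2/ab/example.png"
    s3.copy_object(
        Bucket="example-uploads",
        Key=key,
        CopySource={"Bucket": "example-uploads", "Key": key},
        MetadataDirective="REPLACE",  # rewrite metadata rather than copying it
        ContentType="image/png",
        ContentDisposition='inline; filename="example.png"',
        StorageClass="STANDARD",
    )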
Setting `ResponseContentDisposition=attachment` means that we override
the stored `ContentDisposition`, which includes a filename. This
means that using the "Download" link on servers with S3 storage
produced a file named the sanitized version we stored.
Explicitly build a `ContentDisposition` to tell S3 to return, which
includes both `attachment` as well as the filename (if we have it
locally).
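A hedged sketch of building that disposition when generating the
presigned URL (bucket, key, and filename are illustrative):

    from urllib.parse import quote

    import boto3

    s3 = boto3.client("s3")
    filename = "photo.jpg"
    url = s3.generate_presigned_url(
        "get_object",
        Params={
            "Bucket": "example-uploads",
            "Key": "user_uploads/2/ab/photo.jpg",
            # Ask S3 to return attachment *and* the filename, instead of
            # overriding the stored disposition with a bare "attachment".
            "ResponseContentDisposition": f"attachment; filename*=UTF-8''{quote(filename)}",
        },
        ExpiresIn=60,
    )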
Apparently, Outlook ignores height/width CSS rules, but does support
the attribute on the image element itself, so specify that instead.
I don't think there are likely to be image tag implementations that
don't support the attribute, given that's the only thing that works in
Outlook.
This test was written back when Django accepted view function names as
strings that might be wrong; that’s not possible in Django ≥ 1.10.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
For exporting full with consent:
* Earlier, a message asking users to react with a thumbs up
was sent and later used to determine the users who consented.
* Now, we no longer need to send such a message. This commit
updates the logic to use `allow_private_data_export` user-setting
to determine users who consented.
Fixes part of #31201.
This commit adds test coverage for the
'if user_profile.realm != message.get_realm():' block in
the 'set_visibility_policy_possible' function.
Currently, 'test_command_with_consented_message_id' adds the
test coverage for this code block. But we plan to remove that
test in the next commit as part of removing the 'react on consent
message' approach.
So, this commit explicitly adds a test covering the concerned
code block.
Removes the ignore_pull_requests URL parameter from the Travis CI
integration because its functionality is now fully covered by the
standard incoming webhook event filtering framework.
Fixes #30934.
This new property allows organization administrators to specify whether
users can modify the custom profile field value on their own account.
This property is configurable for individual fields.
By default, existing and newly created fields have this property set to
true, that is, they allow users to edit the value of the fields.
Fixes part of #22883.
Co-Authored-By: Ujjawal Modi <umodi2003@gmail.com>
This allows finer-grained access control and auditing. The links
generated also expire after one week, and the suggested configuration
is that the underlying data does as well.
Co-authored-by: Prakhar Pratyush <prakhar@zulip.com>
The `get_signed_upload_url` code is called for every S3 file serve
request, and is thus in the hot path. Caching the boto3 client is
thus potentially a useful performance optimization.
This commit renames the 'send_event' function to
'send_event_rollback_unsafe' to reflect the fact that it doesn't
wait for the db transaction (within which it gets called, if any)
to commit, and sends the event irrespective of commit or rollback.
In most cases we don't want to send events on rollback, so the
caller should be aware that calling the function directly is
rollback-unsafe.
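For context, a minimal sketch of the rollback-safe counterpart built on
Django's `transaction.on_commit`; the wrapper shape here is an
assumption:

    from functools import partial

    from django.db import transaction

    def send_event_on_commit(realm, event, user_ids):
        # Deliver the event only after the surrounding transaction
        # commits; on rollback the callback is simply dropped.
        transaction.on_commit(
            partial(send_event_rollback_unsafe, realm, event, user_ids)
        )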
Adds a check for changing an existing guest user's role before
calling do_update_user in the case that a realm has a current
paid plan with manual license management.
In addition to checking for available licenses in the current
billing period when adding or inviting new non-guest users, for
manual billing, we also verify that the number of licenses set
for the next billing period will be enough when adding/inviting
new users.
Realms that are exempt from license number checks do not have
this restriction applied.
Admins are notified via group direct message when a user fails
to register due to this restriction.
If do_delete_messages (and friends) are called for a massive number of
messages, the giant list of message ids is passed to Postgres even
though chunk_size makes all but the first chunk_size message ids
useless for any single query.
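A minimal sketch of the batching idea (names are hypothetical; the real
codepath archives messages rather than deleting them directly):

    from zerver.models import Message

    while message_ids:
        batch, message_ids = message_ids[:chunk_size], message_ids[chunk_size:]
        # Only ever hand Postgres chunk_size ids at a time.
        Message.objects.filter(id__in=batch).delete()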
Aside from what's explained in the code comment, this is
motivated by the specific situation of importing Slack Connect channels.
These channels contain users who are "external collaborators" and
limited to a single channel in Slack. We don't have more sophisticated
handling of their import, which would map this concept 1-to-1 in Zulip -
but we create them as inactive dummy users, meaning they have to go
through signup before their account is usable.
The issue is that their imported UserProfile.role is set to Member and
when they register, the UserProfile gets reactivated with that role
unchanged. However, if e.g. the user is signing up after they received
an invitation from the admin, they should get the role that was
configured on the invite. This is particularly important if the user
is meant to still be "limited" and the admin thus invites them as a
guest - they definitely don't want the user to get a full Member
account because of this weird interaction between import and
registration.
Currently, it handles two hook types: 'pre-create' (to verify that the
user is authenticated and the file size is within the limit) and
'pre-finish' (which creates an attachment row).
No secret is shared between Django and tusd for authentication of the
hooks endpoints, because none is necessary -- tusd forwards the
end-user's credentials, and the hook checks them like it would any
end-user request. An end-user gaining access to the endpoint would be
able to do no more harm than via tusd or the normal file upload API.
Regardless, the previous commit has restricted access to the endpoint
at the nginx layer.
Co-authored-by: Brijmohan Siyag <brijsiyag@gmail.com>
This commit renames "allow_deactivated" parameter in
"GET /user_groups" endpoint to "include_deactivated_groups", so
that we can have consistent naming here and for client capability
used for deciding whether to send deactivated groups in register
response and how to handle the related events.
This commit adds code to handle guests separately for group-based
settings: a guest will only have permission if that particular
setting can be set to the "role:everyone" group, even if the guest
user is part of the group which is used for that setting. This is to
make sure that guests do not get permissions for actions that we
generally do not want guests to have.
Currently, guests do not have permission for most of these settings,
except for "Who can delete any message", where a guest could delete a
message if the setting was set to a user-defined group with the guest
as a member. But this commit still updates the code to use the new
function for all the settings, as we want a consistent pattern for
checking whether a user has permission for group-based settings.
We may not always have trivial access to all of the bytes of the
uploaded file -- for instance, if the file was uploaded previously, or
by some other process. Downloading the entire image in order to check
its headers is an inefficient use of time and bandwidth.
Adjust `maybe_thumbnail` and dependencies to potentially take a
`pyvips.Source` which supports streaming data from S3 or disk. This
allows making the ImageAttachment row, if deemed appropriate, based on
only a few KB of data, and not the entire image.
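A hedged pyvips sketch of the header-only approach (the path is
illustrative; the S3 case would use a custom streaming source):

    import pyvips

    # Sequential access lets vips read just enough bytes to parse the
    # image header; width/height don't require decoding the full file.
    source = pyvips.Source.new_from_file("/path/to/upload.jpg")
    image = pyvips.Image.new_from_source(source, "", access="sequential")
    print(image.width, image.height)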
This commit introduces 'creator' and 'date_created'
fields in user groups, allowing users to view who
created a group and when.
Both fields can be null for groups without creator data.
We only allow updating the name of a deactivated group, and do not
allow updating the description, members, subgroups, or any setting
of a deactivated user group.
Deactivated user groups cannot be a subgroup of any group
or used as a setting for a group.
This is important to make sure that we handle the case where there
are two parallel requests - one using a group for a setting
and one deactivating the same group. This makes sure that
at least one of the two fails.
As part of a TODO in the code, we want to use unique user IDs
instead of emails when processing the results of subscribing users to a
channel. These changes switch to user IDs and streamline their use.
This param allows clients to specify how much presence history they want
to fetch. Previously, the server always returned 14 days of history.
With the recent migration of the presence API to the much more efficient
system relying on incremental fetches via the last_update_id param added
in #29999, we can now afford to provide much more history to clients
that request it - as all that historical data will only be fetched once.
There are three endpoints involved:
- `/register` - this is the main useful endpoint for this, used by API
clients to fetch initial data and register an events queue. Clients can
pass the `presence_history_limit_days` param here.
- `/users/me/presence` - this endpoint is currently used by clients to
update their presence status and fetch incremental data, making the new
functionality not particularly useful here. However, we still add the
new `history_limit_days` param here, in case in the future clients
transition to using this also for the initial presence data fetch.
- `/` - used when opening the webapp. Naturally, params aren't passed
here, so the server just assumes a value from
`settings.PRESENCE_HISTORY_LIMIT_DAYS_FOR_WEB_APP` and returns
information about this default value in page_params.
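A hedged example of a client requesting a longer history window via
`/register` (the server URL and credentials are placeholders):

    import requests

    response = requests.post(
        "https://chat.example.com/api/v1/register",
        auth=("bot@example.com", "API_KEY"),
        data={
            "event_types": '["presence"]',
            "presence_history_limit_days": 365,  # up to a year of history
        },
    )
    presences = response.json()["presences"]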
Earlier, the content of the "manage_preferences" block that includes
the unsubscribe_link, personal settings link, etc., was missing from the
plaintext version of the custom emails.
This commit updates the logic to include the manage_preferences block
content in the plaintext version.
Previously, the emails sent to the remote servers had the
'unsubscribe link' only present in the 'List-Unsubscribe' header.
Not all email clients expose that header.
So, this commit adds the link in the footer too.
Fixes the test that checks for an error when an invalid value is given
for a realm property. Previously, only properties with type int were
checked, leaving out properties with optional int type. This commit
fixes that.
Now that we store the content-type in the database, use that value
(if we have it, since we did not backfill) when serving content back
to the client. This means the file backend has parity with the S3
backend.
Imported Slack bots currently do not have owners (#23145). Soften the
deactivation codepath to allow them to be successfully deactivated
despite this.
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
This comment looks like an ancient leftover from early days (moved here
in a test_bots extraction in 123b4c1877 in
2017). Whatever its history, this comment and test name don't make sense
anymore. The response here is an error, not a silent success.
Reorders audit log string methods to follow the pattern:
"event_type event_time (id): modified_object", where the event type
is the name from the AuditLogEventType enum.
Renames the event types below in the enum class to use channel instead
of stream.
Event types renamed: STREAM_CREATED, STREAM_DEACTIVATED,
STREAM_NAME_CHANGED, STREAM_REACTIVATED,
STREAM_MESSAGE_RETENTION_DAYS_CHANGED, STREAM_PROPERTY_CHANGED,
STREAM_GROUP_BASED_SETTING_CHANGED.
We want to ask users whether they would like to delete their
attachments after they have removed those attachments while editing a
message. These are preparatory changes on the backend to return a list
of the removed attachments after such an edit.
Fixes part of #25525.
As guest users are charged at a different rate on Zulip Cloud, it
makes sense to track these changes in our LicenseLedger table so
that we can have an accurate record of the use of licenses that
a realm has purchased.
All endpoints have been migrated to the typed_endpoint decorator,
therefore the has_request_variables decorator and the REQ function are
no longer needed and have been removed.
BeautifulSoup with formatter="html5" unnecessarily escapes many
characters with HTML5-specific entities that cannot be correctly
parsed by lxml during generation of email notifications.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We used 'savepoint=False' in #31169, which was prior to our discussion
in CZO about using 'durable=True' instead.
This commit makes changes to use 'durable=True' in the outermost
transaction.atomic block.
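A minimal sketch of the difference, assuming a hypothetical action
function:

    from django.db import transaction

    # durable=True raises a RuntimeError if this block is nested inside
    # another atomic block, guaranteeing it is the outermost transaction
    # (savepoint=False merely avoided creating a savepoint).
    with transaction.atomic(durable=True):
        do_the_action(user_profile)  # hypothetical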
This commit removes the 'prev_rendered_content_version'
field from:
* the 'edit_history' object within message objects in the
API response of `GET /messages`, `GET /messages/{message_id}`
and `POST /zulip-outgoing-webhook`.
* the 'update_message' event type
as it is an internal server implementation detail not used
by any client.
Note: The field is still stored in the 'edit_history' column
of the 'Message' table as it will be helpful when making
major changes to the markup rendering process.
Thumbnails are usually enqueued in the worker when the image is
uploaded. However, for images which were uploaded before the
existence of the thumbnailing worker, and whose metadata was
backfilled (see previous commit) this leaves a permanent spinner,
since nothing triggers the thumbnail worker for them.
Enqueue a thumbnail worker for every spinner which we render into
Markdown. This ensures that _something_ is attempting to resolve the
spinner which the user sees. In the case of freshly-uploaded images
which are still in the queue, this results in a duplicate entry in the
thumbnailing queue -- this is harmless, since the worker determines
that all of the thumbnails we need have already been generated, and it
does no further work. However, in the case of historical uploads, it
properly kicks off the thumbnailing process and results in a
subsequent message update to include the freshly-generated thumbnail.
While specifically useful for backfilled uploads, this is also
generally a good safety step for a good user experience, as it also
prevents dropped events in the queue from unknown causes from leaving
perpetual spinners in the message feed.
Because `get_user_upload_previews` is potentially called twice for
every message with spinners (see 6f20c15ae9), we add an additional
flag to `get_user_upload_previews` to suppress a _second_ event from
being enqueued for every spinner generated.
We previously used the file extension to determine if we should
attempt to inline an image. After b42863be4b, we rely on the
existence of ImageAttachment rows to determine if something is an
image which can be viewed inline. This means that messages
containing files uploaded before that commit, when (re-)rendered, will
be judged as not having inline'able images.
Backfill all of the ImageAttachment rows for image-like file
extensions. We are careful to only download the bytes that we need in
the image headers, to minimize bandwidth from S3 in the event that the
S3 backend is in use. We do _not_ produce thumbnails for the images
during this migration; see the subsequent commit.
Because this migration will be backported to 9.x, it is marked as only
depending on the last migration in `9.x`, with a subsequent merge
migration into the tip of `main`.
Since each loop iteration may add more than one file to the `storage_paths`
list, this may result in more than 1000 files being sent to
delete_message_attachments. Since the S3 backend only supports 1000
elements being deleted at once, we must partition the list into chunks
which are no more than 1000 elements long.
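A minimal sketch of the partitioning (the helper name is from this
codebase; the chunking itself is illustrative):

    BATCH_SIZE = 1000  # S3 DeleteObjects limit
    for i in range(0, len(storage_paths), BATCH_SIZE):
        delete_message_attachments(storage_paths[i : i + BATCH_SIZE])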
Fixes the URLRedirects for "help/about-streams-and-topics" and
"help/streams-and-topics" to go to "/help/introduction-to-topics"
since "help/channels-and-topics" has been redirected to that URL.
There are a few places where we want to set the max invites for a
realm to the default for a realm's plan type, so this creates a
helper function that can be used consistently to get that default
value.
This has the impact of making rebuilding the database in a Zulip
development environment, or initializing a new production database,
dramatically faster.
This was generated by merging the output of `manage.py makemigrations`
with an empty migration.
Tested using `tools/rebuild-test-database` before and after this
change, and comparing the output of `pg_dump -d zulip_test` using
Git's diff comparison algorithm. Differences in that SQL dump include:
- The actual generated table contents, due to timestamps and the like;
this is expected and unrelated to schema.
- Orders of fields within tables, which is not significant in SQL.
- IDs assigned to tables in the ContentType table, which is expected
and not a problem with how that Django table is designed.
- Names of generated indexes and constraints; modern Django seems to
abbreviate long names differently for these, and it's not obviously
possible to configure those used by the `db_index` property. If
necessary, this can likely be converged via a migration filled
with `IF EXISTS` rename operations like the one done in
zerver/migrations/0246_message_date_sent_finalize_part2.py.
- Names of the ~3 sequences related to renamed tables:
usertopic/mutedtopic, botconfigdata/botuserconfigdata,
realmdomain/realmalias. Probably there's no action required here,
but we could do rename operations if desired.
Generated using manage.py squashmigrations, with minimal manual
surgery to replace the original migration.
This should in theory produce the exact same database state as
previously.
This has no functional changes, but it helps the squashmigrations tool
realize some squashing opportunities to not have models declared
before other models that they will gain a foreign key to.
Since it's the initial migration, this can't have any useful effect.
I'm pretty sure the backstory is we did a manual squash of migrations
during the process of open-sourcing Zulip, and incorrectly didn't
remove this code.
It's not possible to directly upgrade from pre-5.3 versions to main,
since they do not have any supported OSes in common. Thus, this
dependency on the confirmation model, which risks creating a circular
dependency if we squash migrations, can be removed.
Adds some validation for changing the realm's max invites via the
support view so that it is not set below the default max for the
realm's plan type, and so that if it's currently set to the default
max it's not reset to that same value.
This commit adds a new `group_size` field to the `DirectMessageGroup`
model, and backfills its value for each of the existing direct message
groups.
Fixes part of #25713.
Earlier, we were using 'send_event' in 'do_change_is_billing_admin',
which can lead to a situation where we enqueue events but the action
function fails at a later stage, if any db operation is added after
the 'send_event' in the future.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in
'do_update_outgoing_webhook_service', which can lead to a situation
where we enqueue events but the action function fails at a later
stage, if any db operation is added after the 'send_event' in the
future.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in
'do_set_push_notifications_enabled_end_timestamp', which can lead to
a situation where we enqueue events but the action function fails at
a later stage, if any db operation is added after the 'send_event'
in the future.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'do_set_realm_stream', which
can lead to a situation where we enqueue events but the action
function fails at a later stage, if any db operation is added after
the 'send_event' in the future.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in
'do_set_realm_authentication_methods' which can lead to a situation
where we enqueue events but there's an error at a later stage in
the codepath using this function.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in do_set_realm_user_default_setting,
which can lead to a situation where we enqueue events but the action
function fails at a later stage, if any db operation is added after
the 'send_event' in the future.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'do_regenerate_api_key',
which can lead to a situation where we enqueue events but the action
function fails at a later stage, if any db operation is added after
the 'send_event' in the future.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'do_change_full_name',
which can lead to a situation where we enqueue events but the action
function fails at a later stage, if any db operation is added after
the 'send_event' in the future.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
The Slack API, when returning the emoji records, returns the record for
its thumbsup_all emoji with a url ending in .png, even though the
file is a gif.
For that reason, we have to make this code correct the file
extensions based on the response content-type. Emojis are the
smallest set of images to
download, so for simplicity of implementation, we remove the
parallelization of the downloads in favor of just processing them
serially.
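A hedged sketch of correcting the extension from the response headers
(the URL handling here is illustrative):

    import mimetypes

    import requests

    response = requests.get(emoji_url)  # emoji_url comes from the Slack export data
    content_type = response.headers.get("Content-Type", "").split(";")[0].strip()
    # Slack may report .png in the URL while actually serving image/gif.
    extension = mimetypes.guess_extension(content_type) or ".png"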
Ideally this would be split up into two commits, but it's hard to split
into self-contained, atomic chunks now that this segment of the
import/export system is generally kind of broken after thumbnailing
system changes.
1. 3rd party export converters don't make .original image files.
Instead, they provide a single file, which the import should treat as if
it's .original.
2. 3rd party converters create all the records with is_animated=False.
That's an issue, because without setting that correctly on the
RealmEmoji objects, Zulip doesn't know that it should use the "still"
thumbnail when the emoji is being used in a user's status, which leads
to incorrectly displaying the user status with the distracting
animation.
The export tool was only exporting the already-thumbnailed emoji file,
omitting the original one. Now we make sure to export the .original file
too, like we do for avatars, and make the import tool process it
directly, thumbnailing it and generating a still in the case of
animated emojis.
Otherwise, the imported realm wouldn't have the <emoji>.png.original
file that we generally expect to have accessible, and stills for
animated emojis would be completely missing.