This test was written back when Django accepted view function names as
strings that might be wrong; that’s not possible in Django ≥ 1.10.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
For full exports with consent:
* Earlier, a message asking users to react with a thumbs up
was sent and later used to determine which users consented.
* Now, we no longer need to send such a message. This commit
updates the logic to use the `allow_private_data_export` user
setting to determine which users consented.
Fixes part of #31201.
This commit adds test coverage for the
'if user_profile.realm != message.get_realm():' block in
the 'set_visibility_policy_possible' function.
Currently, 'test_command_with_consented_message_id' adds the
test coverage for this code block. But we plan to remove that
test in the next commit as part of removing the 'react on
consent message' approach.
So, this commit explicitly adds code for the test coverage of
the concerned code block.
Removes the ignore_pull_requests URL parameter from the Travis CI
integration because its functionality is now fully covered by the
standard incoming webhook event filtering framework.
Fixes #30934.
This new property allows organization administrators to specify whether
users can modify the custom profile field value on their own account.
This property is configurable for individual fields.
By default, existing and newly created fields have this property set to
true, that is, they allow users to edit the value of the fields.
Fixes part of #22883.
Co-Authored-By: Ujjawal Modi <umodi2003@gmail.com>
This allows finer-grained access control and auditing. The links
generated also expire after one week, and the suggested configuration
is that the underlying data does as well.
Co-authored-by: Prakhar Pratyush <prakhar@zulip.com>
The `get_signed_upload_url` code is called for every S3 file serve
request, and is thus in the hot path. The boto3 client caching
optimization is thus potentially useful as a performance optimization.
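As a rough sketch of the kind of per-process client caching this refers to (the function names and configuration here are illustrative, not the actual Zulip implementation):
```
from functools import lru_cache
from typing import Any

import boto3
from botocore.client import Config


@lru_cache(maxsize=None)
def cached_s3_client() -> Any:
    # Building a boto3 client is relatively expensive (service models are
    # loaded and parsed), so construct it once per process and reuse it.
    return boto3.client("s3", config=Config(signature_version="s3v4"))


def get_signed_upload_url_sketch(bucket: str, key: str) -> str:
    # Signing itself is cheap; the savings come from not re-creating the
    # client on every file-serve request.
    return cached_s3_client().generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=60 * 60 * 24 * 7,  # one week, matching the link expiry mentioned above
    )
```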
This commit renames the 'send_event' function to
'send_event_rollback_unsafe' to reflect the fact that it doesn't
wait for the db transaction (within which it gets called, if any)
to commit, and sends the event irrespective of commit or rollback.
In most cases we don't want to send events on rollback, so the
caller should be aware that calling the function directly is
rollback-unsafe.
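A minimal sketch of the rollback-safe alternative callers should generally prefer; the wrapper below is illustrative, and the import path is an assumption:
```
from django.db import transaction

# send_event_rollback_unsafe is the function renamed by this commit; the
# module path here is assumed for illustration.
from zerver.tornado.django_api import send_event_rollback_unsafe


def send_event_on_commit(realm, event, user_ids) -> None:
    # Defer the enqueue until the surrounding transaction (if any) commits;
    # on rollback the callback never runs, so no stale event reaches clients.
    transaction.on_commit(lambda: send_event_rollback_unsafe(realm, event, user_ids))
```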
Adds a check for changing an existing guest user's role before
calling do_update_user in the case that a realm has a current
paid plan with manual license management.
In addition to checking for available licenses in the current
billing period when adding or inviting new non-guest users, for
manual billing, we also verify that the number of licenses set
for the next billing period will be enough when adding/inviting
new users.
Realms that are exempt from license number checks do not have
this restriction applied.
Admins are notified via group direct message when a user fails
to register due to this restriction.
If do_delete_messages (and friends) are called for a massive number of
messages, the giant list of message ids is passed to Postgres even
though chunk_size makes all but the first chunk_size of message ids
useless.
Aside from what's generally explained in the code comment, this is
motivated by the specific situation of import of Slack Connect channels.
These channels contain users who are "external collaborators" and
limited to a single channel in Slack. We don't have more sophisticated
handling of their import, which would map this concept 1-to-1 in Zulip -
but we create them as inactive dummy users, meaning they have to go
through signup before their account is usable.
The issue is that their imported UserProfile.role is set to Member and
when they register, the UserProfile gets reactivated with that role
unchanged. However, if e.g. the user is signing up after they received
an invitation from the admin, they should get the role that was
configured on the invite. This is particularly important if the user is meant
to still be "limited" and thus the admin invites them as a guest - they
definitely don't want the user to get a full Member account because of
this weird interaction between import and registration.
Currently, it handles two hook types: 'pre-create' (to verify that the
user is authenticated and the file size is within the limit) and
'pre-finish' (which creates an attachment row).
No secret is shared between Django and tusd for authentication of the
hooks endpoints, because none is necessary -- tusd forwards the
end-user's credentials, and the hook checks them like it would any
end-user request. An end-user gaining access to the endpoint would be
able to do no more harm than via tusd or the normal file upload API.
Regardless, the previous commit has restricted access to the endpoint
at the nginx layer.
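A rough sketch of the shape of such a hook endpoint; the payload fields follow tusd's hook protocol as best understood here, and the size limit and attachment-creation step are illustrative assumptions:
```
import json

from django.http import HttpRequest, HttpResponse, JsonResponse

MAX_FILE_SIZE = 25 * 1024 * 1024  # illustrative limit


def handle_tusd_hook(request: HttpRequest) -> HttpResponse:
    # tusd forwards the end-user's credentials, so the normal request
    # authentication (not shown) runs before we get here, exactly as it
    # would for any other API request.
    payload = json.loads(request.body)
    hook_type = payload.get("Type")

    if hook_type == "pre-create":
        size = payload.get("Event", {}).get("Upload", {}).get("Size", 0)
        if size > MAX_FILE_SIZE:
            # Telling tusd to reject the upload aborts it before any bytes move.
            return JsonResponse({"RejectUpload": True})
        return JsonResponse({})

    if hook_type == "pre-finish":
        # The bytes are now stored; this is where an Attachment row would be
        # created (helper omitted).
        return JsonResponse({})

    return JsonResponse({})
```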
Co-authored-by: Brijmohan Siyag <brijsiyag@gmail.com>
This commit renames "allow_deactivated" parameter in
"GET /user_groups" endpoint to "include_deactivated_groups", so
that we have consistent naming here and for the client capability
used to decide whether to send deactivated groups in the register
response and how to handle the related events.
This commit adds code to handle guests separately for group-based
settings, where a guest will only have permission if
that particular setting can be set to the "role:everyone" group,
even if the guest user is part of the group which is used
for that setting. This is to make sure that guests do not
get permissions for actions that we generally do not want
guests to have.
Currently, guests do not have permission for most of these
settings, except for "Who can delete any message", where a guest
could delete a message if the setting was set to a user-defined
group with the guest as a member. But this commit still
updates the code to use the new function for all the settings,
as we want a consistent pattern for checking whether
a user has permission for group-based settings.
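A hedged sketch of the special-casing described above; the function and the membership helper are hypothetical, illustrating the shape of the check rather than the actual implementation:
```
def guest_aware_has_permission(user, setting_group, allows_everyone_group: bool) -> bool:
    # Guests only get a permission when the setting itself may be set to the
    # "role:everyone" system group; merely belonging to the configured group
    # is not enough for a guest.
    if user.is_guest and not allows_everyone_group:
        return False
    return is_user_in_group(setting_group, user)  # hypothetical membership helper
```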
We may not always have trivial access to all of the bytes of the
uploaded file -- for instance, if the file was uploaded previously, or
by some other process. Downloading the entire image in order to check
its headers is an inefficient use of time and bandwidth.
Adjust `maybe_thumbnail` and dependencies to potentially take a
`pyvips.Source` which supports streaming data from S3 or disk. This
allows making the ImageAttachment row, if deemed appropriate, based on
only a few KB of data, and not the entire image.
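A minimal sketch of the streaming approach for a local file; the S3 case would wrap a custom read callback in a pyvips source instead:
```
import pyvips


def image_dimensions_from_header(path: str) -> tuple[int, int]:
    # A pyvips Source streams bytes on demand; with sequential access,
    # reading just the header to learn width/height pulls only the first
    # few KB rather than the whole image.
    source = pyvips.Source.new_from_file(path)
    image = pyvips.Image.new_from_source(source, "", access="sequential")
    return image.width, image.height
```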
This commit introduces 'creator' and 'date_created'
fields in user groups, allowing users to view who
created the groups and when.
Both fields can be null for groups without creator data.
We only allow updating the name of a deactivated group; updating
the description, members, subgroups, or any setting
of a deactivated user group is not allowed.
Deactivated user groups cannot be a subgroup of any group
or used as a setting for a group.
This is important to make sure that we handle cases where there
are two parallel requests - one for using a group for a setting
and one for deactivating the same group. This makes sure that
at least one of the above tasks fails.
As part of our todo in the code, we want to use the unique user IDs
instead of emails when processing the results of subscribing users to a
channel. This commit makes those changes and streamlines the use of IDs.
This param allows clients to specify how much presence history they want
to fetch. Previously, the server always returned 14 days of history.
With the recent migration of the presence API to the much more efficient
system relying on incremental fetches via the last_update_id param added
in #29999, we can now afford to provide much more history to clients
that request it - as all that historical data will only be fetched once.
There are three endpoints involved:
- `/register` - this is the main useful endpoint for this, used by API
clients to fetch initial data and register an events queue. Clients can
pass the `presence_history_limit_days` param here.
- `/users/me/presence` - this endpoint is currently used by clients to
update their presence status and fetch incremental data, making the new
functionality not particularly useful here. However, we still add the
new `history_limit_days` param here, in case in the future clients
transition to using this also for the initial presence data fetch.
- `/` - used when opening the webapp. Naturally, params aren't passed
here, so the server just assumes a value from
`settings.PRESENCE_HISTORY_LIMIT_DAYS_FOR_WEB_APP` and returns
information about this default value in page_params.
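For example, an API client might request a longer presence history window when registering its event queue (a hedged sketch; the URL and credentials are placeholders):
```
import requests

response = requests.post(
    "https://chat.example.com/api/v1/register",
    auth=("presence-bot@example.com", "API_KEY"),
    data={
        "event_types": '["presence"]',
        "presence_history_limit_days": 60,  # ask for 60 days instead of the 14-day default
    },
)
presences = response.json().get("presences", {})
```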
Earlier, the content of the "manage_preferences" block that includes
the unsubscribe_link, personal settings link, etc., was missing in the
plaintext version of the custom emails.
This commit updates the logic to include the manage_preferences block
content in the plaintext version.
Previously, the emails sent to the remote servers had the
'unsubscribe link' only present in the 'List-Unsubscribe' header.
Not all email clients expose that header.
So, this commit adds the link in the footer too.
Fixes the test that checks for an error when an invalid value is given
for a realm property. Previously, only properties with type int were
checked, leaving out properties with an optional int type. This commit
fixes that.
Now that we store the content-type in the database, use that value
(if we have it, since we did not backfill) when serving content back
to the client. This means the file backend has parity with the S3
backend.
Imported Slack bots currently do not have owners (#23145). Soften the
deactivation codepath to allow them to be successfully deactivated
despite this.
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
This comment looks like an ancient leftover from early days (moved here
in a test_bots extraction in 123b4c1877 in
2017). Whatever its history, this comment and test name don't make sense
anymore. The response here is an error, not a silent success.
Reorders audit log string methods to have the following pattern:
"event_type event_time (id): modified_object". And the event type
is the name for the AuditLogEventType enum.
Renamed event types below in the enum class to use channel instead of
stream.
Event types moved: STREAM_CREATED, STREAM_DEACTIVATED,
STREAM_NAME_CHANGED, STREAM_REACTIVATED,
STREAM_MESSAGE_RETENTION_DAYS_CHANGED, STREAM_PROPERTY_CHANGED,
STREAM_GROUP_BASED_SETTING_CHANGED.
Currently, we want to ask users if they would like to delete their
attachments after they have removed the attachments while editing. These
changes are preparatory changes on the backend to return a list of removed
attachments after the user has removed attachments while editing.
Fixes part of #25525.
As guest users are charged at a different rate on Zulip Cloud, it
makes sense to track these changes in our LicenseLedger table so
that we can have an accurate record of the use of licenses that
a realm has purchased.
All endpoints have been migrated to the typed_endpoint decorator,
therefore the has_request_variables decorator and the REQ function are
no longer needed and have been removed.
BeautifulSoup with formatter="html5" unnecessarily escapes many
characters with HTML5-specific entities that cannot be correctly
parsed by lxml during generation of email notifications.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We used 'savepoint=False' in #31169 which was prior to our discussion
in CZO to use 'durable=True' instead.
This commit makes changes to use 'durable=True' in the outermost
transaction.atomic block.
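A minimal sketch of what durable buys us, with an illustrative action function:
```
from django.db import transaction


def do_example_action() -> None:
    # durable=True asserts this is the outermost atomic block: if a caller
    # ever wraps this function in another transaction.atomic(), Django
    # raises a RuntimeError instead of silently opening a savepoint, which
    # is exactly the signal we want while auditing callers.
    with transaction.atomic(durable=True):
        ...  # db writes, followed by send_event_on_commit(...)
```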
This commit removes the 'prev_rendered_content_version'
field from:
* the 'edit_history' object within message objects in the
API response of `GET /messages`, `GET /messages/{message_id}`
and `POST /zulip-outgoing-webhook`.
* the 'update_message' event type
as it is an internal server implementation detail not used
by any client.
Note: The field is still stored in the 'edit_history' column
of the 'Message' table as it will be helpful when making
major changes to the markup rendering process.
Thumbnails are usually enqueued in the worker when the image is
uploaded. However, for images which were uploaded before the
existence of the thumbnailing worker, and whose metadata was
backfilled (see previous commit) this leaves a permanent spinner,
since nothing triggers the thumbnail worker for them.
Enqueue a thumbnail worker for every spinner which we render into
Markdown. This ensures that _something_ is attempting to resolve the
spinner which the user sees. In the case of freshly-uploaded images
which are still in the queue, this results in a duplicate entry in the
thumbnailing queue -- this is harmless, since the worker determines
that all of the thumbnails we need have already been generated, and it
does no further work. However, in the case of historical uploads, it
properly kicks off the thumbnailing process and results in a
subsequent message update to include the freshly-generated thumbnail.
While specifically useful for backfilled uploads, this is also
generally a good safety step for a good user experience, as it also
prevents dropped events in the queue from unknown causes from leaving
perpetual spinners in the message feed.
Because `get_user_upload_previews` is potentially called twice for
every message with spinners (see 6f20c15ae9), we add an additional
flag to `get_user_upload_previews` to suppress a _second_ event from
being enqueued for every spinner generated.
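Roughly, the rendering path can re-enqueue a job for every spinner it emits (a sketch; the queue name, payload shape, and publish helper are assumptions):
```
from zerver.lib.queue import queue_json_publish  # assumed helper/module path


def ensure_thumbnailing_enqueued(image_attachment_id: int, *, second_pass: bool) -> None:
    # Every spinner rendered gets a (possibly duplicate) queue entry; the
    # worker no-ops if all needed thumbnails already exist, so duplicates
    # are harmless, while historical uploads finally get processed.
    if second_pass:
        return  # suppress a second event when previews are computed twice
    queue_json_publish("thumbnail", {"id": image_attachment_id})
```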
We previously used the file extension to determine if we should
attempt to inline an image. After b42863be4b, we rely on the
existence of ImageAttachment rows to determine if something is an
image which can be viewed inline. This means that messages
containing files uploaded before that commit, when (re-)rendered, will
be judged as not having inline'able images.
Backfill all of the ImageAttachment rows for image-like file
extensions. We are careful to only download the bytes that we need in
the image headers, to minimize bandwidth from S3 in the event that the
S3 backend is in use. We do _not_ produce thumbnails for the images
during this migration; see the subsequent commit.
Because this migration will be backported to 9.x, it is marked as only
depending on the last migration in `9.x`, with a subsequent merge
migration into the tip of `main`.
Since each loop may add more than one file to the `storage_paths`
list, this may result in more than 1000 files being sent to
delete_message_attachments. Since the S3 backend only supports 1000
elements being deleted at once, we must partition the list into chunks
which are no more than 1000 elements long.
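A small self-contained sketch of the partitioning this describes:
```
from typing import Iterator, List, TypeVar

T = TypeVar("T")

# S3's DeleteObjects API accepts at most 1000 keys per request.
S3_DELETE_BATCH_MAX = 1000


def partitioned(items: List[T], batch_size: int = S3_DELETE_BATCH_MAX) -> Iterator[List[T]]:
    # Yield slices no longer than batch_size, so each call to
    # delete_message_attachments stays within the S3 limit.
    for start in range(0, len(items), batch_size):
        yield items[start : start + batch_size]
```
Each yielded batch can then be passed to delete_message_attachments in turn.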
Fixes the URLRedirects for "help/about-streams-and-topics" and
"help/streams-and-topics" to go to "/help/introduction-to-topics"
since "help/channels-and-topics" has been redirected to that URL.
There are a few places where we want to set the max invites for a
realm to the default for a realm's plan type, so this creates a
helper function that can be used consistently to get that default
value.
This has the impact of making rebuilding the database in a Zulip
development environment, or initializing a new production database,
dramatically faster.
This was generated by merging the output of `manage.py makemigrations`
with an empty migration.
Tested using `tools/rebuild-test-database` before and after this
change, and comparing the output of `pg_dump -d zulip_test` using
Git's diff comparison algorithm. Differences in that SQL dump include:
- The actual generated table contents, due to timestamps and the like;
this is expected and unrelated to schema.
- Orders of fields within tables, which is not significant in SQL.
- IDs assigned to tables in the ContentType table, which is expected
and not a problem with how that Django table is designed.
- Names of generated indexes and constraints; modern Django seems to
abbreviate long names differently for these, and it's not obviously
possible to configure those used by the `db_index` property. If
necessary, this can likely be converged via a migration filled
with `IF EXISTS` rename operations like the one done in
zerver/migrations/0246_message_date_sent_finalize_part2.py.
- Names of the ~3 sequences related to renamed tables:
usertopic/mutedtopic, botconfigdata/botuserconfigdata,
realmdomain/realmalias. Probably there's no action required here,
but we could do rename operations if desired.
Generated using manage.py squashmigrations, with minimal manual
surgery to replace the original migration.
This should in theory produce the exact same database state as
previously.
This has no functional changes, but it helps the squashmigrations tool
realize some squashing opportunities to not have models declared
before other models that they will gain a foreign key to.
Since it's the initial migration, this can't have any useful effect.
I'm pretty sure the backstory is we did a manual squash of migrations
during the process of open-sourcing Zulip, and incorrectly didn't
remove this code.
It's not possible to directly upgrade from pre-5.3 versions to main,
since they do not have any supported OSes in common. Thus, this
dependency on the confirmation model, which risks creating a circular
dependency if we squash migrations, can be removed.
Adds some validation for changing the realm's max invites via the
support view so that it is not set below the default max for the
realm's plan type, and so that if it's currently set to the default
max it's not reset to that same value.
This commit adds a new `group_size` field to the `DirectMessageGroup`
model, and backfills its value to each of the existing direct message
groups.
Fixes part of #25713.
Earlier, we were using 'send_event' in 'do_change_is_billing_admin'
which can lead to a situation, if any db operation is added after
the 'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in
'do_update_outgoing_webhook_service' which can lead to a
situation, if any db operation is added after the 'send_event'
in future, where we enqueue events but the action function fails
at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in
'do_set_push_notifications_enabled_end_timestamp' which
can lead to a situation, if any db operation is added after the
'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'do_set_realm_stream' which
can lead to a situation, if any db operation is added after the
'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in
'do_set_realm_authentication_methods' which can lead to a situation
where we enqueue events but there's an error at a later stage in
the codepath using this function.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in do_set_realm_user_default_setting
which can lead to a situation, if any db operation is added after
the 'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'do_regenerate_api_key'
which can lead to a situation, if any db operation is added after
the 'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'do_change_full_name'
which can lead to a situation, if any db operation is added after
the 'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
The Slack API when returning the emoji records, returns the record for
its thumbsup_all emoji with the url ending with .png, even though the
file is a gif.
For that reason, we have to make that code correct the file extensions based
on the response content-type. Emojis are the smallest set of images to
download, so for simplicity of implementation, we remove the
parallelization of the downloads in favor of just processing them
serially.
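A hedged sketch of correcting the extension from the response headers (the helper name is illustrative):
```
import mimetypes
import os


def corrected_emoji_filename(filename: str, content_type: str) -> str:
    # Slack may advertise thumbsup_all as ".png" while actually serving a
    # GIF, so trust the response's content-type over the URL's extension.
    guessed_ext = mimetypes.guess_extension(content_type.split(";")[0].strip())
    if guessed_ext is None:
        return filename
    base, _ = os.path.splitext(filename)
    return base + guessed_ext
```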
Ideally this would be split up into two commits, but it's hard to split
into self-contained, atomic chunks now that this segment of the
import/export system is generally kind of broken after thumbnailing
system changes.
1. 3rd party export converters don't make .original image files.
Instead, they provide a single file, which the import should treat as if
it's .original.
2. 3rd party converters create all the records with is_animated=False.
That's an issue, because without setting that correctly on the
RealmEmoji objects, Zulip doesn't know that it should use the "still"
thumbnail when the emoji is being used in a user's status. Which leads
to incorrectly displaying the user status with the distracting
animation.
The export tool was only exporting the already-thumbnailed emoji file,
omitting the original one. Now we make sure to export the .original file
too, like we do for avatars, and make the import tool process it
directly, to thumbnail it directly and generate a still in the case of
animated emojis.
Otherwise, the imported realm wouldn't have the <emoji>.png.original
file that we generally expect to have accessible, and stills for
animated emojis were completely missing.
Convert `message_send.py` to use `typed_endpoint`.
Disable `message_send` endpoint `to` parameter in the `openapi`
`validate_json_schema` check, because it is a special case where the
content type of the parameter is application/json but the
parameter may or may not be JSON encoded since previously we also
accepted a raw string and some ad-hoc bot might still depend on sending
a raw string.
Remove unused validators from `validator.py`.
This commit updates the db transaction to be durable for
do_update_user_custom_profile_data_if_changed to avoid
addition of any outer atomic block.
If an outer atomic block is later added around it, this will raise a
runtime error, and we can then replace the durable argument with
'savepoint=False'. Otherwise we'd have to manually track down the
action functions getting called in that outer atomic block and set
savepoint=False on them, as it would otherwise lead to the creation
of savepoints, which we don't want.
We can't set savepoint=False beforehand on the outermost action
function because that leads to a rollback of the transaction in tests
when an error is raised in the action function.
Earlier, we were using 'send_event' in
check_remove_custom_profile_field_value which can lead to a
situation where we enqueue events but the function fails at a
later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
There was a bug here that would trigger an exception inside
`sync_user_profile_custom_fields`, causing it to get logged with
logging.warning, when an attribute configured for SAML custom profile
field sync was missing from a SAMLResponse or had an empty value.
`sync_user_profile_custom_fields` expects valid values, and None is not
valid.
We could consider a slightly different behavior here instead - when an
attribute is sent with no value in the SAMLResponse, that means the attr
has no value in the IdP's user directory - so perhaps a better behavior
would be to also remove the custom profile field value in Zulip. However
there are two issues with that:
1. It's not necessarily the best behavior, because an organization might
want the "user doesn't have this attribute set at the IdP level" state
to just mean that the user should be free to set the value manually in
Zulip if they wish. And having that value get reset on every login would
then be an issue. The implementation in this commit is consistent with
this philosophy.
2. There's some implementation difficulty - upstream
`self.get_attr(...)`, which we use for reading the attr value from the
SAMLResponse, doesn't distinguish between an attribute being sent with
no value and the attribute not being sent at all - in both cases it
returns None. So we'd need some extra work here with parsing the
SAMLResponse properly, to be able to know when the custom profile field
should get cleared.
Replace the SOCIAL_AUTH_SYNC_CUSTOM_ATTRS_DICT with
SOCIAL_AUTH_SYNC_ATTRS_DICT, designed to support also regular user attrs
like role or full name (in the future).
Custom attributes can stay configured as they were and will get merged
into SOCIAL_AUTH_SYNC_ATTRS_DICT in computed_settings, or can be
specified in SOCIAL_AUTH_SYNC_ATTRS_DICT directly with "custom__"
prefix.
The role sync is plumbed through to user creation, so users can
immediately be created with their intended role as provided by the IdP
when they're creating their account, even when doing this flow without
an invitation.
This commit makes changes to include the can_manage_group
field in UserGroup objects passed in the response of various endpoints,
including the "/register" endpoint, and also in the group object
sent with the user group creation event.
Earlier there was only a realm-level setting for configuring
who can edit user groups. A new group-level setting is also added
for configuring who can manage that particular group.
Now, a user group can be edited by a user if it is allowed by
either the realm-level setting or the group-level setting.
This commit makes changes to also use the group-level setting
in determining whether a group can be edited by a user or not.
Also, updated tests to use api_post and api_delete helpers instead
of using client_post and client_delete helpers with different users
being logged in.
This commit adds a new group-level setting, can_manage_group,
for configuring who can manage a group. This commit only adds
the field in the database and makes changes to automatically create
single-user groups corresponding to the acting user,
which will be the default value for this setting.
Fixes part of #25928.
Earlier there was a single backend test for testing the group edit policy
for creating and deleting user groups. This commit makes changes to the
test so that there are now two separate tests for testing the group edit
policy for creating and deleting user groups.
This was done because in future commits we will be adding a
realm level setting for configuring who can create user groups.
Also, updated tests to use api_post and api_delete helpers instead
of using client_post and client_delete helpers with different users
being logged in.
Earlier there was a single decorator function to check whether
a user can create and edit user groups. This commit adds a new
decorator function to check whether a user has permission to
create user groups.
This was done because in future commits we will be adding a
realm level setting for configuring who can create user groups.
This commit refactors code in user_groups_in_realm_serialized
such that we do not prefetch "can_mention_group__direct_members"
and "can_mention_group__direct_subgroups" using prefetch_related
and instead fetch members and subgroups for all groups in separate
queries and then use that data to find the members and subgroups
of the group used for that setting.
This change helps us avoid two prefetch queries per setting
when we add more group settings.
Earlier, we were using 'send_event' in
do_change_stream_message_retention_days which can lead to a situation
where we enqueue events but the function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in do_change_stream_description
which can lead to a situation where we enqueue events but the
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in do_change_stream_post_policy
which can lead to a situation where we enqueue events but the
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
This commit adds a transaction.atomic decorator to the
'do_change_subscription_property' function to make
the db operations in the action function atomic.
Also, send_event is changed to send_event_on_commit.
Earlier, we were using 'send_event' in 'do_unmute_user'
which can lead to a situation where we enqueue events but the
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'do_mute_user' which
can lead to a situation where we enqueue events but the
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in do_update_message_flags
which can lead to a situation where we enqueue events but the
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in
'do_mark_muted_user_messages_as_read' which can lead to a
situation where we enqueue events but the function fails at a
later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
In 'do_mark_all_as_read', the transactions which mark the messages
as read in batches should be marked as durable to avoid addition
of any outer atomic block as we support marking a few batches
(not all messages) as read in the case of a timeout.
Earlier, we were using 'send_event' in do_mark_stream_messages_as_read
codepath which can lead to a situation where we enqueue events but the
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
'do_update_message' is within a db transaction, this commit
updates the 'do_clear_mobile_push_notifications_for_ids' function
used in 'do_update_message' to queue event on commit.
Events should not be sent until we know we're not rolling back,
otherwise it can lead to a situation where we enqueue events but
the function fails at a later stage.
Previously, this logic did the database queries to look up UserProfile
objects in a loop.
Fixes #21820.
Significantly improves stream creation time and also the time to unsubscribe users.
Tested stream creation with 10k stream subscribers:
- before: 127 seconds ~2 mins
- after: 17 seconds ~0.3 min
Add a test case for a user unsubscribing themselves.
This hints to the browser that it should start DNS lookups for the
host, since it is likely to be necessary. It is a softer form than
`rel=preconnect`, which may be unnecessary in these cases, if the
client has the resources cached already.
Show user card popover for scheduled messages overlay, compose box
preview, message edit preview, message edit history.
`.messagebox` was chosen as the selector since that was the nearest
parent class that was common for all of the above.
`@all` does not have a popover and that's why it will have the same
pointer as its parent element. We also introduce a new class called
`.user-mention-all` for managing css rules specific to that mention.
The 'tutorial_status' field on the 'UserProfile' model is
no longer used to show the onboarding tutorial.
This commit removes the 'tutorial_status' field,
'POST users/me/tutorial_status' endpoint, and
'needs_tutorial' parameter in 'page_params'.
Fixes part of zulip#30043.
We plan to remove the 'tutorial_status' field from the UserProfile
table as it is no longer used to show the tutorial.
The field is also used to narrow a new user to their DM with the
welcome bot on the first load.
This prep commit updates the logic to use a new OnboardingStep
for the narrowing behaviour on the first load. This will help
in removing the 'tutorial_status' field.
Other than reformatting documentation for Open Collective, this
commit also moves it to the "Financial" category from "Communications".
This is because Open Collective is mainly a fundraising + legal status +
money management platform, as stated in https://opencollective.com/.
Part of #29592.
Besides reformatting the Netlify doc, this commit also updates the
instructions to match some UI changes in Netlify. The "Outgoing Webhook"
menu is now called "HTTP Post request".
Part of #29592.
Earlier, we were replacing a too-long attachment name with a random
UUID when the character count of the file name was greater than 255.
This resulted in an "OSError: [Errno 36] File name too long" error in a
few cases where the file name had fewer than 255 characters but more
than 255 bytes (file names with non-ASCII characters).
This commit updates the code to check the file name's byte size
instead of its character count.
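A minimal sketch of the byte-size check, assuming the usual 255-byte filesystem limit; the helper name is illustrative:
```
import uuid

MAX_FILE_NAME_BYTES = 255  # common filesystem limit is 255 bytes, not characters


def safe_attachment_name(name: str) -> str:
    # Non-ASCII names can stay under 255 characters while exceeding 255
    # bytes in UTF-8, so measure the encoded length rather than len(name).
    if len(name.encode("utf-8")) > MAX_FILE_NAME_BYTES:
        return str(uuid.uuid4())
    return name
```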
A utility command to enable or disable certain authentication backends
for a realm from the command line. Can be helpful e.g. if the
administrator accidentally disables some auth methods in the UI leaving
themselves with none remaining that they could actually use to log back
into the organization.
Example usage:
```
(zulip-py3-venv) vagrant@c32c137f59a0:/srv/zulip$ ./manage.py change_auth_backends -r zulip --show
Current authentication backends for the realm:
Enabled backends:
Dev
Email
GitHub
GitLab
Google
Apple
SAML
OpenID Connect
(zulip-py3-venv) vagrant@c32c137f59a0:/srv/zulip$ ./manage.py change_auth_backends -r zulip --disable GitHub
Disabling GitHub backend for realm Zulip Dev
Updated authentication backends for the realm:
Enabled backends:
Dev
Email
GitLab
Google
Apple
SAML
OpenID Connect
Disabled backends:
GitHub
Done!
(zulip-py3-venv) vagrant@c32c137f59a0:/srv/zulip$ ./manage.py change_auth_backends -r zulip --enable GitHub
Enabling GitHub backend for realm Zulip Dev
Updated authentication backends for the realm:
Enabled backends:
Dev
Email
GitHub
GitLab
Google
Apple
SAML
OpenID Connect
Done!
```
Earlier, we were using 'send_event' in
'notify_realm_custom_profile_fields' which can lead to a situation,
if any db operation is added after the 'send_event' in the action
functions using it, where we enqueue event but the action function
fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
The database operations in 'access_user_group_for_setting' and
'check_add_user_group' used in 'add_user_group' view should be
collectively atomic.
This commit adds transaction.atomic decorator for that purpose.
Earlier, we were using 'send_event' in the 'delete_user_group' codepath
which can lead to a situation, if any db operation is added after
the 'send_event' in future, where we enqueue events but the
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'edit_user_group' codepath
which can lead to a situation where we enqueue events but the
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'do_set_zoom_token' which
can lead to a situation, if any db operation is added after the
'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'do_delete_draft' which
can lead to a situation, if any db operation is added after the
'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'do_edit_draft' which
can lead to a situation, if any db operation is added after the
'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'do_create_drafts' which
can lead to a situation, if any db operation is added after the
'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'delete_scheduled_messages'
which can lead to a situation, if any db operation is added after
the 'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'edit_scheduled_messages'
which can lead to a situation, if any db operation is added after
the 'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in
'try_deliver_one_scheduled_messages' which can lead to a situation
where we enqueue events but the action function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'do_mark_onboarding_step_as_read'
which can lead to a situation, if any db operation is added after the
'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
This deletes them directly, rather than move them into archival,
because that would be slow, and bloat the archival table in a way
which might interfere with other deletions.
Previously, comment-related notifications only displayed the issue
title as a plain string. This commit reformats the issue title to
include a link back to the Jira issue.
Adjusted the Jira documentation for recent changes in their UI
when setting up webhooks, reformatted the note about compatible
Jira version, and added a link to Jira's official webhook guide.
Note that the link in zulip_update_announcements.py is not updated
so that the content in the source code reflects what users actually
received in the update announcement message.
This bug was introduced in da9e4e6e54.
validate_plan_for_authentication_methods is already called
inside validate_authentication_methods_dict_from_api, conditionally on
settings.BILLING_ENABLED. This additional, redundant call runs
regardless of BILLING_ENABLED, and thus prevents a self-hosted server
from enabling certain backends in the organization settings UI.
The impact of this is limited - in order to encounter this bug, a
self-hosted server would have to first disable the backend in the UI, as
self-hosted realms are created with all backend flags enabled. A backend
doesn't show up in the org settings UI until it is first enabled in
AUTHENTICATION_BACKENDS in settings.py - that's why this is a rare
state. A sequence of steps like this has to be followed to reproduce:
1. Add the backend to AUTHENTICATION_BACKENDS in settings.py.
2. Disable the backend in the org settings UI.
3. Now try to re-enable it, which fails due to the bug.
This commit removes the create_web_public_stream_policy setting,
since web-public channel creation permissions are now
handled by a group-based setting.
We still pass "realm_create_web_public_stream_policy" in the
"/register" response for older clients, with its
value set depending on the value of the group-based
setting. If we cannot set its value to an appropriate enum
corresponding to the group setting, then we set it to
"Admins and moderators"; the server will not
allow users without permission to create web-public
channels, but the client can make sure that the UI is only
available to the users who have permission.
Messages are rendered outside of a transaction, for performance
reasons, and then sent inside of one. This opens thumbnailing up to a
race where the thumbnails have not yet been written when the message
is rendered, but the message has not been sent when thumbnailing
completes, causing `rewrite_thumbnailed_images` to be a no-op and
leaving the message with a spinner which never resolves.
Explicitly lock and use the ImageAttachment data inside the
message-sending transaction, to rewrite the message content with the
latest information about the existing thumbnails.
Despite the thumbnailing worker taking a lock on Message rows to
update them, this does not lead to deadlocks -- the INSERT of the
Message rows happens in a transaction, ensuring that either the
message rendering blocks the thumbnailing until the Message row is
created, or that the `rewrite_thumbnailed_images` and Message INSERT
waits until thumbnailing is complete (and updated no Message rows).
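A hedged sketch of the lock-and-rewrite step inside the message-send transaction; the ORM field and the rewrite helper's call shape are assumptions, and imports of the codebase's ImageAttachment model and rewrite_thumbnailed_images helper are omitted:
```
from django.db import transaction


def rewrite_with_locked_thumbnails(rendered_content: str, path_ids: list) -> str:
    # Inside the message-send transaction, lock the ImageAttachment rows so
    # either thumbnailing finished before we read them, or the worker will
    # see the new Message rows after we commit and update them itself.
    assert transaction.get_connection().in_atomic_block
    rows = ImageAttachment.objects.select_for_update().filter(path_id__in=path_ids)
    return rewrite_thumbnailed_images(rendered_content, rows)  # helper named in this commit
```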
This migration references the "confirmation" app for the first time,
which means we must have migrated at least part of it by this point.
Set the migration to depend on the latest "confirmation" migration at
the time of this migration.
Migrate the following endpoints from @has_request_variables
to @typed_endpoint:
- get_user_group()
- delete_user_group()
- update_user_group_backend()
- update_subgroups_of_user_group()
- get_is_user_group_member()
- get_user_group_members()
- get_subgroups_of_user_group()
With tweaks from tabbott to avoid calling thunks unnecessarily.
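For reference, a rough sketch of what one migrated signature looks like under the typed_endpoint system; the annotation types and imports here are recalled from Zulip's conventions and should be treated as assumptions:
```
from django.http import HttpRequest, HttpResponse
from pydantic import Json

from zerver.lib.typed_endpoint import PathOnly, typed_endpoint
from zerver.models import UserProfile


@typed_endpoint
def get_is_user_group_member(
    request: HttpRequest,
    user_profile: UserProfile,
    *,
    user_group_id: PathOnly[int],
    user_id: Json[int],
) -> HttpResponse:
    ...  # parameters are parsed and validated from their annotations
```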
Earlier, we were using 'send_event' in 'do_schedule_messages' which
can lead to a situation, if any db operation is added after the
'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'do_update_user_status' which
can lead to a situation, if any db operation is added after the
'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were immediately enqueueing event in
'do_remove_alert_words' which can lead to a situation, if any
db operation is added after enqueueing event in future, where the
action function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
Earlier, we were using 'send_event' in 'do_add_alert_words' which
can lead to a situation, if any db operation is added after the
'send_event' in future, where we enqueue events but the action
function fails at a later stage.
Events should not be sent until we know we're not rolling back.
Fixes part of #30489.
GitHub sends two almost identical payloads when a pull
request is reviewed, which results in two duplicative
notification messages.
The payloads have different "action" values, with one
having the "submitted" action, whereas the other is
"edited" and has an empty "changes" dict.
We now ignore the payload with the "edited" action type
and an empty "changes" dict.
Fixes #26145.
Co-authored-by: Pieter CK <pieterceka123@gmail.com>
Providing a signed Camo URL for arbitrary URLs opened the server up to
being an open redirector. Return 403 if the URL is not a user upload,
and the backend image if it is. Since we do not have ImageAttachment
rows for uploads at a time we wrote `/thumbnail?` URLs, return the
full-size content.
47683144ff switched the web client to prefer the 840x560 size, as the
mobile apps prefer; remove the now-unused 300x200 size. No client was
using the generated `.jpg` formats, as all clients support `.webp`, so
remove the unused `.jpg` thumbnail as well.
Modern browsers respect the EXIF orientation information of images,
applying rotation and/or mirroring as specified in those tags. The
`width="..."` and `height="..."` attributes are to size the image
_after_ applying those orientation transformations.
The `.width` and `.height` properties of libvips' images are _before_
any transformations are applied. Since we intend to use these to hint
to rendering clients the size that the image should be _rendered at_,
change to storing (and providing to clients) the dimensions of the
rendered image, not the stored bytes.
If the email subject is something like `Fwd:`, it gets stripped to an
empty string, activating the "(no topic)" override. This however leads
to failure if the organization enables the setting forcing every message
to have a topic. Such emails should still go through, so we should just
change the topic value used.
This allows clients to potentially lay out the thumbnails more
intelligently, or to provide a better "progressive-load" experience
when enlarging the thumbnail.
Previously, animated images were automatically played in the
message feed of the web app.
Now that we have still thumbnails available for them, we can add a new
personal setting, "web_animate_image_previews", which controls how the
animated images would be played in the web app message feed -- always
played, on hover, or only in the image viewer.
Fixes #31016.
In 'fetch_initial_state_data' we were doing one database query
per announcement stream.
This commit updates the logic to prefetch those streams using
select_related hence avoiding the extra db queries.
Fixes #28909.
The libvips cache is limited to 100MB, 100 operations, or 100 files,
whichever limit is hit first. A single Django process or worker is
extremely unlikely to ever
see the same image twice, much less within those timeframes.
Disable the cache, since it is mostly useless memory usage for our use
case.
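Disabling it amounts to zeroing the standard pyvips cache limits (a sketch of the module-level calls involved):
```
import pyvips

# Zero out all three cache limits (operations, memory, open files); a
# Django or worker process rarely sees the same image twice, so the cache
# mostly just pins memory.
pyvips.cache_set_max(0)
pyvips.cache_set_max_mem(0)
pyvips.cache_set_max_files(0)
```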