For endpoints with a type parameter to indicate whether a message is
a direct or stream message, adds support for passing "channel" as a
value for stream messages.
Part of stream to channel rename project.
Creating a bot with a name that is already in use
will raise an error. However, by deactivating
the existing bot, creating a new bot with the
same name, and then reactivating the original bot,
it is possible to have multiple bots with the same name.
To fix this, we check whether the bot name is already
in use in the active bots list. If it is,
an error is raised, prompting the user either to change
the name of the existing bot or to deactivate it.
Co-authored-by: Sujal Shah <sujalshah28092004@gmail.com>
Adds "/invites/multiuse" endpoint to the API documentation.
Creates a shared schema for the invite_as and invite_expires_in_minutes
parameters that are the same for the "POST /invites" endpoint.
Also, updates the response documented for the "GET /invites" endpoint
to match the information in the "POST /invites" and "/invites/multiuse"
documentation.
Adds "channel" to the `stream_wildcards` frozenset for stream
wildcard notifications on the backend/server.
Updates frontend/web-app to handle "channel" as the other stream
wildcards are handled in the typeahead and composebox modules.
Updates the API version and documentation for the addition of
"channel" as a wildcard mention, but does not change any of the
functionality of (or deprecate) the "stream" wildcard at this
point.
Part of project to rename "stream" to "channel".
Earlier, when adding a new user failed because no spare licenses
were available, a message was sent to the "New user announcements"
stream.
We plan to disable the stream by default as a part of improving
onboarding experience.
Now, we send a group DM to admins when adding a new user fails
because no spare licenses are available. This makes the warning
independent of the "New user announcements" setting. These warning
messages are important and shouldn't be missed.
Earlier, the low-licenses warning message was sent to the
"New user announcements" stream.
We plan to disable the stream by default as a part of improving
onboarding experience.
Now, we send a group DM to admins for low licenses warning
to make it independent of the setting. These warning messages
are important and shouldn't be missed.
This is a prep commit to add a 'recipient_users' parameter to
the 'internal_send_huddle_message' function.
'emails' is no longer a required parameter. We can pass either
the 'emails' or the 'recipient_users' parameter. 'emails' is
eventually used to fetch 'recipient_users', so if
'recipient_users' is already available we should use it to
skip the database query.
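To illustrate, a minimal sketch of the relaxed signature; the parameter
list and the user-lookup helper here are assumptions, not Zulip's exact
implementation:
```
def internal_send_huddle_message(
    realm, sender, content, *, emails=None, recipient_users=None
):
    if recipient_users is None:
        # Only hit the database when the caller did not already
        # have the user objects in hand.
        recipient_users = [
            get_user_by_delivery_email(email, realm) for email in emails
        ]
    ...
```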
The `zerver/0501_delete_dangling_usermessages` was backported to the
`8.x` branch (and the 8.3 release) in 3db1733310. However, because
`main` contained migrations which `8.x` did not, it was backported
with a different `dependencies`:
```
dependencies = [
    ("zerver", "0496_alter_scheduledmessage_read_by_sender"),
]
```
...as opposed to in `main`:
```
dependencies = [
    ("zerver", "0500_realm_zulip_update_announcements_stream"),
]
```
This causes upgrades from 8.3 to `main` to fail:
```
django.db.migrations.exceptions.InconsistentMigrationHistory:
Migration zerver.0501_delete_dangling_usermessages is applied before
its dependency zerver.0500_realm_zulip_update_announcements_stream on
database 'default'.
```
Adjust the dependencies in `main` to match those in `8.x` where many
deploys will first have encountered the migration.
For organizations with the "Zulip update announcements" stream set
to a default value, we wait for one day after sending the group
DM to admins to allow them to change the stream from its
default value if they wish to.
This prep commit refactors the function
'is_group_direct_message_sent_to_admins_atleast_one_week_ago' to
'is_group_direct_message_sent_to_admins_within_days' allowing us
to use a flexible timeframe instead of hardcoded 1 week.
We will reuse this function as a part of determining whether the
group DM to admins was sent within 1 day.
Previously, users were allowed to signup or change their names to
those which already existed in the realm.
This commit adds an Organization Permission that requires
users to use unique names while signing up or changing their
names. If the same or a normalized-matching full name is found
in the realm, then a validation error is thrown.
Fixes #7830.
Previously, email addresses that weren't connected to a Zulip account
were ignored but now they receive an email stating their email isn't
connected to a Zulip account.
Also, removes the "Thanks for using Zulip!" line at the end of the
find accounts email that's sent when a Zulip account is found.
Updates the i18n test that used this string to instead use another
string, from the German translation of the successful account found
email.
Fixes part of #3128
Co-authored-by: Lauryn Menard <lauryn@zulip.com>
Updates the help link in the find team emails to use the external
host information.
Removes the link for the external host since the realm links are
what the user should click on to login.
Also, passes corporate_enabled to the find team email to adjust
the text for Zulip Cloud emails.
Restructures the integration documentation pages to use a style
that's more similar to the help center documentation, with an
instruction block for setting up the integration, and sections
for additional configuration information and related documentation
links.
Updates the doc pages for the airbrake, azuredevops and gitlab
integrations as examples of the updated style.
Also updates the URL specification section of the incoming webhook
overview in the API documentation so that the documented URL
parameters can be linked to directly in the integration doc pages.
Co-authored-by: Alya Abbott <alya@zulip.com>
This was only used in the undocumented narrow_stream mode, and relied
on a deprecated synchronous XHR request.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
For multiline strings in triple quotes, a '\n' is included
at the end of each line.
Earlier, to skip '\n' we used to add an escape character '\'
at the end of each line.
This commit adds a function to avoid manually adding '\'.
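For illustration, a minimal sketch of such a helper (the name is
hypothetical):
```
def join_lines(text: str) -> str:
    # Strip the '\n' that triple-quoted strings include at the end
    # of each line, without requiring a trailing '\' on every line.
    return "".join(text.splitlines())
```
Usage: join_lines("a\nb\nc") == "abc".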
As a part of the zulip news feature, we send an initial
group DM to admins suggesting that they update or set
the 'zulip_update_announcements_stream'.
This commit improves the wording of those messages.
Updates the check email translation test for updated email text in
confirm_new_email.html and onboarding_zulip_topics.html for current
translated strings in German.
As noted in the docstring for `bulk_insert_ums`, this is at least one
order of magnitude faster than using `bulk_create`. This also
includes a `ON CONFLICT DO NOTHING` which allows multiple
soft-reactivations to run at once without failing. We also adjust the
update of `last_active_message_id` to be safe against races.
Rather than use a bulk insert via Django, use the faster
`bulk_insert_all_ums` that we already have. This also adds a `ON
CONFLICT` clause, to make the insert resilient to race conditions.
There are currently two callsites, with different desired `ON
CONFLICT` behaviours:
- For `notify_reaction_update`, if the `UserMessage` had already been
created, we would have done nothing to change it.
- For `do_update_message_flags`, we would have ensured a specific bit
was (un)set.
Extend `create_historical_user_messages` and `bulk_insert_all_ums` to
support `ON CONFLICT (...) UPDATE SET flags = ...`.
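A hedged sketch of the ON CONFLICT ... DO UPDATE form this describes,
as it might be issued from Django; the table, column, and conflict
target names are assumptions based on Zulip's schema:
```
from django.db import connection

def upsert_user_message_flags(rows):
    # rows: iterable of (user_profile_id, message_id, flags) tuples.
    with connection.cursor() as cursor:
        cursor.executemany(
            """
            INSERT INTO zerver_usermessage (user_profile_id, message_id, flags)
            VALUES (%s, %s, %s)
            ON CONFLICT (user_profile_id, message_id)
            DO UPDATE SET flags = zerver_usermessage.flags | EXCLUDED.flags
            """,
            rows,
        )
```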
The bots do not exist in the user table to look up their active
status, and attempting to import them into the analytics table will
result in duplicate rows.
Replace the long string for organisations that have notification
body/content disabled (settings.PUSH_NOTIFICATION_REDACT_CONTENT
set to true) with "New message".
This allows more of the limited space on the mobile device screen to
be used for additional messages rather than this verbose content.
Fixes #29152.
Using --rotate-key without write access to the secrets file is currently
quite painful, since you end up rotating your registration's secret with
no local record of it; so effectively you lose your registration and
need help from support. We should just prevent this failure mode.
Previously, #26419 addressed the majority of these calls, but did not
prevent more from creeping in. Remove the one remaining
callsite (after the cleanup from the previous commits), and ban any
future use of the pattern.
For the common case of not needing to reference the UserMessage row
later, and for being a stream without private history, the UserMessage
row is irrelevant. Convert `has_user_message` to a thunk, and defer
loading it unless necessary.
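A hedged sketch of the thunk pattern (the surrounding variable names
are illustrative, taken from context rather than the actual code):
```
def has_user_message() -> bool:
    # Deferred: the UserMessage query runs only if some caller
    # actually invokes the thunk.
    return UserMessage.objects.filter(
        user_profile=user_profile, message=message
    ).exists()
```
Callers that never need the answer (the common case described above)
never pay for the query.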
Calling `.select_related()` with no arguments joins through every
possible table, recursively. In this case, this currently produces a
query which joins through forty-three tables.
This is rather inefficient, particularly for what is a very common
call which should be very fast.
No callsite depends on having prefetched any joined table on the
object; drop all of the joins.
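For illustration (model name chosen arbitrarily):
```
# Joins through every related table, recursively -- here, forty-three:
message = Message.objects.select_related().get(id=message_id)
# Queries only the one table:
message = Message.objects.get(id=message_id)
```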
Replaced HUDDLE attribute with DIRECT_MESSAGE_GROUP using VS Code search,
part of a general renaming of the object class.
Fixes part of #28640.
Co-authored-by: JohnLu2004 <JohnLu10212004@gmail.com>
Adds a line to the top of the internal_billing_notice email with
the billing entity's display name.
Makes sure all internal_billing_notice email subjects also include
the billing entity's display name.
Makes small updates to the notice text for some cases.
This commit adds a management command that will run regularly
as a cron job to send zulip updates to realms based on their
current and latest zulip_update_announcements_level.
For realms with:
* level = None: Send a group DM to admins notifying them about
  this new feature & a suggestion to set the stream accordingly.
* level = 0:
  * If the stream is still not configured, wait for a week
    before setting their level to the latest level. They will
    miss updates until they configure the stream.
  * If the stream is configured, send updates.
* level > 0: Send one message/update per level & increase
  the level by 1 until the latest level.
Fixes #28604.
This is a prep commit to extract out the logic to
create a message from 'internal_send_huddle_message'
into a separate function, 'internal_prep_huddle_message'.
We will use this new function to get the huddle message
without sending it immediately.
In general, we never want to use savepoints.
This prep commit adds savepoint=False in do_send_messages
as we don't want to just rollback to this savepoint and
proceed if we encounter any error while sending zulip updates
via cron.
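A minimal sketch of the difference, using Django's transaction API
(the body is a placeholder):
```
from django.db import transaction

with transaction.atomic(savepoint=False):
    # Joins the enclosing transaction instead of creating a savepoint:
    # an error here aborts the whole transaction, rather than rolling
    # back to a savepoint and carrying on.
    ...
```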
A user who was no longer subscribed to a private stream kept their
UserMessage row for a message sent while they were in it; this is
expected. However, they _also_ kept that row even if the message was
moved to a different private stream that they were also not subscribed
to. This violates the invariant that users without subscriptions
never have UserMessage rows.
This `if new_stream is not None` block was improperly indented,
causing it to only run if the propagation mode was not `change_one`.
Since the block controlled creation and deletion of UserMessage rows,
this led to messages being improperly still visible to members of the
old stream if they were being moved from public to private streams.
Clients also failed to receive `delete_message` events, so the
messages remained visible in their feeds until they reloaded the
application.
We don't want to show one-time modals introducing 'Inbox' and
'Recent conversations' views to existing users.
When a user views a modal, we mark it as read by storing a row
in the 'OnboardingStep' model. Once marked as read, the user will
no longer see the modal.
This commit adds a migration to create
two rows per user (marking them as read) in the OnboardingStep model.
To improve onboarding experience, this commit adds a
one-time modal which introduces the recent conversations view.
Users see this one-time modal on visiting the recent
conversations view.
Fixes #29073.
To improve onboarding experience, this commit adds
a one-time modal which introduces the inbox view.
Users see this one-time modal on visiting the inbox view.
Fixes part of #29073.
Replace a separate call to subprocess, starting `node` from scratch,
with an optional standalone node Express service which performs the
rendering. In benchmarking, this reduces the overhead of a KaTeX call
from 120ms to 2.8ms. This is notable because enough calls to KaTeX in
a single message would previously time out the whole message
rendering.
The service is optional because the majority of deployments do not use
enough LaTeX to merit the additional memory usage (60MB).
Fixes: #17425.
Links to the available message flag table in the feature level 224
changelog entry, as there are relevant **Changes** notes for this
feature level in that part of the API documentation.
Updates the order and formatting of these new and deprecated flags
in the available flags table. Also, adds a link to the topic
wildcard mentions section of the help center documentation.
Makes small clean ups to the changes notes for this feature level,
as well as the changelog entry itself.
The original commit for these feature level 224 API changes was
c597de6a1d.
Refactor the `parse_client` view to use the `typed_endpoint` decorator
instead of `has_request_variables`. This change improves code consistency
and enhances codebase comprehension.
PostgreSQL's estimate of the number of usermessage rows for a single
message can be wildly off, due to poor statistics generation. This
causes this query, with 100-message batch sizes, to incorrectly
estimate millions of matched rows, causing it to perform a full-table
index scan, rather than piecemeal using the `message_id` index.
Reduce the batch size to 50, which is enough to tip in favor of a
rational query plan.
Refactor `report_csp_violations` view to use `typed_endpoint` decorator
instead of `has_request_variables`. This change improves code
consistency and enhances codebase comprehension.
Depending on the kind of config error being shown, different "go back"
links may be more appropriate.
We probably hard-coded /login/ for it, because these config errors are
most commonly used for authentication backend config error, where it
makes sense to have /login/ as "go back", because the user most likely
indeed got there from the login page.
However, for remote_billing_bouncer_not_configured, it doesn't make
sense, because the user almost surely is already logged in and got there
by clicking "Plan management" inside the gear menu in the logged in app.
It's best for these to just be consistent. Therefore:
1. The .../not-configured/ error page endpoint should be restricted to
.has_billing_access users only.
2. For consistency, self_hosting_auth_view_common is tweaked to also do
the .has_billing_access check as the first thing, to avoid revealing
configuration information via its redirect/error-handling behavior.
The revealed configuration information seems super harmless, but it's
simpler to not have to worry about it and just be consistent.
Just shows a config error page if the bouncer is not enabled. Uses a new
endpoint for this so that it can work nicely for both browser and
desktop app clients.
It's necessary, because the desktop app expects to get a json response
with either an error or billing_access_url to redirect to. Showing a
nice config error page can't be done via the json error mechanism, so
instead we just serve a redirect to the new error page, which the app
will open in the browser in a new window or tab.
Only affects zulipchat, by being based on the BILLING_ENABLED setting.
The restricted backends in this commit are
- AzureAD - restricted to Standard plan
- SAML - restricted to Plus plan, although it was already practically
restricted due to requiring server-side configuration to be done by us
This restriction is placed upon **enabling** a backend - so
organizations that already have a backend enabled, will continue to be
able to use it. This allows us to make exceptions and enable a backend
for an org manually via the shell, and to grandfather organizations into
keeping the backend they have been relying on.
Adds a re-usable lockfile_nonblocking helper to context_managers.
Relying on a naive `os.mkdir` is not enough, especially now that the
successful operation of this command is necessary for push notifications
to work for many servers.
We can't use the `lockfile` context manager from
`zerver.lib.context_managers`, because we want the custom behavior of
failing if the lock can't be acquired, instead of waiting.
That's because if an instance of this gets stuck, we don't want to start
queueing up more processes waiting forever whenever the cronjob runs
again; fail->exit is preferable instead.
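A hedged sketch of what such a helper could look like; the name matches
the commit, but the body here is an assumption:
```
import fcntl
from contextlib import contextmanager
from typing import Iterator

@contextmanager
def lockfile_nonblocking(filename: str) -> Iterator[bool]:
    """Yield True if the lock was acquired; False if it's already held."""
    with open(filename, "w") as f:
        try:
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            yield False
            return
        try:
            yield True
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```
The caller can then fail->exit immediately when a previous run still
holds the lock.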
When a server doesn't submit a remote realm info which was
previously submitted, we mark it as locally deleted.
If such a realm has a paid plan attached to it, we should investigate.
This commit adds logic to send an email to sales@zulip.com for
investigation.
This commit updates the default for the delete_own_message_policy
setting to "Everyone", as it is helpful to allow everyone
to delete their own messages in a new organization where
users might be using Zulip for the first time.
This commit updates the default for the move_messages_between_streams_policy
setting to "Members and above", as it is helpful to allow members
to move messages between streams in new organizations where users
might be using Zulip for the first time.
The presence of `len(messages)` outside the transaction caused the
full resultset to be fetched outside of the transaction. This should
ideally be inside the transaction, and also only need be the count.
However, also note that the process of counting matching rows, and
then executing a second query which embeds the same query, is
susceptible to phantom reads, where a query with the same conditions
returns different resultsets, under PostgreSQL's default transaction
isolation of "read committed." While this is possible to resolve by
pulling the returned IDs into a Python list, it would not address the
issue that concurrent updates which change the resultset would make
the overall algorithm still incorrect.
Add a comment clarifying the conditions under which the algorithm is
correct. A more correct algorithm would walk the UserMessage rows
which are unread and in the stream, but this requires a
whole-UserMessage index which would be quite large for such an
infrequent use case.
This makes no-immediate-reloads the default for runtornado, matching
the production configuration, and changes the development incantation
to be the one specifying the departure from the norm, with
--immediate-reloads.
LoggingCountStats that have a daily duration and are directly stored
on the RealmCount table (not via aggregation in process_count_stat)
can be in a state, after the hourly cron job to update analytics
counts, where the logged value will be live-updated later, because
the end time for the stat is still in the future.
As these logging counts are designed to be used on the self-hosted
installation for either debugging or rate limiting, sending these
partial/incomplete counts to the bouncer has low value.
Due to the channel_map_to_topics URL parameter in the Slack webhook,
it was not migrated to use the check_send_webhook_message.
By using check_send_webhook_message, any topic parameter in the
webhook URL will be prioritized over mapping Slack channels to
topics, e.g. when channel_map_to_topics is true. This is because
the default behaviour for incoming webhooks is to send a default
topic as a parameter to check_send_webhook_message in case there
is no topic specified in the URL.
In contrast, we can override the stream passed in the URL when
channel_map_to_topics is false by passing the Slack channel name
to check_send_webhook_message. The default behaviour for incoming
webhooks is to send a direct message if there is no specified
stream in the URL, so a default stream is not generally passed
to check_send_webhook_message.
Fixes #27601.
This commit adds a realm-level setting named
'zulip_update_announcements_stream' that configures the
stream to which zulip updates should be posted.
Fixes part of #28604.
- Adds instructions for downloading a zuliprc file for a bot or for
yourself.
- Updates the button label to "Download zuliprc", since that's the
filename it downloads.
Fixes #28881.
The previous logic incorrectly used the server-level number of users
even when a (presumably smaller) realm-level count was available.
Fixes a bug introduced in 2e1ed4431a.
This commit renames the realm-level setting
'signup_notifications_stream' to 'signup_announcements_stream'.
The new name reflects better what the setting does.
This commit renames the realm-level setting 'notifications_stream'
to 'new_stream_announcements_stream'.
The new name reflects better what the setting does.
5c96f94206 mistakenly appended, rather than prepended, the edit to
the history. This caused AssertionErrors when attempting to view the
history of moved messages, which check that the `last_edit_time`
matches the timestamp of the first edit in the list.
Fix the ordering, and update the `edit_history` for messages that were
affected. We limit to only messages edited since the commit was
merged, since that helps bound the affected messages somewhat.
RemoteRealm customer takes precedence over RemoteServer
in general. But if an inactive plan is associated with
RemoteRealm and an active plan with RemoteServer, the
ACTIVE plan takes precedence.
Co-authored-by: Prakhar Pratyush <prakhar@zulip.com>
Previously, in an organization with DMs disabled, messaging a bot
did not work when starting a new conversation and adding the bot as
a recipient, because the check was not updated on recipient change.
Additionally, self-messaging was not allowed.
This commit ensures that DMs to bots and to oneself are allowed
irrespective of DM restrictions.
tests: Verify DMs adhere to DM restriction policy.
Fixes #28412
Signed-off-by: sayyedarib <sayyedaribhussain4321@gmail.com>
The widening of the time between when a process is marked for
reload (at Tornado startup) and when it sends reload events makes it
unlikely-to-impossible that a single `/` request will span both of
them, and thus hit the WebReloadClientError corner case.
Remove it, as it is not worth the complication. The bad behaviour it
is attempting to prevent (of a reload right after opening `/`) was
always still possible -- if the `/` request completed right before
Tornado restarted -- so it is not clear that it was ever worth the
complication.
Collapsing was done incorrectly, as 65c400e06d added `zulip_version`
and `zulip_feature_level`, but did not update the virtual event logic
to copy those new values into the virtual event.
However, it is unlikely that a server will be upgraded multiple times
in quick enough succession for this to ever be relevant. Remove the
logic, which is additional complication for little or no gain.
Commit bd6471f0e3 (#28691) added this
reference to the old name, even though it had already been renamed in
commit b220d29fed (#17775), presumably
because that had failed to update the OpenAPI description.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Having a non-identity `cache_transformer` is no different from running
it on every row of the query_function. Simplify understanding of the
codepath used in caching by merging the pieces of code.
Rather than pass around a list of message objects in-memory, we
instead keep the same constructed QuerySet which includes the later
propagated messages (if any), and use that same query to pick out
affected Attachment objects, rather than limiting to the set of ids.
This is not necessarily a win -- the list of message-ids *may* be very
long, and thus the query may be more concise, easier to send to
PostgreSQL, and faster for PostgreSQL to parse. However, the list of
ids is almost certainly better-indexed.
After processing the move, the QuerySet must be re-defined as a search
of ids (and possibly a very long list of such), since there is no
other way which is guaranteed to correctly single out the moved
messages. At this point, it is mostly equivalent to the list of
Message objects, and certainly takes no less memory.
Rather than use `bulk_update()` to batch-move chunks of messages, use
a single SQL query to move the messages. This is much more efficient
for large topic moves. Since the `edit_history` field is not yet
JSON (see #26496) this requires that PostgreSQL cast the current data
into `jsonb`, append the new data (also cast to `jsonb`), and then
re-cast that as text.
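A hedged sketch of the SQL shape; table/column names are assumptions,
and the edit_history handling follows the cast/append/re-cast
described above:
```
from django.db import connection

with connection.cursor() as cursor:
    cursor.execute(
        """
        UPDATE zerver_message
        SET recipient_id = %(new_recipient_id)s,
            edit_history =
                (COALESCE(edit_history, '[]')::jsonb
                 || %(new_entry)s::jsonb)::text
        WHERE id = ANY(%(message_ids)s)
        """,
        {
            "new_recipient_id": new_recipient_id,
            "new_entry": new_edit_history_json,  # a JSON array of entries
            "message_ids": message_ids,
        },
    )
```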
For single-message moves, this _increases_ the SQL query count by one,
since we have to re-query for the updated data from the database after
the bulk update. However, this is overall still a performance
improvement, which improves to 2x or 3x for larger topic moves. Below
is a table of duration in seconds to run `do_update_message` to move a
topic to a new stream, based on messages in the topic, for before and
after this change:
| Topic size | Before | After |
| ---------- | -------- | ------- |
| 1 | 0.1036 | 0.0868 |
| 2 | 0.1108 | 0.0925 |
| 5 | 0.1139 | 0.0959 |
| 10 | 0.1218 | 0.0972 |
| 20 | 0.1310 | 0.1098 |
| 50 | 0.1759 | 0.1366 |
| 100 | 0.2307 | 0.1662 |
| 200 | 0.3880 | 0.2229 |
| 500 | 0.7676 | 0.4052 |
| 1000 | 1.3990 | 0.6848 |
| 2000 | 2.9706 | 1.3370 |
| 5000 | 7.5218 | 3.2882 |
| 10000 | 14.0272 | 5.4434 |
This applies access restrictions in SQL, so that individual messages
do not need to be walked one-by-one. It only functions for stream
messages.
Use of this method significantly speeds up checks if we moved "all
visible messages" in a topic, since we no longer need to walk every
remaining message in the old topic to determine that at least one was
visible to the user. Similarly, it significantly speeds up merging
into existing topics, since it no longer must walk every message in
the new topic to determine if the user could see at least one.
Finally, it unlocks the ability to bulk-update only messages the user
has access to, in a single query (see subsequent commit).
This is a preparatory commit that refactors the check_update_message
method to extract the checks for whether a user can edit the
message into a separate method, validate_message_content_edit,
so that it can be reused later.
This logic was apparently missed when we implemented private streams
with shared history; the correct check is to look at whether the user
can access message history in the stream, which used to be equivalent
to whether it's a private stream.
The problem was that earlier this was just an uncaught JsonableError,
leading to a full traceback getting spammed to the admins.
The prior commit introduced a clear .code for this error on the bouncer
side, meaning the self-hosted server can now detect that and handle it
nicely, by just calling logging.error about it and also taking the
opportunity to adjust the realm.push_notifications_... flags.
This commit removes the stale 'email_gateway' parameter
from 'do_send_messages' function.
This should have been removed in 6c473ed75f,
when the call to 'build_message_send_dict' was removed
from 'do_send_messages'.
This error message didn’t make sense for the check as written, and our
OpenAPI document already provides the expected format for our 200
responses.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Real requests would not validate against the previous version. There
seems to be no consistent way to determine whether a string parameter
should be coerced to an integer for validation against an allOf
schema (which works at the level of JSON objects, not strings).
See also https://github.com/python-openapi/openapi-core/issues/698.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The endpoint was lacking validation that the authentication_methods dict
submitted by the user made sense. So e.g. it allowed submitting a
nonsense key like NoSuchBackend or modifying the realm's configured
authentication methods for a backend that's not enabled on the server,
which should not be allowed.
Both were ultimately harmless, because:
1. Submitting NoSuchBackend would luckily just trigger a KeyError inside
the transaction.atomic() block in do_set_realm_authentication_methods
so it would actually roll back the database changes it was trying to
make. So this couldn't actually create some weird
RealmAuthenticationMethod entries.
2. Silently enabling or disabling e.g. GitHub for a realm when GitHub
isn't enabled on the server doesn't really change anything. And this
action is only available to the realm's admins to begin with, so
there's no attack vector here.
test_supported_backends_only_updated wasn't actually testing anything,
because the state it was asserting:
```
self.assertFalse(github_auth_enabled(realm))
self.assertTrue(dev_auth_enabled(realm))
self.assertFalse(password_auth_enabled(realm))
```
matched the desired state submitted to the API...
```
result = self.client_patch(
    "/json/realm",
    {
        "authentication_methods": orjson.dumps(
            {"Email": False, "Dev": True, "GitHub": False}
        ).decode()
    },
)
```
so we just replace it with a new test that tests the param validation.
- Renames "Bots and integrations" to "Bots overview" everywhere
(sidebar, page title, page URL).
- Adds a copy of /api/integrations-overview (symbolic link) as the
second page in the Bots & integrations section, titled
"Integrations overview".
Fixes #28758.
We use Alertmanager as an aggregation point, for example for failing CI
pipelines, and `graph` does not always reflect the source of the alert.
It's called `source` originally, and I think it should stay this way.
Creates an incoming webhook integration for Patreon. The main
use case is getting notifications when new patrons sign up.
Fixes #18321.
Co-authored-by: Hari Prashant Bhimaraju <haripb01@gmail.com>
Co-authored-by: Sudipto Mondal <sudipto.mondal1997@gmail.com>
This commit updates the API to check the permission to subscribe other
users while creating multi-use invites. The API will raise an error if
the user passes the "stream_ids" parameter (even when it contains only
default streams) and the calling user does not have permission to
subscribe others to streams.
We did not add this before as we only allowed admins to create
multiuse invites, but now we have added a setting which can be used
to allow users with other roles as well to create multiuse invites.
Extends the description of the authentication_methods realm setting
in the /api/get-events and /api/register-queue endpoints to clarify
the recommended use of the object is for implementing server settings
UI, and to note the data returned by the /api/server-settings
endpoint should be used for implementing authentication UI.
It is possible to have multiple users with the same email address --
for instance, when two users are guests in shared channels via two
different other Slack instances.
Combine those Slack user-ids into one Zulip user, by their user-id;
otherwise, we run into problems during import due to duplicate keys.
1e5c49ad82 added support for shared channels -- but some users may
only currently exist in DMs or MPIMs, and not in channel membership.
Walk the list of MPIM subscriptions and messages, as well as DM users,
and add any such users to the set of mirror dummy users.
This leads to significant speedups. In a test, with 100 random unique
event classes, the old code processed a batch of 100 rows (on average
66-ish unique in the batch) in 0.45 seconds. Doing this in a single
query processes the same batch in 0.0076 seconds.
The previous query suffered from bad corner cases when the user had
received a large number of direct messages but sent very few,
comparatively. This meant that the first half of the UNION would
retrieve a very large number of UserMessage rows, requiring fetching a
large number of Message rows, merely to throw them away upon
determining that the recipient was the current user.
Instead of merging two queries of "last 1k received" + "last 1k sent",
we instead make better use of the UserMessage rows to find "last 1k
sent or received." This may change the list of recipients, as large
disparities in sent/received messages may result in pushing the
most-recently-sent users off of the list. These are likely uncommon
edge cases, however -- and the disparity is the whole reason for the
performance problem.
This also provides more correct answers. In the case where my
1001'th most recent message sent was to person A today, but my most
recent message received was from them yesterday, the previous plan
would show the message-id of yesterday's received message as the max,
and not the more recent message I sent today.
While we could theoretically raise the `RECENT_CONVERSATIONS_LIMIT` to
more frequently match the same recipient list as previously, this
increases the cost of the most common cases unreasonably. With a
1000-message limit, the common cases are slightly faster, and the tail
latencies are very much improved; raising `RECENT_CONVERSATIONS_LIMIT`
would increase the result similarity to the old algorithm, at the cost
of the p50 and p75.
| | Old | New |
| ------ | ------- | ------- |
| Mean | 0.05287 | 0.02520 |
| p50 | 0.00695 | 0.00556 |
| p75 | 0.05592 | 0.03351 |
| p90 | 0.14645 | 0.08026 |
| p95 | 0.20181 | 0.10906 |
| p99 | 0.30691 | 0.16014 |
| p99.9 | 0.57894 | 0.19521 |
| max | 22.0610 | 0.22184 |
On the whole, however, the much more bounded worst case is worth the
small changes to the resultset.
This is preparatory work towards adding a Topic model.
We plan to use the local variable name as 'topic' for
the Topic model objects.
Currently, we use *topic as the local variable name for
topic names.
We rename local variables of the form *topic to *topic_name
so that we don't need to think about type collisions in
individual code paths where we might want to talk about both
Topic objects and strings for the topic name.
Earlier, after a successful POST request on the find accounts page,
users were redirected to a URL with the emails (submitted via the
form) as URL parameters. Those raw emails in the URL were then
displayed in a template.
display on a template.
We no longer redirect to such a URL; instead, we directly render
a template with emails passed as a context variable.
Fixes part of #3128
When you click "Plan management", the desktop app opens
/self-hosted-billing/ in your browser immediately. So that works badly
if you're already logged into another account in the browser, since that
session will be used and it may be for a different user account than in
the desktop app, causing unintended behavior.
The solution is to replace the on click behavior for "Plan management"
in the desktop app case, to instead make a request to a new endpoint
/json/self-hosted-billing, which provides the billing access url in a
json response. The desktop app takes that URL and window.open()s it (in
the browser). And so a remote billing session for the intended user will
be obtained.
As explained in the comment, this is to prevent bugs where some strange
combination of codepaths could end up calling do_login without basic
validation of e.g. the subdomain. The usefulness of this will be
extended with the upcoming commit to add the ability to configure custom
code to wrap authenticate() calls in. This will help ensure that some
codepaths don't slip by the mechanism, ending up logging in a user
without the chance for the custom wrapper to run its code.
This test is ancient and patches so much that it's almost unreadable,
while being redundant considering we have comprehensive tests via the
SocialAuthBase subclasses. The one missing case was the one with the
backend we disabled. We replace that with a proper
test_social_auth_backend_disabled test in SocialAuthBase.
This is preparatory work towards adding a Topic model.
We plan to use the local variable name as 'topic' for
the Topic model objects.
Currently, we use *topic as the local variable name for
topic names.
We rename local variables of the form *topic to *topic_name
so that we don't need to think about type collisions in
individual code paths where we might want to talk about both
Topic objects and strings for the topic name.
Rename and restructure these comparison variables such that we don't
have a possibly impossible case for presence.last_connected_time being
None.
Fixes #25498.
We previously created the connection to the outgoing email server when
the EmailSendingWorker was first created. Since creating the
connection can fail (e.g. because of firewalls or typos in the
hostname), this can cause the `QueueProcessingWorker` creation to
raise an exception. In multi-threaded mode, exceptions in the worker
threads which are _not_ during the handling of a specific event
percolate out to `log_and_exit_if_exception` and trigger the
termination of the entire process -- stopping all worker threads from
making forward progress.
Contain the blast radius of misconfigured email servers by deferring
the opening of the connection until it is first needed. This will not
cause any overall performance change, since it only affects the
latency of the very first email after startup.
Creating the QueueProcessingWorker objects when the ThreadedWorker is
created can lead to a race which caused confusing error messages:
1. A thread tries to call `self.worker = get_worker()`
2. This call raises an exception, which is caught by
`log_and_exit_if_exception`
3. `log_and_exit_if_exception` sends our process a SIGUSR1, _but
otherwise swallows the error_.
4. The thread's `.run()` is called, which tries to access
`self.worker`, which was never set, and throws another exception.
5. The process handles the SIGUSR1, restarting.
Move the creation of the worker to when it is started, so the worker
object does not need to be stored, and possibly have a decoupled
failure.
Switches from Django's default error page to Zulip's standard error
template. Also updates the template for the 405 error code to not use
the 404 art.
Fixes #25626.
By default, `SELECT FOR UPDATE` will also lock any rows which are
`JOIN`ed into the selected rows; in the case of UserMessage rows, this
can mean arbitrary Message rows.
Since the messages themselves are not being changed, it is not
necessary to lock them -- and doing so may lead to deadlocks, in the
case that the UserMessage row is locked for update before the Message,
and some other request has already taken a read lock on the Message
and is blocked on the UserMessage write lock.
Change `select_for_update_query` to explicitly only lock UserMessage.
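For illustration, Django's `of=` argument expresses this directly (the
filter arguments are placeholders):
```
# Locks only the UserMessage rows ("self"), not the JOINed Message rows:
rows = (
    UserMessage.objects.select_for_update(of=("self",))
    .select_related("message")
    .filter(user_profile=user_profile, message_id__in=message_ids)
)
```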
Updates title and main description to follow the general style
of the API endpoint documentation.
Updates `token` description to clarify suggested mobile client
behavior.
Adds a set of excluded endpoints for the test of generated curl
examples in the API documentation.
Currently, only the `api/test-notify` endpoint is excluded since
there would need to be a push notification bouncer set up to test
that generated curl example.
We return expected_end_timestamp as "None" for the plans to be
downgraded if the number of users is not more than
MAX_USERS_WITHOUT_PLAN, since they will be downgraded to the
self-managed plan and would have push notifications enabled.
Requests to these endpoints are about a specified user, and therefore
also have a notion of the RemoteRealm for these requests. Until now
these endpoints weren't getting the realm_uuid value, because it wasn't
used - but now it is needed for updating .last_request_datetime on the
RemoteRealm.
For the RemoteRealm case, we can only set this in endpoints where the
remote server sends us the realm_uuid. So we're missing that for the
endpoints:
- remotes/push/unregister and remotes/push/unregister/all
- remotes/push/test_notification
This should be added in a follow-up commit.
os.path.getmtime needs to be mock.patched; otherwise the success of
the test depends on the filesystem state and breaks if version.py
hasn't been modified in a while.
`<time:1234567890123>` causes a "signed integer is greater than
maximum" exception from dateutil.parser; datetime also cannot handle
it ("year 41091 is out of range") but that is a ValueError which is
already caught.
Catch the OverflowError thrown by dateutil.
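A minimal sketch of the fix (the function name is hypothetical):
```
from dateutil import parser

def parse_time_tag(value: str):
    try:
        return parser.parse(value)
    except (ValueError, OverflowError):
        # <time:1234567890123> overflows dateutil's integer handling;
        # treat it like any other unparseable timestamp.
        return None
```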
boto3 has two different modalities of making API calls -- through
resources, and through clients. Resources are a higher-level
abstraction, and thus more generally useful, but some APIs are only
accessible through clients. It is possible to get to a client object
from a resource, but not vice versa.
Use `get_bucket(...).meta.client` when we need direct access to the
client object for more complex API calls; this lets all of the
configuration for how to access S3 to sit within `get_bucket`. Client
objects are not bound to only one bucket, but we get to them based on
the bucket we will be interacting with, for clarity.
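A hedged sketch of the pattern (bucket and key names are placeholders):
```
import boto3

def get_bucket(bucket_name: str):
    # All configuration for how to access S3 sits here.
    return boto3.resource("s3").Bucket(bucket_name)

bucket = get_bucket("example-bucket")
client = bucket.meta.client  # the client object, reached via the resource
url = client.generate_presigned_url(
    "get_object", Params={"Bucket": bucket.name, "Key": "some/key"}
)
```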
We removed the cached session object, as it serves no real purpose.
e883ab057f started caching the boto client, which we had identified
as a slow call. e883ab057f went further, calling
`get_boto_client().generate_presigned_url()` once and caching that
result.
This makes the inner cache on the client useless. Remove it.
Adds a support action for updating the minimum licenses on a
customer object once a default discount has also been set.
In the case that the current billing entity has a current active
plan or a scheduled upgrade to a new plan, then the minimum
licenses will not be updated.
This protects us from incorrectly handling situations where someone
tested an upgrade to 8.0 for a backup on a separate hostname, and
left the test system live while upgrading the main system, in a way
that results in duplicate RemoteRealm objects that are all marked as
locally deleted.
Further work is required to figure out how to avoid the original
duplication problem.
If we `.distinct("delivery_email")` then we must also
`.order_by("delivery_email")`; adc987dc43 added the `.order_by`
call, which broke the newsletter codepath, since it did not contain
the `delivery_email` in the ordering fields.
Add a flag to distinct on emails in `send_custom_email`.
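A minimal sketch of the PostgreSQL constraint at play (the flag and
queryset names are illustrative):
```
users = UserProfile.objects.filter(enable_marketing_emails=True)
if distinct_email:
    # On PostgreSQL, DISTINCT ON requires the same leading ORDER BY
    # expression, so the two must be kept in sync.
    users = users.order_by("delivery_email").distinct("delivery_email")
```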
The set of `enable_marketing_emails=True` are those that have opted
into getting marketing newsletter emails -- but we previously limited
further to only those users active in the last month.
Broaden that to "opted in, and either recently active or an owner or
an admin," with the goal of providing information to folks who may
have tried out Zulip in the past.
Co-authored-by: Tim Abbott <tabbott@zulip.com>
Earlier, the 'topic' parameter length for the
'/users/me/subscriptions/muted_topics' and '/user_topics' endpoints
was not validated before DB operations, which resulted in the
exception:
'DataError: value too long for type character varying(60)'.
This commit adds validation for the topic name length to be
capped at 'max_topic_length' characters.
The doc is updated to suggest to clients that the topic name should
have a maximum length of 'max_topic_length'.
Fixes #27796.
Old RemotePushDeviceTokens were created without this attribute. But when
processing a notification, if we have remote_realm, we can take the
opportunity to set this for all the registrations for this user.
This moves the function which computes can_push and
expected_end_timestamp outside RemoteRealmBillingSession
because we might use this function for RemoteZulipServer
as well and also renames it.
For remote servers, we cannot advertise `List-Unsubscribe=One-Click`,
which is specified in RFC 8058[^1] to mean that the `List-Unsubscribe`
URL supports a POST request with no arguments to unsubscribe. Because
we show an interstitial and confirmation page, as this is not just a
mailing list which is disabled if you click the link, it does not
support the mail system performing the unsubscribe for the user.
Remove the inaccurate header for remote servers.
[^1]: https://datatracker.ietf.org/doc/html/rfc8058
612f2c73d6 started passing add_context to
`send_custom_server_email`, but did not make it actually use it.
Also add the `hostname` as a built-in value, since that is most likely
the most useful property.
This fixes the exception case on the initial
`/api/v1/remotes/server/analytics/status` case. Other exceptions from
`send_to_push_bouncer` are allowed to escape.
Co-authored-by: Alex Vandiver <alexmv@zulip.com>
Previously, passing a URL longer than 200 characters for
jitsi_server_url caused a low-level failure at the DB level. This
commit adds this restriction at the API level.
Fixes part of #27355.
While the query parameter is properly escaped when inlined into the
template (and thus is not an XSS), it can still produce content which
misleads the user via a carefully-crafted query parameter.
Validate that the parameter looks like an email address.
Thanks to jinjo2 for reporting this, via HackerOne.
We previously used get_accessible_user_ids to check whether the
sender can access all DM recipients, which was not efficient as
it queries the Message table. This commit updates the code to
make sure we use get_inaccessible_user_ids which is much more
efficient as it limits the queries to only DM recipients and
also queries the Message table only if needed.
This can still be optimized further as mentioned in #27835 but
this commit is a nice first step.
Saying `**options: str` is a lie, since it contains bools. We pluck
out the two bools that we need properly typed because we will be
pushing them into function calls, and type them explicitly as bools.
As predicted in c741c527d7, it is
indeed possible for `get_handler_by_id` to error out because the
handler has been unset elsewhere.
Protect the callsites of `get_handler_by_id` so they can gracefully
handle when the handler has already gone away.
This fixes a bug introduced in
6f93ab72c0 where deactivating a realm
would fail with an exception that sessions cannot be cleared inside
database transactions.
If the exception was because the channel closed, attempting to NAK the
events will just raise another error, and is pointless, as the server
already marked the pending events as NAK'd.
4af00f61a8 claimed that `on_finish` and
`on_connection_close` were mutually exclusive. In cases where a
`DELETE` is called on the queue while a longpoll is in progress, this
can cause _both_ to happen:
- The `DELETE` pushes a `cleanup_queue` event, which triggers
`finish_handler` to begin pushing out an empty event response to the
longpoll connection.
- In the midst of that, in an `await`, the longpoll connection drops,
and `on_connection_close` clears the handler.
- The `await` resumes, calls `finish`, and attempts to clear the
handler.
The easiest solution is to make `clear_handler_by_id` tolerant to
multiple attempts to clear it. Since these processes run in parallel,
it means that parts may have a `handler_id` but `get_handler_by_id`
may error in attempting to look it up. We have not observed this in
testing, and I cannot currently prove it is impossible.
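A hedged sketch of a tolerant clear (the registry structure is an
assumption):
```
handlers: dict[int, "AsyncDjangoHandler"] = {}

def clear_handler_by_id(handler_id: int) -> None:
    # pop(..., None) makes a second clear of the same id a no-op
    # instead of a KeyError.
    handlers.pop(handler_id, None)
```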
This ensures determinism in these tests' mock_send.assert_called_with
checks, avoiding test flakes due to a different order of retrieval of
these objects from the database.
- The server sends the list of registrations it believes to have with
the bouncer.
- The bouncer includes in the response the registrations that it doesn't
actually have and therefore the server should delete.
This commit creates a RealmAuditLog entry with a new event_type
'RealmAuditLog.REALM_IMPORTED' after the realm is reactivated.
It contains user count data (using realm_user_count_by_role)
stored in extra_data.
This helps to have accurate user count data for the billing
system if someone tries to sign up just after doing an import.
This partially reverts 579bdc18f85ea8599c8cf1f53ddb02fd41d97993; it
assumed (based on its documentation) that `on_finish` was called for
all requests, even client-terminated ones. This is not accurate; it
is only called when the request calls `finish`, which only happens for
successful requests. This caused every client-closed connection to
leak a handler (ironically, exactly re-introducing the bug previously
fixed in 12a5a3a6e1).
This behaviour was obscured by the development environment's proxy;
see comment added in the previous commit.
Instead of replacing the `clear_handler_by_id` call into
`ClientDescriptor.disconnect_handler`, we instead place it on
`AsyncDjangoHandler.on_connection_close`. This is more correct for
a few reasons:
- `on_connection_close` will be called if the client goes away during
a request without a client descriptor. If the garbage collection
of handlers runs inside the ClientDescriptor, we leak handlers.
- `disconnect_handler` also runs when successfully sending an event,
which already calls `on_finish`. We avoid double-calling
`clear_handler_by_id` by doing it in two clearly exclusive cases,
`on_finish` and `on_connection_close`.
- It combines the creation and garbage collection logic into one
file, decreasing action at a distance which causes memory leaks.
We call 'send_server_data_to_push_bouncer' just after registering
server for push notification.
This helps to have a current state of the user counts when first
logging in after the RemoteRealm flow.
Actions that change the number of users add a deferred_work
queue processor job to immediately update the billing service
about the change.
This helps to avoid having users see stale state for how many
users they have when trying to pay.
This is a rename of the previous
enqueue_register_realm_with_push_bouncer_if_needed but is clearer
about the fact that this will also upload audit logs if available.
Given that most of the use cases for the realms-only code path would
really like to upload audit logs too, and the others would likely
produce a better user experience if they uploaded audit logs, we
should just have a single main code path here, i.e.
'send_analytics_to_push_bouncer'.
We still only upload usage statistics according to documented
option, and only from the analytics cron job.
The error handling takes place in 'send_analytics_to_push_bouncer'
itself.
This is the only operation editing audit logs that is not already using
a transaction, and having it do so will simplify an upcoming interface
to be able to assume it is always inside a transaction.
Earlier, it was passing tests because the deferred_work queue
that calls send_realms_only_to_push_bouncer didn't update the
realms property based on the response received from the bouncer.
This prep commit removes the invalid "dummy-uuid" used, as any
call to send_realms_only_to_push_bouncer will update realms
properties too.
We return an empty realms array as the realm is created midway in
do_create_realm, so the uuid is not already available. Also, our
intent here is not to verify the behaviour of the
send_realms_only_to_push_bouncer function because we'll have
separate tests for that. Here, we verify that the deferred_work event
was sent and that it eventually made a call to send_to_push_bouncer
with appropriate data.