The list of supported events for filtering itself does not document what
each of the events does. Adding a link to GitHub's documentation would
be a pointer to get people started. But ideally we need to establish a
better system to document the events in general.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Currently, there is a checkbox setting for whether to
"Include realm name in subject of message notification emails".
This commit replaces the checkbox setting with a dropdown
having values: Automatic [default], Always, Never.
The Automatic option includes the realm name if, and only if,
there are multiple Zulip realms associated with the user's email.
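For illustration, the Automatic resolution could look roughly like this sketch; the enum and function names are made up for the example, not the actual setting code:

```python
from enum import IntEnum


class RealmNameInEmailNotificationsPolicy(IntEnum):
    AUTOMATIC = 1
    ALWAYS = 2
    NEVER = 3


def should_include_realm_name(
    policy: RealmNameInEmailNotificationsPolicy, realm_count_for_email: int
) -> bool:
    # Automatic: include the realm name only when the email address is
    # associated with more than one Zulip realm.
    if policy == RealmNameInEmailNotificationsPolicy.ALWAYS:
        return True
    if policy == RealmNameInEmailNotificationsPolicy.NEVER:
        return False
    return realm_count_for_email > 1
```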
Tests are added and/or modified.
Fixes: #19905.
Removes the notification message that was sent if a stream named
"signups" exists in the `settings.SYSTEM_BOT_REALM`. This was a
undocumented feature that would send a notification message when
a new user registered with a Zulip organization that was hosted
by an admin realm like Zulip Cloud.
This removes two database queries when a new user is created: one
to get the system bot realm and the other to get the notification
bot in said realm.
Note that there are still notification messages sent when a new
organization is registered with the admin realm if the "signups"
stream exists.
This commit refactors 'do_set_user_topic_visibility_policy'
to remove the if/else block and just have a single call to
'set_user_topic_visibility_policy_in_database'.
The branching behaviour based on the user_topic
visibility_policy is now confined to one place, i.e.,
'set_user_topic_visibility_policy_in_database'.
Updated the title and description in the 'enable-emoticon-translation'
file and renamed the file accordingly. Added a new bullet point for
'time format' in the 'configure-new-user-settings.md' file and updated
the sidebar index by replacing the title 'Use 24-hour time' with
'Change the time format'.
The Django convention is for __repr__ to include the type and __str__
to omit it. In fact its default __repr__ implementation for models
automatically adds a type prefix to __str__, which has resulted in the
type being duplicated:
>>> UserProfile.objects.first()
<UserProfile: <UserProfile: emailgateway@zulip.com <Realm: zulipinternal 1>>>
Signed-off-by: Anders Kaseorg <anders@zulip.com>
While the function which processes the realm registration and
signup remains the same, we use different URLs and functions to
call the process so that we can separately track them. This will
help us know the conversion rate of realm registration after
receiving the confirmation link.
This commit refactors the notify_created_user function to
call format_user_row twice with different parameters instead
of modifying the person object returned by format_user_row.
This change makes the code somewhat easier to understand
than it was before.
dc1eeef30a made the column nullable, with the meaning for null of
"use the current `settings.INVITES_DEFAULT_REALM_DAILY_MAX`."
However, 8a95526ced switched to calling `do_change_plan_type` during
realm creation, which sets `realm.max_invites` based on the plan type,
thus ensuring that no new realms have their `_max_invites` set to
null.
Check `max_invites` instead of `_max_invites`. This requires test
adjustments for the fact that `apply_invite_realm_heuristics` is now
run.
Adds a page to the general api documentation about HTTP headers,
so that information about the special response headers for rate
limits have a more logical location in the docs and so that other
HTTP header information can be shared, such as `User-Agent`
conventions.
Adjusts some text and linking on the rest-error-handling page and
overview page for the REST API for the addition of the HTTP headers
page.
Zulip already has server-side Sentry integration; however, it has
historically used the Zulip-specific `blueslip` library for monitoring
browser-side errors. The latter sends errors to email, as well as,
optionally, to an internal `#errors` stream.
While this is sufficient for low volumes of users, and useful in that
it does not rely on outside services, at higher volumes it is very
difficult to do any analysis or filtering of the errors. Client-side
errors are exceptionally noisy, with many false positives due to
browser extensions or similar, so picking out real errors from a
stream of un-grouped emails or messages in a stream is quite
difficult.
Add a client-side JavaScript Sentry integration.  To provide useful
backtraces, this requires extending the pre-deploy hooks to upload the
source-maps to Sentry. Additional keys are added to the non-public
API of `page_params` to control the DSN, realm identifier, and sample
rates.
b4dd118aa1 changed how the `user_info_str` parsed information out of
the events it received -- but only changed the server errors, not the
browser errors, though both use the same codepath. As a result, all
browser errors since then have been incorrectly marked as being for
anonymous users.
Build and pass in the expected `user` dict into the event.
This commit adds 'visibility_policy' as a
parameter to the user_allows_notifications_in_StreamTopic
function.
This adds logic inside the user_allows_notifications_in_StreamTopic
function, to not return False when a stream is muted
but the topic is UNMUTED.
Adds a method `user_id_to_visibility_policy_dict`
to the 'StreamTopicTarget' class to fetch a
(user_id => visibility_policy) mapping in a single database query.
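A minimal sketch of such a single-query fetch, assuming Zulip's `UserTopic` model and its `user_profile_id`/`visibility_policy` fields (the exact field names are an assumption here):

```python
from typing import Dict

from zerver.models import UserTopic  # assumes Zulip's UserTopic model


def user_id_to_visibility_policy_dict(stream_id: int, topic_name: str) -> Dict[int, int]:
    # A single values_list() query yields (user_id, visibility_policy)
    # pairs; dict() turns them into the desired mapping.
    rows = UserTopic.objects.filter(
        stream_id=stream_id,
        topic_name__iexact=topic_name,
    ).values_list("user_profile_id", "visibility_policy")
    return dict(rows)
```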
Co-authored-by: Kartik Srivastava <kaushiksri0908@gmail.com>
Co-authored-by: Prakhar Pratyush <prakhar841301@gmail.com>
This commit replaces 'remove_topic_mute' with
'set_user_topic_visibility_policy_in_database' and
updates it to delete the UserTopic row for any configured
visibility_policy, not just muting.
In order to support different types of topic visibility policies,
this renames 'add_topic_mute' to
'set_user_topic_visibility_policy_in_database'
and refactors it to accept a parameter 'visibility_policy'.
Create a corresponding UserTopic row for any visibility policy,
not just muting topics.
When a UserTopic row for (user_profile, stream, topic, recipient_id)
exists already, it updates the row with the new visibility_policy.
In the event of a duplicate request (i.e., new_visibility_policy ==
existing_visibility_policy), raises a JsonableError.
There is an increase in the database query count in the message-edit
code path.
Reason:
Earlier, 'add_topic_mute' used 'bulk_create', which either
creates the row or raises an IntegrityError -- 1 query.
Now, 'set_user_topic_visibility_policy' uses get_or_create
-- 2 queries in the case of creating a new row.
We can't use the previous approach, because now we also have to
handle the case of updating the visibility_policy.
Also, using bulk_* for a single row is not the right approach.
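A rough sketch of the get_or_create-based setter described above; the field names (`last_updated`, `recipient_id`) and the error message are approximations, not the exact Zulip implementation:

```python
from django.utils.timezone import now as timezone_now

from zerver.lib.exceptions import JsonableError
from zerver.models import UserTopic


def set_user_topic_visibility_policy_in_database(
    user_profile, stream, topic_name, *, visibility_policy
):
    row, created = UserTopic.objects.get_or_create(
        user_profile=user_profile,
        stream=stream,
        recipient_id=stream.recipient_id,
        topic_name=topic_name,
        defaults={
            "visibility_policy": visibility_policy,
            "last_updated": timezone_now(),
        },
    )
    if created:
        return  # Creating a new row costs the SELECT miss plus the INSERT.
    if row.visibility_policy == visibility_policy:
        # Duplicate request: the requested policy is already configured.
        raise JsonableError("Visibility policy already set")
    # Unlike the old bulk_create-based add_topic_mute, an existing row can
    # now be updated to a different visibility_policy.
    row.visibility_policy = visibility_policy
    row.last_updated = timezone_now()
    row.save(update_fields=["visibility_policy", "last_updated"])
```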
Co-authored-by: Kartik Srivastava <kaushiksri0908@gmail.com>
Co-authored-by: Prakhar Pratyush <prakhar841301@gmail.com>
This commit refactors the existing pattern (real-time usage)
used to assert 'date_muted' in tests.
A fixed value is used at the start of the test to
assert 'date_muted', replacing the timedelta or real-time usage pattern.
Replaces 'do_unmute_topic' with 'do_set_user_topic_visibility_policy'
and associated minor changes.
This change is made to align with the plan to use a single function
'do_set_user_topic_visibility_policy' to manage
user_topic - visibility_policy changes and corresponding event
generation.
This commit is a step in the direction of having a common
function to handle visibility_policy changes and event
generation instead of separate functions for each
visibility policy.
In order to support different types of topic visibility policies,
this renames 'do_topic_mute' to 'do_set_user_topic_visibility_policy'
and refactors it to accept a parameter 'visibility_policy'.
The "add_topic_mute" and "remove_topic_mute" library functions
shouldn't be called directly from tests.
Tests should instead call "do_mute_topic" and "do_unmute_topic".
The reason is that library functions are meant to be internal interfaces
for just changing the database, and shouldn't generally be
called elsewhere.
Creates `MutableJsonResponse` as a subclass of Django's `HttpResponse`
that we can modify for ignored parameters in the response content.
Updates responses to include `ignored_parameters_unsupported` in
the response data through `has_request_variables`. Creates unit
test for this implementation in `test_decorators.py`.
The `method` parameter processed in `rest_dispatch` is not in the
`REQ` framework, so for any tests that pass that parameter, assert
for the ignored parameter with a comment.
Updates OpenAPI documentation for `ignored_parameters_unsupported`
being returned in the JSON success response for all endpoints.
Adds detailed documentation in the error handling article, and
links to that page in relevant locations throughout the API docs.
For the majority of endpoints, the documentation does not include
the array in any examples of return values, and instead links to
the error handling page. The exceptions are the three endpoints
that had previously supported this return value. The changes note
and example for these endpoints are also used in the error
handling page.
Adds an `is_webhook_view` boolean field to the RequestNotes class so
that (when implemented) the `ignored_parameters_unsupported` feature
is not applied to webhooks.
In commit 8181ec4b56, we removed `realm_str` as a parameter
for `send_message_backend`. This removes a missed test that still
included this as a parameter for that endpoint/function.
Actions like deleting realms may leave unreferenced uploads in the
attachment storage backend.
Fix these by walking the complete contents of the attachment storage
backend, and removing files which are no longer present in the
database. This may take quite some time, as it is necessarily O(n) in
the number of files uploaded to the system.
Updates the Asana documentation, which was a detailed version
of the Zapier documentation with screenshots specifically for
Asana, to instead start with the basic incoming webhook steps
and then point to the general Zapier documentation to complete
the integration.
This will be easier to maintain moving forward in the short
term as ideally we'll migrate to a system that documents all
of the integrations with Zulip that are available via Zapier.
Also, updates the current Zapier documentation to mention
Asana as one of the apps that can be integrated with Zulip.
This commit renames the reset_emails_in_zulip_realm function to
reset_email_visibility_to_everyone_in_zulip_realm, which makes
it clearer what the function actually does.
This commit also adds a comment explaining what this function
does.
The initial Welcome bot message has an extra section if the user is
joining a demo organization, but the link in that section was not
being formatted correctly. Fixes the formatting so that the link
works.
This already became useless in 6e11754642,
as detailed in the API changelog entry here. At this point, we should
eliminate this param and the weird code around it.
This commit also deletes the associated tests added in
6e11754642, since with realm_str removed,
they make no sense anymore (and actually fail with an OpenAPI error due
to using params not used in the API). Hypothetically they could be
translated to use the subdomain= kwarg, but that also doesn't make
sense, since at that point they'd be just testing the case of a user
making an API request on a different subdomain than their current one
and that's just redundant and already tested generally in
test_decorators.
This leftover variable, as a result of older changes, was just always
set to None. That was fine, because when realm=None reaches
check_message further down the codepath, it just infers from
sender.realm. We want to stop passing None like that though, so let's
just set this to user_profile.realm.
Updates the text and title used on the password reset done page
to work both for situations where the user is resetting a forgotten
password and for situations where the user is setting a password
for the first time (e.g. SSO login, demo organizations).
This is the behaviour inherited from Django[^1]. While setting the
password to empty (`email_password = `) in
`/etc/zulip/zulip-secrets.conf` also would suffice, it's unclear what
the user would have been putting into `EMAIL_HOST_USER` in that
context.
Because we previously did not warn when `email_password` was not
present in `zulip-secrets.conf`, having the error message clarify the
correct configuration for disabling SMTP auth is important.
Fixes: #23938.
[^1]: https://docs.djangoproject.com/en/4.1/ref/settings/#std-setting-EMAIL_HOST_USER
`./manage.py import` does not take a tarball; it takes a directory.
Making a separate tarball is a waste of CPU time and disk, as it is
never used.
This was included in the commit of the initial Slack conversion code
in 5b37c5562b and propagated from there into every conversion tool.
Remove the unnecessary tarball creation.
c7d0192755 added the unique constraint on
`user_profile_id,message_id,reaction_type,emoji_code`, but left the
existing constraint on `user_profile_id,message_id,emoji_name`. As
explained in the comment added in 3cd543ee98, `emoji_name` cannot be
trusted to be unique, as it is possible to have a Unicode emoji
reaction and a custom emoji with the same name on a message.
Remove the overly-constraining unique index, now that c7d0192755 has
provided the correct one.
The view that handled `PATCH user_groups/<int:user_group_id>` required
both the name and description parameters to be passed. Due to this,
clients had to pass values for both of these parameters even if only
one of them was changed.
To resolve this, the name and description parameters to
`PATCH user_groups/<int:user_group_id>` are now optional.
We now allow the user to set email_address_visibility during
signup; it overrides the realm-level default and also overrides
the setting if the user imports settings from an existing account.
We do not show UI to set email_address_visibility during realm
creation.
Fixes #24310.
This commit adds backend code to set email_address_visibility when
registering a new user. The realm-level default and the value from the
source profile get overridden by the value the user selected during
signup.
This lets us simplify the long-ish ‘../../static/js’ paths, and will
remove the need for the ‘zrequire’ wrapper.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Ever since we started bundling the app with webpack, there’s been less
and less overlap between our ‘static’ directory (files belonging to
the frontend app) and Django’s interpretation of the ‘static’
directory (files served directly to the web).
Split the app out to its own ‘web’ directory outside of ‘static’, and
remove all the custom collectstatic --ignore rules. This makes it
much clearer what’s actually being served to the web, and what’s being
bundled by webpack. It also shrinks the release tarball by 3%.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This is quite a bit faster:
```
%timeit calendar.timegm(now.timetuple())
2.91 µs ± 361 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
%timeit int(now.timestamp())
539 ns ± 27 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
```
This is particularly important for the presence endpoint, which is a
tight loop of serializing datetimes.
As written, the QOS parameters are (re)set every time ensure_queue is
called, which is every time a message is enqueued. This is wasteful --
particularly since QOS parameters only apply to consumers, and setting
them takes an RTT to the server.
Switch to only setting the QOS once, when a connection
is (re)established. In profiling, this reduces the time to call
`queue_json_publish("noop", {})` from 878µs to 150µs.
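A hedged sketch of the pattern using pika; the class and method names are illustrative of the idea, not Zulip's actual queue client:

```python
import pika


class QueueClient:
    def __init__(self, prefetch_count: int = 100) -> None:
        self.prefetch_count = prefetch_count
        self.channel = None
        self._connect()

    def _connect(self) -> None:
        # (Re)establish the connection, and set QOS exactly once here,
        # instead of on every ensure_queue()/publish call.
        connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        self.channel = connection.channel()
        self.channel.basic_qos(prefetch_count=self.prefetch_count)

    def ensure_queue(self, queue_name: str) -> None:
        # No basic_qos here -- declaring the queue no longer costs the
        # extra round-trip to the RabbitMQ server on every publish.
        self.channel.queue_declare(queue=queue_name, durable=True)

    def publish(self, queue_name: str, body: bytes) -> None:
        self.ensure_queue(queue_name)
        self.channel.basic_publish(exchange="", routing_key=queue_name, body=body)
```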
In the case where a stream existed but had no subscribers, the error
message sent to the owner always used `stream_name`, which
may have been None.
Switch to using `stream.name` rather than `stream_name` for this case.
This code is called in the hot path when Tornado is processing events.
As such, making this code performant is important. Profiling shows
that a significant portion of the time is spent calling asdict() to
serialize the UserMessageNotificationsData dataclass. In this case
`asdict` does several steps which we do not need, such as attempting
to recurse into its fields, and deepcopy'ing the values of the fields.
In our use case, these add a notable amount of overhead:
```py3
from zerver.tornado.event_queue import UserMessageNotificationsData
from dataclasses import asdict
from timeit import timeit
o = UserMessageNotificationsData(1, False, False, False, False, False, False, False, False, False, False, False)
%timeit asdict(o)
%timeit {**vars(o)}
```
Replace the `asdict` call with a direct access of the fields. We
perform a shallow copy because we do need to modify the resulting
fields.
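For illustration, the replacement looks roughly like this; the dataclass fields shown are a small subset of the real ones, for brevity:

```python
from dataclasses import dataclass


@dataclass
class UserMessageNotificationsData:
    user_id: int
    online_push_enabled: bool
    pm_email_notify: bool
    pm_push_notify: bool


def to_dict(user_data: UserMessageNotificationsData) -> dict:
    # A shallow copy of the dataclass fields: unlike dataclasses.asdict(),
    # this neither recurses into fields nor deepcopies their values, yet
    # still returns a fresh dict we are free to mutate afterwards.
    return {**vars(user_data)}
```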
This commit adds a migration to fix the extra_data field
of RealmAuditLog objects created on changing the
can_remove_subscribers_group setting, adding a "property"
field, since the same event type will now be used for
other group-based stream settings that will be added
in the future.
We add a stream_permission_group_settings object, which is
similar to the property_types framework used for realm settings.
This commit also adds a GroupPermissionSetting dataclass for
defining settings inside stream_permission_group_settings.
We add a "do_change_stream_group_based_setting" function which
is called in a loop to update all the group-based stream settings,
and it is now used to update the 'can_remove_subscribers_group'
setting instead of "do_change_can_remove_subscribers_group".
We also change the variable name for the event_type field of
RealmAuditLog objects to STREAM_GROUP_BASED_SETTING_CHANGED,
since this will be used for all group-based stream settings.
A 'property' field is also added to the extra_data field to identify
the setting for which the RealmAuditLog object was created.
We will add a migration in further commits which will add the
property field to existing RealmAuditLog objects created for
changing the can_remove_subscribers_group setting.
This old 300s value was meaningfully used in 2 places:
1. In the do_change_user_settings presence_enabled codepath when turning
a user invisible. The exact value doesn't matter there; 140s works just
as well, since the point is to make clients see this user as offline,
and 140s is the threshold used by clients (see the presence.js constant).
2. For calculating whether to set "offline" "status" in
result["presence"]["aggregated"] in get_presence_backend. It's fine
for this to become 140s, since clients shouldn't be looking at the
status value anymore anyway and just do their calculation based on
the timestamps.
This makes use of the new case insensitive UNIQUE index added in the
earlier commit. With that index present, we can now rely solely on the
database to correctly identify duplicates and throw integrity errors as
required.
This will allow us to rely on the database to detect duplicate
`UserTopic`s (with the same `topic_name` with different cases)
and thus correctly throw IntegrityErrors when expected.
This is also important from a correctness point of view, since as
of now, when checking if topic is muted or requesting the backend for
muting a topic, the frontend does not check for case insensitivity.
There might exist duplicate UserTopics (in a case insensitive sense)
which need to be removed before creating the new index.
The migration was tested manually using `./manage.py shell`.
In 141b0c4, we added code to handle races caused by duplicate muting
requests. That code can also handle the non-race condition, so we don't
require the first check.
Removes the initial check in `_internal_prep_message` of the length
of the message content because the `check_message` in the try block
will call `normalize_body` on the message content string, which
does a more robust check of the message content (empty string, null
bytes, length). If the message content length exceeds the value of
`settings.MAX_MESSAGE_LENGTH`, then it is truncated based on that
value. Updates associated backend test for these changes.
The removed length check would truncate the message content with a
hard coded value instead of using the value for
`settings.MAX_MESSAGE_LENGTH`.
Also, removes an extraneous comment about removing null bytes. If
there are null bytes in the message content, then `normalize_body`
will raise an error.
Note that the previous check had intentionally reduced any message over
the 10000 character limit to 3900 characters, with the code in
question dating to 2012's 100df7e349.
The 3900 character truncating rule was implemented for incoming emails
with the email gateway, and predated other features to help with
overly long messages (better stripping of email footers via Talon,
introduced in f1f48f305e, and
condensing, introduced in c92d664b44).
While we could preserve that logic if desired, it likely is no longer
a necessary or useful variation from our usual truncation rules.
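A simplified sketch of the kind of truncation `normalize_body` performs; the exact marker text and error handling in Zulip may differ:

```python
MAX_MESSAGE_LENGTH = 10000  # stand-in for settings.MAX_MESSAGE_LENGTH
TRUNCATION_MARKER = "\n[message truncated]"


def normalize_body(content: str) -> str:
    if len(content.strip()) == 0:
        raise ValueError("Message must not be empty")
    if "\x00" in content:
        raise ValueError("Message must not contain null bytes")
    if len(content) > MAX_MESSAGE_LENGTH:
        # Truncate to the configured limit, leaving room for the marker.
        content = content[: MAX_MESSAGE_LENGTH - len(TRUNCATION_MARKER)] + TRUNCATION_MARKER
    return content
```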
Updates the descriptions of content parameters (optional and
required) to note that the maximum size of the message content
should be based on the `max_message_length` value returned by
the register endpoint.
Previously these descriptions had a hardcoded value of 10000
bytes as the maximum message size.
Also, updates the description of `max_message_length` to clarify
that the value represents Unicode code points.
The password parameter being passed in the `_do_test` helper
function for `TestAuthenticatedJsonPostViewDecorator` tests was
being ignored, as the user needs to be logged in. Removes the
parameter from the helper function and updates the success test
to use `assert_json_success` instead of just checking the status
code.
Also adds a test case for when a user is not logged in to confirm
that it returns an UnauthorizedError.
This reverts commit 851d68e0fc.
That commit widened how long the transaction is open, which made it
much more likely that after the user was created in the transaction,
and the memcached caches were flushed, some other request will fill
the `get_realm_user_dicts` cache with data which did not include the
new user (because it had not been committed yet).
If a user creation request lost this race, the user would, upon first
request to `/`, get a blank page and a Javascript error:
Unknown user_id in get_by_user_id: 12345
...where 12345 was their own user-id. This error would persist until
the cache expired (in 7 days) or something else expunged it.
Reverting this does not prevent the race, as the post_save hook's call
to flush_user_profile is still in a transaction (and has been since
168f241ff0), and thus leaves the potential race window open.
However, it much shortens the potential window of opportunity, and is
a reasonable short-term stopgap.
The Client.name field is only 30 characters long, but there is no
limit to the length of parsed User-Agent value which we may attempt to
store in it. This can cause requests with long user-agents to 500
when the creation of the Client row fails.
Truncate the name at 30 characters, both for the cache key and when
passing `name` to `get_or_create`.
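A sketch of that truncation, assuming a Django `Client` model with a 30-character `name` column and a simple in-process cache; this is an illustration of the pattern, not the exact Zulip code:

```python
from zerver.models import Client  # assumes Zulip's Client model (name: CharField(max_length=30))

client_cache: dict = {}


def get_client(name: str) -> Client:
    # Truncate before using the value either as the cache key or in the
    # get_or_create() call, so an overlong parsed User-Agent can no longer
    # overflow the 30-character column and 500 the request.
    cache_name = name[:30]
    if cache_name not in client_cache:
        client_cache[cache_name], _created = Client.objects.get_or_create(name=cache_name)
    return client_cache[cache_name]
```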
This will allow us to re-use this logic later, when we add support for
re-checking notification settings just before sending email/push
notifications to the user.
Also, since this is essentially part of the notifiability logic,
this better belongs to `notification_data.py` and this change will
hopefully reduce the reading complexity of the message-send codepath.
This commit updates the code to use the user-level email_address_visibility
setting instead of the realm-level one to set or update the value of the
UserProfile.email field and to send emails to clients.
Major changes are -
- UserProfile.email field is set while creating the user according to
RealmUserDefault.email_address_visibility.
- UserProfile.email field is updated according to change in the setting.
- 'email_address_visibility' is added to person objects in user add event
and in avatar change event.
- client_gravatar can be different for different users when computing
avatar_url for messages and user objects since email available to clients
is dependent on user-level setting.
- For bots, email_address_visibility is set to EVERYONE while creating
them irrespective of realm-default value.
- Test changes are basically setting user-level setting instead of realm
setting and modifying the checks accordingly.
Previously, user objects contained the delivery_email field
only when the user had access to the real email. Also, delivery_email
was not present if the visibility setting was set to "everyone",
as the email field was itself set to the real email.
This commit changes the code to always pass the "delivery_email" field
in user objects, with its value being None if the requesting user
does not have access to the real email, and the real email otherwise.
The "delivery_email" field value is None for logged-out users.
For bots, the "delivery_email" is always set to real email
irrespective of email_address_visibility setting.
Also, since the user has access to the real email if visibility is set
to "everyone", the "delivery_email" field is passed in that case
too.
There is no change in the email field; it is the same as before.
This commit also adds code to send an event updating the delivery_email
field to all the users whose access to emails changes when the
email_address_visibility setting changes, and also changes the code to
send an event on changing delivery_email to the users who have access
to the email.
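A rough sketch of how the resulting delivery_email value could be computed per user row; the constant, parameters, and helper name are illustrative, not the actual implementation:

```python
from typing import Optional

# Illustrative stand-in for Zulip's actual visibility constant.
EMAIL_ADDRESS_VISIBILITY_EVERYONE = 1


def delivery_email_for_user_row(
    *,
    is_bot: bool,
    real_email: str,
    email_address_visibility: int,
    acting_user_has_access: bool,
) -> Optional[str]:
    # Bots always expose their real email, irrespective of the setting.
    if is_bot:
        return real_email
    # With "everyone" visibility the real email is accessible to all.
    if email_address_visibility == EMAIL_ADDRESS_VISIBILITY_EVERYONE:
        return real_email
    # Otherwise the field is always present, but None unless the acting
    # user has access (e.g. an admin under admins-only visibility).
    return real_email if acting_user_has_access else None
```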
This is helpful for debugging -- generally these tasks are in a worker
queue because they take a long time to run, so knowing what long task
is about to start before it does, rather than just after, is useful.
This commit adds time restriction on moving messages between streams
using the move_messages_between_streams_limit_seconds setting in the
backend. There is no time limit for admins and moderators.
We now use the newly added move_messages_within_stream_limit_seconds
setting to check for how long the user can edit the topic replacing
the previously used 3-day limit. As it was previously, there is no
time limit for admins and moderators.
This commit renames parse_message_content_edit_or_delete_limit
to parse_message_time_limit_setting and also renames
MESSAGE_CONTENT_EDIT_OR_DELETE_LIMIT_SPECIAL_VALUES_MAP to
MESSAGE_TIME_LIMIT_SETTING_SPECIAL_VALUES_MAP.
We make this change since this function and object will also be
used for the message move limit, and it makes sense to have a more
generic name.
This commit extracts a function to parse message time limit type settings
and to set it if the new setting value is None.
This function is currently used for message_content_edit_limit_seconds and
message_content_delete_limit_seconds settings and will be used for
message_move_limit_seconds setting to be added in further commits.
In Zulip, message topics are case-insensitive but case-preserving.
The `get_context_for_message` function erroneously did a
case-sensitive search, and thus only messages whose topic matched
exactly were pulled in as context.
Make the missed-message pipeline aware that message topics are not
case-sensitive. This means that, when collapsing adjacent messages,
we merge messages with topic headers which are "different"; create a
separate explicit "grouping" to know which to collapse.
Similar to the previous commit, Django was responsible for setting the
Content-Disposition based on the filename, whereas the Content-Type
was set by nginx based on the filename. This difference is not
exploitable, as even if they somehow disagreed with Django's expected
Content-Type, nginx will only ever respond with Content-Types found in
`uploads.types` -- none of which are unsafe for user-supplied content.
However, for consistency, have Django provide both Content-Type and
Content-Disposition headers.
The Content-Type of user-provided uploads was provided by the browser
at initial upload time, and stored in S3; however, 04cf68b45e
switched to determining the Content-Disposition merely from the
filename. This makes uploads vulnerable to a stored XSS, wherein a
file uploaded with a content-type of `text/html` and an extension of
`.png` would be served to browsers as `Content-Disposition: inline`,
which is unsafe.
The `Content-Security-Policy` headers in the previous commit mitigate
this, but only for browsers which support them.
Revert parts of 04cf68b45e, specifically by allowing S3 to provide
the Content-Disposition header, and using the
`ResponseContentDisposition` argument when necessary to override it to
`attachment`. Because we expect S3 responses to vary based on this
argument, we include it in the cache key; since the query parameter
has dashes in it, we can't use the helper `$arg_` variables, and
must parse it from the query parameters manually.
Adding the disposition may decrease the cache hit rate somewhat, but
downloads are infrequent enough that it is unlikely to have a
noticeable effect. We take care to not adjust the cache key for
requests which do not specify the disposition.
In nginx, `location` blocks operate on the _decoded_ URI[^1]:
> The matching is performed against a normalized URI, after decoding
> the text encoded in the “%XX” form
This means that if a user-uploaded file contains characters that are
not URI-safe, the browser encodes them in UTF-8 and then URI-encodes
them -- and nginx decodes them and reassembles the original character
before running the `location ~ ^/...` match. This means that the `$2`
_is not URI-encoded_ and _may contain non-ASCII characters_.
When `proxy_pass` is passed a value containing one or more variables,
it does no encoding on that expanded value, assuming that the bytes
are exactly as they should be passed to the upstream. This means that
directly calling `proxy_pass https://$1/$2` would result in sending
high-bit characters to the S3 upstream, which would rightly balk.
However, a longstanding bug in nginx's `set` directive[^2] means that
the following line:
```nginx
set $download_url https://$1/$2;
```
...results in nginx accidentally URI-encoding $1 and $2 when they are
inserted, resulting in a `$download_url` which is suitable to pass to
`proxy_pass`. This bug is only present with numeric capture
variables, not named captures; this is particularly relevant because
numeric captures are easily overridden by additional regexes
elsewhere, as subsequent commits will add.
Fixing this is complicated; nginx does not supply any way to escape
values[^3], besides a third-party module[^4] which is an undue
complication to begin using. The only variable which nginx exposes
which is _not_ un-escaped already is `$request_uri`, which contains
the very original URL sent by the browser -- and thus can't respect
any work done in Django to generate the `X-Accel-Redirect` (e.g., for
`/user_uploads/temporary/` URLs). We also cannot pass these URLs to
nginx via query-parameters, since `$arg_foo` values are not
URI-decoded by nginx, there is no function to do so[^3], and the
values must be URI-encoded because they themselves are URLs with query
parameters.
Extra-URI-encode the path that we pass to the `X-Accel-Redirect`
location, for S3 redirects. We rely on the `location` block
un-escaping that layer, leaving `$s3_hostname` and `$s3_path` as they
were intended in Django.
This works around the nginx bug, with no behaviour change.
[^1]: http://nginx.org/en/docs/http/ngx_http_core_module.html#location
[^2]: https://trac.nginx.org/nginx/ticket/348
[^3]: https://trac.nginx.org/nginx/ticket/52
[^4]: https://github.com/openresty/set-misc-nginx-module#set_escape_uri
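On the Django side, the extra encoding layer amounts to roughly the following sketch; the `/internal/s3/` location name and helper are illustrative, not the actual view code:

```python
from urllib.parse import quote

from django.http import HttpResponse


def redirect_to_s3_via_nginx(s3_hostname: str, s3_path: str) -> HttpResponse:
    response = HttpResponse()
    # Encode one extra layer: nginx's `location` matching decodes exactly
    # one layer of %XX escapes, so the hostname and path reach the
    # proxy_pass step exactly as Django intended, despite the `set`
    # directive's accidental re-encoding of numeric captures.
    response["X-Accel-Redirect"] = (
        "/internal/s3/" + quote(s3_hostname, safe="") + "/" + quote(s3_path, safe="")
    )
    return response
```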
Fixes the documentation generated from the Markdown macros
{settings_tab|your-bots} and {settings_tab|bot-list-admin} to
match the text labels in the Zulip UI and improves the text of
relative links to explicitly say if we are referring to the Bots
tab of the Personal or Organization settings menu.
Follow-up to #23256.
This code needs to be more flexible to improve the documentation
of items in the Personal and Organization settings menu when
using the `{settings_tab|[setting-name]}` Markdown macro that
provides relative links or step-by-step instructions.
This commit moves the Markdown formatting code to a new function that
receives tuples from `link_mapping` as input. This is a preliminary
step to offer more flexibility than the current approach.
Rename 'muting.py' to 'user_mutes.py' because it now contains
only user-mute related functions.
Includes minor refactoring needed after renaming the file.
This commit moves topic related stuff i.e. topic muting functions
to a separate file 'views/user_topics.py'.
'views/muting.py' contains functions related to user-mutes only.
This will help us track if users actually clicked on the
email confirmation link while creating a new organization.
Replaced all the `render` calls in `accounts_register` with
`TemplateResponse` to comply with the `add_google_analytics`
decorator.
When 'resolve|unresolve' and 'move stream' actions occur in
the same API call, the 'This topic was marked as resolved|unresolved'
notification is not sent.
Both the 'topic moved' and 'topic resolved' notifications should be generated.
This commit updates the logic of when and where to send the
'topic resolve|unresolve' notification. Unlike the previous logic, the
notification may be sent even in the case where 'new_stream' is not None.
In general, the 'topic resolved|unresolved' notification is sent to
'stream_being_edited'. In this particular case ('new_stream' is not None),
the notification is sent to the 'new_stream' after the check.
A test case is included.
Fixes: #22973
This adds a new endpoint /jwt/fetch_api_key that accepts a JWT and can
be used to fetch API keys for a certain user. The target realm is
inferred from the request and the user email is part of the JWT.
A JSON containing a user API key, delivery email and (optionally)
raw user profile data is returned in response.
The profile data in the response is optional and can be retrieved by
setting the POST param "include_profile" to "true" (default=false).
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
This will be useful to re-use when implementing an endpoint for
obtaining the API key by submitting a JWT in the next commits.
It's not a pure refactor, as it requires some tweaks to remote_user_jwt
behavior:
1. The expected format of the request is changed a bit. It used to
expect "user" and "realm" keys, from which the intended email was
just generated by joining with @. Now it just expects "email"
straight-up. The prior design was a bit strange to begin with, so this
might be an improvement actually.
2. In the case of the codepath of new user signup, this will no longer
pre-populate the Full Name in the registration form with the value
from the "user" key. This should be a very minor lost of
functionality, because the "user" value was not going to be a proper
Full Name anyway. This functionality can be restored in a future
commit if desired.
This is an API change, but this endpoint is nearly unused as far as
we're aware.
- Updates `.prettierignore` for the new directory.
- Updates any reference to the API documentation directory for
markdown files to be `api_docs/` instead of `zerver/api/`.
- Removes a reference link from `docs/documentation/api.md` that
hasn't referenced anything in the text since commit 0542c60.
- Update rendering of API documentation for new directory.
Moves the check for calling the `api-doc-template.md` directly,
so that we don't return a 500 error from the server, to happen
earlier with other checks for returning a 404 / missing page.
Also adds a specific test to `zerver/tests/test_urls` for this
template.
Prep commit for moving API documentation directory to be a top
level directory.
- Clean up the language.
- Add a prominent "Go to organization" button.
- Link to guides for new users and admins.
- Fix duplication bug in text email version.
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
Black 23 enforces some slightly more specific rules about empty line
counts and redundant parenthesis removal, but the result is still
compatible with Black 22.
(This does not actually upgrade our Python environment to Black 23
yet.)
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Removes `base_path` argument when making the markdown extension for
parameters in documentation for API endpoints.
This seems to have been originally included for API parameters that
were documented in JSON files, which is no longer in use. Now all
API endpoints in the documentation are documented in
`zerver/openapi/zulip.yaml`.
Removes `base_path` argument when making the markdown extension for
return values in documentation for API endpoints.
This seems to have been a copy and paste error in commit d2ee99a2fd
when `zerver/lib/markdown/api_return_values_generator.py` was created.
Until now, custom emojis with "periods" in their name were allowed, even though
they don't really fit the pattern of how we name them, and in fact the Markdown
processor would not render such custom emoji. Fix this by just disallowing the
character.
Also update the error strings accordingly.
Note that this does not include a migration to eliminate any existing custom emoji with this
character in their name.
Fixes #24066.
This commit refactors the template code for the source-realm
select element to have the same structure as other inputs
and select elements on the page. Thus, this change also
makes the styling of the source-realm select element consistent
with other select elements on the page.
We change the do_create_user function to use the transaction.atomic
decorator instead of a with block. Due to this change, all
send_event calls are made inside transaction.on_commit.
Some other changes -
- Remove the transaction.atomic decorator from send_initial_realm_messages
since it is now called inside a transaction.
- Made changes in tests which test message events and notifications
to make sure on_commit callbacks are executed.
This commit changes do_reactivate_user such that the complete function
is called inside an atomic transaction and events are sent after the
transaction is committed using the on_commit helper. This is a prep commit
for unsubscribing the bots of inaccessible private streams when reactivating
them.
These files are not Jinja2 templates, so there's no reason that they needed
to be inside `templates/zerver`. Moving them to the top level reflects their
importance and also makes it feel nicer to work on editing the help center content,
without it being unnecessarily buried deep in the codebase.
The content of a message is truncated to `MAX_MESSAGE_LENGTH`, which
is 1000 characters. Since the email gateway places attachments at the
very end of the extracted body, that means that they are the first
thing to get truncated off.
That is, if an incoming email message contains 1000 `a`s and an image
attachment, the link that attaches the attachment to the message will
get truncated off, leaving it dangling in the database.
Truncate the message body content separately from the attachment links
which are included at the end of the body.
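A sketch of the split truncation; the helper name, parameterization, and truncation marker are illustrative stand-ins:

```python
from typing import List

TRUNCATION_MARKER = "\n[message truncated]"


def construct_zulip_body(body: str, attachment_links: List[str], max_length: int) -> str:
    # Truncate only the prose part of the email, reserving room for the
    # attachment links appended at the end, so that those links can never
    # be cut off and left dangling in the database.
    links = "\n".join(attachment_links)
    budget = max_length - len(links) - len(TRUNCATION_MARKER) - 1
    if len(body) > budget:
        body = body[:budget] + TRUNCATION_MARKER
    if links:
        return body + "\n" + links
    return body
```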
Changes the check for whether the documentation page is a policy
center page to be the `self.policies_view` boolean instead of the
`path_template` value as it reads much more clearly.
Moves a comment in the code to be contextually relevant.
Because of the overlap with the `DocumentationArticle` dataclass
field `article_path`, we rename the `article_path` variable used
in `MarkdownDirectoryView.get_context_data` for the absolute path
to be `article_absolute_path`.
In commit bbecd41, we added "not_index_page" to the context for
some documentation articles, but use of that context key/value was
removed when the help documentation was removed in commit 1cf7ee9.
Changes `not_index_page` to be a boolean value that's used to set
the page title, but is not then passed on as a context key/value.
Also removes an irrelevant comment about disabling "Back to home"
on the homepage.
Since we want to use `accounts/new/send_confirm` to know how many
users actually register after visiting the register page, we
added it to Google Tag Manager, but GTM tracks every user
registration separately due to <email> in the URL,
making it harder to track.
To solve this, we want to pass <email> as a GET parameter which
can be easily filtered inside GTM using a RegEx and all the
registrations can be tracked as one.
A missed-message email notification, where the message is the welcome
message sent by the welcome bot on account creation, gets sent when
the user somehow does not focus the browser tab during account creation.
No missed message email or push notifications should be sent for the
messages generated by the welcome bot.
'internal_send_private_message' accepts a parameter
'disable_external_notifications', which is set to 'True' when the sender
is the welcome bot.
A check is introduced in `trivially_should_not_notify`, not to notify
if `disable_external_notifications` is true.
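A hedged sketch of the shape of that check; the other parameter shown is an illustrative stand-in for the existing trivial checks:

```python
def trivially_should_not_notify(*, disable_external_notifications: bool,
                                user_is_sender: bool) -> bool:
    # Checks independent of the recipient's notification settings, shared
    # by the push and email trigger calculations.
    if disable_external_notifications:
        # e.g. messages sent by the welcome bot during account creation.
        return True
    if user_is_sender:
        # Never notify a user about their own message.
        return True
    return False
```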
TestCases are updated to include the `disable_external_notifications`
check in the early (False) return patterns of `is_push_notifiable` and
`is_email_notifiable`.
One query reduced for both `test_create_user_with_multiple_streams`
and `test_register`.
Reason: When the welcome bot sends a message after user creation,
`do_send_messages` calls `get_active_presence_idle_user_ids`;
`user_ids` in `get_active_presence_idle_user_ids` remains empty if
`disable_external_notifications` is true because `is_notifiable` returns
false.
`get_active_presence_idle_user_ids` calls `filter_presence_idle_user_ids`
and since the `user_ids` is empty, the query inside the function doesn't
get executed.
MissedMessageHookTest updated.
Fixes: #22884
This commit makes all the parameters after 'content' in
'internal_send_*', 'internal_prep_*' and '_internal_prep_*'
mandatory keyword arguments to increase code readability.
A separate function named `trivially_should_not_notify` is added which
extracts the common checks from `get_push_notification_trigger` and
`get_email_notification_trigger`, which are independent of the user's
notification settings and thus don't depend on what type of notification
(email/push) it is.
608c787c52 fixed a bug where messages sent by the email gateway "as"
a user failed to properly attribute ownership of their attachments,
leaving the attachments orphaned and thus with nobody with permissions
to view them.
These orphaned attachments only remain longer than a few weeks if the
`delete_unclaimed_attachments` script has not been run reliably.
Since there is currently no shipped cron job for this, that is most
likely all deployments.
Add a migration to find such orphaned attachments, and re-attach them
to their original message. While theoretically the attachments
could have been later referenced in other messages -- which would be
very difficult to find and determine if they had access to the
attachment -- we only fix the original message.
In order to make this somewhat performant, we assume that the Message
rows associated with an Attachment made by the email gateway happened
within 5 minutes, since they must have been made during one HTTP
request.
This is complicated by the message potentially having been deleted; in
this case, the Attachment is moved into ArchivedAttachment, so it can
relate to the ArchivedMessage. The many-to-many
`zerver_archivedattachment_messages` relationship table cannot use its
own `id` sequence for the value, since the `id` is re-used when the
row is inserted into the `zerver_attachment_messages` table -- we
instead consume a value from the `id` sequence of the
`zerver_attachment_messages` table.
Documents the link to the bot's user card from the bot's name in
Organization settings > Bots, and the information in the bot's user card.
Fixes part of #23970.
When the email mirror gateway is sending messages "as" a user (as
triggered by having access to the missed-message email address),
attachments were still created as the Email Gateway bot. Since the
sender (the end-user) was not the owner of those attachments (the
gateway bot), nor were they referenced yet anywhere, this resulted in
the attachments being "orphaned" and not allowed to be accessed by
anyone -- despite the attachment links being embedded in the message.
This was accompanied by the error:
```
WARN [] User 12345 tried to share upload 123/3LkSA4OcoG6OpAknS2I0SFAQ/example.jpf in message 123456, but lacks permission
INFO [zerver.lib.email_mirror] Successfully processed email from user 12345 to example-stream
```
We solve this by creating attachment objects as the users the message
will be sent from.
The intention was to continue the outer ‘for’ loop, not the inner one
(but Python doesn’t have labelled ‘continue’).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The max inline preview limit was previously increased to 10 by #20789.
However, as issue #23624 shows, it's still causing confusion for users
when they include more than 10 links.
Bump this limit up to 24, which is a multiple of the 4-images-per-line
preview logic.
Overrides the default context `allow_search_engine_indexing` to
always be `False` for `templates/corporate/attribution.html` so
that it does not appear in Google / search engine indexes.
Updates test of documentation pages in `test_docs.py` to have an
option for corporate pages to set this value in the template and
verifies that the meta tag for robots noindex, nofollow is
always in the response.
For descriptive endpoints, such as `/register`, that might raise
Schema Validation errors via `validate_against_openapi_schema`,
omits the OpenAPI schema definition in the error output.
Also omits the error instance definition in the error output
when it is a jsonschema object with over 100 properties. This
means that the test instance for objects, like user settings,
will be printed in the error output, but the test instance for
the entire endpoint will not be printed to the console.
The omitted output can be thousands of lines long making it
difficult to find the initial console output that actually helps
the contributor with debugging.
Adds a section in "Documenting REST API endpoints" about
debugging and understanding these errors that is linked to
in the error console output.
Previously, we got the directory path for all documentation pages
before checking for API method and path information in the OpenAPI
documentation. Instead, we now check the `path_template` is the
API documentation view template before getting the directory path.
Also, changes the confusingly named `article_path` variable, which
overlapped with the DocumentationArticle dataclass `article_path`
field, to now be `api_documentation_path`.
Prep commit for moving the help center documentation to a top level
directory.
Accessing .realm will cause a fetch query from the database if the
attribute hasn't been fetched already earlier in the codepath. That's
completely redundant if we're just comparing realms, and we should only
access the .realm_id attribute. This seems to eliminate a query in some
codepaths, which is nice in this performance-sensitive function.
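For illustration, the change amounts to comparing IDs rather than fetching Realm rows (the helper name is made up for the example):

```python
def sent_by_same_realm(sender, recipient_user) -> bool:
    # Comparing .realm_id avoids lazily fetching a Realm row for either
    # side; accessing .realm would issue an extra query whenever the
    # related object has not already been loaded on this codepath.
    return sender.realm_id == recipient_user.realm_id
```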
Adds links to the documentation about management commands in the
API documentation for creating users, as well as the `/devtools`
documentation, the GDPR compliance article and the incoming
webhooks tutorial.
When file uploads are stored in S3, this means that Zulip serves as a
302 to S3. Because browsers do not cache redirects, this means that
no image contents can be cached -- and upon every page load or reload,
every recently-posted image must be re-fetched. This incurs extra
load on the Zulip server, as well as potentially excessive bandwidth
usage from S3, and on the client's connection.
Switch to fetching the content from S3 in nginx, and serving the
content from nginx. These have `Cache-control: private, immutable`
headers set on the response, allowing browsers to cache them locally.
Because nginx fetching from S3 can be slow, and requests for uploads
will generally be bunched around when a message containing them are
first posted, we instruct nginx to cache the contents locally. This
is safe because uploaded file contents are immutable; access control
is still mediated by Django. The nginx cache key is the URL without
query parameters, as those parameters include a time-limited signed
authentication parameter which lets nginx fetch the non-public file.
This adds a number of nginx-level configuration parameters to control
the caching which nginx performs, including the amount of in-memory
index for the cache, the maximum storage of the cache on disk, and how
long data is retained in the cache. The currently-chosen figures are
reasonable for small to medium deployments.
The most notable effect of this change is in allowing browsers to
cache uploaded image content; however, while there will be many fewer
requests, it also has an improvement on request latency. The
following tests were done with a non-AWS client in SFO, a server and
S3 storage in us-east-1, and with 100 requests after 10 requests of
warm-up (to fill the nginx cache). The mean and standard deviation
are shown.
| | Redirect to S3 | Caching proxy, hot | Caching proxy, cold |
| ----------------- | ------------------- | ------------------- | ------------------- |
| Time in Django | 263.0 ms ± 28.3 ms | 258.0 ms ± 12.3 ms | 258.0 ms ± 12.3 ms |
| Small file (842b) | 586.1 ms ± 21.1 ms | 266.1 ms ± 67.4 ms | 288.6 ms ± 17.7 ms |
| Large file (660k) | 959.6 ms ± 137.9 ms | 609.5 ms ± 13.0 ms | 648.1 ms ± 43.2 ms |
The hot-cache performance is faster for both large and small files,
since it saves the client the time having to make a second request to
a separate host. This performance improvement remains at least 100ms
even if the client is on the same coast as the server.
Cold nginx caches are only slightly slower than hot caches, because
VPC access to S3 endpoints is extremely fast (assuming it is in the
same region as the host), and nginx can pool connections to S3 and
reuse them.
However, all of the 648ms taken to serve a cold-cache large file is
occupied in nginx, as opposed to the only 263ms which was spent in
nginx when using redirects to S3. This means that to overall spend
less time responding to uploaded-file requests in nginx, clients will
need to find files in their local cache, and skip making an
uploaded-file request, at least 60% of the time. Modeling shows a
reduction in the number of client requests by about 70% - 80%.
The `Content-Disposition` header logic can now also be entirely shared
with the local-file codepath, as can the `url_only` path used by
mobile clients. While we could provide the direct-to-S3 temporary
signed URL to mobile clients, we choose to provide the
served-from-Zulip signed URL, to better control caching headers on it,
and greater consistency. In doing so, we adjust the salt used for the
URL; since these URLs are only valid for 60s, the effect of this salt
change is minimal.
Moving `/user_avatars/` to being served partially through Django
removes the need for the `no_serve_uploads` nginx reconfiguring when
switching between S3 and local backends. This is important because a
subsequent commit will move S3 attachments to being served through
nginx, which would make `no_serve_uploads` entirely nonsensical of a
name.
Serve the files through Django, with an offload for the actual image
response to an internal nginx route. In development, serve the files
directly in Django.
We do _not_ mark the contents as immutable for caching purposes, since
the path for avatar images is hashed only by their user-id and a salt,
and as such are reused when a user's avatar is updated.
Importing `upload_backend` directly means that in testing it must also
be mocked where it is imported, in order to correctly test the right
backend. Since `get_avatar_url` is part of the public
`ZulipUploadBackend` API, add another helper method to call that.
The `django-sendfile2` module unfortunately only supports a single
`SENDFILE` root path -- an invariant which subsequent commits need to
break. Especially as Zulip only runs with a single webserver, and
thus sendfile backend, the functionality is simple to inline.
It is worth noting that the following headers from the initial Django
response are _preserved_, if present, and sent unmodified to the
client; all other headers are overridden by those supplied by the
internal redirect[^1]:
- Content-Type
- Content-Disposition
- Accept-Ranges
- Set-Cookie
- Cache-Control
- Expires
As such, we explicitly unset the Content-type header to allow nginx to
set it from the static file, but set Content-Disposition and
Cache-Control as we want them to be.
[^1]: https://www.nginx.com/resources/wiki/start/topics/examples/xsendfile/
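A minimal Django-side sketch of that header handling; the internal path, filename, and Cache-Control value are placeholders:

```python
from django.http import HttpResponse


def internal_nginx_redirect(internal_path: str, filename: str) -> HttpResponse:
    response = HttpResponse()
    response["X-Accel-Redirect"] = internal_path
    # Let nginx fill in Content-Type from the static file it serves...
    del response["Content-Type"]
    # ...while keeping control of the headers nginx preserves from the
    # original Django response across the internal redirect.
    response["Content-Disposition"] = f'attachment; filename="{filename}"'
    response["Cache-Control"] = "max-age=31536000, private, immutable"
    return response
```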
Enforcing a consistent `type` helps us double-check that we're not
playing fast-and-loose with any file paths for local files. As noted
in the comment, this is purely for defense-in-depth.
Passing `write_local_file` a consistent `type` requires removing the
"avatars" out of `realm_avatar_and_logo_path` -- which makes it
consistent across upload backends.
This, in turn, requires a compensatory change to zerver.lib.export, to
be explicit that the realm icons are exported from the avatars
directory. This clarity is likely an improvement.
sendfile already applied a Content-Disposition header, but the
algorithm may provide both `filename=` and `filename*=` values (which
is potentially confusing to clients) and incorrectly slash-escapes
quotes in Unicode strings.
Django provides a correct implementation, but it is only accessible to
FileResponse objects. Since the entire point is to offload the
filehandle handling, we cannot use a FileResponse.
Django 4.2 will make the function available outside of FileResponse.
Until then, extract our own Content-Disposition handling, based on
Django's.
We remove the very verbose comment added in d4360e2287, describing
Content-Disposition headers, as it does not add much.
Add more tests analogous to existing ones but for different scenarios.
These are mostly boring tests, but they are important for completeness,
since the notifiability logic underneath is subtle.
Split the one giant `test_end_to_end_missedmessage_hook` into many
smaller tests.
This allows us to not worry about resetting database state after each
test case and also allows extracting a lot of common stuff into setUp
and tearDown.
There is probably even more scope of deduplication here (for example,
the mock and the `assert_maybe_enqueue_notifications_call_args` call are
same for all test cases) but that might not be worth the added
complexity.
We also change a few
```
user_profile.<setting> = <value>
user_profile.save()
```
expressions to instead use the `do_change_user_setting` function.
For alert words, we currently don't send email/push notifications --
only desktop notifications. Thus, we don't need to consider alert words
here, since desktop notifications do not utilize the presence status
calculated at this stage.
Tested manually that alert word desktop notifications work as expected.
When we implement email/push notifications for alert words (issues #5137
and #13127), we can add new fields like
`notifications_data.alert_word_email_notify`, similar to the existing
`notifications_data.wildcard_mention_email_notify`, which will allow us
to keep the alert word notifiability check inside the dataclass, similar
to how the mentions checks are done currently. So, even when that
feature is implemented, the code which this commit removes would be
unnecessary.
Intercom sends a HEAD request to validate the webhook URL on their side,
which was not expected in the previous implementation.
This fixes the problem where we would send out a confusing error message for it.
Fixes #23912.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This is a best-effort rendering of the "fields" of Slack incoming
hooks, which Slack renders in two columns. We approximate them in a
Markdown table, with some minor in-place replacements.
Fixes #22228.
`check_text_block` transformed its input, making the object it
returned not the same object it was passed; this invalidated it for
use in `check_list`. It is also, in general, unlike all other
validators.
Make it return a TypedDict cast of its input.
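A hedged sketch of a TypedDict-returning validator in this style; the exact keys and accepted values are assumptions based on Slack's block format, not the real validator:

```python
from typing import TypedDict, cast


class TextBlock(TypedDict):
    type: str
    text: str


def check_text_block(var_name: str, val: object) -> TextBlock:
    # Validate without transforming: the returned object is the very dict
    # we were passed, merely narrowed to a TypedDict, so it stays usable
    # inside check_list like any other validator's result.
    if not isinstance(val, dict):
        raise ValueError(f"{var_name} is not a dict")
    if not isinstance(val.get("type"), str):
        raise ValueError(f"{var_name}['type'] is not a string")
    if not isinstance(val.get("text"), str):
        raise ValueError(f"{var_name}['text'] is not a string")
    return cast(TextBlock, val)
```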
If `invite_as` is passed as a number outside the range of a PostgreSQL
`SMALLINT` field, the database throws an exception. Surface this as a
request validation error instead, to allow better client-side error
handling and reduce database round-trips.
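A minimal sketch of the range check implied above; Zulip's actual validation may instead check against the specific allowed `invite_as` role values:

```python
from zerver.lib.exceptions import JsonableError  # Zulip's client-facing error class

# PostgreSQL SMALLINT bounds; values outside this range only fail once the
# INSERT reaches the database.
SMALLINT_MIN, SMALLINT_MAX = -32768, 32767


def validate_invite_as(invite_as: int) -> None:
    # Reject out-of-range values during request validation instead of
    # letting the database raise and turning it into a 500.
    if not SMALLINT_MIN <= invite_as <= SMALLINT_MAX:
        raise JsonableError("Invalid invite_as value")
```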
Updates the help center article to match the style and formatting
of "Import from Slack" and replaces existing content with its
corresponding Markdown macro.
When this code was moved from being in zerver in 21a2fd482e, it kept
the `if ZILENCER_ENABLED` blocks. Since ZILENCER and CORPORATE are
generally either both on or both off, the if statement became
mostly-unnecessary.
However, because tests cannot easily remove elements from
INSTALLED_APPS and re-determine URL resolution, we switch to checking
`if CORPORATE_ENABLED` as a guard, and leave these in-place.
The other side effect of this is that with e54ded49c4, most Zulip
deployments started to 404 requests for `/apps` instead of redirecting
them to `https://zulip.com/apps/` since they no longer had any path
configured for `/apps`. Unfortunately, this URL is in widespread use
in the app (e.g. in links from the Welcome Bot), so we should ensure
that it does successfully redirect.
Add the `/apps` path to `zerver`, but only if not CORPORATE_ENABLED,
so the URLs do not overlap.
‘logging.warning("Naive datetime:", item)’ is an invalid call that
crashes with “TypeError: not all arguments converted during string
formatting”. I take that to mean this check has not been tripped in
the six years it’s been there, and can safely be replaced with an
error.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
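For reference, a small sketch of the broken call and a working form (the
`item` value is an example; the exact shape of the new error is an
assumption):
```
import datetime
import logging

item = datetime.datetime(2022, 1, 1)  # example naive datetime (no tzinfo)

# Broken: %-style logging has no placeholder for `item`, so this call raises
# "TypeError: not all arguments converted during string formatting" instead
# of logging anything:
#     logging.warning("Naive datetime:", item)

# A working warning would pass the value as a formatting argument:
logging.warning("Naive datetime: %s", item)

# The change described above replaces the warning with an error; one
# possible form (an assumption, not the actual code) is:
raise ValueError(f"Naive datetime: {item}")
```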
Some email clients (notably, Gmail Web) support automatically threading
emails together if recipients and subjects match[1]. Manual testing
indicated that prefixing a subject with "[bracketed content]" does not
break this threading behavior, but the added checkmark in a resolved
topic's title does. Before sending an email notification, determine
whether the topic is resolved, and pass this information to the Jinja
template to properly format a threadable email subject.
Fixes: #22538
[1]: https://support.google.com/mail/answer/5900
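A rough sketch of the approach described above (the prefix constant mirrors
Zulip's resolved-topic marker; the context-building helper here is
hypothetical):
```
from typing import Any, Dict

RESOLVED_TOPIC_PREFIX = "✔ "  # the marker prepended to resolved topic names


def email_subject_context(stream_name: str, topic_name: str) -> Dict[str, Any]:
    # Detect resolution in Python and hand the Jinja template the bare topic
    # plus a flag, so the rendered subject stays identical before and after
    # resolving, which keeps Gmail-style threading intact.
    topic_resolved = topic_name.startswith(RESOLVED_TOPIC_PREFIX)
    if topic_resolved:
        topic_name = topic_name[len(RESOLVED_TOPIC_PREFIX):]
    return {
        "stream_name": stream_name,
        "topic_name": topic_name,
        "topic_resolved": topic_resolved,
    }
```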
Previously, stream names and topics (without consideration for their
resolution status) were concatenated in Python-land and passed through
to the template. To more cleanly separate concerns, and to prepare for
accounting for topic resolution status being a third, independent,
component of a subject line, instead pass stream and topic strings
independently to the Jinja template, which can format them as it sees
fit.
Additionally, migrate existing EditMessageTest to use this helper
method, with the side effect of migrating the tested flow from a
/json/messages URL to a /api/v1/messages URL.
This commit renames "can_edit_topic_of_any_message" function
in models.py to "can_move_messages_to_another_topic" and
"user_can_edit_topic_of_any_message" function in settings_data.js
to "user_can_move_messages_to_another_topic".
This change is made because topic editing permission no longer
depends on the message sender, and messages are treated the same
irrespective of whether the user who is editing the topic sent
the message or not. This also makes the naming consistent with
the label we use for this setting in the web app and how we
describe this action in the help documentation.
This commit changes the topic edit permissions to not depend on whether the
user editing the message sent the message themselves or it was sent by
someone else.
We only do backend changes in this commit and frontend changes will be done
in further commits.
Previously, we always allowed topic edits when the user themselves had
sent the message, without considering the edit_topic_policy and the 3-day
time limit. Now we treat all messages the same: editing is allowed only
according to the edit_topic_policy setting, plus the 3-day time limit
for users who are not admins or moderators.
We change the topic and stream edit permissions in the API to not depend
on the allow_message_editing setting; such edits are allowed even
if allow_message_editing is set to False, based on other settings
like edit_topic_policy and can_move_message_between_streams.
Fixes a part of #21739.
This uses the linkifier index among the list of linkifiers in the
replacement as the priority to order the replacement order for
patterns in the topic. This avoids having multiple overlapping matches
that each produce a link.
The linkifier with the lowest id will be prioritized when its pattern
overlaps with another. Linkifiers are prioritized over raw URLs.
Note that the same algorithm is used for local echoing and the
backend markdown processor.
Fixes #23715.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
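An illustrative sketch of the ordering idea (not the actual
markdown-processor code): matches are sorted by a priority derived from the
linkifier's position in the realm's list, with raw URLs given a larger
priority value, and overlapping matches are dropped.
```
from typing import List, Tuple

Match = Tuple[int, int, int, str]  # (priority, start, end, url)


def choose_topic_links(matches: List[Match]) -> List[Match]:
    # Lower priority value wins: linkifiers use their index in the realm's
    # linkifier list, raw URLs a larger sentinel value.  Overlapping matches
    # are discarded so each span of the topic produces at most one link.
    chosen: List[Match] = []
    for match in sorted(matches):  # sorts by priority first, then position
        _, start, end, _ = match
        if all(end <= s or start >= e for _, s, e, _ in chosen):
            chosen.append(match)
    return sorted(chosen, key=lambda m: m[1])  # back to left-to-right order
```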
This explicitly enforces ordering on the linkifiers. This is useful when
there are overlapping linkifier patterns that match the same text. In
our current linkifier implementation, this order affects how the
patterns are handled in the markdown processor, with the earlier ones
being prioritized.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Multiple matches of the same pattern in a topic cannot be properly
ordered using topic_name.find(match_text) and the like, since find only
locates the first occurrence of the matched text.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Moves files in `templates/zerver/help/include` that are used
specifically for API documentation pages to be in a new directory:
`templates/zerver/api/include`.
Adds a boolean parameter to `render_markdown_path` to be used
for help center documentation articles.
Also moves the test file `empty.md` to the new directory since
this is the default directory for these special include macros
that are used in documentation pages.
Moves files in `templates/zerver/help/include` that are used
specifically for integrations documentation to be in a new
directory: `templates/zerver/integrations/include`.
Adds a boolean parameter to `render_markdown_path` to be used
for integrations documentation pages.
Track `create_realm` and `new_realm_send_confirm` using
Google Analytics.
This will help us track the number of users who want to
create a new Zulip organization.
Since the removal of `CurlHttpRequest` in Zabbix 6.2, the old script for
setting up the Zabbix integration no longer works.
https://www.zabbix.com/documentation/6.2/en/manual/installation/upgrade_notes_620?hl=CurlHttpRequest#curlhttprequest-removed
This updates the documentation to use `HttpRequest` instead and keeps it
up to date with the latest Zabbix server. We raise the minimum supported
version from 5.2 to 5.4 because `HttpRequest` was introduced in 5.4.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
‘exit’ is pulled in for the interactive interpreter as a side effect
of the site module; this can be disabled with python -S and shouldn’t
be relied on.
Also, use the NoReturn type where appropriate.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
As we have seen no further cases of this in production since #23215,
increase the severity to an error, and switch from returning a
list (which is not type-safe if the function declares a QuerySet
return) to returning the QuerySet without caching.
Failing to store the result in the cache, with an error, seems
superior to raising an exception; in both cases the next request will
redo the work, but we are guaranteed a worse user experience if we 500
the request.
Ref https://github.com/zulip/zulip/pull/23215#discussion_r994186493
The previous regular expression required a `[^\w]` at the start and
end of the match. This had two unintended effects -- it meant that it
could never match at the start or end of a string, and it meant
that *adjacent* words required *two* non-word characters between them,
as the pattern matches cannot overlap.
Switch to allowing string start/end to anchor the matches, and make
the trailing `[^\w]` be a zero-width look-ahead, to allow the patterns
to overlap. Also remove the spurious `^` within the inner character
classes, which prevented `*foo^bar*` from matching. Finally, add
tests to cover the functionality, which was previously untested.
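A toy illustration of the two regex changes (the `*foo*` pattern here is just
a stand-in for the real one):
```
import re

# Old shape: a literal non-word character is consumed on both sides, so the
# pattern can never match at the very start or end of the string, and two
# adjacent matches would need two separators between them.
old = re.compile(r"[^\w]\*foo\*[^\w]")

# New shape: allow start/end of string to anchor the match, and make the
# trailing boundary a zero-width look-ahead so matches may touch.
new = re.compile(r"(?:^|[^\w])(\*foo\*)(?=$|[^\w])")

print(old.search("*foo*"))            # None -- cannot match at string edges
print(new.search("*foo*").group(1))   # '*foo*'
print(new.findall("*foo* *foo*"))     # ['*foo*', '*foo*'] -- adjacent matches work
```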
Updates the Hello World integration documentation and the section
of the related tutorial on documenting the example integration
for the currently used shared macro `create-bot-construct-url.md`.
Also, updates them to use the numbered style currently used in
the majority of the integrations documentation pages.
remove_user_from_user_group's only caller has been removed in 271333301d.
Its usage has been superseded by remove_members_from_user_group.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
When doing rapid-pace mark-as-unread in the Zulip web application, we
observed assertion failures showing that the server would send an
event containing multiple message IDs but with only one of the messages
present in the message_details side data structure.
The cause of this was the "virtual events" compression system; two
flags/remove/read events were being combined by simply concatenating
the lists of events, without any attempt to merge the
`message_details` field on those objects.
The immediate fix is to disable virtual events compression for this
event class, but it's not unlikely we'll need to just eliminate the
virtual_events system entirely, because it seems difficult to make it
soundly handle a message whose state for a given flag changes back and
forth while the client is offline.
But we'll leave that for later, since removing that optimization
deserves more discussion than fixing this event corruption bug.
The naive solution in #23465 creates situations where the same user can have
multiple reactions, as the base emojis are not unique; e.g. +1::skin2
and +1::skin4 would both reduce to +1, but their user lists are separate.
This solution handles the reduction, merges the same-base reactions,
and deduplicates the userlist.
Co-authored-by: Alex Vandiver <alexmv@zulip.com>
Co-authored-by: rht <rhtbot@protonmail.com>
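A small sketch of the reduce/merge/deduplicate step (the data shapes here are
simplified; the real code works on the message's reaction objects):
```
from collections import defaultdict
from typing import Dict, List, Set


def merge_skin_tone_reactions(reactions: Dict[str, List[int]]) -> Dict[str, List[int]]:
    # Map each emoji name such as "+1::skin2" to its base "+1", merging the
    # user lists of all variants and removing duplicate user IDs.
    merged: Dict[str, Set[int]] = defaultdict(set)
    for name, user_ids in reactions.items():
        base = name.split("::", 1)[0]
        merged[base].update(user_ids)
    return {base: sorted(users) for base, users in merged.items()}


# Both skin-tone variants collapse onto "+1" with a deduplicated user list:
print(merge_skin_tone_reactions({"+1::skin2": [1, 2], "+1::skin4": [2, 3]}))
# {'+1': [1, 2, 3]}
```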
return inside finally blocks causes exceptions to be silenced.
Although these blocks follow blanket ‘except Exception’ handlers, they
do not seem to have a goal of silencing BaseException and exceptions
thrown by the exception handler, so rewrite them to avoid it.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
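A self-contained demonstration of the hazard and the rewrite (the `do_work`
stand-in is hypothetical):
```
import logging


def do_work() -> None:
    # Stand-in for the wrapped work; a BaseException such as this one should
    # normally propagate to the caller.
    raise KeyboardInterrupt


def broken() -> bool:
    success = False
    try:
        do_work()
        success = True
    except Exception:
        logging.exception("work failed")
    finally:
        # `return` inside `finally` discards any in-flight exception, so the
        # KeyboardInterrupt above silently disappears and False is returned.
        return success


def fixed() -> bool:
    success = False
    try:
        do_work()
        success = True
    except Exception:
        logging.exception("work failed")
    # Returning after the try block keeps the Exception handling but lets
    # BaseException (and errors raised inside the handler) propagate.
    return success
```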
Both of these compatibility blocks can be deleted, since you can't
upgrade directly to any supported release from the versions where the
old event formats would be used.
This solves the problem that resolving a topic with a long name (>60
characters) will cause the topic name to be truncated, and thus the edit
message code path thinks that the topic is being moved in addition to
being resolved.
We store the pre-truncation topic and check it against the
original topic when determining whether a topic is being moved while
being resolved or unresolved.
Fixes #23482
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We intended to send both the "topic was resolved" and the "topic was
moved here" notification when resolving and moving a topic at the same
time in #22312.
The previous implementation did not work as expected and it was only
sending the "topic was moved here" notification.
This removes the check in maybe_send_resolve_notifications comparing
old_topic and new_topic with RESOLVED_TOPIC_PREFIX stripped, so
that the notification will be sent regardless of whether the topic name
without the prefix stays the same or not.
Note that the weird-topic handling ("✔ ✔✔ some topic") documented in the
comments added in e231a03eff is unaffected. To avoid confusion: the lstrip
check is not essential to detecting whether a topic is being resolved or
unresolved, as that is mainly handled in the latter part of the helper.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
CircleCI has updated its webhook format[1] for CircleCI Cloud,
Server version 3.x and 4.x. This commit rewrites the CircleCI
integration to parse the new webhook structure. The tests have also
been rewritten for the new format.
With this commit, we support webhooks from projects that use GitHub,
BitBucket and GitLab as VCS providers. The CircleCI integration doc
has been updated to mention the same. The doc has also been updated
with the latest instructions for configuring a webhook on the CircleCI
interface, and the new output screenshots.
References:
[1]: https://circleci.com/docs/webhooks
Previously, emoji.json was read from
"$ZULIP_PATH/node_modules/emoji-datasource-google/emoji.json".
This path doesn't exist in production when installing from scratch from
a release tarball. And so, we ensure emoji.json exists by copying it to
`static/generated/emoji`.
With tweaks to comments by tabbott.
Fixes: #23469
This is a follow-up to d201229df8.
do_get_invites_controlled_by_user queries for Confirmations when finding
multiuse invites controlled by a user. This means that a revoked
multiuse invite cannot really be fetched here, because
do_revoke_multi_use_invite deletes the Confirmation object when revoking
the invitations. However, having a defensive assert here should be
useful to make sure this doesn't secretly break in the future if the query
used changes or if there are unexpected revoked multiuse invites with an
existing Confirmation for any (buggy) reason.
This allows us to revoke MultiUseInvites by changing their .status
instead of deleting them (which has been deleting the helpful tracking
information on PreregistrationUsers about which MultiUseInvite they came
from).
This commit adds support for Grafana's new alerting system, Grafana
Alerting. The existing Grafana integration has been modified to
detect the version of the notification through the structure of the
payload body, since the structure varies by version. Support for
legacy alerting is continued. Example fixtures have been added
for Grafana Alerting's webhooks.
Tests updated.
Fixes #23517.
While this feature was added to Zulip very early, it has been troubled
for most of that time; it never looked great visually, had a lot of
implementation complexity around resize.js, and has a weird model (a
setting that changes the UI only in certain window sizes).
This option is not commonly used; while a significant portion of users
have it enabled, many of them just don't use window sizes where it
actually has an effect. So it's not clear that it will be missed if
removed; we got very few bug reports when it was completely broken for
a few days after we first integrated the new left sidebar private
messages design.
Even with it no longer being broken, it does not work very well with
the addition of the new PMs section in the left sidebar. (Having two
scrollbars in the sidebar looks quite awkward.) The new private
messages section in the left sidebar also addresses some of the use
cases for keeping the Users list always visible, even in narrow
windows.
This option is only removed from the frontend for now. To make this
decision easily reversible, the backend code for this feature
is kept.
This doesn't make sense if the realm is active and will fail as soon as
do_reactivate_realm is fixed in the next commit to be a noop and not
create confused RealmAuditLog entries when the realm is active.
There was the following bug here:
1. Send an email invite to a user.
2. Have the user sign up via social auth without going through that
invite, meaning either going via a multiuse invite link or just
straight-up Sign up if the org permissions allow.
That resulted in the PreregistrationUser that got generated in step (1)
having 2 Confirmations tied to it - because maybe_send_to_registration
grabbed the object and created a new confirmation link for it. That is a
corrupted state, Confirmation is supposed to be unique.
One could try to do fancy things with checking whether a
PreregistrationUser already has a Confirmation link, but to avoid races
between ConfirmationEmailWorker and maybe_send_to_registration, this
would require taking locks and so on - which gets needlessly
complicated. It's simpler to not have them compete for the same object.
The point of the PreregistrationUser re-use in
maybe_send_to_registration is that if an admin invites a user, setting
their initial streams and role, it'd be an annoying experience if the
user ends up signing up not via the invite and those initial streams
etc. don't get set up. But to handle this, we can just copy the
relevant values from the pre-existing prereg_user, rather than re-using
the object itself.
This line was added in 94e099eaab,
presumably because of the
del request.session["multiuse_object_key"]
line that was just above it.
Looks like it should have been removed in
868a763cec, which eliminated that `del`
operation.
This is primarily for administrators needing to provide message
history for compliance or auditing purposes. Search terms can be
pulled from a file, one per line, or from arguments provided on the
command line.
In 1fce1c3c73, we added logic to parse
the User-Agent in /register requests; this logic crashed if an HTTP request
was missing that header.
Includes a test for `/register` with no user agent passed; this should catch
similar regressions in the future.
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
For PATCH requests to `/realm/profile_fields`, the field_type is
determined via the field.id, which is in the URL. Therefore,
`field_type` does not need to be passed as part of the request
data in tests.
We do not create historical UserMessage rows for messages that didn't
have one while marking messages as read, and simply ignore those messages.
We do so because there is no use in creating such UserMessage rows; it
just wastes storage.
Note that we still allow marking messages from unsubscribed streams as
read, but only those which have UserMessage rows, to handle the case
where the unread messages were not marked as read while unsubscribing
from the stream due to some race condition. In such cases, the messages
will not be included in the unread count shown in the "All messages" menu
(and the stream is in any case not present in the left sidebar), but the
message border on the left is green when viewing the stream after
unsubscribing from it. So, to avoid confusing users, the messages will be
marked as read when the user scrolls down.
This unifies the length of the shortened SHA our integrations generate,
and ensures that they are long enough for projects of various sizes with
a chosen value defined in get_short_sha.
Fixes #23475
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Seven characters are not enough for large projects, so we change
it to something reasonably longer. As an example, the Linux kernel needs
at least 11 characters of SHA in its shortened form to uniquely identify
a revision. We pick 11, which should work for most projects.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
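The resulting helper is tiny; a sketch of its shape (the constant name is
illustrative):
```
SHORT_SHA_LENGTH = 11  # long enough even for very large repositories


def get_short_sha(sha: str) -> str:
    return sha[:SHORT_SHA_LENGTH]


print(get_short_sha("2f90b9f73a4a1952887d0b36e3a66c1eaccd3ff7"))  # "2f90b9f73a4"
```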
We use `templates/zerver/api/api-doc-template.md` as a base template
for the documented API endpoints in `zerver/openapi/zulip.yaml`.
Previously, if this template was requested as an endpoint, it
would fail an assertion check and return a server error. Now we check
specifically for that potential path and return a 404 error
response, since no such article exists.
Fixes #21876.
Prior to 53231aa, the `ignore_unhashable_lru_cache` decorator had
a check for the development environment so that changes could be
seen on refresh.
Puts that check back in the IgnoreUnhashableLruCacheWrapper class.
In the outgoing webhook handler, there are potentially several seconds
of retrying between when a message triggering an outgoing webhook
arrives and when it finally fails. In the meantime, the stream the
triggering message was on may have been deleted, leaving the
"Failure!" message with no valid stream to be sent to.
Rather than raise an exception in the outgoing webhook worker, ignore
the exception and move on.
The lambda passed to `queue_json_publish` is used if
`settings.USING_RABBITMQ` is unset -- which is only true in tests. As
such, this pattern causes failures to never actually retry within
tests.
This behaviour has existed ever since the outgoing webhook code was
introduced in 53a8b2ac87, with no explanation. Not passing that
argument allows tests to verify the retry behaviour when webhooks
fail.
Previously, test cases or clients accessing /json/ views using HTTP
Basic Auth would be accepted, while we intended to only allow clients
authenticated with a session cookie to access these views.
This adds a check on the accessed path to avoid this possibility.
It seems unlikely that any API clients were taking advantage
of this unintended quirk, so we're not going to bother documenting
this bug fix as an API change. In any case, it should be trivial for
anyone affected to consult the documentation and then switch their
/json/foo URL to a correct /api/v1/foo URL.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
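Conceptually, the guard is a path check layered on top of the existing Basic
Auth handling; a hedged sketch (not the actual decorator code, and the error
text is made up):
```
from typing import Optional

from django.http import HttpRequest, JsonResponse


def reject_basic_auth_on_json_views(request: HttpRequest) -> Optional[JsonResponse]:
    # /json/ views are meant for session-authenticated browser clients only,
    # so a request presenting HTTP Basic Auth credentials is turned away and
    # should use the equivalent /api/v1/ endpoint instead.
    authorization = request.headers.get("Authorization", "")
    if request.path.startswith("/json/") and authorization.startswith("Basic "):
        return JsonResponse(
            {"result": "error", "msg": "This endpoint requires session authentication."},
            status=401,
        )
    return None  # fall through to the normal authentication logic
```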
Mobile clients older than v27.192 do not support PRONOUNS type
custom profile fields, so we instead change the type of it to
SHORT_TEXT in the data sent with register response and also in
the events sent to those clients.
On multi-realm systems this results in traversal of all messages in
all realms and returns a massive payload of 1 row per stream on
the server, not the intended one row per realm.
Zulip's unread messages design has an invariant that all unread stream
messages must be in streams the user is subscribed to. For example, we
do not include the unread messages from unsubscribed streams in the
"unread_msgs" data structure in "/register" response and we mark all
unread messages as read when unsubscribing a user from a stream.
Previously, the mark as unread endpoint allowed violating that
invariant, allowing you to mark messages in any stream as unread.
Doing so caused the "message_details" data structures sent with
"update_message_flags" events to not contain messages from
unsubscribed streams, even though those messages were present in the
set of message IDs. These malformed events, in turn, caused exceptions
in the frontend's processing of such an event.
This change is paired with a separate UI change to not offer the "Mark
as unread" feature in such streams; with just this commit, that will
silently fail.
With some additions to the tests by tabbott.
This guarantees that the Realm is always non-None when we hit the
codepath is_static_or_current_realm_url via
do_change_stream_description, so that we can properly skip rewriting
some images.
Fixes #19405
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
The previous error page was inadequate for serving the two different
scenarios where we show errors during realm creation, in particular
containing a misleading sentence about realm creation being disabled
(even in the case where it was actually enabled and the user simply had
an expired link).
So that we can stop using Tim's photo for tests, adds an open
license profile picture to use instead.
Updates tests that used `tim.png` to use the new example profile
picture, which is located in `static/images/test-images/avatars/`.
Creates `static/images/authentication_backends` directory for icons
of backend authentication methods, which are used on the log-in page.
And updates the example documentation in the API `/server_settings`
endpoint.
Updates the markdown test case that used `zulip-octopus.png` to
instead use a Zulip logo that's also referenced in a frontend
puppeteer test, `static/images/logo/zulip-icon-128x128.png`.
The query in display_in_profile_summary_limit_reached should also check
the realm, since there is a per-realm limit of 2 fields; otherwise
this will cause issues when multiple realms are hosted on the
same server.
Fixes #23368.
We no longer need to do the inner joins to figure out the message's
realm and split up the cross-realm and regular case - now we just look
at zerver_message.realm directly.
I don't think this is used anywhere outside of tests, but we should have
this logic correct. If this function is used to send a message from a
user to a cross-realm bot, the message.realm should be the realm of the
user.
The normal case, where a user sends a message to a cross-realm bot
through the API, is already handled correctly; this bug is unrelated to it.
Clarifies most of the narrow parameter descriptions by adding
information about what a user's message history includes, about
new bot users not generally being subscribed to streams, and
about the specific `streams:public` narrow.
Updates the main descriptions for the `/get-messages` and
`/check-messages-match-narrow` endpoints.
Fixes #19477.
This commit updates the URLs for personal narrows sent in email
notifications to be of the form "{user_id}-{encoded_full_name}", to
make them consistent with the URLs that we use for such narrows
in the web app, which were recently updated in b4eddad to improve
performance. We encode the full name in the same way that we do in
the web app, by replacing the URL characters encoded by the browser with "-".
It doesn't seem to make sense to append _{number} to the status code in
that arg, because the resulting string stops looking like a status code,
and it actually makes this test fail in the follow-up commit with the
confusing error message:
Unknown response http status: 2000
So this just seems like a bug.
Updates the hash used for the recent conversations view to be
"#recent" instead of "#recent_topics".
We will need to keep the logic for handling "#recent_topics"
permanently because users potentially have messages from
Welcome Bot with links to that hash.
Including "recent_topics" as a web_public_allowed_hash in
hash_util.js can be changed once self-hosted servers cannot
upgrade directly to Zulip 5.x from the current version.
Fixes #23132.
The output message should cover both cases:
actual_count > expected_count and actual_count < expected_count.
The message now includes information for the case where
actual_query_count < expected_query_count.
Fixes: #23325
Replaces instances of "recent topics" in the web-app and documentation
to be "recent conversations".
Renames both `recent-topics.md` files in the help center to be
`recent-conversations.md` and updates/redirects links to new URL.
Does not update instances of "recent topics" in frontend code comments
and does not update the main overview changelog, for now.
Does not change case study text where "recent topics" was referenced
in a quote, but does change generic text references to be "recent
conversations".
This was broken by commit b945aa3443
(#22604), because email_to_domain implicitly lowercased the result.
No adjustment is needed for is_disposable_domain, which already
lowercases its argument.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Renames the help article on custom profile fields to reflect that
its content is not just about adding fields.
Adds a redirect from the old URL to the new URL and updates internal
links, linking to #add-a-custom-profile-field where appropriate.
Fixes #23170.
Adds new tab to `zerver/lib/markdown/tabbed_sections.py` to document
managing bots from both personal settings and organization settings.
Documents adding bots from the organization settings Bots panel.
Separates instructions for deactivating and reactivating a bot from
both personal settings and organization settings.
Fixes a few formatting issues such as missing bold formatting and
heading level.
Fixes: #23066.
On my data (about 10 million messages in 1600 streams) this used to take
about 40 hours, while the improved statement completes in roughly 30
seconds.
The old solution had postgres go through the entire table until the
first match for each stream. Thus, the time spent scanning the table
got longer and longer for each stream, because postgres always started at
the beginning (and somehow it did not use any indices) and had to skip
over all rows until it found the first message from the stream it
was looking for each time.
The new statement just performs a bulk operation, scanning the table
only once, and then inserts the results directly into the destination
table.
Slightly more verbose information about this change can be found in:
https://chat.zulip.org/#narrow/stream/31-production-help/topic/Import.20Rocketchat.20data/near/1408867
Signed-off-by: Florian Pritz <bluewind@xinu.at>
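The shape of the change, sketched with placeholder table and column names
(the real statement in the import code differs):
```
from django.db import connection

# Old approach (per stream): one query for every stream, each of which
# scans the messages table from the start until it finds a match.
#
#     SELECT MIN(id) FROM zerver_message WHERE recipient_id = %s
#
# New approach: a single pass over the table, grouping by recipient and
# writing every result at once into the destination table.
with connection.cursor() as cursor:
    cursor.execute(
        """
        INSERT INTO destination_table (recipient_id, first_message_id)
        SELECT recipient_id, MIN(id)
        FROM zerver_message
        GROUP BY recipient_id
        """
    )
```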
This adds a helper based on the testing pattern of using the "queries_captured"
context manager with "assert_length" to check the number of queries
executed, in order to prevent performance regressions.
It explains the rationale of checking the query count through an
"AssertionError" and prints the queries captured as assert_length does,
but with a format optimized for displaying the queries in a more
readable manner.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
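A rough sketch of such a helper, built here on Django's stock
CaptureQueriesContext rather than Zulip's own queries_captured, so the details
differ from the real implementation:
```
from contextlib import contextmanager
from typing import Iterator

from django.db import connection
from django.test.utils import CaptureQueriesContext


@contextmanager
def assert_database_query_count(expected_count: int) -> Iterator[None]:
    with CaptureQueriesContext(connection) as ctx:
        yield
    actual_count = len(ctx.captured_queries)
    if actual_count != expected_count:
        # Raise an AssertionError (as assert_length does) and include the
        # captured SQL, numbered for readability.
        rendered = "\n".join(
            f"{index + 1}. {query['sql']}"
            for index, query in enumerate(ctx.captured_queries)
        )
        raise AssertionError(
            f"Expected {expected_count} queries, captured {actual_count}:\n{rendered}"
        )
```
A test would then wrap the code being measured, e.g.
`with assert_database_query_count(7): do_something()`.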
This adds CapturedQueryDict to provide a more accurate type annotation
for the return value of queries_captured. We also replace "Generator"
with "Iterator" because the latter two type parameters were unused.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We followed the same approach as in #22611 to mark migrations as noops. We
might eventually squash them.
Migration operations that only change the validators should be removed,
as they are essentially noops that do not affect the database schema.
However, ./manage.py makemigrations still generates a new migration for
the validators change regardless. So we still have to keep one migration
that updates the validators to the latest state. We prefer to keep the
earliest one for this purpose.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
These used to be shown only conditionally, inside the
{% if password_auth_enabled %} block in the template. This meant that if you
had an org with email auth disabled and a deactivated user tried to log in,
they wouldn't see the error and would get confused.
This switches the position where these errors are shown (above the
login+password form instead of below it), but it looks fine.
Send an empty list of `custom_profile_fields` in `page_params` for
spectators, rather than not sending the field at all.
Also, updates the user info popover to not show the manage user
three-dot menu when in a spectator view.
As noted in the previous commit, this causes bloat in memcached, for
no purpose. Log a warning when `cache_with_key` sees a QuerySet
returned from the function it is decorating.
Storing a QuerySet rather than the list version of the result in it
has a large overhead -- and, as noted by the type annotations, the
result is only ever used as a list. This difference is particularly
important because the cached `get_realm_user_dicts` can get extremely
large for realms with large numbers of users, potentially overflowing
the 1MB default object limit in memcached.
Switch all cases of `cache_with_key` which return QuerySets to
returning the list values of them.
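The fix itself is mechanical; a sketch of the pattern (the field list here is
illustrative, and the real function is wrapped by cache_with_key):
```
from typing import Any, Dict, List

from zerver.models import UserProfile  # import path assumed


def get_realm_user_dicts(realm_id: int) -> List[Dict[str, Any]]:
    # list() forces evaluation, so the cached value is plain row data rather
    # than a pickled QuerySet with all of its query machinery attached.
    return list(
        UserProfile.objects.filter(realm_id=realm_id).values(
            "id", "full_name", "delivery_email"  # illustrative field list
        )
    )
```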
Storing this key is superfluous, as it will be the same for all users,
and is definitionally already known in order to fetch the cache entry for
the realm. It is also not currently used by the callsites that read rows
from the cache.
9381a3bd45 added support for linkifier pattern URLs containing
`%20`-style escapes, but only did so for the codepath which is used in
the message body -- topic links did not understand them.
Expand the support to include when they are substituted into topics.
Previously, we did not send notifications for topic-only edits.
Now, we add backend support for sending notifications for topic-only
edits as well.
We will add support for this in the web app in further commits, since the
message edit UI will be updated as well. We just make sure that no
notifications are sent when editing a topic using the pencil icon in the
message header.
We also change the API default for moving a topic to only notify the
new location, not the old one; this matches the current defaults in
the web UI.
Includes many tests.
We also update the puppeteer tests to test only content edits, as
we are going to change the UI to not allow topic editing from the
message edit UI. Also, fixing the existing tests to pass while
doing topic edits is somewhat complex, as a notification message
is also sent to the new topic by default.
Fixes #21712.
Co-authored-by: Aman Agrawal <amanagr@zulip.com>
Co-authored-by: Tim Abbott <tabbott@zulip.com>
The previous commit did this for revoking sessions. send_events should
be handled similarly too, to correctly handle calling do_deactivate_user
inside a transaction.