Prior to commit a9b3a9c, the server implementation for documented
search operators with dashes also implicitly supported clients
sending those same operators with underscores. This has been the
case since the server-side support for narrow filtering was
introduced in commit 3af2bf345a.
Updates the stricter version of mapping operator strings to `by_*`
functions to also include the underscore version of any operators
that have dashes. Adds a note that these undocumented versions are
tied to the support for the documented versions.
This is a follow-up to #24673; we want to modify all webhook events
to follow the same pattern and consistency, where the branch name
should only show on opened and merged events.
We previously created RealmAuditLog entries for user notification
settings only. This commit changes the code to create entries for
all user settings. We cannot backfill the entries since we don't
have the data to do that.
When a new realm is created, a notification message is sent to
the realm configured as the settings.SYSTEM_BOT_REALM if there
is a "signups" stream that exists in that realm. This is used
for Zulip Cloud, but is an undocumented feature.
The topic of the message has been the subdomain of the new realm,
and the message content has been "Signups enabled" translated
into the default language of the new realm.
In order to make these messages more explicitly a Zulip Cloud
feature, settings.CORPORATE_ENABLED is checked before sending these
messages.
To make these messages more useful, the topic for these
notifications is changed to be "new organizations". The content
of these messages is updated to have the new realm name (with a
link to the admin realm's activity support page for the realm),
subdomain (with a link to the realm), and organization type.
Use the built-in HTML escaping of Markup("…{var}…").format(), in order
to allow Semgrep to detect mistakes like Markup("…{var}…".format())
and Markup(f"…{var}…").
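A minimal sketch of the distinction (strings here are illustrative):
```python
from markupsafe import Markup

user_input = "<script>alert(1)</script>"

# Safe: Markup.format() escapes its arguments before substitution.
safe = Markup("<p>Hello {name}</p>").format(name=user_input)

# Unsafe patterns Semgrep can now flag: formatting happens on a plain
# str, so the untrusted value is wrapped in Markup without escaping.
unsafe1 = Markup("<p>Hello {}</p>".format(user_input))
unsafe2 = Markup(f"<p>Hello {user_input}</p>")
```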
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Updates the logic for identifying the method to use to extend the
query for a given narrow term, so that it uses a dictionary that
maps the operator string to the by_* method in the NarrowBuilder
class.
Previously, the by_* method was determined by building a string
based on the operator string and replacing dashes with underscores.
This is a follow-up to #24673; we want to modify all webhook
events to follow the same pattern and consistency, where the branch
name should only show on opened and merged events.
The RealmCount statistics will be empty if the realm was created since
the last daily aggregation. In cases where the daily stats have no
rows, it is likely fast enough to do the real count in the messages
table. This stops unduly penalizing folks who have actually sent
messages, and are just inviting people within the first day.
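A hypothetical sketch of that fallback (the RealmCount property name
and exact query are assumptions):
```python
from django.db.models import Sum

from analytics.models import RealmCount
from zerver.models import Message

def recent_messages_sent(realm) -> int:
    total = RealmCount.objects.filter(
        realm=realm, property="messages_sent:message_type:day"  # assumed property
    ).aggregate(total=Sum("value"))["total"]
    if total is None:
        # No aggregated rows yet: the realm is newer than the last daily
        # aggregation, so counting directly in the messages table is cheap.
        total = Message.objects.filter(sender__realm=realm).count()
    return total
```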
Prior to aa032bf62c, QOS prefetch was set on every `publish` and
before every `start_json_consumer` -- which had a large and
unnecessary effect on publishing rates, which don't care about the
prefetch QOS settings at all, much less re-setting them before every
publish.
Unfortunately, that change had the effect of causing prefetch settings
to almost never be respected -- since the configuration happened in
`ensure_queue`s re-check that the connection was still live. The
initial connection is established in `__init__` via `_connect`, and
the consumer only calls `ensure_queue` once, before setting up the
consumer.
Having no prefetch value set causes an unbounded prefetch; this
manifests itself as the server attempting to shove every event down to
the worker as soon as it starts consuming; if the client cannot keep
up, the server closes the connection. The worker observes the
connection has been shut down, and restarts. While this does make
forward progress, it causes large queues to make progress more slowly,
as they suffer from sporadic restarts.
Shift the QOS configuration to when the connection is set up, which is
a more sensible place for it in general -- and ensures that it is set
on consumers and producers alike, but only once per connection
establishment.
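A minimal pika sketch of where the setting now lives (the prefetch
value is illustrative):
```python
import pika

# Establish the connection/channel once, at connection setup time.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Set the per-consumer prefetch here, once per connection establishment,
# rather than in ensure_queue or before every publish. Without this the
# prefetch is unbounded and a slow consumer gets flooded and disconnected.
channel.basic_qos(prefetch_count=100)
```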
`render_markdown_path` renders Markdown, and also (since baff121115)
runs Jinja2 on the resulting HTML.
The `pure_markdown` flag was added in 0a99fa2fd6, and did two
things: retried the path directly in the filesystem if it wasn't found
by the Jinja2 resolver, and also skipped the subsequent Jinja2
templating step (regardless of where the content was found). In this
context, the name `pure_markdown` made some sense. The only two
callsites were the TOS and privacy policy renders, which might have
had user-supplied arbitrary paths, and we wished to handle absolute
paths in addition to ones inside `templates/`.
Unfortunately, the follow-up of 01bd55bbcb did not refactor the
logic -- it changed it, by making `pure_markdown` only do the former
of the two behaviors. Passing `pure_markdown=True` after that commit
still caused it to always run Jinja2, but allowed it to look elsewhere
in the filesystem.
This set the stage for calls, such as the one introduced in
dedea23745, which passed both a context for Jinja2, as well as
`pure_markdown=True` implying that Jinja2 was not to be used.
Split the two previous behaviors of the `pure_markdown` flag, and use
pre-existing data to control them, rather than an explicit flag. For
handling policy information which is stored at an absolute path
outside of the template root, we switch to using the template search
path if and only if the path is relative. This also closes the
potential inconsistency based on CWD when `pure_markdown=True` was
passed and the path was relative, not absolute.
Decide whether to run Jinja2 based on whether a context is passed in
at all. This restores the behavior in the initial 0a99fa2fd6, where a
call to `render_markdown_path` could be made to just render Markdown,
and not some other unmentioned and unrelated templating language as
well.
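A hypothetical sketch of the resulting behavior, using the stock
markdown and jinja2 libraries as stand-ins for Zulip's own machinery:
```python
import os
from typing import Optional

import markdown
from jinja2 import Template

def render_markdown_path(path: str, context: Optional[dict] = None) -> str:
    # Relative paths are resolved against the template search path;
    # absolute paths (e.g. policy files outside the template root) are
    # read directly, with no dependence on the CWD.
    if not os.path.isabs(path):
        path = os.path.join("templates", path)
    with open(path) as f:
        html = markdown.markdown(f.read())
    # Run Jinja2 only when a context was actually supplied.
    if context is not None:
        html = Template(html).render(**context)
    return html
```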
Previously, `QuerySet` did not support isinstance checks, since it is
defined to be generic in django-stubs. In a recent update, such checks
became possible by using `QuerySetAny`, a non-generic alias of `QuerySet`.
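A minimal sketch of the check this enables (assuming the
`django_stubs_ext` re-export):
```python
from django_stubs_ext import QuerySetAny

def result_count(result: object) -> int:
    # At runtime QuerySetAny is just QuerySet, so isinstance works; at
    # type-check time it avoids the "generic class in isinstance" error.
    if isinstance(result, QuerySetAny):
        return result.count()
    return 1
```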
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This adds tests for more corner cases, in exchange for dropping the
query count tests, which were of dubious utility. It also adds the
time-machine library to mock the current time to test that the limits
do expire.
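An illustrative sketch of the time-machine usage (the RateLimiter here
is a toy stand-in, not Zulip's implementation):
```python
from datetime import datetime, timedelta, timezone
from typing import List

import time_machine

class RateLimiter:
    WINDOW = timedelta(hours=24)

    def __init__(self) -> None:
        self.hits: List[datetime] = []

    def hit(self) -> None:
        self.hits.append(datetime.now(tz=timezone.utc))

    def recent_hits(self) -> int:
        now = datetime.now(tz=timezone.utc)
        return sum(1 for t in self.hits if now - t < self.WINDOW)

start = datetime(2023, 3, 1, tzinfo=timezone.utc)
limiter = RateLimiter()

with time_machine.travel(start, tick=False):
    limiter.hit()
    assert limiter.recent_hits() == 1

with time_machine.travel(start + timedelta(hours=25), tick=False):
    # The mocked clock is past the window, so the limit has expired.
    assert limiter.recent_hits() == 0
```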
Previously, when the GitHub bot received an update pull request
event, it would produce the following message:
user updated PR #1 Start writing unit tests from test to main
"from test to main" is improper and causes unnecessary confusion.
These changes update the logic to remove the phrase from
update events. These changes also include the org: prefix in
the branch names to keep them consistent with GitHub and further
reduce confusion about branch names.
Fixes #24536.
This commit updates 'set_user_topic_visibility_policy_in_database'
to not raise an error when deleting a UserTopic row and the user
doesn't have a visibility_policy for the topic yet, or when setting
the visibility_policy to its current value.
Also, it includes the changes to not send unnecessary events
in such cases.
The list of supported events for filtering itself does not document what
each of the events does. Adding a link to GitHub's documentation would
be a pointer to get people started. But ideally we need to establish a
better system to document the events in general.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Currently, there is a checkbox setting for whether to
"Include realm name in subject of message notification emails".
This commit replaces the checkbox setting with a dropdown
having values: Automatic [default], Always, Never.
The Automatic option includes the realm name if, and only if,
there are multiple Zulip realms associated with the user's email.
Tests are added and/or modified.
Fixes: #19905.
Removes the notification message that was sent if a stream named
"signups" exists in the `settings.SYSTEM_BOT_REALM`. This was a
undocumented feature that would send a notification message when
a new user registered with a Zulip organization that was hosted
by an admin realm like Zulip Cloud.
This removes two database queries when a new user is created: one
to get the system bot realm and the other to get the notification
bot in said realm.
Note that there are still notification messages sent when a new
organization is registered with the admin realm if the "signups"
stream exists.
This commit refactors 'do_set_user_topic_visibility_policy'
to remove the if/else block and just have a single call to
'set_user_topic_visibility_policy_in_database'.
The branching behaviour based on the user_topic
visibility_policy is now confined to one place, i.e.,
'set_user_topic_visibility_policy_in_database'.
Updated the title and description in the 'enable-emoticon-translation'
file and renamed the file accordingly. Added a new bullet point for
'time format' in the 'configure-new-user-settings.md' file and updated
the sidebar index by replacing the title 'Use 24-hour time' with
'Change the time format'.
The Django convention is for __repr__ to include the type and __str__
to omit it. In fact its default __repr__ implementation for models
automatically adds a type prefix to __str__, which has resulted in the
type being duplicated:
>>> UserProfile.objects.first()
<UserProfile: <UserProfile: emailgateway@zulip.com <Realm: zulipinternal 1>>>
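A sketch of the fixed convention (model and fields illustrative, not
the actual UserProfile definition):
```python
from django.db import models

class ExampleUser(models.Model):
    email = models.EmailField()

    class Meta:
        app_label = "example"

    def __str__(self) -> str:
        # No type prefix here; Django's default __repr__ already wraps
        # __str__ as "<ExampleUser: ...>", so adding it would duplicate.
        return self.email
```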
Signed-off-by: Anders Kaseorg <anders@zulip.com>
While the function which processes the realm registration and
signup remains the same, we use different urls and functions to
call the process so that we can separately track them. This will
help us know the conversion rate of realm registration after
receiving the confirmation link.
This commit refactors the notify_created_user function to
call format_user_row twice with different parameters instead
of modifying the person object returned by format_user_row.
This change makes the code somewhat easier to understand
than it was before.
dc1eeef30a made the column nullable, with the meaning for null of
"use the current `settings.INVITES_DEFAULT_REALM_DAILY_MAX`."
However, 8a95526ced switched to calling `do_change_plan_type` during
realm creation, which sets `realm.max_invites` based on the plan type,
thus ensuring that no new realms have their `_max_invites` set to
null.
Check `max_invites` instead of `_max_invites`. This requires test
adjustments for the fact that `apply_invite_realm_heuristics` is now
run.
Adds a page to the general API documentation about HTTP headers,
so that information about the special response headers for rate
limits has a more logical location in the docs, and so that other
HTTP header information can be shared, such as `User-Agent`
conventions.
Adjusts some text and linking on the rest-error-handling page and
overview page for the REST API for the addition of the HTTP headers
page.
Zulip already has support for server-side Sentry integration;
however, it has historically used the Zulip-specific `blueslip`
library for monitoring browser-side errors. That library sends
errors to email, as well as, optionally, to an internal `#errors`
stream. While this is sufficient for low volumes of users, and useful
in that it does not rely on outside services, at higher volumes it is
very difficult to do any analysis or filtering of the errors.
Client-side errors are exceptionally noisy, with many false positives
due to browser extensions or similar, so identifying real errors in a
stream of un-grouped emails or messages in a stream is quite
difficult.
Add a client-side JavaScript Sentry integration. To provide useful
backtraces, this requires extending the pre-deploy hooks to upload the
source-maps to Sentry. Additional keys are added to the non-public
API of `page_params` to control the DSN, realm identifier, and sample
rates.
b4dd118aa1 changed how the `user_info_str` parsed information out of
the events it received -- but only changed the server errors, not the
browser errors, though both use the same codepath. As a result, all
browser errors since then have been incorrectly marked as being for
anonymous users.
Build and pass the expected `user` dict into the event.
This commit adds 'visibility_policy' as a parameter to the
user_allows_notifications_in_StreamTopic function.
This adds logic inside the user_allows_notifications_in_StreamTopic
function, to not return False when a stream is muted
but the topic is UNMUTED.
Adds a method `user_id_to_visibility_policy_dict`
to the 'StreamTopicTarget' class to fetch
(user_id => visibility_policy) in a single db query.
Co-authored-by: Kartik Srivastava <kaushiksri0908@gmail.com>
Co-authored-by: Prakhar Pratyush <prakhar841301@gmail.com>
This commit replaces 'remove_topic_mute' with
'set_user_topic_visibility_policy_in_database' and
updates it to delete UserTopic row with any configured
visibility_policy and not just muting.
In order to support different types of topic visibility policies,
this renames 'add_topic_mute' to
'set_user_topic_visibility_policy_in_database'
and refactors it to accept a parameter 'visibility_policy'.
Create a corresponding UserTopic row for any visibility policy,
not just muting topics.
When a UserTopic row for (user_profile, stream, topic, recipient_id)
already exists, it updates the row with the new visibility_policy.
In the event of a duplicate request (i.e., new_visibility_policy ==
existing_visibility_policy), a JsonableError is raised.
There is an increase in the database query count in the message-edit
code path.
Reason:
Earlier, 'add_topic_mute' used 'bulk_create' which either
creates or raises IntegrityError -- 1 query.
Now, 'set_user_topic_visibility_policy' uses get_or_create
-- 2 queries in the case of creating a new row.
We can't use the previous approach, because now we have to
handle the case of updating the visibility_policy too.
Also, using bulk_* operations for a single row is not the correct
approach.
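A hypothetical sketch of that flow (field names follow the UserTopic
model; the error string is illustrative):
```python
from django.utils.timezone import now as timezone_now
from django.utils.translation import gettext as _

from zerver.lib.exceptions import JsonableError
from zerver.models import UserTopic

def set_visibility_policy(user_profile, stream, topic_name, visibility_policy) -> None:
    row, created = UserTopic.objects.get_or_create(
        user_profile=user_profile,
        stream=stream,
        topic_name=topic_name,
        recipient_id=stream.recipient_id,
        defaults={
            "visibility_policy": visibility_policy,
            "last_updated": timezone_now(),
        },
    )
    if created:
        return
    if row.visibility_policy == visibility_policy:
        # Duplicate request: the policy is already set to this value.
        raise JsonableError(_("Visibility policy is already set to this value."))
    row.visibility_policy = visibility_policy
    row.last_updated = timezone_now()
    row.save(update_fields=["visibility_policy", "last_updated"])
```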
Co-authored-by: Kartik Srivastava <kaushiksri0908@gmail.com>
Co-authored-by: Prakhar Pratyush <prakhar841301@gmail.com>
This commit refactors the existing pattern (real-time usage)
used to assert 'date_muted' in tests.
A fixed value is used at the start of the test to
assert 'date_muted', replacing the timedelta or real-time usage pattern.
Replaces 'do_unmute_topic' with 'do_set_user_topic_visibility_policy'
and associated minor changes.
This change is made to align with the plan to use a single function
'do_set_user_topic_visibility_policy' to manage
user_topic - visibility_policy changes and corresponding event
generation.
This commit is a step in the direction of having a common
function to handle visibility_policy changes and event
generation instead of separate functions for each
visibility policy.
In order to support different types of topic visibility policies,
this renames 'do_topic_mute' to 'do_set_user_topic_visibility_policy'
and refactors it to accept a parameter 'visibility_policy'.
The "add_topic_mute" and "remove_topic_mute" library functions
shouldn't be called directly from tests.
They should instead call "do_mute_topic" and "do_unmute_topic".
The reason: library functions are meant to be internal interfaces
for just changing the database, and shouldn't generally be
called elsewhere.
Creates `MutableJsonResponse` as a subclass of Django's `HttpResponse`
that we can modify for ignored parameters in the response content.
Updates responses to include `ignored_parameters_unsupported` in
the response data through `has_request_variables`. Creates a unit
test for this implementation in `test_decorators.py`.
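A hypothetical sketch of such a response class (not the actual Zulip
implementation):
```python
import orjson
from django.http import HttpResponse

class MutableJsonResponse(HttpResponse):
    def __init__(self, data: dict, *, status: int = 200) -> None:
        super().__init__(
            content=orjson.dumps(data),
            content_type="application/json",
            status=status,
        )
        self._data = data

    def get_data(self) -> dict:
        """Return the underlying dict; callers may modify it."""
        return self._data

    def update_content(self) -> None:
        """Re-serialize after the dict has been modified."""
        self.content = orjson.dumps(self._data)
```
A decorator like `has_request_variables` could then append
`ignored_parameters_unsupported` via `get_data()` and call
`update_content()` before the response is returned.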
The `method` parameter processed in `rest_dispatch` is not in the
`REQ` framework, so for any tests that pass that parameter, assert
for the ignored parameter with a comment.
Updates OpenAPI documentation for `ignored_parameters_unsupported`
being returned in the JSON success response for all endpoints.
Adds detailed documentation in the error handling article, and
links to that page in relevant locations throughout the API docs.
For the majority of endpoints, the documentation does not include
the array in any examples of return values, and instead links to
the error handling page. The exceptions are the three endpoints
that had previously supported this return value. The changes note
and example for these endpoints are also used in the error
handling page.
Adds an `is_webhook_view` boolean field to the RequestNotes class so
that (when implemented) the `ignored_parameters_unsupported` feature
is not applied to webhooks.
In commit 8181ec4b56, we removed the `realm_str` as a parameter
for `send_message_backend`. This removes a missed test case that still
included this as a parameter for that endpoint/function.
Actions like deleting realms may leave unreferenced uploads in the
attachment storage backend.
Fix these by walking the complete contents of the attachment storage
backend, and removing files which are no longer present in the
database. This may take quite some time, as it is necessarily O(n) in
the number of files uploaded to the system.
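A hypothetical sketch of the cleanup for the local-disk backend (the
real command also handles S3 and archived attachments):
```python
import os

from zerver.models import Attachment

def delete_unclaimed_files(uploads_root: str) -> None:
    # O(n) in the number of uploaded files: walk everything on disk and
    # delete any file whose path_id has no corresponding Attachment row.
    for dirpath, _dirs, filenames in os.walk(uploads_root):
        for filename in filenames:
            full_path = os.path.join(dirpath, filename)
            path_id = os.path.relpath(full_path, uploads_root)
            if not Attachment.objects.filter(path_id=path_id).exists():
                os.remove(full_path)
```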
Updates the Asana documentation, which was a detailed version
of the Zapier documentation with screenshots specifically for
Asana, to instead start with the basic incoming webhook steps
and then point to the general Zapier documentation to complete
the integration.
This will be easier to maintain moving forward in the short
term as ideally we'll migrate to a system that documents all
of the integrations with Zulip that are available via Zapier.
Also, updates the current Zapier documentation to mention
Asana as one of the apps that can be integrated with Zulip.
This commit renames reset_emails_in_zulip_realm function to
reset_email_visibility_to_everyone_in_zulip_realm which makes
it more clear to understand what the function actually does.
This commit also adds a comment explaining what this function
does.
The initial Welcome bot message has an extra section if the user is
joining a demo organization, but the link in that section was not
being formatted correctly. Fixes the formatting so that the link
works.
This already became useless in 6e11754642,
as detailed in the API changelog entry here. At this point, we should
eliminate this param and the weird code around it.
This commit also deletes the associated tests added in
6e11754642, since with realm_str removed,
they make no sense anymore (and actually fail with an OpenAPI error due
to using params not used in the API). Hypothetically they could be
translated to use the subdomain= kwarg, but that also doesn't make
sense, since at that point they'd be just testing the case of a user
making an API request on a different subdomain than their current one
and that's just redundant and already tested generally in
test_decorators.
This leftover variable, as a result of older changes, was just always
set to None. That was fine, because when realm=None reaches
check_message further down the codepath, it just infers from
sender.realm. We want to stop passing None like that though, so let's
just set this to user_profile.realm.
Updates the text and title used on the password reset done page
to work both for situations where the user is resetting a forgotten
password and for situations where the user is setting a password
for the first time (e.g. SSO login, demo organizations).
This is the behaviour inherited from Django[^1]. While setting the
password to empty (`email_password = `) in
`/etc/zulip/zulip-secrets.conf` also would suffice, it's unclear what
the user would have been putting into `EMAIL_HOST_USER` in that
context.
Because we previously did not warn when `email_password` was not
present in `zulip-secrets.conf`, having the error message clarify the
correct configuration for disabling SMTP auth is important.
Fixes: #23938.
[^1]: https://docs.djangoproject.com/en/4.1/ref/settings/#std-setting-EMAIL_HOST_USER
`./manage.py import` does not take a tarball; it takes a directory.
Making a separate tarball is a waste of CPU time and disk, as it is
never used.
This was included in the commit of the initial Slack conversion code
in 5b37c5562b and propagated from there into every conversion tool.
Remove the unnecessary tarball creation.
c7d0192755 added the unique constraint on
`user_profile_id,message_id,reaction_type,emoji_code`, but left the
existing constraint on `user_profile_id,message_id,emoji_name`. As
explained in the comment added in 3cd543ee98, `emoji_name` cannot be
trusted to be unique, as it is possible to have a Unicode emoji
reaction and a custom emoji with the same name on a message.
Remove the overly-constraining unique index, now that c7d0192755 has
provided the correct one.
The view that handled `PATCH user_groups/<int:user_group_id>`
required both the name and description parameters to be passed. Due
to this, clients had to pass values for both of these parameters even
if only one of them was changed.
To resolve this, the name and description parameters to
`PATCH user_groups/<int:user_group_id>` are made optional.
We now allow the user to change email_address_visibility during
signup; it overrides the realm-level default, and also overrides
the setting if the user imports settings from an existing account.
We do not show UI to set email_address_visibility during realm
creation.
Fixes #24310.
This commit adds backend code to set email_address_visibility when
registering a new user. The realm-level default and the value from
the source profile get overridden by the value the user selected
during signup.
This lets us simplify the long-ish ‘../../static/js’ paths, and will
remove the need for the ‘zrequire’ wrapper.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Ever since we started bundling the app with webpack, there’s been less
and less overlap between our ‘static’ directory (files belonging to
the frontend app) and Django’s interpretation of the ‘static’
directory (files served directly to the web).
Split the app out to its own ‘web’ directory outside of ‘static’, and
remove all the custom collectstatic --ignore rules. This makes it
much clearer what’s actually being served to the web, and what’s being
bundled by webpack. It also shrinks the release tarball by 3%.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This is quite a bit faster:
```
%timeit calendar.timegm(now.timetuple())
2.91 µs ± 361 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
%timeit int(now.timestamp())
539 ns ± 27 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
```
This is particularly important for the presence endpoint, which is a
tight loop of serializing datetimes.
As written, the QOS parameters are (re)set every time ensure_queue is
called, which is every time a message is enqueued. This is wasteful --
particularly since QOS parameters only apply to consumers, and setting
them takes an RTT to the server.
Switch to only setting the QOS once, when a connection
is (re)established. In profiling, this reduces the time to call
`queue_json_publish("noop", {})` from 878µs to 150µs.
In the case where a stream existed but had no subscribers, the error
message sent to the owner always used `stream_name`, which
may have been None.
Switch to using `stream.name` rather than `stream_name` for this case.
This code is called in the hot path when Tornado is processing events.
As such, making this code performant is important. Profiling shows
that a significant portion of the time is spent calling asdict() to
serialize the UserMessageNotificationsData dataclass. In this case
`asdict` does several steps which we do not need, such as attempting
to recurse into its fields, and deepcopy'ing the values of the fields.
In our use case, these add a notable amount of overhead:
```py3
from zerver.tornado.event_queue import UserMessageNotificationsData
from dataclasses import asdict
from timeit import timeit
o = UserMessageNotificationsData(1, False, False, False, False, False, False, False, False, False, False, False)
%timeit asdict(o)
%timeit {**vars(o)}
```
Replace the `asdict` call with a direct access of the fields. We
perform a shallow copy because we do need to modify the resulting
fields.
This commit adds a migration to fix the extra_data field
of RealmAuditLog objects created on changing the
can_remove_subscribers_group setting, adding a "property"
field, since the same event type will now be used for
other group-based stream settings that will be added
in the future.
We add a stream_permission_group_settings object, which is
similar to the property_types framework used for realm settings.
This commit also adds a GroupPermissionSetting dataclass for
defining settings inside stream_permission_group_settings.
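A hypothetical sketch of the registry (field names are illustrative,
not the exact Zulip definitions):
```python
from dataclasses import dataclass

@dataclass
class GroupPermissionSetting:
    require_system_group: bool
    allow_internet_group: bool
    allow_owners_group: bool

stream_permission_group_settings = {
    "can_remove_subscribers_group": GroupPermissionSetting(
        require_system_group=True,
        allow_internet_group=False,
        allow_owners_group=False,
    ),
}
```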
We add "do_change_stream_group_based_setting" function which
is called in loop to update all the group-based stream settings
and it is now used to update 'can_remove_subscribers_group'
setting instead of "do_change_can_remove_subscribers_group".
We also change the variable name for the event_type field of
RealmAuditLog objects to STREAM_GROUP_BASED_SETTING_CHANGED,
since this will be used for all group-based stream settings.
A 'property' field is also added to the extra_data field to identify
the setting for which the RealmAuditLog object was created.
We will add a migration in further commits to add the
property field to existing RealmAuditLog objects created for
changing the can_remove_subscribers_group setting.
This old 300s value was meaningfully used in 2 places:
1. In the do_change_user_settings presence_enabled codepath when turning
a user invisible. It doesn't matter there; 140s works just as well,
since the point is to make clients see this user as offline, and 140s
is the threshold used by clients (see the presence.js constant).
2. For calculating whether to set "offline" "status" in
result["presence"]["aggregated"] in get_presence_backend. It's fine
for this to become 140s, since clients shouldn't be looking at the
status value anymore anyway, and should just do their calculation
based on the timestamps.
This makes use of the new case insensitive UNIQUE index added in the
earlier commit. With that index present, we can now rely solely on the
database to correctly identify duplicates and throw integrity errors as
required.
This will allow us to rely on the database to detect duplicate
`UserTopic`s (with the same `topic_name` with different cases)
and thus correctly throw IntegrityErrors when expected.
This is also important from a correctness point of view, since as
of now, when checking if topic is muted or requesting the backend for
muting a topic, the frontend does not check for case insensitivity.
There might exist duplicate UserTopics (in a case-insensitive sense)
which need to be removed before creating the new index.
The migration was tested manually using `./manage.py shell`.
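A hypothetical sketch of such a migration, using Django's
expression-based UniqueConstraint (the field list, names, and
dependency below are assumptions):
```python
from django.db import migrations, models
from django.db.models.functions import Upper

class Migration(migrations.Migration):
    dependencies = [("zerver", "XXXX_previous_migration")]  # placeholder

    operations = [
        migrations.AddConstraint(
            model_name="usertopic",
            constraint=models.UniqueConstraint(
                "user_profile",
                "stream",
                Upper("topic_name"),
                name="usertopic_case_insensitive_topic_uniq",
            ),
        ),
    ]
```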
In 141b0c4, we added code to handle races caused by duplicate muting
requests. That code can also handle the non-race condition, so we don't
require the first check.
Removes the initial check in `_internal_prep_message` of the length
of the message content because the `check_message` in the try block
will call `normalize_body` on the message content string, which
does a more robust check of the message content (empty string, null
bytes, length). If the message content length exceeds the value of
`settings.MAX_MESSAGE_LENGTH`, then it is truncated based on that
value. Updates associated backend test for these changes.
The removed length check would truncate the message content with a
hard-coded value instead of using the value of
`settings.MAX_MESSAGE_LENGTH`.
Also, removes an extraneous comment about removing null bytes. If
there are null bytes in the message content, then `normalize_body`
will raise an error.
Note that the previous check had intentionally reduced any message over
the 10000 character limit to 3900 characters, with the code in
question dating to 2012's 100df7e349.
The 3900 character truncating rule was implemented for incoming emails
with the email gateway, and predated other features to help with
overly long messages (better stripping of email footers via Talon,
introduced in f1f48f305e, and
condensing, introduced in c92d664b44).
While we could preserve that logic if desired, it likely is no longer
a necessary or useful variation from our usual truncation rules.
Updates the descriptions of content parameters (optional and
required) to note that the maximum size of the message content
should be based on the `max_message_length` value returned by
the register endpoint.
Previously these descriptions had a hardcoded value of 10000
bytes as the maximum message size.
Also, updates the description of `max_message_length` to clarify
that the value represents Unicode code points.
The password parameter being passed in the `_do_test` helper
function for `TestAuthenticatedJsonPostViewDecorator` tests was
being ignored, as the user needs to be logged in. Removes the
parameter from the helper function and updates the success test
to use `assert_json_success` instead of just checking the status
code.
Also adds a test case for when a user is not logged in to confirm
that it returns an UnauthorizedError.
This reverts commit 851d68e0fc.
That commit widened how long the transaction is open, which made it
much more likely that after the user was created in the transaction,
and the memcached caches were flushed, some other request will fill
the `get_realm_user_dicts` cache with data which did not include the
new user (because it had not been committed yet).
If a user creation request lost this race, the user would, upon first
request to `/`, get a blank page and a Javascript error:
Unknown user_id in get_by_user_id: 12345
...where 12345 was their own user-id. This error would persist until
the cache expired (in 7 days) or something else expunged it.
Reverting this does not prevent the race, as the post_save hook's call
to flush_user_profile is still in a transaction (and has been since
168f241ff0), and thus leaves the potential race window open.
However, it much shortens the potential window of opportunity, and is
a reasonable short-term stopgap.
The Client.name field is only 30 characters long, but there is no
limit to the length of parsed User-Agent value which we may attempt to
store in it. This can cause requests with long user-agents to 500
when the creation of the Client row fails.
Truncate the name to 30 characters both for the cache key and when
passing `name` to `get_or_create`.
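A minimal sketch of the truncation (30 matching Client.name's
max_length; caching omitted):
```python
from zerver.models import Client

MAX_CLIENT_NAME_LENGTH = 30

def get_client(name: str) -> Client:
    # Truncate before using the value as a cache key or as the lookup
    # passed to get_or_create, so a long parsed User-Agent can never
    # overflow the 30-character column and 500.
    truncated = name[:MAX_CLIENT_NAME_LENGTH]
    client, _created = Client.objects.get_or_create(name=truncated)
    return client
```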
This will allow us to re-use this logic later, when we add support for
re-checking notification settings just before sending email/push
notifications to the user.
Also, since this is essentially part of the notifiability logic,
this better belongs to `notification_data.py` and this change will
hopefully reduce the reading complexity of the message-send codepath.
This commit updates the code to use the user-level
email_address_visibility setting instead of the realm-level setting to
set or update the value of the UserProfile.email field and to send the
emails to clients.
Major changes are -
- UserProfile.email field is set while creating the user according to
RealmUserDefault.email_address_visibility.
- UserProfile.email field is updated according to changes in the setting.
- 'email_address_visibility' is added to person objects in user add event
and in avatar change event.
- client_gravatar can be different for different users when computing
avatar_url for messages and user objects since email available to clients
is dependent on user-level setting.
- For bots, email_address_visibility is set to EVERYONE while creating
them irrespective of realm-default value.
- Test changes are basically setting user-level setting instead of realm
setting and modifying the checks accordingly.
Previously, user objects contained the delivery_email field
only when the user had access to the real email. Also, delivery_email
was not present if the visibility setting was set to "everyone",
as the email field was itself set to the real email.
This commit changes the code to always pass the "delivery_email"
field in user objects, with its value being None if the
user does not have access to the real email, and the real email
otherwise.
The "delivery_email" field value is None for logged-out users.
For bots, the "delivery_email" is always set to the real email
irrespective of the email_address_visibility setting.
Also, since the user has access to the real email if visibility is set
to "everyone", the "delivery_email" field is passed in that case
too.
There is no change to the email field; it is the same as before.
This commit also adds code to send an event updating the
delivery_email field, when the email_address_visibility setting
changes, to all the users whose access to emails changes, and also
changes the code to send an event on changing delivery_email to users
who have access to the email.
This is helpful for debugging -- generally these tasks are in a worker
queue because they take a long time to run, so knowing what long task
is about to start before it does, rather than just after, is useful.
This commit adds time restriction on moving messages between streams
using the move_messages_between_streams_limit_seconds setting in the
backend. There is no time limit for admins and moderators.
We now use the newly added move_messages_within_stream_limit_seconds
setting to check for how long the user can edit the topic replacing
the previously used 3-day limit. As it was previously, there is no
time limit for admins and moderators.
This commit renames parse_message_content_edit_or_delete_limit
to parse_message_time_limit_setting and also renames
MESSAGE_CONTENT_EDIT_OR_DELETE_LIMIT_SPECIAL_VALUES_MAP to
MESSAGE_TIME_LIMIT_SETTING_SPECIAL_VALUES_MAP.
We make this change since this function and object will also be
used for the message move limit, and it makes sense to have a more
generic name.
This commit extracts a function to parse message time limit type settings
and to set it if the new setting value is None.
This function is currently used for message_content_edit_limit_seconds and
message_content_delete_limit_seconds settings and will be used for
message_move_limit_seconds setting to be added in further commits.
In Zulip, message topics are case-insensitive but case-preserving.
The `get_context_for_message` function erroneously did a
case-sensitive search, and thus only messages whose topic matched
exactly were pulled in as context.
Make the missed-message pipeline aware that message topics are not
case-sensitive. This means that, when collapsing adjacent messages,
we merge messages with topic headers which are "different"; create a
separate explicit "grouping" to know which to collapse.
Similar to the previous commit, Django was responsible for setting the
Content-Disposition based on the filename, whereas the Content-Type
was set by nginx based on the filename. This difference is not
exploitable, as even if they somehow disagreed with Django's expected
Content-Type, nginx will only ever respond with Content-Types found in
`uploads.types` -- none of which are unsafe for user-supplied content.
However, for consistency, have Django provide both Content-Type and
Content-Disposition headers.
The Content-Type of user-provided uploads was provided by the browser
at initial upload time, and stored in S3; however, 04cf68b45e
switched to determining the Content-Disposition merely from the
filename. This makes uploads vulnerable to a stored XSS, wherein a
file uploaded with a content-type of `text/html` and an extension of
`.png` would be served to browsers as `Content-Disposition: inline`,
which is unsafe.
The `Content-Security-Policy` headers in the previous commit mitigate
this, but only for browsers which support them.
Revert parts of 04cf68b45e, specifically by allowing S3 to provide
the Content-Disposition header, and using the
`ResponseContentDisposition` argument when necessary to override it to
`attachment`. Because we expect S3 responses to vary based on this
argument, we include it in the cache key; since the query parameter
has dashes in it, we can't use the helper `$arg_` variables, and
must parse it from the query parameters manually.
Adding the disposition may decrease the cache hit rate somewhat, but
downloads are infrequent enough that it is unlikely to have a
noticeable effect. We take care to not adjust the cache key for
requests which do not specify the disposition.
In nginx, `location` blocks operate on the _decoded_ URI[^1]:
> The matching is performed against a normalized URI, after decoding
> the text encoded in the “%XX” form
This means that if a user-uploaded file contains characters that are
not URI-safe, the browser encodes them in UTF-8 and then URI-encodes
them -- and nginx decodes them and reassembles the original character
before running the `location ~ ^/...` match. This means that the `$2`
_is not URI-encoded_ and _may contain non-ASCII characters_.
When `proxy_pass` is passed a value containing one or more variables,
it does no encoding on that expanded value, assuming that the bytes
are exactly as they should be passed to the upstream. This means that
directly calling `proxy_pass https://$1/$2` would result in sending
high-bit characters to the S3 upstream, which would rightly balk.
However, a longstanding bug in nginx's `set` directive[^2] means that
the following line:
```nginx
set $download_url https://$1/$2;
```
...results in nginx accidentally URI-encoding $1 and $2 when they are
inserted, resulting in a `$download_url` which is suitable to pass to
`proxy_pass`. This bug is only present with numeric capture
variables, not named captures; this is particularly relevant because
numeric captures are easily overridden by additional regexes
elsewhere, as subsequent commits will add.
Fixing this is complicated; nginx does not supply any way to escape
values[^3], besides a third-party module[^4] which is an undue
complication to begin using. The only variable which nginx exposes
which is _not_ un-escaped already is `$request_uri`, which contains
the very original URL sent by the browser -- and thus can't respect
any work done in Django to generate the `X-Accel-Redirect` (e.g., for
`/user_uploads/temporary/` URLs). We also cannot pass these URLs to
nginx via query-parameters, since `$arg_foo` values are not
URI-decoded by nginx, there is no function to do so[^3], and the
values must be URI-encoded because they themselves are URLs with query
parameters.
Extra-URI-encode the path that we pass to the `X-Accel-Redirect`
location, for S3 redirects. We rely on the `location` block
un-escaping that layer, leaving `$s3_hostname` and `$s3_path` as they
were intended in Django.
This works around the nginx bug, with no behaviour change.
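A hypothetical sketch of the extra encoding layer on the Django side
(the internal location name and helper are illustrative):
```python
from urllib.parse import quote

from django.http import HttpResponse

def s3_accel_redirect(s3_hostname: str, s3_path: str) -> HttpResponse:
    response = HttpResponse()
    # nginx's `location` matching decodes one layer of %-escapes, so this
    # extra quote() leaves $s3_hostname and $s3_path as Django intended.
    response["X-Accel-Redirect"] = (
        f"/internal/s3/{quote(s3_hostname, safe='')}/{quote(s3_path)}"
    )
    return response
```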
[^1]: http://nginx.org/en/docs/http/ngx_http_core_module.html#location
[^2]: https://trac.nginx.org/nginx/ticket/348
[^3]: https://trac.nginx.org/nginx/ticket/52
[^4]: https://github.com/openresty/set-misc-nginx-module#set_escape_uri
Fixes the documentation generated from the Markdown macros
{settings_tab|your-bots} and {settings_tab|bot-list-admin} to
match the text labels in the Zulip UI and improves the text of
relative links to explicitly say if we are referring to the Bots
tab of the Personal or Organization settings menu.
Follow-up to #23256.
This code needs to be more flexible to improve the documentation
of items in the Personal and Organization settings menu when
using the `{settings_tab|[setting-name]}` Markdown macro that
provides relative links or step-by-step instructions.
This commit moves the Markdown formatting code to a new function that
receives tuples from `link_mapping` as input. This is a preliminary
step to offer more flexibility than the current approach.
Rename 'muting.py' to 'user_mutes.py' because it now
contains only user-mute related functions.
Includes the minor refactoring needed after renaming the file.
This commit moves topic-related code, i.e., the topic muting
functions, to a separate file, 'views/user_topics.py'.
'views/muting.py' now contains functions related to user-mutes only.
This will help us track if users actually clicked on the
email confirmation link while creating a new organization.
Replaced all the `render` calls in `accounts_register` with
`TemplateResponse` to comply with the `add_google_analytics`
decorator.
When 'resolve|unresolve' and 'move stream' actions occur in
the same API call, the 'This topic was marked as resolved|unresolved'
notification is not sent.
Both the 'topic moved' and 'topic resolved' notifications should be
generated.
This commit updates the logic of when and where to send the
'topic resolve|unresolve' notification. Unlike the previous logic, the
notification may be sent even in the case where 'new_stream' is not None.
In general, the 'topic resolved|unresolved' notification is sent to
'stream_being_edited'. In this particular case ('new_stream' is not
None), the notification is sent to the 'new_stream' after the check.
A test case is included.
Fixes: #22973
This adds a new endpoint /jwt/fetch_api_key that accepts a JWT and can
be used to fetch API keys for a certain user. The target realm is
inferred from the request and the user email is part of the JWT.
A JSON response containing a user API key, delivery email and
(optionally) raw user profile data is returned.
The profile data in the response is optional and can be retrieved by
setting the POST param "include_profile" to "true" (default=false).
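An illustrative client-side sketch; the URL prefix, JWT claims,
parameter names, and signing key below are assumptions, not the
documented API:
```python
import jwt  # PyJWT
import requests

# The shared secret and claim layout depend on the server's JWT
# configuration; values here are purely illustrative.
token = jwt.encode({"email": "user@example.com"}, "shared-secret", algorithm="HS256")
response = requests.post(
    "https://zulip.example.com/api/v1/jwt/fetch_api_key",
    data={"token": token, "include_profile": "true"},
)
print(response.json())
```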
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>