Updates areas in the API documentation that reference the maximum
length of a stream message topic to note the `max_topic_length`.
Updates areas in the API documentation that reference the maximum
length of a stream name to note the `max_stream_name_length` and
areas that reference the maximum length of a stream description to
note the `max_stream_description_length`.
All of these maximum values are returned in the `POST /register`
response.
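For illustration, a client can read these limits directly from the register
response; the server URL and credentials below are placeholders:

```python
import requests

# Placeholder server URL and credentials for illustration only.
response = requests.post(
    "https://chat.example.com/api/v1/register",
    auth=("bot@example.com", "BOT_API_KEY"),
)
data = response.json()

# Limits documented above, as returned by `POST /register`.
max_topic_length = data["max_topic_length"]
max_stream_name_length = data["max_stream_name_length"]
max_stream_description_length = data["max_stream_description_length"]
```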
This removes the validator argument from 0423_realmfilter_url_template,
which does not really alter the database schema. Without this change, the
migration fails because the filter_format_validator function is removed.
Migration 0094_realm_filter_url_validator is modified because we can no
longer refer to filter_format_validator.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This is mainly updating the variable names and relevant docstrings
without actual change to the behavior of the command.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This swaps out url_format_string from all of our APIs and replaces it
with url_template. Note that the documentation changes in the following
commits will be squashed with this commit.
We change the "url_format" key to "url_template" for the
realm_linkifiers events in event_schema, along with updating
LinkifierDict. "url_template" is the name chosen to normalize
mixed usages of "url_format_string" and "url_format" throughout
the backend.
The markdown processor is updated to stop handling the format string
interpolation and delegate template expansion to the uri_template
library instead.
This change affects many test cases. We mostly just replace "%(name)s"
with "{name}" and "url_format_string" with "url_template" to make sure
that they still pass. There are some test cases dedicated to testing "%"
escaping, which aren't relevant anymore and are subject to removal.
But for now we keep most of them as-is, and make sure that "%" is always
escaped, since we no longer use it for variable substitution.
Since url_format_string is not populated anymore, a migration is created
to remove this field entirely, and make url_template non-nullable since
we will always populate it. Note that it is possible for
url_template to be null after migration 0422 and before 0424, but
in practice, url_template will not be None after backfilling, and the
backend now always sets url_template.
With the removal of url_format_string, the RealmFilter model will now be
cleaned with URL template checks, and the old checks for escapes are removed.
We also modified RealmFilter.clean to skip the validation when the
url_template is invalid. This avoids raising multiple ValidationErrors
when calling full_clean on a linkifier. But we might eventually want
a more centralized approach to data validation instead of having
the same validation in both the clean method and the validator.
Fixes #23124.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This is implemented by replacing all matches of "%(var_name)s" in a URL
format string with "{var_name}". Since we do want to ensure that the
templates aren't broken after this migration, a RuntimeError is raised
to let the maintainer know when a linkifier cannot be converted
automatically because it does not pass the uri_template.validate check.
Also, we need to escape "%%", which is used to represent "%" in the old
format string syntax, as well as "{" and "}", which is a part of the
URL template syntax.
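A rough sketch of the conversion described above (simplified; the function
name is illustrative, not the exact migration code):

```python
import re

import uri_template


def convert_to_url_template(url_format_string: str) -> str:
    # Escape characters that are literal in the old syntax but meaningful
    # (or invalid) in URI templates: "%%" meant a literal "%", and braces
    # are the URI template expression delimiters.
    escaped = (
        url_format_string.replace("%%", "%25")
        .replace("{", "%7B")
        .replace("}", "%7D")
    )
    # Turn "%(var_name)s" placeholders into "{var_name}" expressions.
    template = re.sub(r"%\((\w+)\)s", r"{\1}", escaped)
    if not uri_template.validate(template):
        # Mirrors the RuntimeError described above for linkifiers that
        # cannot be converted automatically.
        raise RuntimeError(f"Cannot convert url_format_string: {url_format_string!r}")
    return template
```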
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This will later be used to expand matching linkifier patterns.
Making it nullable for now, but we will make it required in
the APIs.
As a part of this transition, we temporarily make url_format_string
nullable as well, which will be later removed. This allows us to
switch to populating url_template without caring about passing
url_format_string.
Note that the validators are imported in the migration because Django
otherwise diffs it and considers the schema to be different, generating
a migration and failing the "tools/test-migrations" test.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
208c0c3034 fixed this for newly deleted users, but left existing users
with potentially invalid email addresses. This is problematic if the
realm is ever exported and re-imported, as the addresses will not
validate.
Add a migration which attempts to fix these invalid email addresses.
This commit updates the logic for migrating user_topic rows
during the move-messages operation when the target topic
already has messages.
Previously, the target_topic's visibility_policy was simply
set to the original_topic's visibility_policy,
and the original_topic's visibility_policy was set to INHERIT.
This commit updates the move-messages code path to determine
the new visibility_policy depending on the visibility policies
of the original and target topics.
The target_topic's visibility_policy is then updated.
The number of db queries has increased by two:
One query corresponds to determining if 'target_topic_has_messages'.
Another query corresponds to 'get_users_with_user_topic_visibility_policy'
to determine 'target_topic_user_profile_to_visibility_policy'.
This prep commit updates the lib function
'topic_has_visibility_policy' to add support for the case
when visibility_policy=INHERIT.
Previously, it had support for all the visibility policies
except INHERIT.
This commit refactors the code block in 'do_update_message' that
moves user_topic records, resulting in cleaner code.
We directly iterate over the dictionary items
instead of looping over the keys and fetching
values if the key exists.
Moves jwt_fetch_api_key endpoint to v1_api_mobile_patterns so
that tools/test-api detects it as an API endpoint that is pending
documentation.
Fixes #24982.
For endpoints with a `type` parameter to indicate whether the message
is a stream or direct message, `POST /typing` and `POST /messages`,
adds support for passing "direct" as the preferred value for direct
messages, both group and 1-on-1.
Maintains support for "private" as a deprecated value to indicate
direct messages.
Fixes #24960.
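For illustration, a request using the new value might look like this
(server URL, credentials, and the recipient ID are placeholders):

```python
import requests

# Placeholder server URL and credentials for illustration only.
requests.post(
    "https://chat.example.com/api/v1/messages",
    auth=("bot@example.com", "BOT_API_KEY"),
    data={
        # "direct" is now the preferred value; "private" remains supported
        # as a deprecated alias.
        "type": "direct",
        "to": "[31]",  # JSON-encoded list of recipient user IDs
        "content": "Hello!",
    },
)
```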
Refactors instances of `message_type_name` and `message_type`
that are referring to API message type value ("stream" or
"private") to use `recipient_type_name` instead.
Prep commit for adding "direct" as a value for endpoints with a
`type` parameter to indicate whether the message is a stream or
direct message.
So far, we've used the BitField .authentication_methods on Realm
for tracking which backends are enabled for an organization. This
however made it a pain to add new backends (requiring altering the
column and a migration - particularly troublesome if someone wanted to
create their own custom auth backend for their server).
Instead this will be tracked through the existence of the appropriate
rows in the RealmAuthenticationMethods table.
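A simplified sketch of the row-based approach (the model and field names
here are illustrative, not necessarily the exact definitions):

```python
from django.db import models


class RealmAuthenticationMethod(models.Model):
    """Each row marks one authentication backend as enabled for one realm."""

    realm = models.ForeignKey("zerver.Realm", on_delete=models.CASCADE)
    name = models.CharField(max_length=80)

    class Meta:
        unique_together = ("realm", "name")
```

Adding a new backend then only requires inserting rows, rather than
altering a bitfield column and running a schema migration.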
If the ID of the scheduled message is passed by the client, we
edit the existing scheduled message instead of creating a new one.
However, this will soon be moved into its own API endpoint.
Servers that had upgraded from a Zulip server version that did not yet
support the user_uuid field to one that did could end up with some
mobile devices having two push notifications registrations, one with a
user_id and the other with a user_uuid.
Fix this issue by sending both user_id and user_uuid, and clearing the
duplicate registrations.
This commit fixes the comment about the number of database queries
when moving a message from a muted topic, to mention clearly the
number of queries added due to the original topic being muted.
We do not include the queries that are executed to check whether
the topic is muted or not, as they will be executed in all cases.
We previously allowed moving messages that have passed the time limit
using "change_all" value for "propagate_mode" parameter. This commit
changes the behavior to not allow moving messages (both stream and
topic edit) that have passed the time limit for non-admin and
non-moderator users.
Previously, editing the topic of "(no topic)" messages was allowed
irrespective of the time limit or the "edit_topic_policy" setting.
Since we are working in the direction of having "no topic" messages
feel reasonable, this commit changes the code to no longer treat them
as a special case; topic editing restrictions now apply to them just
like all other messages.
We still highlight the topic edit icon in the recipient bar without
hovering for "no topic" messages, but it is only shown when the user
has permission to edit topics.
Separates the context dictionary that is used for `send_email` for
the `followup_day1` and `followup_day2` emails.
Prep commit for updates to `followup_day2` email.
The previous implementation leaked database connections, as a new
thread (and thus a new thread-local database connection) was made for
each timer execution. While these connections were relatively
lightweight in Python, they also incurred memory overhead in the
PostgreSQL server itself. The logic for managing the timer was also
unclear, and the unavoidable deadlock in the stopping logic was rather
unfortunate.
Rewrite with one explicit worker thread which handles the delayed
message sending. The RabbitMQ consumer creates the database rows, and
notifies the worker to start its 5s timeout. Because it is controlled
by a condition variable, it does not hold the lock while waiting, and
can be notified to exit.
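A minimal sketch of that pattern (simplified, not the actual worker code):

```python
import threading


class DelayedMessageSender(threading.Thread):
    """One long-lived worker thread, woken by the consumer via a condition
    variable; a simplified sketch of the design described above."""

    def __init__(self) -> None:
        super().__init__(daemon=True)
        self.condition = threading.Condition()
        self.stopping = False
        self.has_pending = False

    def notify(self) -> None:
        # Called by the RabbitMQ consumer after it creates the database rows.
        with self.condition:
            self.has_pending = True
            self.condition.notify()

    def stop(self) -> None:
        with self.condition:
            self.stopping = True
            self.condition.notify()

    def run(self) -> None:
        with self.condition:
            while not self.stopping:
                while not self.has_pending and not self.stopping:
                    # wait() releases the lock, so notify()/stop() never block.
                    self.condition.wait()
                if self.stopping:
                    return
                self.has_pending = False
                # Wait out the 5s batching delay; stop() can still wake us.
                self.condition.wait(timeout=5)
                if not self.stopping:
                    pass  # ... send the delayed messages that are now due ...
```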
Per issue #25045, this commit changes some occurrences of `uri`
appearing in the variable `root_domain_uri`. The affected files are
some HTML templates that used this variable and a backend file,
`context_processors.py`, that set it as a key.
Adds a new welcome email, `onboarding_zulip_guide`, to be sent four
days after a new user registers with a Zulip organization if the
organization has specified a particular organization type that has
a guide in the corporate `/for/.../` pages. If there is no guide,
then no email is scheduled or sent.
The current `for/communities/` page is not very useful for users
who are not organization administrators, so these onboarding guide
emails are further restricted for those organization types to
only go to new users who are invited/registered as admins for the
organization.
Adds two database queries for new user registrations: one to get
the organization's type and one to create the scheduled email.
Adds two email logs because the email is sent both to a new user
who registers with an existing organization and to the organization
owner when they register a new organization.
Co-authored-by: Alya Abbott <alya@zulip.com>
Refactors the logic for adjusting the delay for sending an email
to not land on a weekend so that it can be used to schedule any
number of onboarding emails we decide to send.
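A minimal sketch of the kind of helper this enables (the name and exact
adjustment rule are illustrative):

```python
from datetime import timedelta


def onboarding_email_delay(signup_weekday: int, days_after_signup: int) -> timedelta:
    """Shift a follow-up email so it does not land on a weekend.

    signup_weekday follows datetime.weekday(): Monday is 0, Sunday is 6.
    """
    send_weekday = (signup_weekday + days_after_signup) % 7
    extra_days = 0
    if send_weekday == 5:  # Saturday -> push to Monday
        extra_days = 2
    elif send_weekday == 6:  # Sunday -> push to Monday
        extra_days = 1
    return timedelta(days=days_after_signup + extra_days)
```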
Consolidates duplicate testing into
`zerver/tests/test_email_notifications.py`. The initial test and
function were introduced in commit 610f2cbacf with the test
located in `zerver/tests/test_signup.py`.
Prep commit for adding new welcome / follow up email.
This is a follow-up to #24673; we want to modify every webhook event to
follow the same pattern and consistency, where the branch name should only
be shown on opened and merged events.
This commit renames the 'tornado_redirected_to_list' context
manager to 'capture_send_event_calls' to improve readability.
It also refactors the function to yield a list of events
instead of passing in a list data structure as a parameter
and appending events to it.
Improve the Notification Bot by adding a hyperlink to the new location
of a moved single message. The link will make it easier for users to
find the message in its new context.
Fixes #24604.
After this commit, a notification message is sent to users when they are
added to or removed from user_groups by someone else.
Fixes #23642.
This commit passes the body of the PR Review as the message to
the helper function that generates the message to be sent by the
GitHub Integration.
Previously, when a PR review was done, the message sent would just
include the link of the review, but not its body. After this commit,
the message also includes the body of the review.
Fixes #24676.
Previously, we had an architecture where CSS inlining for emails was
done at provision time in inline_email_css.py. This was necessary
because the library we were using for this, Premailer, was extremely
slow, and doing the inlining for every outgoing email would have been
prohibitively expensive.
Now that we've migrated to a more modern library that inlines the
small amount of CSS we have into emails nearly instantly, we are able
to remove the complex architecture built to work around Premailer
being slow and just do the CSS inlining as the final step in sending
each individual email.
This has several significant benefits:
* Removes a fiddly provisioning step that made the edit/refresh cycle
for modifying email templates confusing; there's no longer a CSS
inlining step that, if you forget to do it, results in your testing a
stale variant of the email templates.
* Fixes internationalization problems related to translators working
with pre-CSS-inlined emails, and then Django trying to apply the
translators to the post-CSS-inlined version.
* Makes the send_custom_email pipeline simpler and easier to improve.
Signed-off-by: Daniil Fadeev <fadeevd@zulip.com>
Adds the user ID to the return values for the `/fetch_api_key` and
`/dev_fetch_api_key` endpoints. This saves clients like mobile a
round trip to the server to get the user's unique ID, as it is now
returned as part of the login flow.
Fixes #24980.
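For illustration, a client logging in can now read the ID directly from
the response (placeholder URL and credentials):

```python
import requests

# Placeholder server URL and credentials for illustration only.
result = requests.post(
    "https://chat.example.com/api/v1/fetch_api_key",
    data={"username": "user@example.com", "password": "secret"},
).json()

api_key = result["api_key"]
user_id = result["user_id"]  # newly included, avoiding an extra round trip
```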
This commit updates 'test_user_ids_unmuting_topic' to make
an api_post call to '/api/v1/user_topics' instead of
calling the internal function 'do_set_user_topic_visibility_policy'
to verify the logic.
This commit adds a new endpoint, 'POST /user_topics' which
is used to update the personal preferences for a topic.
Currently, it is used to update the visibility policy of
a user-topic row.
This is a prep commit that renames lib functions
so that they can be used while implementing view
for the new endpoint 'POST /user_topics'.
We use a more generic name when removing the visibility_policy of
a topic, i.e., 'access_stream_to_remove_visibility_policy_by_id/name'
instead of 'access_stream_for_unmute_topic_by_id/name' which focused
on removing MUTE from a topic.
The primary goal of the library replacement is improving execution speed.
This commit should not affect the functionality of the system
or make any changes to it.
This is a follow-up to #24673; we want to modify every webhook event to
follow the same pattern and consistency, where the branch name should only
be shown on opened and merged events.
- Being more specific about what the user will get.
- Putting less emphasis on entering multiple emails, since most
people probably just have one email they need to check.
- Using more intuitive wording and hint that deactivated or
deleted accounts won't be included.
Fixes: #24890.
This is a follow-up to #24673; we want to modify every webhook event to
follow the same pattern and consistency, where the branch name should only
be shown on opened and merged events.
This is a prep commit to make upcoming changes to the pull request
event message easier. Our Bitbucket integration has been using a custom
template to render the reviewers, which means the values are fixed to
what the template expects. These changes allow
`get_pull_request_event_message` to support reviewers and allow for
easier and more flexible adjustment of these messages if needed.
Previously, the assignee message would sit in the middle of the
event message, which doesn't look as good as placing it at the end.
These changes move the assignee message towards the end of the event
message to make it look better and cleaner for readers.
Previously, there was stale code that didn't verify
whether 'muted_topics' and 'user_topic' events were sent correctly.
This commit updates the test to verify if the expected
users are notified via 'muted_topics' and 'user_topic'
events.
This commit updates the move-topic codepath to perform
bulk database operations on UserTopic records, using the set of
user_profiles for each visibility_policy, instead of looping over
each user_profile one by one as before.
This commit refactors 'set_user_topic_visibility_policy_in_database'
to perform bulk database operations and the related changes.
There is an increase in database query count because requests
to delete user_topic rows now take two queries instead of one.
This is required for logging the info for a request to delete
a non-existent user_topic row while performing bulk operations
at the same time.
The overall query count will be lower while performing
bulk operations (multiple user_profiles instead of one).
This commit updates the 'do_update_message' codepath to
update the UserTopic records regardless of visibility policy
during the "move-topic" operation.
This is required before offering new visibility policies
in the UI.
Previously, UserTopic records were moved or deleted only
for objects with a MUTED visibility policy.
Fixes: #24574
This is a prep commit that renames 'set_topic_mutes' and
'topic_is_muted' to 'set_topic_visibility_policy' and
'topic_has_visibility_policy' respectively, and refactors
them to work with any visibility_policy, not only MUTED.
Previously, some call sites for the function provided optional
arguments as positional arguments. These changes will allow the
arguments to be passed as keyword arguments to the function and
fix up the call sites of the function to pass keyword arguments
instead.
Previously, some call sites for the function provided optional
arguments as positional arguments. These changes will allow the
arguments to be passed as keyword arguments to the function and
fix up the call sites of the function to pass keyword arguments
instead.
Previously, tests that exercised code paths that added local
uploads did not always clean up `settings.LOCAL_UPLOADS_DIR`
after the test was complete.
Updates the `ZulipTestCase` class to remove any local uploads
in the unique `settings.LOCAL_UPLOADS_DIR` in `tearDown` for
all tests.
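Conceptually, the cleanup looks something like this (a simplified sketch,
not the exact ZulipTestCase code):

```python
import os
import shutil

from django.conf import settings
from django.test import TestCase


class ExampleTestCase(TestCase):
    def tearDown(self) -> None:
        # Remove anything written to the per-test LOCAL_UPLOADS_DIR.
        if settings.LOCAL_UPLOADS_DIR is not None and os.path.exists(
            settings.LOCAL_UPLOADS_DIR
        ):
            shutil.rmtree(settings.LOCAL_UPLOADS_DIR)
        super().tearDown()
```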
This commit adds code to create a "Nobody" system user group
in realms, which will be used in settings to represent the "Nobody"
option.
We also add a migration to add this group to existing realms.
This commit updates the pattern for dealing with tuples
returned by the delete() query.
The '(num_deleted, ignored) = ModelName.objects.filter().delete()'
pattern is preferred due to better readability.
We avoid the pattern '(num_deleted, _)' because Django uses _
for translation, which may lead to future bugs.
This commit adds a new helper submit_realm_creation_form,
similar to existing submit_reg_form_for_user, to avoid
duplicate code for creating realms in tests.
This commit adds the fields related to the realm creation form, using
get_realm_create_form_context, to the context passed to the register.html
template to avoid duplication.
Since we have updated the registration code to use
PreregistrationRealm objects for realm creation in
previous commits, some of the code has become
redundant and this commit removes it.
We remove the following code:
- The modification to PreregistrationUser objects in
process_new_human_user can now be done unconditionally,
because prereg_user is passed only during user creation
and not realm creation, and we do not expect any
PreregistrationUser objects inside the realm during
its creation.
- There is no need for the "realm_creation" parameter in the
create_preregistration_user function, since we now
use create_preregistration_realm during realm creation.
Fixes part of #24307.
In previous commits, we updated the realm creation flow to show
the realm name, type and subdomain fields in the first form
when asking for the email of the user. This commit updates the
user registration form to show the already-filled realm details
as non-editable text, with a button to edit the realm details
before registration.
We also update the sub-heading for the user registration form as
mentioned in the issue.
Fixes part of #24307.
We now use PreregistrationRealm objects in registration_helper
function when creating new realms instead of PreregistrationUser
objects.
Fixes part of #24307.
We now show inputs for realm details like name, type, and URL
in the create_realm.html template opened for the "/new" URL, and
this information will be stored in PreregistrationRealm
objects in further commits.
We add a new class, RealmDetailsForm, in forms.py for this,
used as a base class for RealmCreationForm, and define it
such that it can also be reused by RegistrationForm to avoid
duplication.
This commit adds a PreregistrationRealm class, which will be
similar to PreregistrationUser and will store the initial
information of a realm before its creation, as we are
changing the organization creation flow per #24307.
Fixes part of #24307.
This commit renames the prereg_user variable in the
check_prereg_key and get_prereg_key_and_redirect
functions in zerver/views/registration.py to
prereg_object, since in further commits the
preregistration object could also be a
PreregistrationRealm object as part of the changes
for #24307.
This is a follow-up to #24673; we want to modify every webhook event to
follow the same pattern and consistency, where the branch name should only
be shown on opened and merged events.
Previously, when a user moved a message to another topic, the Notification
Bot would post a message saying "This topic was moved here from...". This is
confusing when the topic already contains messages. These changes aim to make
the messages clearer by changing the logic for the Notification Bot. When
there are already messages in the topic, the bot will post "A message was
moved here from..." or "N messages were moved here from...". The bot will
post "This topic was moved here from (somewhere) by (someone)." when the
topic is empty.
Fixes #23267.
To avoid people calling "create_user_group" instead of
"check_add_user_group", we rename it to make its purpose clearer.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
"check_add_user_group" is a safer helper function than
"create_user_group" to use when creating user_groups. It does
error handling and notifies the client with the appropriate event.
Note that the populate_db command still uses "create_user_group"
because we do not need to enqueue events at that point.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Since this function creates a new user group in the database,
it is more appropriate to have it not as a generic "lib" function
but as an "action".
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Some well-intentioned adblockers also block Sentry client-side error
reporting. Provide an endpoint on the Zulip server which forwards to
the Sentry server, so that these requests are not blocked.
Prior to commit a9b3a9c, the server implementation for documented
search operators with dashes also implicitly supported clients
sending those same operators with underscores. This had been the
case since the server-side support for narrow filtering was
introduced in commit 3af2bf345a.
Updates the stricter version of mapping operator strings to `by_*`
functions to also include the underscore version of any operators
that have dashes. Adds a note that these undocumented versions are
tied to the support for the documented versions.
This is a follow up to #24673, we want to modify every webhook events to
follow the same pattern and consistency where branch name should only
show on opened and merged events.
We previously created RealmAuditLog entries for user notification
settings only. This commit changes the code to create entries for
all user settings. We cannot backfill the entries since we don't
have the data to do that.
When a new realm is created, a notification message is sent to
the realm configured as the settings.SYSTEM_BOT_REALM if there
is a "signups" stream that exists in that realm. This is used
for Zulip Cloud, but is an undocumented feature.
The topic of the message has been the subdomain of the new realm,
and the message content has been "Signups enabled" translated
into the default language of the new realm.
In order to make these messages more explicitly a Zulip Cloud
feature, settings.CORPORATE_ENABLED is checked before sending these
messages.
To make these messages more useful, the topic for these
notifications is changed to be "new organizations". The content
of these messages is updated to have the new realm name (with a
link to the admin realm's activity support page for the realm),
subdomain (with a link to the realm), and organization type.
Use the built-in HTML escaping of Markup("…{var}…").format(), in order
to allow Semgrep to detect mistakes like Markup("…{var}…".format())
and Markup(f"…{var}…").
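For instance (the variable and markup are illustrative):

```python
from markupsafe import Markup

user_input = "<script>alert(1)</script>"

# Safe: calling .format() on a Markup object escapes its arguments.
safe = Markup("<p>Hello, {name}!</p>").format(name=user_input)

# Mistakes Semgrep can now flag: formatting happens on a plain str first,
# so nothing is escaped before Markup() wraps the result.
# unsafe = Markup("<p>Hello, {name}!</p>".format(name=user_input))
# unsafe = Markup(f"<p>Hello, {user_input}!</p>")
```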
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Updates the logic for identifying the method to use to extend the
query for the given term from a narrow to use a dictionary that
maps the operator string to the by_* method in the NarrowBuilder
class.
Previously, the by_* method was determined by building a string
based on the operator string and replacing dashes with underscores.
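Schematically, inside a NarrowBuilder-style class (an illustrative subset,
not the full mapping, and the attribute and error names are assumptions):

```python
# Maps the operator string from the narrow term to the handler method.
self.by_method_map = {
    "has": self.by_has,
    "in": self.by_in,
    "is": self.by_is,
    "stream": self.by_stream,
    "topic": self.by_topic,
    "sender": self.by_sender,
    "pm-with": self.by_pm_with,
}

method = self.by_method_map.get(operator)
if method is None:
    raise BadNarrowOperatorError(operator)  # error class name illustrative
```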
This is a follow-up to #24673; we want to modify every webhook
event to follow the same pattern and consistency, where the branch
name should only be shown on opened and merged events.
The RealmCount statistics will be empty if the realm was created since
the last daily aggregation. In cases where the daily stats have no
rows, it is likely fast enough to do the real count in the messages
table. This stops unduly penalizing folks who have actually sent
messages, and are just inviting people within the first day.
Prior to aa032bf62c, QOS prefetch was set on every `publish` and
before every `start_json_consumer` -- which had a large and
unnecessary effect on publishing rates, which don't care about the
prefetch QOS settings at all, much less re-setting them before every
publish.
Unfortunately, that change had the effect of causing prefetch settings
to almost never be respected -- since the configuration happened in
`ensure_queue`'s re-check that the connection was still live. The
initial connection is established in `__init__` via `_connect`, and
the consumer only calls `ensure_queue` once, before setting up the
consumer.
Having no prefetch value set causes an unbounded prefetch; this
manifests itself as the server attempting to shove every event down to
the worker as soon as it starts consuming; if the client cannot keep
up, the server closes the connection. The worker observes the
connection has been shut down, and restarts. While this does make
forward progress, it causes large queues to make progress more slowly,
as they suffer from sporadic restarts.
Shift the QOS configuration to when the connection is set up, which is
a more sensible place for it in general -- and ensures that it is set
on consumers and producers alike, but only once per connection
establishment.
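With pika, for example, the per-connection setup amounts to something like
this (a sketch, not the exact Zulip queue wrapper):

```python
import pika

# Establish the connection once, in _connect()-style setup code.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Set the prefetch limit here, at connection/channel setup, so it applies to
# every consumer on this channel instead of being (re-)set around publishes
# or silently skipped when the ensure_queue re-check is never reached.
channel.basic_qos(prefetch_count=100)
```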
`render_markdown_path` renders Markdown, and also (since baff121115)
runs Jinja2 on the resulting HTML.
The `pure_markdown` flag was added in 0a99fa2fd6, and did two
things: retried the path directly in the filesystem if it wasn't found
by the Jinja2 resolver, and also skipped the subsequent Jinja2
templating step (regardless of where the content was found). In this
context, the name `pure_markdown` made some sense. The only two
callsites were the TOS and privacy policy renders, which might have
had user-supplied arbitrary paths, and we wished to handle absolute
paths in addition to ones inside `templates/`.
Unfortunately, the follow-up of 01bd55bbcb did not refactor the
logic -- it changed it, by making `pure_markdown` only do the former
of the two behaviors. Passing `pure_markdown=True` after that commit
still caused it to always run Jinja2, but allowed it to look elsewhere
in the filesystem.
This set the stage for calls, such as the one introduced in
dedea23745, which passed both a context for Jinja2 and
`pure_markdown=True`, implying that Jinja2 was not to be used.
Split the two previous behaviors of the `pure_markdown` flag, and use
pre-existing data to control them, rather than an explicit flag. For
handling policy information which is stored at an absolute path
outside of the template root, we switch to using the template search
path if and only if the path is relative. This also closes the
potential inconsistency based on CWD when `pure_markdown=True` was
passed and the path was relative, not absolute.
Decide whether to run Jinja2 based on if a context is passed in at
all. This restores the behavior in the initial 0a99fa2fd6 where a
call to `render_markdown_path` could be made to just render Markdown,
and not some other unmentioned and unrelated templating language as
well.
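In rough pseudocode, the resulting behavior is as follows (a simplified
sketch using the markdown and jinja2 libraries directly; the function name
and the "templates" root are illustrative, not the actual implementation):

```python
import os
from typing import Any, Dict, Optional

import markdown
from jinja2 import Template


def render_markdown_path_sketch(path: str, context: Optional[Dict[str, Any]] = None) -> str:
    # Absolute paths (e.g. policy documents outside the template root) are
    # read directly; relative paths are resolved against the template root,
    # standing in for the Jinja2 template search path.
    full_path = path if os.path.isabs(path) else os.path.join("templates", path)
    with open(full_path) as f:
        html = markdown.markdown(f.read())

    if context is None:
        # No context passed: just render Markdown, nothing else.
        return html
    # A context was passed: run Jinja2 on the resulting HTML.
    return Template(html).render(**context)
```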
Previously, `QuerySet` did not support isinstance checks, since it is
defined to be generic in django-stubs. In a recent update, such a check
became possible by using `QuerySetAny`, a non-generic alias of `QuerySet`.
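For example (assuming the QuerySetAny alias is exported by django-stubs-ext
for exactly this purpose):

```python
from django_stubs_ext import QuerySetAny  # assumed non-generic alias of QuerySet


def describe(value: object) -> str:
    # isinstance() rejects the generic QuerySet[...] type from django-stubs,
    # but works with the non-generic alias.
    if isinstance(value, QuerySetAny):
        return f"queryset with {value.count()} rows"
    return "not a queryset"
```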
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This adds tests for more corner cases, in exchange for dropping the
query count tests, which were of dubious utility. It also adds the
time-machine library to mock the current time to test that the limits
do expire.