This swaps out url_format_string from all of our APIs and replaces it
with url_template. Note that the documentation changes in the following
commits will be squashed with this commit.
We change the "url_format" key to "url_template" for the
realm_linkifiers events in event_schema, along with updating
LinkifierDict. "url_template" is the name chosen to normalize
mixed usages of "url_format_string" and "url_format" throughout
the backend.
The markdown processor is updated to stop handling the format string
interpolation itself and to delegate template expansion to the
uri_template library instead.
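To illustrate the change in the linkifier URL format, here is a small sketch; the uri_template call shown is an assumption about that library's API, not a quote of the Zulip markdown code:
```py3
import uri_template

old_format_string = "https://github.com/zulip/zulip/issues/%(id)s"  # url_format_string
new_url_template = "https://github.com/zulip/zulip/issues/{id}"     # url_template

print(old_format_string % {"id": "123"})
print(uri_template.URITemplate(new_url_template).expand(id="123"))
# Both print: https://github.com/zulip/zulip/issues/123
```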
This change affects many test cases. We mostly just replace "%(name)s"
with "{name}" and "url_format_string" with "url_template" to make sure
that they still pass. There are some test cases dedicated to testing "%"
escaping, which are no longer relevant and are subject to removal.
But for now we keep most of them as-is, and make sure that "%" is always
escaped, since we no longer use it for variable substitution.
Since url_format_string is not populated anymore, a migration is created
to remove this field entirely, and make url_template non-nullable since
we will always populate it. Note that it is possible for
url_template to be null after migration 0422 and before 0424, but
in practice, url_template will not be None after backfilling, and the
backend now always sets url_template.
With the removal of url_format_string, the RealmFilter model will now be
cleaned with URL template checks, and the old checks for escapes are
removed.
We also modified RealmFilter.clean to skip the validation when the
url_template is invalid. This avoids raising multiple ValidationErrors
when calling full_clean on a linkifier. But we might eventually want a
more centralized approach to data validation, instead of having the same
validation in both the clean method and the validator.
Fixes #23124.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Refactors instances of `message_type_name` and `message_type`
that refer to the API message type value ("stream" or
"private") to use `recipient_type_name` instead.
Prep commit for adding "direct" as a value for endpoints with a
`type` parameter to indicate whether the message is a stream or
direct message.
This code is called in the hot path when Tornado is processing events.
As such, making this code performant is important. Profiling shows
that a significant portion of the time is spent calling asdict() to
serialize the UserMessageNotificationsData dataclass. In this case
`asdict` does several steps which we do not need, such as attempting
to recurse into its fields, and deepcopy'ing the values of the fields.
In our use case, these add a notable amount of overhead:
```py3
from zerver.tornado.event_queue import UserMessageNotificationsData
from dataclasses import asdict
from timeit import timeit

# Build a representative notifications-data object (all flags False).
o = UserMessageNotificationsData(1, False, False, False, False, False, False, False, False, False, False, False)

# Compare dataclasses.asdict (recursive, deep-copying) with a plain
# shallow copy of the instance's fields.
%timeit asdict(o)
%timeit {**vars(o)}
```
Replace the `asdict` call with a direct access of the fields. We
perform a shallow copy because we do need to modify the resulting
fields.
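A minimal sketch of that replacement, using a hypothetical dataclass rather than the real UserMessageNotificationsData:
```py3
from dataclasses import dataclass

@dataclass
class ExampleNotificationsData:  # hypothetical stand-in for the real dataclass
    online_push_enabled: bool
    pm_email_notify: bool

o = ExampleNotificationsData(True, False)
# Shallow copy of the flat field values; no recursion, no deepcopy.
result = {**vars(o)}
result["pm_email_notify"] = True  # safe to mutate without touching `o`
```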
Black 23 enforces some slightly more specific rules about empty line
counts and redundant parenthesis removal, but the result is still
compatible with Black 22.
(This does not actually upgrade our Python environment to Black 23
yet.)
Signed-off-by: Anders Kaseorg <anders@zulip.com>
A missed message email notification, where the message is the welcome
message sent by the welcome bot on account creation, gets sent when
the user somehow does not focus the browser tab during account creation.
No missed message email or push notifications should be sent for the
messages generated by the welcome bot.
'internal_send_private_message' now accepts a
'disable_external_notifications' parameter, which is set to 'True' when
the sender is the welcome bot.
A check is introduced in `trivially_should_not_notify` so that we do not
notify if `disable_external_notifications` is true.
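A minimal sketch of that check, using a hypothetical stand-in for the real notification-data class:
```py3
from dataclasses import dataclass

@dataclass
class NotificationData:  # hypothetical stand-in
    user_id: int
    disable_external_notifications: bool

def trivially_should_not_notify(data: NotificationData, acting_user_id: int) -> bool:
    # Never notify externally when the sender (e.g. the welcome bot)
    # disabled external notifications, or when the user acted themselves.
    return data.disable_external_notifications or data.user_id == acting_user_id
```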
TestCases are updated to include the `disable_external_notifications`
check in the early (False) return patterns of `is_push_notifiable` and
`is_email_notifiable`.
One query reduced for both `test_create_user_with_multiple_streams`
and `test_register`.
Reason: When the welcome bot sends a message after user creation,
`do_send_messages` calls `get_active_presence_idle_user_ids`, and
`user_ids` in `get_active_presence_idle_user_ids` remains empty if
`disable_external_notifications` is true, because `is_notifiable` returns
false.
`get_active_presence_idle_user_ids` calls `filter_presence_idle_user_ids`
and since the `user_ids` is empty, the query inside the function doesn't
get executed.
MissedMessageHookTest updated.
Fixes: #22884
When doing rapid-pace mark-as-unread in the Zulip web application, one
observed assertion failures showing that the server would send an
event containing multiple message IDs, but with only one of the messages
present in the message_details side data structure.
The cause of this was the "virtual events" compression system; two
flags/remove/read events were being combined by simply concatenating
the lists of events, without any attempt to merge the
`message_details` field on those objects.
The immediate fix is to disable virtual events compression for this
event class, but it's not unlikely we'll need to just eliminate the
virtual_events system entirely, because it seems difficult to make it
soundly handle a message whose state for a given flag changes back and
forth while the client is offline.
But we'll leave that for later, since removing that optimization
deserves more discussion than fixing this event corruption bug.
return inside finally blocks causes exceptions to be silenced.
Although these blocks follow blanket ‘except Exception’ handlers, they
do not seem to have a goal of silencing BaseException and exceptions
thrown by the exception handler, so rewrite them to avoid it.
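A minimal example of the hazard:
```py3
def read_value() -> str:
    try:
        raise RuntimeError("boom")
    finally:
        # This return discards the in-flight RuntimeError (and would also
        # swallow BaseExceptions like KeyboardInterrupt), so the caller
        # never sees the failure.
        return "default"

print(read_value())  # prints "default"; the exception is silenced
```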
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Mobile clients older than v27.192 do not support PRONOUNS type
custom profile fields, so we instead change their type to
SHORT_TEXT in the data sent with the register response and also in
the events sent to those clients.
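A hedged sketch of that compatibility shim; the constants and helper are illustrative, not the actual Zulip code:
```py3
from typing import Any, Dict, List

PRONOUNS = 8     # illustrative field-type values
SHORT_TEXT = 1

def adjust_custom_profile_fields(
    fields: List[Dict[str, Any]], client_supports_pronouns: bool
) -> List[Dict[str, Any]]:
    if client_supports_pronouns:
        return fields
    # Present PRONOUNS fields as SHORT_TEXT to older mobile clients.
    return [
        {**field, "type": SHORT_TEXT} if field["type"] == PRONOUNS else field
        for field in fields
    ]
```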
One should now be able to configure a regex by appending _regex to the
port number:
[tornado_sharding]
9802_regex = ^[l-p].*\.zulipchat\.com$
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We now send a new user_topic event while muting and unmuting topics.
fetch_initial_state_data now returns an additional user_topics array to
the client that will maintain the user-topic relationship data.
This will support any future addition of new features to modify the
relationship between a user-topic pair.
This commit adds the relevant backend code and schema for the new
event.
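An illustrative shape for the new event (the field names follow the description above; treat the exact schema as an assumption):
```py3
user_topic_event = {
    "type": "user_topic",
    "stream_id": 7,
    "topic_name": "lunch",
    "last_updated": 1666400000,   # UNIX timestamp
    "visibility_policy": 1,       # e.g. a "muted" policy value
}
```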
Instead of using `request.POST` to access the `data` parameter used
in the internal `notify_tornado` path, adds the `has_request_variables`
decorator and accesses `data` as a `REQ` parameter.
Expands `test_tornado_endpoint` in `test_event_system.py` to cover
`data` being a required parameter for this path.
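A hedged sketch of the pattern (not the actual Zulip view; the response handling shown is a stand-in):
```py3
from django.http import HttpRequest, HttpResponse

from zerver.lib.request import REQ, has_request_variables

@has_request_variables
def notify(request: HttpRequest, data: str = REQ()) -> HttpResponse:
    # `data` is now a required REQ parameter rather than being read from
    # request.POST; omitting it yields the standard missing-parameter error.
    assert data  # stand-in for the real event-processing logic
    return HttpResponse(status=200)
```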
This was for the old /messages/latest API that was removed in commit
e06722657a.
If we wanted a new check like this, it shouldn’t go in zulip_finish,
because that only runs when the client gets an asynchronous response
from polling an initially-empty queue, and not when the client gets a
synchronous response from polling a nonempty queue.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
For the same reason as `handler_id`, we define `_request`
as an attribute. Note that the name `request` is already taken.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This prevents us from relying on a side-effect of `allocate_handler_id`
that monkey-patches `handler_id` on the `AsyncDjangoHandler` object,
allowing mypy to acknowledge the existence of `handler_id` as an `int`.
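A minimal sketch of the idea: declaring the attribute on the class lets mypy see it and its type, instead of relying on an external assignment:
```py3
class AsyncDjangoHandler:  # simplified; the real class has other bases and members
    handler_id: int  # declared here so mypy knows the attribute exists and is an int

handler = AsyncDjangoHandler()
handler.handler_id = 42  # e.g. assigned by allocate_handler_id
```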
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Commit 6fd1a558b7 (#21469) introduced an
await point where get_events_backend calls fetch_events in order to
switch threads. This opened the possibility that, in the window
between the connect_handler call in fetch_events and the old location
of this assignment in get_events_backend, an event could arrive,
causing ClientDescriptor.add_event to crash on missing
handler._request. Fix this by assigning handler._request earlier.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
According to the documentation: “Pika does not have any notion of
threading in the code. If you want to use Pika with threading, make
sure you have a Pika connection per thread, created in that thread. It
is not safe to share one Pika connection across threads, with one
exception: you may call the connection method add_callback_threadsafe
from another thread to schedule a callback within an active pika
connection.”
https://pika.readthedocs.io/en/stable/faq.html
This also means that synchronous Django code running in Tornado will
use its own synchronous SimpleQueueClient rather than sharing the
asynchronous TornadoQueueClient, which is unfortunate but necessary as
they’re about to be on different threads.
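A minimal sketch of the one thread-safe pattern the Pika docs allow, scheduling a publish from another thread onto the connection's own thread:
```py3
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

def publish_from_other_thread(body: bytes) -> None:
    def do_publish() -> None:
        channel.basic_publish(exchange="", routing_key="test", body=body)

    # Safe to call from another thread; the publish itself runs on the
    # thread that owns the connection (which must be processing events).
    connection.add_callback_threadsafe(do_publish)
```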
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We previously forked tornado.autoreload to work around a problem where
it would crash if you introduced a syntax error and would not recover
when you fixed it (https://github.com/tornadoweb/tornado/issues/2398).
A much more maintainable workaround for that issue, at least in
current Tornado, is to use tornado.autoreload as the main module.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
When `update_message` events were updated to have a consistent
format for both normal message updates/edits and special
rendering preview updates, the logic used in the tornado event
queue processor to identify the special events for sending
notifications no longer applied.
Updates that logic to use the `rendering_only` flag (if present)
that was added to the `update_message` event format to identify
if the event processor should potentially send notifications to
users.
For upgrade compatibility, if the `rendering_only` flag is not present,
uses the previous event structure and checks for the absence of the
`user_id` property, which indicated the special rendering preview
updates.
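A sketch of that compatibility check (a hypothetical helper, not the exact event-queue code):
```py3
from typing import Any, Dict

def is_rendering_only_update(event: Dict[str, Any]) -> bool:
    if "rendering_only" in event:
        return event["rendering_only"]
    # Older event format: rendering-only preview updates were identified
    # by the absence of a user_id.
    return "user_id" not in event
```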
Fixes #16022.
`cachify` essentially caches the return value of a function using only
the non-keyword-only arguments as the key.
The use case of the function in the backend can be sufficiently covered
by `functools.lru_cache` as an unbounded cache. There is no significant
difference apart from `cachify` overlooking keyword-only arguments, and
`functools.lru_cache` being conveniently typed.
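A minimal example of the replacement:
```py3
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache, matching cachify's behavior
def expensive_lookup(key: str) -> str:
    return key.upper()

expensive_lookup("a")  # computed
expensive_lookup("a")  # served from the cache
```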
Signed-off-by: Zixuan James Li <359101898@qq.com>
Adds request as a parameter to json_success as a refactor towards
making `ignored_parameters_unsupported` functionality available
for all API endpoints.
Also, removes any data parameters that are an empty dict or
a dict with the generic success response values.
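A hedged sketch of the new calling convention (an illustrative endpoint, not actual Zulip code):
```py3
from django.http import HttpRequest, HttpResponse

from zerver.lib.response import json_success

def example_endpoint(request: HttpRequest) -> HttpResponse:
    # Previously json_success() or json_success({}) was called; passing the
    # request lets ignored_parameters_unsupported be reported per request.
    return json_success(request)
```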