After the topic list is updated, only restore focus to it if it was
focused before. This avoids unexpectedly jumping focus from, say, the
compose box to the topic search when the topic list is updated.
Because Camo includes logic to deny access to private subnets, routing
its requests through Smokescreen is generally not necessary. However,
it may be necessary if Zulip has configured a non-Smokescreen exit
proxy.
Default Camo to using the proxy only if it is not Smokescreen, with a
new `proxy.enable_for_camo` setting to override this behaviour if need
be. Note that this setting lives in `zulip.conf` on the host where Camo
is installed -- not the Zulip frontend host, if they are different.
Fixes: #20550.
For `no_serve_uploads` and `http_only`, which previously treated any
non-empty value as enabling them, this tightens which values count as
true. For `pgroonga` and `queue_workers_multiprocess`, this broadens
the accepted values beyond just `enabled` and `true`, respectively.
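A rough sketch of the kind of normalization this implies, in Python for
illustration (the real logic lives in the zulip.conf/puppet parsing
code; the exact set of accepted spellings here is an assumption):

    def zulip_conf_bool(value: str) -> bool:
        # Accept only explicit "true" spellings, rather than treating any
        # non-empty string as enabling the feature.
        return value.strip().lower() in ("1", "yes", "true", "enable", "enabled")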
This catches nine functional indexes that the previous query didn’t:
upper_preregistration_email_idx
upper_stream_name_idx
upper_subject_idx
upper_userprofile_email_idx
zerver_message_recipient_upper_subject
zerver_mutedtopic_stream_topic
zerver_stream_realm_id_name_uniq
zerver_userprofile_realm_id_delivery_email_uniq
zerver_userprofile_realm_id_email_uniq
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Use a popover which displays both of the options instead of long text.
We only show a small text label indicating the current state, which
the user can click on to trigger the popover.
My PR #18974 introduced a bug where the logged-in dropdown pill on
the /help pages stopped working and has been unresponsive ever
since. This was caused by the incorrect assumption that each
`.dropdown` would be inside an unordered list `ul`. However, that
isn't the case for the dropdown pill on the /help pages. This commit
simply removes said assumption, by widening the scope of the relevant
CSS selectors.
Updating from pygments 2.10.x to 2.11.x brings new lexers,
including the new Savi lexer which is needed by the Savi community
in our Zulip chat at https://savi.zulipchat.com/.
Restarting the uwsgi processes by way of supervisor opens a window
during which nginx 502's all responses. uwsgi has a configuration
called "chain reloading" which allows for rolling restart of the uwsgi
processes, such that only one process at a time is unavailable; see
uwsgi documentation ([1]).
The tradeoff is that this requires that the uwsgi processes load the
libraries after forking, rather than before ("lazy apps"); in theory
this can lead to larger memory footprints, since they are not shared.
In practice, as Django defers much of the loading, this is not as much
of an issue. In a very basic test of memory consumption (measured by
total memory - free - caches - buffers; 6 uwsgi workers), both
immediately after restarting Django, and after requesting `/` 60 times
with 6 concurrent requests:
                  | Non-lazy  | Lazy app  | Difference
------------------+-----------+-----------+-----------
Fresh             | 2,827,216 | 2,870,480 |    +43,264
After 60 requests | 3,332,284 | 3,409,608 |    +77,324
..................|...........|...........|...........
Difference        |  +505,068 |  +539,128 |    +34,060
That is, "lazy app" loading increased the footprint pre-requests by
43MB, and after 60 requests grew the memory footprint by 539MB, as
opposed to non-lazy loading, which grew it by 505MB. Using uwsgi "lazy
app" loading does increase the memory footprint, but not by a large
percentage.
The other effect is that processes may be served by either old or new
code during the restart window. This may cause transient failures
when new frontend code talks to old backend code.
Enable chain-reloading during graceful, puppetless restarts, but only
if enabled via a zulip.conf configuration flag.
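As a sketch of the mechanism, chain reloading can be triggered by
writing the "c" command to uwsgi's master FIFO; the FIFO path below is
illustrative, and the actual path and restart logic live in Zulip's
puppet configuration and restart tooling:

    # Assumed FIFO path; the real one depends on the uwsgi configuration.
    UWSGI_MASTER_FIFO = "/home/zulip/deployments/uwsgi-control"

    def chain_reload() -> None:
        # Writing "c" asks the uwsgi master to restart workers one at a
        # time, so only a single worker is unavailable at any moment.
        with open(UWSGI_MASTER_FIFO, "wb", buffering=0) as fifo:
            fifo.write(b"c")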
Fixes #2559.
[1]: https://uwsgi-docs.readthedocs.io/en/latest/articles/TheArtOfGracefulReloading.html#chain-reloading-lazy-apps
Moves `flags` field to top part of object description because
it is always included in the event.
If a field is present only for certain types of message updates,
the description begins by stating when the field is present:
"Only present if ...".
These fields are organized by the type of message update:
stream, stream and/or topic, topic, content.
If a field is not present due to a special event, the description
ends by stating when the field is not present:
"Not present if ...".
Adds documentation for fields currently required to be returned
with any `update_message` event.
We make the banner reminding the user to confirm their new email
(shown after changing the email through the settings) sticky; it
disappears either on reload or after the new email is confirmed.
Fixes #20686.
In the previous commit (bea41e975d) we
introduced a bug by using `hotkey` instead of `hotkey.name`. Further
debugging revealed that the conditional was unnecessary, so it has
been removed in this commit. The comment for the `is_numeric`
conditional has also been updated to explain its actual purpose.
Some other inline comments have also been moved onto their own lines.
In commit 1d54b383bd we introduced some
changes to add better support for keyboard navigation with the global
time widget. Unfortunately, because get_keydown_hotkey returns
undefined for numeric keys, we caused a regression that prevented
users from typing into the time picker. Additionally, we also lost
support for the backspace and delete keys.
Hence, this commit fixes the above bug by returning early in two
places if the key pressed is backspace, delete, or a numeric key.
do_delete_users had two bugs:
1. Creating the replacement dummy users with active=True.
2. Creating the replacement dummy users with email domain set to
   realm.uri, which may not be a valid email domain.
Prior commits fixed the bugs, and this migration fixes the pre-existing
objects.
Otherwise the dummy user can be created with an invalid email domain --
e.g., in the development environment, with the domain
"@http://localhost:9991". get_fake_email_domain exists exactly for
handling these kinds of scenarios.
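A sketch of what the fixed dummy-user replacement looks like (the names
mirror Zulip's codebase, but the exact signature of
get_fake_email_domain and the email format here are assumptions):

    from zerver.models import UserProfile, get_fake_email_domain

    def replace_with_dummy(user_profile: UserProfile) -> None:
        # Use a guaranteed-valid fake email domain instead of realm.uri,
        # and make sure the replacement dummy is not an active account.
        fake_domain = get_fake_email_domain(user_profile.realm)
        user_profile.delivery_email = f"deleteduser{user_profile.id}@{fake_domain}"
        user_profile.is_active = False
        user_profile.save(update_fields=["delivery_email", "is_active"])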
Stop using `access_user_group_by_id` in notifications codepaths, as it
is meant to be used to check for _write_ access, not read
access (which is not limited). In the notification codepaths, there
are no ACLs to apply, and the ID is known-good; just load it
directly. The `for_mention` flag is removed, as it was not used in the
mention codepaths at all, only the notification ones.
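Illustratively, the notification codepath can now do a plain lookup
instead of going through the write-access helper (a sketch; the
surrounding Zulip code and error handling may differ):

    from zerver.models import UserGroup

    def get_mentioned_user_group(user_group_id: int) -> UserGroup:
        # The ID is known-good (generated by Zulip itself), so no ACL
        # check is needed; this replaces access_user_group_by_id here.
        return UserGroup.objects.get(id=user_group_id)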
get_remote_server_by_uuid (called in validate_api_key) raises
ValidationError when given an invalid UUID due to how Django handles
UUIDField. We don't want that exception and prefer the ordinary
DoesNotExist exception to be raised.
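One plausible shape of the fix, sketched here (the helper's real
location and error handling in Zulip may differ): translate the
ValidationError into the usual DoesNotExist.

    from django.core.exceptions import ValidationError
    from zilencer.models import RemoteZulipServer

    def get_remote_server_by_uuid(uuid: str) -> RemoteZulipServer:
        try:
            return RemoteZulipServer.objects.get(uuid=uuid)
        except ValidationError:
            # A malformed UUID should surface as "no such server", not as
            # a UUIDField validation error.
            raise RemoteZulipServer.DoesNotExist() from None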
Fix another tidy error caused by 1e4e6a09af23; as also noted in
f9a39b6703, these resources are necessary so that tidy does not clean
up smokescreen and then force a recompilation of it again.
APNs payloads nest the zulip-custom data more deeply than the top
level, unlike Android notifications. This led to the APNs data
silently never being truncated; this case was not caught in tests
because the mocks provided the wrong data for the APNs structure.
Adjust to look in the appropriate place within the APNs data, and
truncate that.
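As a sketch of where that truncation has to happen (the payload keys
and the size limit below are assumptions about the APNs payload shape,
not the actual implementation):

    MAX_CONTENT_LENGTH = 200  # illustrative limit

    def truncate_content(content: str) -> str:
        if len(content) <= MAX_CONTENT_LENGTH:
            return content
        return content[: MAX_CONTENT_LENGTH - 1] + "…"

    def truncate_apns_payload(payload: dict) -> dict:
        # The Zulip-specific fields live nested inside the custom data,
        # not at the top level as in Android payloads, so truncating only
        # top-level keys silently did nothing.
        zulip_data = payload.get("custom", {}).get("zulip", {})
        if "content" in zulip_data:
            zulip_data["content"] = truncate_content(zulip_data["content"])
        return payload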
This replaces the temporary (and testless) fix in
24b1439e93 with a more permanent
fix.
Instead of checking if the user is a bot just before
sending the notifications, we now just don't enqueue
notifications for bots. This is done by sending a list
of bot IDs to the event_queue code, just like other
lists which are used for creating NotificationData objects.
Credit @andersk for the test code in `test_notification_data.py`.
Previously, we suffered a bug where we would not properly condense
messages on first load of CZO (i.e., after login).
This bug was an unintended consequence of setting recent topics as the
default view: because the page loads to recent_topics, the message_list
is hidden but still gets rendered into the DOM, and when
condense_and_collapse runs, it causes get_message_height to cache a
message height of 0, which results in the message not being collapsed.
There may be other ways to trigger the same broken mechanism.
This commit changes the function so that we still return 0 but don't
cache the result.
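The pattern, sketched in Python purely for illustration (the real
get_message_height is JavaScript in the web frontend, and
measure_dom_height below is a hypothetical stand-in for the DOM
measurement):

    _height_cache: dict[int, int] = {}

    def get_message_height(message_id: int) -> int:
        if message_id in _height_cache:
            return _height_cache[message_id]
        height = measure_dom_height(message_id)  # hypothetical helper
        if height == 0:
            # The message list is hidden (e.g. recent_topics is shown),
            # so 0 is not a real height; return it without caching.
            return 0
        _height_cache[message_id] = height
        return height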
Fixes: #20666.
1e4e6a09af removed the resources for the unpacked directory, on the
argument that they were unnecessary. However, the directory (or file,
see below) that is unpacked must be managed, or it will be tidied on
the next puppet apply.
Add back the resource for `$dir`, but mark it `ensure => present`, to
support tarballs which only unpack to a single file (e.g. wal-g).
As explained in the comments in the code, just doing UUID(string) and
catching ValueError is not enough, because the uuid library sometimes
tries to modify the string to convert it into a valid UUID:
>>> a = '18cedb98-5222-5f34-50a9-fc418e1ba972'
>>> uuid.UUID(a, version=4)
UUID('18cedb98-5222-4f34-90a9-fc418e1ba972')
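A minimal sketch of the stricter check that follows from this (the
exact error handling in Zulip's validator may differ):

    from uuid import UUID

    def is_valid_uuid4(value: str) -> bool:
        try:
            uuid_object = UUID(value, version=4)
        except ValueError:
            return False
        # UUID() may silently fix up the version/variant bits, so also
        # require that the canonical form matches the submitted string.
        return str(uuid_object) == value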
The existing callsites of this are via `source` or being inlined into
the startup of a new host; in both of these cases, the surrounding
script is already `set -eu`. However, if run as a standalone tool, it
should also configure itself to catch checksum failures and other
problems.