Otherwise, if consume_func raised an exception for any reason *other*
than the alarm being fired, the still-pending alarm would have fired
later at some arbitrary point in the calling code.
We need two try…finally blocks in case the signal arrives just before
signal.alarm(0).
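As a sketch of that pattern (the handler, exception, and function names
here are illustrative, not necessarily the actual ones):

```python
import signal

class WorkerTimeoutError(Exception):
    pass

def on_alarm(signum, frame):
    raise WorkerTimeoutError

def consume_with_timeout(consume_func, events, max_seconds=10):
    signal.signal(signal.SIGALRM, on_alarm)
    try:
        signal.alarm(max_seconds)
        try:
            consume_func(events)
        finally:
            # Cancel the pending alarm whether consume_func returned
            # normally or raised for some unrelated reason.
            signal.alarm(0)
    finally:
        # If SIGALRM arrives just before signal.alarm(0), the
        # WorkerTimeoutError raised by the handler skips the rest of
        # the inner finally; this outer finally still runs, restoring
        # the default handler on the way out.
        signal.signal(signal.SIGALRM, signal.SIG_DFL)
```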
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Replaced ImageOps.fit with ImageOps.pad in zerver/lib/upload.py;
ImageOps.pad returns a resized and padded version of the image,
expanded to fill the requested aspect ratio and size.
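A minimal Pillow illustration of the difference (file names and target
size here are hypothetical):

```python
from PIL import Image, ImageOps

with Image.open("avatar.png") as im:
    # fit() crops the image so that it completely fills the requested
    # box; pad() instead scales the whole image to fit inside the box
    # and fills the leftover area with a background color.
    cropped = ImageOps.fit(im, (100, 100))
    padded = ImageOps.pad(im, (100, 100))
    cropped.save("avatar-cropped.png")
    padded.save("avatar-padded.png")
```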
Fixes part of #16370.
SIGALRM is the simplest way to set a specific maximum duration that
queue workers can take to handle a specific message. This only works
in non-threaded environments, however, as signal handlers are
per-process, not per-thread.
MAX_CONSUME_SECONDS is set quite high, at 10s -- the worker with the
longest average consume time is embed_links, which hovers near 1s.
Since just knowing the recent mean does not give much information[1],
it is difficult to know how much variance is expected. As such, we
set the threshold to be such that only events which are significant
outliers will be timed out. This can be tuned downwards as more
statistics are gathered on the runtime of the workers.
The exception to this is DeferredWorker, which deals with quite-long
requests, and thus has no enforceable SLO.
[1] https://www.autodesk.com/research/publications/same-stats-different-graphs
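One plausible shape for this ceiling and its opt-out (class and
attribute names are assumed for illustration):

```python
from typing import Optional

class QueueProcessingWorker:
    # Generous ceiling on consume time: well above embed_links' ~1s
    # average, so only significant outliers get timed out.
    MAX_CONSUME_SECONDS: Optional[int] = 10

class DeferredWorker(QueueProcessingWorker):
    # Deals with quite-long requests, so no enforceable SLO: disable
    # the timeout entirely.
    MAX_CONSUME_SECONDS = None
```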
Currently, drain_queue and json_drain_queue ack every message as it is
pulled off the queue, until the queue is empty. This means that if the
consumer crashes between pulling a batch of messages off the queue and
actually processing them, those messages will be permanently lost.
Sending an ACK on every message also results in a significant amount
of traffic to rabbitmq, with notable performance implications.
Send a single ACK after the processing has completed, by making
`drain_queue` into a contextmanager. Additionally, use the `multiple`
flag to ACK all of the messages at once -- or explicitly NACK the
messages if processing failed. Sending a NACK will re-queue them at
the front of the queue.
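A sketch of this interface on top of pika's BlockingChannel (the
batching details and names are illustrative):

```python
from contextlib import contextmanager

@contextmanager
def drain_queue(channel, queue_name):
    # Pull everything currently in the queue without acking it yet.
    messages = []
    last_tag = None
    while True:
        method, _properties, body = channel.basic_get(queue_name)
        if method is None:  # queue is empty
            break
        last_tag = method.delivery_tag
        messages.append(body)
    try:
        yield messages
    except Exception:
        if last_tag is not None:
            # Processing failed: re-queue the whole batch.
            channel.basic_nack(last_tag, multiple=True, requeue=True)
        raise
    else:
        if last_tag is not None:
            # One ACK with multiple=True covers every delivery up to
            # and including last_tag.
            channel.basic_ack(last_tag, multiple=True)
```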
Performance of a no-op dequeue before this change:
```
$ ./manage.py queue_rate --count 50000 --batch
Purging queue...
Enqueue rate: 10847 / sec
Dequeue rate: 2479 / sec
```
Performance of a no-op dequeue after this change (a ~24% increase):
```
$ ./manage.py queue_rate --count 50000 --batch
Purging queue...
Enqueue rate: 10752 / sec
Dequeue rate: 3079 / sec
```
For the lines of code changed here, we were getting field reports
that the code below was returning `undefined`:
emoji.all_realm_emojis.get(r.emoji_code)
It's not really clear to me how this could happen,
but we definitely should fail softly here. We
still report it as an error, but we let the function
return and don't trigger a TypeError.
If there's a legitimate reason for realms to delete
realm emojis, we should either downgrade this to a
warning or consider a strategy of back-fixing messages
when realm emojis get deleted.
This enables core-js modules for proposals marked as finished between
core-js 3.0 and 3.6. It’s recommended upstream, although it has no
current effect for us.
The PROVISION_VERSION bump is skipped because there is no actual
dependency change.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This has the consequence of turning on loose mode for
@babel/proposal-class-properties, which generates slightly smaller
code.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Part of #16094.
Strings constructed by _() were not being
translated in the /stats page.
This was because the session variable was not set.
Ideally this should have been a part of b82bda9.
Part of #16094.
Moved the language selection preference logic from home.py to a new
function in i18n.py to avoid repetition in analytics views and home
views.
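A rough sketch of such a helper (the actual name and signature in
i18n.py may differ):

```python
from django.http import HttpRequest
from django.utils import translation

def set_request_language(request: HttpRequest, language: str) -> None:
    # Activate the user's preferred language and record it in the
    # session, so strings built with _() are translated on later
    # requests (e.g. /stats) as well.
    translation.activate(language)
    request.session[translation.LANGUAGE_SESSION_KEY] = translation.get_language()
```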
For users who are not authenticated, we don't need to 2FA them; we
only need it once they are trying to log in.
Tweaked by tabbott to be much more readable; the new style might
require new test coverage.
We set wildcard_mention_policy in the test database so that we can
avoid future changes in mention puppeteer tests, as the default
membership of streams in the Zulip development organization is large
enough to prevent random users from using wildcard mentions.
We add a new wildcard_mention_policy setting to handle wildcard
mentions in large streams, with a wide range of policies available to
organizations.
We set the default to the safe option for preventing accidental spam:
only stream administrators may use wildcard mentions in large streams.
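A simplified sketch of the enforcement (the policy values, threshold,
and helper name are illustrative, not Zulip's exact ones):

```python
WILDCARD_MENTION_LARGE_STREAM_THRESHOLD = 15  # illustrative value

# Illustrative policy values.
POLICY_EVERYONE = 1
POLICY_ADMINS_ONLY = 2

def wildcard_mention_allowed(sender_is_admin: bool,
                             subscriber_count: int,
                             policy: int) -> bool:
    # Small streams are exempt: a wildcard mention there cannot spam
    # many users, whatever the policy.
    if subscriber_count < WILDCARD_MENTION_LARGE_STREAM_THRESHOLD:
        return True
    if policy == POLICY_ADMINS_ONLY:
        return sender_is_admin
    return True
```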
We rename all_everyone_warn_threshold to
wildcard_mention_large_stream_threshold, since we are adding
wildcard_mention_policy and this constant will also be used to show an
error when wildcard_mention_policy is set to admins only.
This prevents the memcached connection from being shared across
multiple processes, and hopefully addresses unexpected behavior from
cached functions like get_user_profile_by_id invoked inside the worker
processes.
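A sketch of the kind of fix this implies for a fork-based worker
launcher (the launcher itself is hypothetical):

```python
import os
from django.core.cache import cache

def launch_worker(run_worker):
    # Close the memcached connection inherited from the parent so the
    # child opens its own, rather than sharing one socket across
    # processes.
    pid = os.fork()
    if pid == 0:
        cache.close()
        run_worker()
        os._exit(0)
    return pid
```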
Signed-off-by: Anders Kaseorg <anders@zulip.com>
For dropdown elements, use bootstrap styling. The styling was not
applied by default by bootstrap, since we use a combination of
dropdown + simplebar for this element; simplebar inserts elements of
its own, which doesn't match the element structure bootstrap expects.
The style is the same as in bootstrap v2.3.2 for
.dropdown-menu > li > a.
This reverts commit 5275d49f05
(effectively), which created more problems than it solved. #8484 is
not a bug: a newline can be included literally with no escaping within
POSIX quotes. Meanwhile, $"" is a bashism, and not even the correct
bashism: it translates strings using the LC_MESSAGES catalog. If the
user wants to do something complicated, they can consult the
documentation for their shell.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The rabbitmq cron jobs exist in order to call rabbitmqctl as root and
write the output to files that nagios can consume, since nagios is not
allowed to run rabbitmqctl.
On systems which do not have nagios configured, these every-minute
cron jobs add non-trivial load, to no effect. Move their
installation into `zulip_ops`. In doing so, also combine the cron.d
files into a single file; this allows us to `ensure => absent` the old
filenames, removing them from existing systems. Leave the resulting
combined cron.d file in `zulip`, since it is still of general utility
and note.