PostgreSQL's estimate of the number of UserMessage rows for a single
message can be wildly off, due to poor statistics generation. This
causes this query, with 100-message batch sizes, to incorrectly
estimate millions of matched rows, causing it to perform a full-table
index scan rather than piecemeal lookups using the `message_id` index.
Reduce the batch size to 50, which is enough to tip in favor of a
rational query plan.
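As a rough sketch of the batching this implies (the `UserMessage`
filter and flag update shown here are illustrative, not the exact
query this commit tunes):

```python
BATCH_SIZE = 50  # small enough that the planner keeps using the message_id index

def update_flags_in_batches(message_ids, flags_to_set):
    # Hypothetical helper: issue one UPDATE per 50-message slice so that the
    # row estimate for each query stays small and per-message index lookups
    # remain the cheapest plan.
    for i in range(0, len(message_ids), BATCH_SIZE):
        batch = message_ids[i : i + BATCH_SIZE]
        UserMessage.objects.filter(message_id__in=batch).update(flags=flags_to_set)
```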
The presence of `len(messages)` outside the transaction caused the
full resultset to be fetched outside of the transaction. This should
ideally happen inside the transaction, and only the count is actually
needed.
However, also note that counting the matching rows and then executing
a second query which embeds the same conditions is susceptible to
phantom reads, where a query with the same conditions returns
different resultsets, under PostgreSQL's default transaction isolation
of "read committed." While this could be resolved by pulling the
returned IDs into a Python list, that would not address the fact that
concurrent updates which change the resultset would still make the
overall algorithm incorrect.
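A minimal sketch of the count-inside-transaction piece, with an
illustrative filter rather than the actual query:

```python
from django.db import transaction

with transaction.atomic():
    # Only the count is needed, and it is computed inside the transaction.
    # Under "read committed" isolation, a later query with the same WHERE
    # clause may still see a different set of rows (phantom reads), which is
    # why the comment added by this commit spells out when the two-step
    # algorithm is actually correct.
    unread_count = UserMessage.objects.filter(
        user_profile_id=user_profile_id,  # illustrative conditions
        message_id__in=message_ids,
    ).count()
```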
Add a comment clarifying the conditions under which the algorithm is
correct. A more correct algorithm would walk the UserMessage rows
which are unread and in the stream, but this requires a
whole-UserMessage index which would be quite large for such an
infrequent use case.
This leads to significant speedups. In a test, with 100 random unique
event classes, the old code processed a batch of 100 rows (on average
66-ish unique in the batch) in 0.45 seconds. Doing this in a single
query processes the same batch in 0.0076 seconds.
We previously created the connection to the outgoing email server when
the EmailSendingWorker was first created. Since creating the
connection can fail (e.g. because of firewalls or typos in the
hostname), this can cause the `QueueProcessingWorker` creation to
raise an exception. In multi-threaded mode, exceptions in the worker
threads which are _not_ during the handling of a specific event
percolate out to `log_and_exit_if_exception` and trigger the
termination of the entire process -- stopping all worker threads from
making forward progress.
Contain the blast radius of misconfigured email servers by deferring
the opening of the connection until it is first needed. This will not
cause any overall performance change, since it only affects the
latency of the very first email after startup.
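A minimal sketch of the deferred-connection idea (class and method
names mirror the description above but are illustrative, not the exact
worker code):

```python
from django.core.mail import get_connection

class EmailSendingWorker:
    def __init__(self) -> None:
        # Do not open the SMTP connection here: a firewall or hostname typo
        # would raise during worker creation and kill every worker thread.
        self.connection = None

    def ensure_email_connection(self):
        # Opening lazily means a misconfigured server only fails the events
        # that actually try to send email, and only the first email after
        # startup pays the connection latency.
        if self.connection is None:
            self.connection = get_connection()
            self.connection.open()
        return self.connection
```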
Given that most of the use cases for the realms-only code path would
really like to upload audit logs too, and the others would likely
produce a better user experience if they uploaded audit logs, we
should just have a single main code path here, i.e.
'send_analytics_to_push_bouncer'.
We still only upload usage statistics according to the documented
option, and only from the analytics cron job.
The error handling takes place in 'send_analytics_to_push_bouncer'
itself.
Change the URL in the notification message to point to the settings
interface, rather than linking to the export directly.
This is a much better user experience in the case that the export has
been deleted since the time the export was requested.
Fixes: #26923.
The type annotation for functools.partial uses unchecked Any for all
the function parameters (both early and late). returns.curry.partial
uses a mypy plugin to check the parameters safely.
https://returns.readthedocs.io/en/latest/pages/curry.html
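A small, self-contained illustration of the difference (the `greet`
example is ours, not from the commit):

```python
from returns.curry import partial

def greet(greeting: str, name: str) -> str:
    return greeting + ", " + name

hello = partial(greet, "Hello")
print(hello("World"))  # OK: checked as (name: str) -> str

# hello(42) would be flagged by mypy with the returns plugin enabled;
# functools.partial(greet, "Hello")(42) type-checks (parameters are Any)
# and only fails at runtime with a TypeError.
```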
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This migration applies under the assumption that extra_data_json has
been populated for all existing and coming audit log entries.
- This removes the manual conversions back and forth for extra_data
throughout the codebase including the orjson.loads(), orjson.dumps(),
and str() calls.
- The custom handler used for converting Decimal is removed since
DjangoJSONEncoder handles that for extra_data.
- We remove None-checks for extra_data because it is now no longer
nullable.
- Meanwhile, we want the bouncer to support processing RealmAuditLog entries for
remote servers before and after the JSONField migration on extra_data.
- Since extra_data should now always be a dict for newer remote
servers, which have been migrated, the test cases are updated to
create RealmAuditLog objects by passing a dict for extra_data before
sending over the analytics data. Note that while JSONField allows for
non-dict values, a proper remote server always passes a dict for
extra_data.
- We still test out the legacy extra_data format because not all
remote servers have migrated to use JSONField extra_data.
This verifies that support for extra_data being a string or None has not
been dropped.
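A minimal sketch of the two extra_data shapes exercised by the tests
(the payload itself is illustrative):

```python
import orjson

# Legacy format: a TextField holding a JSON (or str()-ified) string, requiring
# manual orjson.dumps()/orjson.loads() conversions and None-checks.
legacy_extra_data = orjson.dumps({"property": "name", "old": "a", "new": "b"}).decode()

# Migrated format: a plain dict stored in the JSONField; no manual conversion
# and no None-check needed, since the field is no longer nullable.
migrated_extra_data = {"property": "name", "old": "a", "new": "b"}
```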
Co-authored-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We have historically cached two types of values
on a per-request basis inside of memory:
* linkifiers
* display recipients
Both of these caches were hand-written, and they
both actually cache values that are also in memcached,
so the per-request cache essentially only saves us
from a few memcached hits.
I think the linkifier per-request cache is a necessary
evil. It's an important part of message rendering, and
it's not super easy to structure the code to just get
a single value up front and pass it down the stack.
I'm not so sure we even need the display recipient
per-request cache any more, as we are generally pretty
smart now about hydrating recipient data in terms of
how the code is organized. But I haven't done thorough
research on that hypothesis.
Fortunately, it's not rocket science to just write
a glorified memoize decorator and tie it into key
places in the code:
* middleware
* tests (e.g. asserting db counts)
* queue processors
That's what I did in this commit.
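A glorified-memoize sketch of the idea (the decorator and flush hook
in the actual commit differ in their details):

```python
from typing import Any, Callable, Dict, Tuple

per_request_cache: Dict[Tuple[str, Tuple[Any, ...]], Any] = {}

def cache_for_request(key_prefix: str) -> Callable[[Callable[..., Any]], Callable[..., Any]]:
    def decorator(func: Callable[..., Any]) -> Callable[..., Any]:
        def wrapper(*args: Any) -> Any:
            key = (key_prefix, args)
            if key not in per_request_cache:
                # Cache miss: fall through to memcached (and possibly the DB).
                per_request_cache[key] = func(*args)
            return per_request_cache[key]
        return wrapper
    return decorator

def flush_per_request_caches() -> None:
    # Called from middleware, queue processors, and tests that assert
    # database/memcached query counts.
    per_request_cache.clear()
```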
This commit definitely reduces the amount of code
to maintain. I think it also gets us closer to
possibly phasing out this whole technique, but that
effort is beyond the scope of this PR. We could
add some instrumentation to the decorator to see
how often we get a non-trivial number of saved
round trips to memcached.
Note that when we flush linkifiers, we just use
a big hammer and flush the entire per-request
cache for linkifiers, since there is only ever
one realm in the cache.
An exception which escapes from this loop can kill the background
worker thread; this results in consuming the queue (leading to the
illusion of progress) but more and more rows silently piling up in the
ScheduledMessageNotificationEmail table.
Wrap the inside of the `while True` loop in a try/except to make sure
that no exceptions escape and kill the background thread. To prevent
even more indentation, the inner loop is extracted into its own
function. It returns True/False to signal whether `self.stopping` was
set to tell the loop to stop; we cannot check it ourselves in the
outer loop, because the lock must be held to examine it.
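A minimal sketch of the resulting structure (class and method names
are illustrative; the real worker batches
ScheduledMessageNotificationEmail rows):

```python
import logging
import threading

class NotificationEmailWorker:
    def __init__(self) -> None:
        self.stopping = False
        self.lock = threading.Lock()

    def run(self) -> None:
        while True:
            try:
                # The inner step reports whether self.stopping was observed,
                # since checking it requires holding the lock.
                if self.handle_next_batch():
                    break
            except Exception:
                # Nothing may escape, or the thread dies while the queue keeps
                # draining and rows silently pile up in the table.
                logging.exception("Exception in scheduled-email worker loop")

    def handle_next_batch(self) -> bool:
        with self.lock:
            if self.stopping:
                return True
            # ... process due rows here ...
            return False
```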
Restore the default django.utils.log.AdminEmailHandler when
ERROR_REPORTING is enabled. Those with more sophisticated needs can
turn it off and use Sentry or a Sentry-compatible system.
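For reference, a minimal Django LOGGING stanza using that handler
looks roughly like this (Zulip's actual settings wiring is more
involved):

```python
LOGGING = {
    "version": 1,
    "handlers": {
        "mail_admins": {
            "level": "ERROR",
            "class": "django.utils.log.AdminEmailHandler",
        },
    },
    "loggers": {
        "django": {"handlers": ["mail_admins"], "level": "ERROR"},
    },
}
```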
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The MissedMessage queue worker is the single callsite of
`handle_missedmessage_emails`, which immediately transforms the list
of events into a dict keyed by message-id.
Skip the intermediate list step, and use defaultdict and a dataclass
to simplify the code and make its pieces explicit. This removes the
unused user_profile_id and message_id pieces of the data structure.
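A minimal sketch of the grouping (the event fields shown are
illustrative, not the exact missed-message event schema):

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass(frozen=True)
class MissedMessageData:
    trigger: str
    mentioned_user_group_id: Optional[int] = None

def events_by_message_id(events: List[dict]) -> Dict[int, List[MissedMessageData]]:
    grouped: Dict[int, List[MissedMessageData]] = defaultdict(list)
    for event in events:
        grouped[event["message_id"]].append(
            MissedMessageData(
                trigger=event["trigger"],
                mentioned_user_group_id=event.get("mentioned_user_group_id"),
            )
        )
    return grouped
```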
For tests that use the dev server, like test-api and
test-js-with-puppeteer, we don't have consumers for the queues. As
they eventually time out, we get unnecessary error messages. This adds
a new flag, disable_timeout, to disable this behavior for the test
cases.
Realm exports may OOM on deployments with low memory; to ensure
forward progress, log the start time in the RealmAuditLog entry, and
key off of its existence to prevent re-attempting an export which was
already tried once.
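A minimal sketch of the guard, assuming extra_data is a dict on the
export's RealmAuditLog row; the "started_timestamp" key name is
illustrative:

```python
import logging
import time
from typing import Any, Dict

def start_export_once(extra_data: Dict[str, Any]) -> bool:
    # If a previous attempt recorded a start time but never finished, it most
    # likely OOMed; skip it rather than repeatedly filling the disk.
    if extra_data.get("started_timestamp") is not None:
        logging.warning("Skipping export that was already attempted once")
        return False
    extra_data["started_timestamp"] = time.time()
    return True
```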
We previously hard-coded 6 threads for the realm export; in low-memory
environments, spawning 6 threads for an export can lead to an OOM,
which kills the process and leaves a partial export on disk -- which
is then tried again, since the export was never completed. This leads
to excessive disk consumption and brief repeated outages of all other
workers, until the failing export job is manually de-queued somehow.
Lower the export to only use one thread if it is already running in a
multi-threaded environment. Note that this does not guarantee forward
progress; it merely makes it more likely that exports will succeed in
low-memory deployments.
The previous implementation leaked database connections, as a new
thread (and thus a new thread-local database connection) was made for
each timer execution. While these connections were relatively
lightweight in Python, they also incurred memory overhead in the
PostgreSQL server itself. The logic for managing the timer was also
unclear, and the unavoidable deadlock in the stopping logic was rather
unfortunate.
Rewrite with one explicit worker thread which handles the delayed
message sending. The RabbitMQ consumer creates the database rows, and
notifies the worker to start its 5s timeout. Because it is controlled
by a condition variable, it does not hold the lock while waiting, and
can be notified to exit.
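A minimal sketch of that structure (class and method names, and the
exact batching, are illustrative):

```python
import threading

class DelayedMessageSender:
    def __init__(self) -> None:
        self.stopping = False
        self.has_pending = False
        self.cv = threading.Condition()

    def consume(self, event: dict) -> None:
        # RabbitMQ consumer thread: persist the database row elsewhere, then
        # tell the worker to start (or keep running) its 5s timeout.
        with self.cv:
            self.has_pending = True
            self.cv.notify()

    def work(self) -> None:
        with self.cv:
            while not self.stopping:
                if self.has_pending:
                    # wait() releases the lock, so consume()/stop() can still
                    # run; after ~5 seconds we flush whatever is due.
                    self.cv.wait(timeout=5)
                    if not self.stopping:
                        self.send_due_messages()
                        self.has_pending = False
                else:
                    # Nothing pending: block until notified, without polling.
                    self.cv.wait()

    def stop(self) -> None:
        with self.cv:
            self.stopping = True
            self.cv.notify()

    def send_due_messages(self) -> None:
        ...  # send and delete rows whose scheduled time has passed
```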
This is helpful for debugging -- generally these tasks are in a worker
queue because they take a long time to run, so knowing what long task
is about to start before it does, rather than just after, is useful.
A return inside a finally block causes exceptions to be silenced.
Although these blocks follow blanket `except Exception` handlers, they
do not appear to be intended to silence BaseException or exceptions
thrown by the exception handler itself, so rewrite them to avoid
doing so.
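A generic illustration of the pitfall (not code from this commit):

```python
def swallowed() -> str:
    try:
        raise KeyboardInterrupt  # a BaseException; "except Exception" ignores it
    except Exception:
        return "handled"
    finally:
        return "finally"  # silently discards the KeyboardInterrupt

def propagated() -> str:
    result = "default"
    try:
        raise KeyboardInterrupt
    except Exception:
        result = "handled"
    return result  # unreached here: the KeyboardInterrupt propagates as expected
```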
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Both of these compatibility blocks can be deleted, since you can't
upgrade directly to any supported release from the versions where the
old event formats would be used.
Previously, an active production Zulip server would experience a class
of deadlocks caused by two or more concurrent bulk update operations
on the UserMessage table.
This is because UPDATE ... SET ... WHERE statements that execute in
parallel take row-level UPDATE locks as they get results; since the
query plans may fetch rows in different orders between two queries,
this can result in deadlocks.
Some databases allow ORDER BY on their UPDATE ... WHERE statements;
PostgreSQL does not. In PostgreSQL, the answer is to do a sub-select
with an ORDER BY ... FOR UPDATE to ensure consistent ordering on row
locks.
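A minimal sketch of that pattern as raw SQL issued from Python (the
table and column names follow the UserMessage model but are
illustrative here):

```python
from django.db import connection

def set_flags_in_lock_order(flag_mask: int, message_ids: list) -> None:
    with connection.cursor() as cursor:
        cursor.execute(
            """
            UPDATE zerver_usermessage
            SET flags = flags | %(mask)s
            WHERE id IN (
                SELECT id FROM zerver_usermessage
                WHERE message_id = ANY(%(ids)s)
                ORDER BY id
                FOR UPDATE
            )
            """,
            {"mask": flag_mask, "ids": message_ids},
        )
```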
We do this in all code paths that use bitand or bitor as part of bulk
editing message flags, which should ensure that these concurrent
operations obtain row-level locks on the table in the same order.
Fixes #19054.