There's no need for sharding, but this allows one to spend a bit of
extra memory to reduce image-processing latency when bursts of images
are uploaded at once.
A new table is created to track which path_id attachments are images,
what their image metadata is, and which thumbnails have been created
for them. Using path_id as the effective primary key lets us ignore
whether the attachment is archived, saving us some foreign-key messes.
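As a rough sketch, such a table could look something like the following
Django model (the model and field names here are illustrative, not
necessarily the actual schema):
```python
from django.db import models


class ImageAttachment(models.Model):
    # The attachment's path_id serves as the effective primary key, so
    # the row is valid whether the attachment is live or archived.
    path_id = models.TextField(unique=True)

    # Metadata about the original image.
    original_width_px = models.IntegerField()
    original_height_px = models.IntegerField()
    frames = models.IntegerField()

    # Which thumbnail sizes/formats have been generated so far.
    thumbnail_metadata = models.JSONField(default=list)
```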
A new worker is added to observe events when rows are added to this
table, and to generate and store thumbnails of those images in various
sizes and formats.
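In outline, that worker's consumer might look like this sketch (the
event shape, the output list, and the helper functions are assumptions
for illustration):
```python
# Hypothetical consumer for the new thumbnailing queue: look up the row,
# render each configured output, store it, and record what was produced.
THUMBNAIL_OUTPUTS = [((840, 560), "webp"), ((300, 200), "webp")]  # assumed


def consume_thumbnail_event(event: dict) -> None:
    row = ImageAttachment.objects.get(id=event["id"])
    original = read_original_image(row.path_id)  # assumed helper
    rendered = []
    for size, image_format in THUMBNAIL_OUTPUTS:
        data = resize_and_encode(original, size, image_format)  # assumed helper
        store_thumbnail(row.path_id, size, image_format, data)  # assumed helper
        rendered.append({"size": size, "format": image_format})
    row.thumbnail_metadata = rendered
    row.save(update_fields=["thumbnail_metadata"])
```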
Some S3 backends (e.g. garage or minio behind caddy) are unable to
respond to TLS requests that only have the Host header set. This makes
sure those configurations are supported going forward.
Clients making requests to Zulip with an `Authorization: Basic ...`
header, for an upload stored in S3, have all of their request headers
passed along to the S3 backend -- causing errors of the form:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>InvalidArgument</Code>
<Message>Only one auth mechanism allowed; only the X-Amz-Algorithm
query parameter, Signature query string parameter or the
Authorization header should be specified</Message>
<ArgumentName>Authorization</ArgumentName>
<ArgumentValue>Basic ...</ArgumentValue>
<RequestId>...</RequestId>
<HostId>...</HostId>
</Error>
```
Strip off all request headers which AWS reports that S3 may read[^1].
Fixes: #30180.
[^1]: https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonRequestHeaders.html
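A minimal sketch of that stripping, assuming a deny-list built from the
linked documentation (the constant and function names are made up for
illustration):
```python
# Request headers that AWS documents S3 as reading on common requests;
# drop these from the client's headers before proxying to S3, so only
# the signature we generate ourselves is presented.
S3_READ_HEADERS = frozenset(
    {"authorization", "content-length", "content-md5", "content-type",
     "date", "expect", "host", "x-amz-content-sha256", "x-amz-date",
     "x-amz-security-token"}
)


def strip_s3_read_headers(headers: dict[str, str]) -> dict[str, str]:
    return {k: v for k, v in headers.items() if k.lower() not in S3_READ_HEADERS}
```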
This adds `--automated` and `--no-automated` flags to all Zulip
management commands, whose default is based on whether STDIN is a TTY.
This lets cron jobs and supervisor commands continue to report to
Sentry, while manually-run commands (where reporting to Sentry does not
provide value, since the user can see the output) do not.
Note that this only applies to Zulip commands -- core Django
commands (e.g. `./manage.py`) do not grow support for `--automated`
and will always report exceptions to Sentry.
`manage.py` subcommands in the `upgrade` and `restart-server` paths
are marked as `--automated`, since those may be run semi-unattended,
and exceptions from them are useful to report to Sentry.
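Roughly, the shared flag behaves like this sketch (not the exact
implementation):
```python
import sys
from argparse import ArgumentParser, BooleanOptionalAction


def add_automated_argument(parser: ArgumentParser) -> None:
    # Cron/supervisor invocations have no TTY on STDIN, so they default
    # to --automated (report exceptions to Sentry); interactive runs
    # default to --no-automated, since the user already sees the error.
    parser.add_argument(
        "--automated",
        action=BooleanOptionalAction,
        default=not sys.stdin.isatty(),
        help="Report exceptions from this command to Sentry.",
    )
```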
This saves us the time of shelling out to a new Python process,
loading all of Django, and printing one value that we could just have
read in-process. It is unclear why we ever did it this way.
The "invites" worker exists to do two things -- make a Confirmation
object, and send the outgoing email. Making the Confirmation object
in a background process from where the PreregistrationUser is created
temporarily leaves the PreregistrationUser in invalid state, and
results in 500's, and the user not immediately seeing the sent
invitation. That the "invites" worker also wants to create the
Confirmation object means that "resending" an invite invalidates the
URL in the previous email, which can be confusing to the user.
Moving the Confirmation creation to the same transaction solves both
of these issues, and leaves the "invites" worker with nothing to do
but send the email; as such, we remove it entirely, and use the
existing "email_senders" worker to send the invites. The volume of
invites is small enough that this will not affect other uses of that
worker.
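Sketched out, the new flow is roughly the following (simplified;
`queue_invitation_email` is a stand-in for enqueueing onto
"email_senders"):
```python
from django.db import transaction

from confirmation.models import Confirmation, create_confirmation_link
from zerver.models import PreregistrationUser


def do_invite_user(inviter, realm, email: str) -> None:
    with transaction.atomic():
        prereg_user = PreregistrationUser.objects.create(
            email=email, referred_by=inviter, realm=realm
        )
        # Created in the same transaction, so the PreregistrationUser is
        # never visible without its Confirmation, and "resending" later
        # does not have to mint a new URL that invalidates the old one.
        activation_url = create_confirmation_link(
            prereg_user, Confirmation.INVITATION
        )
    # All that is left for background work is sending the email.
    queue_invitation_email(email, activation_url)  # assumed helper
```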
Fixes: #21306
Fixes: #24275
Factor out the repeated pattern of taking a lock, or immediately
aborting with a message if it cannot be acquired. The exit code in
that situation is changed to 1, rather than the successful 0; we are
likely missing new work that has arrived since that process started.
We move the lockfiles to a common directory under `/srv/zulip-locks`
rather than muddy up `/home/zulip/deployments`.
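The factored-out helper is, in essence, something like this sketch (the
function name and message are illustrative):
```python
import fcntl
import sys
from typing import IO


def try_lock_or_exit(name: str) -> IO[str]:
    # Take an exclusive, non-blocking flock under /srv/zulip-locks; if
    # another process already holds it, print a message and exit 1,
    # since we are likely skipping work that arrived after it started.
    lock_file = open(f"/srv/zulip-locks/{name}.lock", "w")
    try:
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        print(f"Another '{name}' process is already running; skipping.")
        sys.exit(1)
    return lock_file  # caller keeps this open to hold the lock
```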
If there is a replication primary configured, and no current database,
then we check that all of the required secrets are in place, pull down
the latest backup, and trigger a PostgreSQL restart; the restarted
server will fetch the remaining WAL segments to catch up, and then
start streaming from the configured primary.
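Concretely, the bootstrap amounts to roughly the following sketch (the
data directory path and service name vary by deployment, and
`env-wal-g` stands in for however wal-g is invoked with its
credentials):
```python
import subprocess

PG_DATA_DIR = "/var/lib/postgresql/16/main"  # assumed path

# Pull the most recent base backup into the empty data directory.
subprocess.check_call(["env-wal-g", "backup-fetch", PG_DATA_DIR, "LATEST"])

# Start PostgreSQL in standby mode: it replays the remaining WAL via
# its restore_command (wal-g wal-fetch), then begins streaming from the
# primary configured in primary_conninfo.
open(f"{PG_DATA_DIR}/standby.signal", "w").close()
subprocess.check_call(["systemctl", "restart", "postgresql"])
```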
This is specifically to support Kandra's `setup_disks`, which stops
PostgreSQL and moves the data directory out of the way while mounting
a new disk; restarting PostgreSQL would fail in this state. We
install secrets and re-run puppet to finish bootstrapping the
database, all of which expects the PostgreSQL server to be stopped
anyway.
PostgreSQL will need to use wal-g to pull needed WAL files. We do not
express this as a direct dependency because it is possible to have
wal-g without PostgreSQL, as well as PostgreSQL without wal-g.