The "invites" worker exists to do two things -- make a Confirmation
object, and send the outgoing email. Making the Confirmation object
in a background process, separate from where the PreregistrationUser
is created, temporarily leaves the PreregistrationUser in an invalid
state; this results in 500s, and in the user not immediately seeing the sent
invitation. That the "invites" worker also wants to create the
Confirmation object means that "resending" an invite invalidates the
URL in the previous email, which can be confusing to the user.
Moving the Confirmation creation to the same transaction solves both
of these issues, and leaves the "invites" worker with nothing to do
but send the email; as such, we remove it entirely, and use the
existing "email_senders" worker to send the invites. The volume of
invites is small enough that this will not affect other uses of that
worker.
Fixes: #21306
Fixes: #24275
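For illustration, a minimal sketch of the resulting flow -- create the
Confirmation in the same transaction, and hand only the email-sending to the
existing queue. The imports, call signatures, and queue payload shape below
are assumptions for the sketch, not Zulip's exact internals:
```
# Sketch only: module paths, signatures, and the payload shape are assumed.
from django.db import transaction

from confirmation.models import Confirmation, create_confirmation_link  # assumed
from zerver.lib.queue import queue_json_publish  # assumed
from zerver.models import PreregistrationUser, Realm  # assumed


def do_invite_user(realm: Realm, email: str) -> None:
    with transaction.atomic():
        prereg = PreregistrationUser.objects.create(email=email, realm=realm)
        # Creating the Confirmation in the same transaction means the invite
        # URL is valid (and visible) as soon as the request returns, and
        # "resend" does not need to mint a new one.
        url = create_confirmation_link(prereg, Confirmation.INVITATION)
    # The only work left for a background worker is sending the email, so we
    # enqueue just that on the existing "email_senders" queue.
    queue_json_publish(
        "email_senders",
        {"to_email": email, "template": "invitation", "context": {"activation_url": url}},
    )
```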
Factor out the repeated pattern of taking a lock, or immediately
aborting with a message if it cannot be acquired. The exit code in
that situation is changed to 1, rather than the successful 0, since
we are likely missing new work that arrived after the lock-holding
process started.
We move the lockfiles to a common directory under `/srv/zulip-locks`
rather than muddying up `/home/zulip/deployments`.
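A minimal sketch of that pattern, using the `/srv/zulip-locks` directory
above; the helper name and message are illustrative, not the exact helper
that ships:
```
import fcntl
import os
import sys

LOCK_DIR = "/srv/zulip-locks"


def lock_or_exit(name: str, message: str):
    """Take an exclusive lock, or print a message and exit 1 if it is held."""
    os.makedirs(LOCK_DIR, exist_ok=True)
    lock_file = open(os.path.join(LOCK_DIR, name + ".lock"), "w")
    try:
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        print(message, file=sys.stderr)
        # Exit non-zero: another copy holds the lock, and it may be missing
        # work that arrived after it started.
        sys.exit(1)
    # Caller keeps the returned file open; the lock drops when it is closed.
    return lock_file
```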
If there is a replication primary configured, and no current database,
we check that all of the required secrets are in place, then pull down
the latest backup and trigger a PostgreSQL restart; PostgreSQL will
then download the remaining WAL files to catch up, and start streaming
from the configured primary.
This is specifically to support Kandra's `setup_disks`, which stops
PostgreSQL and moves the data directory out of the way while mounting
a new disk; restarting PostgreSQL would fail in this state. We
install secrets and re-run puppet to finish bootstrapping the
database, all of which expects the PostgreSQL server to be stopped
anyway.
PostgreSQL will need to use wal-g to pull needed WAL files. We do not
express this as a direct dependency because it is possible to have
wal-g without PostgreSQL, as well as PostgreSQL without wal-g.
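Roughly, the bootstrap described above looks like the sketch below. The data
directory, secret names, and cluster-control command are assumptions for
illustration; only `wal-g backup-fetch` comes from wal-g's documented
interface:
```
import subprocess
from pathlib import Path

PGDATA = Path("/var/lib/postgresql/15/main")  # assumed cluster layout
SECRETS = ["s3_backups_key", "s3_backups_secret_key"]  # illustrative names


def bootstrap_replica() -> None:
    secrets = Path("/etc/zulip/zulip-secrets.conf").read_text()
    if not all(name in secrets for name in SECRETS):
        raise SystemExit("Missing backup secrets; cannot fetch the base backup")
    # Pull the latest base backup into the empty data directory.
    subprocess.check_call(["wal-g", "backup-fetch", str(PGDATA), "LATEST"])
    # Mark the server as a standby (PostgreSQL 12+); on restart it replays
    # the remaining WAL via restore_command and then starts streaming from
    # the configured primary_conninfo.
    (PGDATA / "standby.signal").touch()
    subprocess.check_call(["pg_ctlcluster", "15", "main", "restart"])
```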
The `tidy` resource type is buggy, and ignores all ordering
metaparameters. This is fixed in Puppet 7[^1], but it's helpful to
resolve it now. Specifically, this fixes bugs with tidy running too
early, and deleting the old version of a package before its new
version is installed or symlinked, leaving a race condition if
anything tries to run the binary in this window.
This is mostly not a problem for Supervisor-managed processes, since
the binary is already running, and can continue to run if it is tidied
out from under the running process. For stand-alone tools like wal-g,
which are run frequently by PostgreSQL, this may cause issues if
PostgreSQL tries to call them during a puppet run.
Remove all complicated uses of tidy, and replace them with an `exec`
which does the equivalent. We also generate `file` resources for
binaries, making them easier (and clearer) to specify as dependencies.
[^1]: https://puppet.atlassian.net/browse/PUP-10688
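The `exec` equivalent amounts to "delete every installed version except the
one currently in use", run only once the new version is in place; a sketch of
that logic, with a hypothetical directory layout:
```
import os
import shutil

INSTALL_DIR = "/srv/example-tool"  # hypothetical: versioned dirs plus a "current" symlink


def tidy_old_versions() -> None:
    current = os.path.realpath(os.path.join(INSTALL_DIR, "current"))
    for entry in os.listdir(INSTALL_DIR):
        path = os.path.join(INSTALL_DIR, entry)
        if entry == "current" or os.path.realpath(path) == current:
            continue
        # Unlike `tidy`, this only runs after the new version is installed
        # and the symlink points at it, so nothing races with the cleanup.
        shutil.rmtree(path, ignore_errors=True)
```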
Without `FIND_FULL`, `wal-g delete before ...` will fail, rather than
delete a base backup which is needed by the delta backups after it.
By passing `FIND_FULL`[^1], we tell it explicitly that we're OK
preserving files before the specified one, as long as they are
necessary for the delta chain.
[^1]: https://github.com/wal-g/wal-g/blob/master/docs/README.md#delete
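For reference, a sketch of the pruning call with `FIND_FULL`; the backup name
is a placeholder, and a real deployment would run this through its wal-g
environment wrapper:
```
import subprocess


def prune_backups(oldest_backup_to_keep: str) -> None:
    # Without FIND_FULL this would fail when `oldest_backup_to_keep` is a
    # delta backup whose base lies before it; with FIND_FULL, wal-g keeps
    # that base (and intermediate deltas) and deletes everything older.
    subprocess.check_call(
        ["wal-g", "delete", "before", "FIND_FULL", oldest_backup_to_keep, "--confirm"]
    )
```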
This commit adds a post-upgrade hook to run the
'send_zulip_update_announcements' management command.
The aim is to improve UX for self-hosters by sending
Zulip updates as soon as the upgrade completes, instead
of waiting for the cron job to run.
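A hypothetical version of such a hook -- the hook location and invocation
style are assumptions; only the management command name comes from the change
itself:
```
#!/usr/bin/env python3
# Hypothetical post-upgrade hook: invoke the management command once the
# new deployment is live, rather than waiting for the next cron run.
import subprocess

subprocess.check_call(
    [
        "/home/zulip/deployments/current/manage.py",
        "send_zulip_update_announcements",
    ]
)
```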
By default, autossh writes to syslog; setting AUTOSSH_DEBUG is the
only way to produce output to STDERR. Timestamp that output and write
it to the logfile, making the logs perhaps useful.
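A sketch of that wrapper behavior, with placeholder tunnel arguments and
logfile path; `AUTOSSH_DEBUG` is autossh's own switch for logging to STDERR:
```
import os
import subprocess
from datetime import datetime, timezone

# Placeholder tunnel; "-M 0" disables autossh's monitoring port.
proc = subprocess.Popen(
    ["autossh", "-M", "0", "-N", "tunnel-target.example.com"],
    env={**os.environ, "AUTOSSH_DEBUG": "1"},
    stderr=subprocess.PIPE,
    text=True,
)
assert proc.stderr is not None
with open("/var/log/zulip/autossh.log", "a") as log:
    for line in proc.stderr:
        log.write(f"{datetime.now(timezone.utc).isoformat()} {line}")
        log.flush()
```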
These came in via d0dcc8bf26, which looks like it copied the comment
from the provisioning code. Production installs (even from git) do
not call `./manage.py makemessages`, so there is no reason to require
this for production deployments.
Streaming replication may be used even if `wal-g` is not -- as long as
the user can move a copy of the base backup to the replica (e.g. using
`pg_basebackup`). Remove the warning about this combination, and move
the `primary_conninfo` setting outside of the `s3_backups_bucket`
check.
The process of running Django's built-in database and config checks
can be very heavy-weight, potentially taking multiple seconds:
```
$ hyperfine './manage.py print_initial_password iago@zulip.com' './manage.py print_initial_password iago@zulip.com --skip-checks'
Benchmark 1: ./manage.py print_initial_password iago@zulip.com
Time (mean ± σ): 4.943 s ± 0.722 s [User: 4.434 s, System: 0.311 s]
Range (min … max): 4.415 s … 6.835 s 10 runs
Benchmark 2: ./manage.py print_initial_password iago@zulip.com --skip-checks
Time (mean ± σ): 1.786 s ± 0.113 s [User: 1.598 s, System: 0.162 s]
Range (min … max): 1.576 s … 1.999 s 10 runs
Summary
'./manage.py print_initial_password iago@zulip.com --skip-checks' ran
2.77 ± 0.44 times faster than './manage.py print_initial_password iago@zulip.com'
```
This extends the window during which nginx is forced to serve 502s to
clients. f5f6a3789b added an explicit `manage.py check` during
server restarts, and fa77be6e6c added one during upgrades; as such,
we expect that any check failures will already have been caught when
performing a restart or upgrade, and there is no point in running them
on process startup.
It is not possible to have upgraded from 4.x to this version without
having run puppet at least once, since there are no shared OS versions
in between them. Remove these `absent`/`purged` blocks, which we know
to have already been applied.
puppet hard-fails if it can't find the binary to run in `$PATH`, so we
need to make the `unless` short-circuit to false if puppet itself is
not installed yet (as during initial installation).
Enable per-object Prometheus metrics, per [1].
These default to off, because in situations with thousands of queues,
consumers, and producers, they cause unreasonable overhead. Our use
case has few enough queues that we do want to be able to inspect them
individually.
[1]: 78851828ec/deps/rabbitmq_prometheus (configuration)
We require a `pg_dump` whose version matches the version of the server
we are configured against (see 3a8b4b0205). Installing the latest
`postgresql-client` does not guarantee that we have such a binary
present.
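A sketch of the resulting check, assuming Debian/Ubuntu's versioned binary
layout under `/usr/lib/postgresql/<major>/bin/`:
```
from pathlib import Path


def matching_pg_dump(server_major: str) -> Path:
    pg_dump = Path(f"/usr/lib/postgresql/{server_major}/bin/pg_dump")
    if not pg_dump.exists():
        # The unversioned postgresql-client metapackage tracks the newest
        # major version, which need not match the configured server; the
        # versioned package does.
        raise RuntimeError(f"Install postgresql-client-{server_major} for a matching pg_dump")
    return pg_dump
```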
This only defaults to on for local-disk backups, since they are more
disk-size-sensitive, and local accesses are quite cheap compared to
loading multiple incremental backups from S3.