By default, autossh writes to syslog; setting AUTOSSH_DEBUG is the
only way to produce output on STDERR. Timestamp that output and log
it to the logfile, making the logs perhaps useful.
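A minimal sketch of that mechanism, assuming a generic tunnel
command (the host, port, and log path are placeholders):
```
# AUTOSSH_DEBUG makes autossh log to stderr; timestamp each line and
# append it to the logfile.
AUTOSSH_DEBUG=1 autossh -M 0 -N -R 2222:localhost:22 tunnel-host 2>&1 \
  | while IFS= read -r line; do
      printf '%s %s\n' "$(date -u '+%Y-%m-%d %H:%M:%S')" "$line"
    done >> /var/log/autossh.log
```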
These came in via d0dcc8bf26, which looks like it copied the comment
from the provisioning code. Production installs (even from git) do
not call `./manage.py makemessages`, so there is no reason to require
this for production deployments.
Streaming replication may be used even if `wal-g` is not -- as long as
the user can move a copy of the base backup to the replica (e.g. using
`pg_basebackup`). Remove the warning about this combination, and move
the `primary_conninfo` setting outside of the `s3_backups_bucket`
check.
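For example, a replica could be seeded like so (an illustrative
invocation; the hostname, user, and data directory are placeholders):
```
# -R writes standby.signal and a primary_conninfo for the replica;
# -X stream also streams the WAL needed to make the backup consistent.
pg_basebackup -h primary.example.com -U replication \
    -D /var/lib/postgresql/14/main -R -X stream
```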
The process of running Django's built-in database and config checks
can be very heavy-weight, potentially taking multiple seconds:
```
$ hyperfine './manage.py print_initial_password iago@zulip.com' './manage.py print_initial_password iago@zulip.com --skip-checks'
Benchmark 1: ./manage.py print_initial_password iago@zulip.com
Time (mean ± σ): 4.943 s ± 0.722 s [User: 4.434 s, System: 0.311 s]
Range (min … max): 4.415 s … 6.835 s 10 runs
Benchmark 2: ./manage.py print_initial_password iago@zulip.com --skip-checks
Time (mean ± σ): 1.786 s ± 0.113 s [User: 1.598 s, System: 0.162 s]
Range (min … max): 1.576 s … 1.999 s 10 runs
Summary
'./manage.py print_initial_password iago@zulip.com --skip-checks' ran
2.77 ± 0.44 times faster than './manage.py print_initial_password iago@zulip.com'
```
This extends the window during which nginx is forced to serve 502s to
clients. f5f6a3789b added an explicit `manage.py check` during
server restarts, and fa77be6e6c added one during upgrades; as such,
we expect that any check failures will already have been caught when
performing a restart or upgrade, and there is no point in running them
on process startup.
It is not possible to have upgraded from 4.x to this version without
having run puppet at least once, since there are no shared OS versions
in between them. Remove these `absent`/`purged` blocks which we know
to have already been run.
puppet hard-fails if it can't find the `unless` command's binary in
`$PATH`, so we need the `unless` to short-circuit to false if puppet
itself is not installed yet (as during initial installation).
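A sketch of that short-circuit, with a trivial stand-in for the real
check command:
```
# If puppet is missing, `command -v` fails, the whole `unless`
# command exits non-zero, and the exec therefore runs.
command -v puppet >/dev/null 2>&1 && puppet --version >/dev/null
```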
Enable per-object Prometheus metrics, per [1]. These default to off,
because in situations with thousands of queues, consumers, and
producers, they cause unreasonable overhead. Our use case has few
enough queues that we do want to be able to inspect them
individually.
[1]: 78851828ec/deps/rabbitmq_prometheus (configuration)
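Concretely, the `rabbitmq_prometheus` knob being enabled is this
`rabbitmq.conf` setting:
```
# Off by default upstream, because of the overhead at scale.
prometheus.return_per_object_metrics = true
```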
We require a `pg_dump` whose version matches the version of the server
we are configured against (see 3a8b4b0205). Installing the latest
`postgresql-client` does not guarantee that we have such a binary
present.
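Illustratively, for a server running PostgreSQL 14, the versioned
client package is what guarantees a matching `pg_dump` (the version
number here is only an example):
```
apt-get install -y postgresql-client-14
```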
This only defaults to on for local-disk backups, since they are more
disk-size-sensitive, and local accesses are quite cheap compared to
loading multiple incremental backups from S3.
Replace a separate call to subprocess, starting `node` from scratch,
with an optional standalone node Express service which performs the
rendering. In benchmarking, this reduces the overhead of a KaTeX call
from 120ms to 2.8ms. This is notable because enough calls to KaTeX in
a single message would previously time out the whole message
rendering.
The service is optional because the majority of deployments do not
use enough LaTeX to merit the additional memory usage (60MB).
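As a purely hypothetical sketch of the interaction (the port, path,
and parameter name are assumptions, not the actual API):
```
# Render a formula via the long-lived local service, instead of
# spawning a fresh `node` process per call.
curl -sS -X POST --data-urlencode 'content=\frac{1}{2}' http://localhost:9700/
```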
Fixes: #17425.
This makes no-immediate-reloads the default for runtornado, matching
the production configuration, and changes the development incantation
to be the one specifying the departure from the norm, via
`--immediate-reloads`.
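That is, development setups now opt in explicitly (any other
arguments elided):
```
./manage.py runtornado --immediate-reloads
```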
Decouple the sending of client restart events from the restarting of
the servers. Restarts use the new Tornado restart-clients endpoint to
inject "restart" events into queues of clients which were loaded from
the previous Tornado process. The rate is controlled by the
`application_server.client_restart_rate` setting, in clients per
minute, or by a flag to `restart-clients` which overrides it. Note
that a web client will also spread its restart over 5 minutes, so
artificially slow client restarts are generally unnecessary.
Restarts of clients are deferred until after post-deploy hooks are
run, such that the pre- and post- deploy hooks are around the actual
server restarts, even if pushing restart events to clients takes
significant time.
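For illustration only (`--rate` is an assumed name for the override
flag, not necessarily what `restart-clients` actually takes):
```
# Push restart events at 300 clients per minute.
restart-clients --rate 300
```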
Note that this uses `ssh-keyscan` to write in the currently-observed
host fingerprint; if DNS or network is untrusted during initial puppet
apply, this can allow attackers to write their own host key, obviating
the utility of known_hosts.
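The pattern in question is roughly the following (the hostname and
key type are placeholders):
```
# Trust-on-first-use: record whatever host key is observed right now.
ssh-keyscan -t ed25519 backup.example.com >> /etc/ssh/ssh_known_hosts
```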
We do not view this as a likely attack mechanism, since in our
deployment the network and DNS are almost certainly trusted, and if
not, the timing attack to catch only initial configuration is likely
impossible.
572443edc6 removed the callsite that triggered the exec in
`zulip::systemd_daemon_reload`, making its inclusion and ordering via
`require` moot.
Remove the call.
This should give some more room for systems that are still below 4GB
of RAM to use the lower-memory multithreaded mode, which is less
likely to have OOM kills (a very bad experience).
There should be little cost, as few systems are likely allocated with
memory in this range.
The only relevant changes are `PasswordAuthentication no` (which
is now the default) and `MaxStartups 40:50:60` (which is now
unnecessary due to the autossh tunnels).
This was changed midway through the implementation from reading it
out of `zulip-secrets.conf`, and a couple of locations still
reference the secrets path.
Under heavy request load, it is possible for the conntrack kernel
table to fill up (by default, 256k connections). This leads to DNS
requests failing because they cannot make a new conntrack entry.
Allow all port-53 UDP traffic in and out without connection tracking.
This means that unbound port-53 traffic is no longer filtered out by
the on-host firewall -- but it is already filtered out at the border
firewall, so this does not change the external network posture.
`systemd-resolve` also only binds to 127.0.0.53 on the loopback
interface, so there is no server to attack on inbound port 53.
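In iptables terms, this is a raw-table NOTRACK exemption; a sketch
of the equivalent rules:
```
# Skip conntrack for DNS in both directions, inbound and outbound.
iptables -t raw -A PREROUTING -p udp --dport 53 -j NOTRACK
iptables -t raw -A PREROUTING -p udp --sport 53 -j NOTRACK
iptables -t raw -A OUTPUT -p udp --dport 53 -j NOTRACK
iptables -t raw -A OUTPUT -p udp --sport 53 -j NOTRACK
```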
fcf096c52e removed the callsite which would have notified this
contact. Note that the source config file was presumably installed via the
python-zulip-api package.
149bea8309 added a separate config file
for smokescreen (which is necessary because it can be installed
separately) but failed to notice that `zulip.template.erb` already had
a config line for it. This leads to failures starting the logrotate
service:
```
logrotate[4158688]: error: zulip:1 duplicate log entry for /var/log/zulip/smokescreen.log
logrotate[4158688]: error: found error in file zulip, skipping
```
Remove the duplicate line.
While the Tornado server supports POST requests, those are only used
by internal endpoints. We only support OPTIONS, GET, and DELETE
methods from clients, so filter everything else out at the nginx
level.
We set the `Allow` header on both `OPTIONS` requests and 405
responses, and the CORS headers on `OPTIONS` requests.
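A quick sanity check of the filter (the endpoint here is
illustrative):
```
# A disallowed method should now be rejected at the nginx level with
# a 405 (carrying an Allow header), without reaching Tornado.
curl -i -X PUT https://zulip.example.com/json/events
```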