Without an explicit port number, the `stdout_logfile` values for each
port are identical. Supervisor apparently decides that it will
de-conflict this by appending an arbitrary number to the end:
```
/var/log/zulip/tornado.log
/var/log/zulip/tornado.log.1
/var/log/zulip/tornado.log.10
/var/log/zulip/tornado.log.2
/var/log/zulip/tornado.log.3
/var/log/zulip/tornado.log.7
/var/log/zulip/tornado.log.8
/var/log/zulip/tornado.log.9
```
This is quite confusing, since most other files in `/var/log/zulip/`
use `.1` to mean logrotate was used. Also note that these are not all
sequential -- 4, 5, and 6 are mysteriously missing, though they were
used in previous restarts. This can make it extremely hard to debug
logs from a particular Tornado shard.
Give the logfiles a consistent name, and set them up to logrotate.
Including "zulip-tornado" in the supervisor process names makes them
easier to identify in supervisor's own logs. Without this, one only sees:
```
2020-09-14 03:43:13,788 INFO waiting for port-9807 to stop
2020-09-14 03:43:14,466 INFO stopped: port-9807 (exit status 1)
2020-09-14 03:43:14,469 INFO spawned: 'port-9807' with pid 24289
2020-09-14 03:43:15,470 INFO success: port-9807 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
```
This supports either running puppet to pick up new sharding changes
(which will warn of the need to finalize them via
`refresh-sharding-and-restart`), or simply running that command directly.
Clients that abruptly close their socket to nginx also cause nginx to
close its connection to uwsgi. When uwsgi finishes computing the
response, it thus tries to write to a closed socket, and generates
either an IOError or a SIGPIPE failure.
Since these are caused by the _client_ closing the connection
suddenly, they are not actionable by the server. At particularly high
volumes, this could represent some sort of server-side failure;
however, this is better detected by examining status codes at the load
balancer. nginx uses the (non-standard) status code 499 for this occurrence:
https://httpstatuses.com/499
Stop uwsgi from generating this family of exceptions entirely, using
uwsgi configuration[1]; the documentation describes these errors as
"(annoying)", hinting at their general utility.
[1] https://uwsgi-docs.readthedocs.io/en/latest/Options.html#ignore-sigpipe
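In uwsgi's ini configuration this looks roughly like the sketch below;
`ignore-sigpipe` is the option named in [1], and the other two are the
companion options from the same options page covering the write/IOError
side (the exact set used is an assumption):
```
[uwsgi]
; suppress errors from writes to sockets the client already closed
ignore-sigpipe = true
ignore-write-errors = true
disable-write-exception = true
```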
Increasing the uwsgi listen backlog is intended to allow it to handle
higher connection rates during server restart, when many clients may
be trying to connect. The kernel, in turn, needs a proportionally
increased somaxconn, so as not to refuse the connections.
Set somaxconn to 2x the uwsgi backlog, but no lower than the
default (128).
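As a concrete sketch (the backlog of 1024 here is an assumed example,
not the value actually chosen): with `listen = 1024` in the uwsgi
configuration, the kernel limit would be `max(128, 2 * 1024)`, set via
sysctl:
```
net.core.somaxconn = 2048
```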
Prior to PostgreSQL 12, the `recovery_target_timeline` setting is only
valid in a `recovery.conf` file, as that file has its own
configuration parser. As such, including it in `postgresql.conf`
results in an error, and PostgreSQL will fail to start.
Remove the setting, reverting bff3b540b1. This fixes PostgreSQL 9.5,
9.6, 10, and 11; while the setting is not an error in a PostgreSQL 12
configuration file, it is unnecessary since `latest` is the default.
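For reference, on PostgreSQL 11 and earlier the setting is only legal in
the standby's `recovery.conf`, alongside the other recovery parameters;
a minimal sketch:
```
standby_mode = 'on'
recovery_target_timeline = 'latest'
```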
7d4a370a57 attempted to move the replication check to run on the
PostgreSQL hosts. While it updated the _check_ to assume it was
running and talking to a local PostgreSQL instance, the configuration
and installation for the check were not updated. As such, the check
ran on the nagios host for each DB host, and produced no output.
Start distributing the check to all apopdb hosts, and configure nagios
to use the SSH tunnel to get there.
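One way this can look (whether via check_by_ssh as sketched here, or a
pre-established tunnel, the shape is the same: the plugin runs on the
database host, and nagios reaches it over SSH); the command and plugin
names below are illustrative, not the real ones:
```
define command {
    command_name    check_postgres_replication_by_ssh
    command_line    /usr/lib/nagios/plugins/check_by_ssh -H $HOSTADDRESS$ -C '/usr/lib/nagios/plugins/zulip/check_postgres_replication'
}
```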
wal-g was used in `puppet/zulip` by env-wal-g, but only installed in
`puppet/zulip_ops`.
Merge all of the dependencies of doing backups using wal-g (wal-g
installation, the pg_backup_and_purge job, the nagios plugin that
verifies it happens) into a common base class in `puppet/zulip`, since
it is generally useful.
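A hypothetical sketch of the shape of such a class (the class, file,
and plugin names here are illustrative, not the real ones):
```
class zulip::postgres_backups {
  # install the wal-g binary
  include zulip::wal_g
  # nightly base backup, plus purging of old backups
  cron { 'pg_backup_and_purge':
    ensure  => present,
    user    => 'postgres',
    hour    => 5,
    minute  => 0,
    command => '/usr/local/bin/pg_backup_and_purge',
  }
  # nagios plugin that verifies a recent backup exists
  file { '/usr/lib/nagios/plugins/zulip/check_postgres_backup':
    ensure => file,
    mode   => '0755',
    source => 'puppet:///modules/zulip/nagios_plugins/check_postgres_backup',
  }
}
```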
No plugins are installed inside the `/usr/local/munin/lib` this creates
in munin-node, nor are any symlinked into `/etc/munin/plugins`, so no
non-default plugins are added by this.
The one complexity is that hosts_fullstack are treated differently, as
they are not currently found in the manual `hosts` list, and as such
do not get munin monitoring.
check_memcached does not support memcached authentication even in its
latest release (it’s in a TODO item comment, and that’s it), and was
never particularly useful.
When supervisor is first installed, it is started automatically, and
creates the socket, owned by root. Subsequent reconfiguration in
puppet only calls `reread + update`, which is insufficient to apply
the `chown = zulip:zulip` line in `supervisord.conf`, leaving the
socket owned by `root` and the last part of the installation unable to
restart `supervisor` services as the `zulip` user. The `chown` line
in `scripts/lib/install` exists to paper over this.
Add a separate exec target for changes to `supervisord.conf` itself,
which restarts the full service. This leaves the default `restart`
action on the service for the lightweight `reread + update` action,
which is more common.
We use `systemctl` only on redhat-esque builds, because CI runs
Ubuntu, but init is not systemd in that context. `systemctl reload`
is sufficient to re-apply the socket ownership, but a full `restart`
and not `reload` is necessary under `/etc/init.d/supervisor`.
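In puppet terms, the shape is roughly as follows (resource titles,
paths, and the redhat unit name are illustrative assumptions):
```
file { '/etc/supervisor/supervisord.conf':
  source => 'puppet:///modules/zulip/supervisor/supervisord.conf',
  notify => Exec['supervisor-restart'],
}
# Full restart/reload, only when supervisord.conf itself changes, so the
# chown/chmod on the control socket is re-applied.
exec { 'supervisor-restart':
  refreshonly => true,
  path        => '/usr/bin:/usr/sbin:/bin:/sbin',
  command     => $facts['os']['family'] ? {
    # systemctl only on redhat-esque builds; reload is enough there
    'RedHat' => 'systemctl reload supervisord',
    # under /etc/init.d/supervisor, a full restart is required
    default  => '/etc/init.d/supervisor restart',
  },
}
```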
wal-g has a slightly different format than wal-e in its `backup-list`
output; it contains only three columns:
- `name`
- `last_modified`
- `wal_segment_backup_start`
...rather than wal-e's plethora, most of which were blank:
- `name`
- `last_modified`
- `expanded_size_bytes`
- `wal_segment_backup_start`
- `wal_segment_offset_backup_start`
- `wal_segment_backup_stop`
- `wal_segment_offset_backup_stop`
Remove one argument from the split.
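A minimal sketch of the parsing difference (not the actual plugin code;
the function name is hypothetical, and it assumes the columns are
whitespace-separated and the latest backup is the last line printed):
```python
# Sketch: unpack the three columns wal-g's `backup-list` prints.
def parse_latest_backup(backup_list_output: str) -> dict:
    last_line = backup_list_output.strip().splitlines()[-1]
    name, last_modified, wal_segment_backup_start = last_line.split()
    return {
        "name": name,
        "last_modified": last_modified,
        "wal_segment_backup_start": wal_segment_backup_start,
    }
```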
In Bionic, nagios-plugins-basic is a transitional package which
depends on monitoring-plugins-basic. In Focal, it is a virtual
package, which means that every time puppet runs, it tries to
re-install the nagios-plugins-basic package.
Switch all instances to referring to `$zulip::common::nagios_plugins`,
and repoint that to monitoring-plugins-basic.
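As a sketch of the shape (the real manifests may arrange this
differently):
```
class zulip::common {
  $nagios_plugins = 'monitoring-plugins-basic'
}
# consumers install the package via the shared variable rather than
# hard-coding nagios-plugins-basic:
package { $zulip::common::nagios_plugins:
  ensure => installed,
}
```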