These checks suffer from a couple of notable problems:
- They are only enabled on staging hosts -- where they should never
be run. Since ef6d0ec5ca, these supervisor processes are only
run on one host, and never on the staging host.
- They run as the `nagios` user, which does not have appropriate
permissions, and thus the checks always fail. Specifically,
`nagios` does not have permissions to run `supervisorctl`, since
the socket is owned by the `zulip` user, and mode 0700; and the
`nagios` user does not have permission to access Zulip secrets to
run `./manage.py print_email_delivery_backlog`.
Rather than rewrite these checks to run via cron as the `zulip` user,
with the `nagios` user checking the resulting file contents, drop
these checks -- they can
be rewritten at a later point, or replaced with Prometheus alerting,
and currently serve only to cause always-failing Nagios checks, which
normalizes alert failures.
Leave the files installed if they currently exist, rather than
cluttering puppet with `ensure => absent`; they do no harm if they are
left installed.
In an initial install, the following is a potential rule ordering:
```
Notice: /Stage[main]/Zulip::Supervisor/File[/etc/supervisor/conf.d/zulip]/ensure: created
Notice: /Stage[main]/Zulip::Supervisor/File[/etc/supervisor/supervisord.conf]/content: content changed '{md5}99dc7e8a1178ede9ae9794aaecbca436' to '{md5}7ef9771d2c476c246a3ebd95fab784cb'
Notice: /Stage[main]/Zulip::Supervisor/Exec[supervisor-restart]: Triggered 'refresh' from 1 event
[...]
Notice: /Stage[main]/Zulip::App_frontend_base/File[/etc/supervisor/conf.d/zulip/zulip.conf]/ensure: defined content as '{md5}d98ac8a974d44efb1d1bb2ef8b9c3dee'
[...]
Notice: /Stage[main]/Zulip::App_frontend_once/File[/etc/supervisor/conf.d/zulip/zulip-once.conf]/ensure: defined content as '{md5}53f56ae4b95413bfd7a117e3113082dc'
[...]
Notice: /Stage[main]/Zulip::Process_fts_updates/File[/etc/supervisor/conf.d/zulip/zulip_db.conf]/ensure: defined content as '{md5}96092d7f27d76f48178a53b51f80b0f0'
Notice: /Stage[main]/Zulip::Supervisor/Service[supervisor]/ensure: ensure changed 'stopped' to 'running'
```
The last line is misleading -- supervisor was already started by the
`supervisor-restart` process on the third line. As can be shown with
`zulip-puppet-apply --debug`, the last line just installs supervisor
to run on startup, using `systemctl`:
```
Debug: Executing: 'supervisorctl status'
Debug: Executing: '/usr/bin/systemctl unmask supervisor'
Debug: Executing: '/usr/bin/systemctl start supervisor'
```
This means the list of processes started by supervisor depends
entirely on which configuration files were successfully written out by
puppet before the initial `supervisor-restart` ran. Since
`zulip_db.conf` is written later than the rest, the initial install
often fails to start the `process-fts-updates` process. In this
state, an explicit `supervisorctl restart` or `supervisorctl reread &&
supervisorctl update` is required for the service to be found and
started.
Reorder the `supervisor-restart` exec to only run after the service is
started. Because all supervisor configuration files have a `notify`
of the service, this forces the ordering of:
```
(package) -> (config files) -> (service) -> (optional restart)
```
On first startup, this will start and then immediately restart
supervisor, which is unfortunate but unavoidable -- and not terribly
relevant, since the database will not have been created yet, and thus
most processes will be in a restart loop for failing to connect to it.
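A minimal Puppet sketch of the reordering (resource names assumed, not
the exact ones in the tree):
```
exec { 'supervisor-restart':
  command     => '/etc/init.d/supervisor restart',
  refreshonly => true,
  # Run only once the service is up; since every config file notifies
  # the service, this yields (package) -> (config) -> (service) ->
  # (optional restart).
  require     => Service['supervisor'],
}
```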
The sysvinit script for supervisor has a long-standing bug where
`/etc/init.d/supervisor restart` stops but does not then start the
supervisor process.
Work around this by having `restart` also attempt a `start`
afterwards; the `start` returns immediately if the service is already
running.
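As a sketch, the shape of the workaround (not the exact init-script
change):
```
# "restart" stops, but may fail to start, supervisord; chase it with a
# "start", which returns immediately if the service is already running.
/etc/init.d/supervisor restart
/etc/init.d/supervisor start
```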
Not having the package installed will cause startup failures in
`process_fts_updates`; ensure that we've installed the package before
we potentially start the service.
93f62b999e removed the last file in
puppet/zulip/files/nagios_plugins/zulip_nagios_server, which means the
singular rule in zulip::nagios no longer applies cleanly.
Remove the `zulip::nagios` class, as it is no longer needed.
An organization with at most 5 users that is behind on payments isn't
worth spending time investigating.
For larger organizations, we likely want somewhat different logic that
at least does not void invoices.
Staging and other hosts that are `zulip::app_frontend_base` but not
`zulip::app_frontend_once` do not have an
/etc/supervisor/conf.d/zulip/zulip-once.conf; as such, they do not
have `zulip_deliver_scheduled_emails` or
`zulip_deliver_scheduled_messages` processes, and supervisor will fail
to reload.
Making the contents of `zulip-workers` contingent on whether the
server is _also_ a `-once` server is complicated, and would involve
using Concat fragments, which severely limit readability.
Instead, expel those two from `zulip-workers`; this is somewhat
reasonable, since they use an entirely different codepath from
zulip_events_*, using the database rather than RabbitMQ for their
queuing.
This is similar cleanup to 3ab9b31d2f, but only affects zulip_ops
services; it serves to ensure that any of these services which are no
longer enabled are automatically removed from supervisor.
Note that this will cause a supervisor restart on all affected hosts,
which will restart all supervisor services.
Failure to do this results in:
```
psql: error: failed to connect to `host=localhost user=zulip database=zulip`: failed to write startup message (x509: certificate is valid for [redacted], not localhost)
```
Host-based md5 auth for 127.0.0.1 must be removed from `pg_hba.conf`,
otherwise password authentication is preferred over certificate-based
authentication for localhost.
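Illustratively, the problematic ordering in `pg_hba.conf` looks
something like (lines simplified from the real file):
```
# Removed: matched first for 127.0.0.1, shadowing certificate auth
# host     all  all  127.0.0.1/32  md5
hostssl    all  all  all           cert
```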
Nagios refuses to allow any modifications with use_authentication off;
re-enable "authentication" but set a default user, which (by way of
the `*` permissions in 359f37389a) is allowed to take all actions.
This requires switching to a reverse tunnel for the auth connection,
with the side effect that the `zulip_ops::teleport::node` manifest can
be applied on servers anywhere on the Internet; they do not need to
have any publicly-available open ports.
This means that services will only open their ports if they are
actually run, without having to clutter rules.v4 with a lot of `if`
statements.
This does not go as far as using `puppetlabs/firewall`[1], because
that would represent an additional DSL to learn; raw iptables sections
can easily be inserted into the generated iptables file via
`concat::fragment` (either inline, or as a separate file), while still
centralizing configuration next to the appropriate service.
[1] https://forge.puppet.com/modules/puppetlabs/firewall
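As a sketch, a raw section can live next to the service that needs it
(resource name, port, and order are illustrative):
```
concat::fragment { 'iptables-smokescreen':
  target  => '/etc/iptables/rules.v4',
  content => "-A INPUT -p tcp --dport 4750 -j ACCEPT\n",
  order   => '50',
}
```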
Using puppet modules from the puppet forge judiciously will allow us
to simplify the configuration somewhat; this specifically pulls in the
stdlib module, which we were already using parts of.
This moves the `.asc` files into subdirectories, and writes the
corresponding `.list` files out into them. It moves from templates to
written-out `.list` files for clarity and ease of
implementation (Debian and Ubuntu need different templates for
`zulip`), and as a way of making explicit which releases are supported
for each list. For the special-case of the PGroonga signing key, we
source an additional file within the directory.
This simplifies the process for adding another class of `.list` file.
Rather than duplicate logic from `computed_settings`, use the values
that were computed therein.
Co-authored-by: Adam Birds <adam.birds@adbwebdesigns.co.uk>
Using the second branch _only_ for case (3), of a PostgreSQL server on
a different host, leaves it untested in CI. It also brings in an
unnecessary Django dependency.
Co-authored-by: Adam Birds <adam.birds@adbwebdesigns.co.uk>
We only need to read the `zulip.conf` file to determine if we're using
PGroonga if we are on the PostgreSQL machine, with no access to
Django.
Co-authored-by: Adam Birds <adam.birds@adbwebdesigns.co.uk>
The only way in which "host" could be set is in cases (1) or (2), when
it was potentially read from Django's settings. In case (3), we
already know we are on the same host as the PostgreSQL server.
This unifies the two separated checks, which are actually the same
check.
Co-authored-by: Adam Birds <adam.birds@adbwebdesigns.co.uk>
`deliver_scheduled_emails` and `deliver_scheduled_messages` use the
`ScheduledEmail` and `ScheduledMessage` tables as a queue,
effectively, pulling values off of them. As noted in their comments,
this is not safe to run on multiple hosts at once. As such, split out
the supervisor files for them.
These thresholds are relative to the
`autovacuum_freeze_max_age`, *not* the XID wraparound, which happens
at 2^31-1. As such, it is *perfectly normal* that they hit 100%, and
then autovacuum kicks in and brings it back down. The unusual
condition is that PostgreSQL pushes past the point where an autovacuum
would be triggered -- therein lies the XID wraparound danger.
With the `autovacuum_freeze_max_age` set to 2000000000 in
`postgresql.conf`, XID wraparound happens at 107.3%. Set the warning
and error thresholds to below this, but above 100% so this does not
trigger constantly.
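The arithmetic behind that figure:
```
2147483647 / 2000000000 ≈ 1.0737    # (2^31 - 1) / autovacuum_freeze_max_age
```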
This makes it parallel with deliver_scheduled_messages, and clarifies
that it is not used for simply sending outgoing emails (e.g. the
`email_senders` queue).
This also renames the supervisor job to match.
Matching the full process name (-x without -f) or full command
line (-xf) is less prone to mistakes like matching a random substring
of some other command line or pgrep matching itself.
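For illustration:
```
pgrep -x epmd              # -x alone: exact match on the process name
pgrep -xf 'epmd -daemon'   # -xf: exact match on the full command line
```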
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Thumbor and tc-aws have been dragging their feet on Python 3 support
for years, and even the alphas and unofficial forks we’ve been running
don’t seem to be maintained anymore. Depending on these projects is
no longer viable for us.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The `en_US.UTF-8` locale may not be configured or generated on all
installs; it also requires that the `locales` package be installed.
If users generate the `en_US.UTF-8` locale without adding it to the
permanent set of system locales, the generated `en_US.UTF-8` stops
working when the `locales` package is updated.
Switch to using `C.UTF-8` in all cases, which is guaranteed to be
installed.
Fixes #15819.
In puppet, we use pgrep in the collection stage, to see if rabbitmq is
running. Sufficiently bare-bones systems will not have
`procps` (which provides `pgrep`) installed yet, which makes the
install abort when running `puppet` for the first time.
Just installing the `procps` package in Puppet is insufficient,
because the check in the `unless` block runs when Puppet is
determining which resources it needs to instantiate, and in what
order; any package installation has yet to happen. As
`erlang-base` (which provides `epmd`) happens to have a dependency of
`procps`, any system without `pgrep` will also not have `epmd`
installed or running. Regardless, it is safe to run `epmd -daemon`
even if one is already running, as the comment above notes.
Using `pgrep -f epmd` to determine if `epmd` is running is a race
condition with itself, since the pgrep is attempting to match the
"full process name" and its own full process name contains "epmd".
This leads to epmd not being started when it should be, which in turn
leads to rabbitmq-server failing to start.
Use the standard trick for this, namely a one-character character
class, to prevent self-matching.
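Concretely:
```
# pgrep's own command line contains the literal string "[e]pmd", which
# the regex [e]pmd does not match -- but a running epmd process does.
pgrep -f '[e]pmd'
```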
We use the snakeoil TLS certificate for PostgreSQL and Postfix; some
VMs install the `ssl-cert` package but (reasonably) don't build the
snakeoil certs into the image.
Build them as needed.
Fixes #14955.
`uploads-route.noserve` and `uploads-route.internal` contained
identical location blocks for `/upload`, since differentiation was
necessary for Trusty until 33c941407b72; move the now-common sections
into `app`.
This leaves the only difference between internal and S3 serving as a
single block which should be included or not based on config; move it
to a file which may or may not be placed in `app.d/`.
07779ea879 added an additional `proxy_set_header` of `X-Real-IP` to
`puppet/zulip/files/nginx/zulip-include-common/proxy`; as noted in
that commit, Tornado longpoll proxies already included such a line.
Unfortunately, this equates to setting that header _twice_ for Tornado
ports, like so:
```
X-Real-Ip: 198.199.116.58
X-Real-Ip: 198.199.116.58
```
...which is represented, once parsed by Django, as an IP of
`198.199.116.58, 198.199.116.58`. For IPv4, this odd "IP address" has
no problems, and appears in the access logs accordingly; for IPv6
addresses, however, its length is such that it overflows a call to
`getaddrinfo` when attempting to determine the validity of the IP.
Remove the now-duplicated inclusion of the header.
The `X-Forwarded-For` header is a list of proxies' IP addresses; each
proxy appends the remote address of the host it received its request
from to the list, as it passes the request down. A naïve parsing, as
SetRemoteAddrFromForwardedFor did, would thus interpret the first
address in the list as the client's IP.
However, clients can pass in arbitrary `X-Forwarded-For` headers,
which would allow them to spoof their IP address. `nginx`'s behavior
is to treat the addresses as untrusted unless they match an allowlist
of known proxies. By setting `real_ip_recursive on`, it also allows
this behavior to be applied repeatedly, moving from right to left down
the `X-Forwarded-For` list, stopping at the right-most that is
untrusted.
Rather than re-implement this logic in Django, pass the first
untrusted value that `nginx` computes down into Django via the `X-Real-Ip`
header. This allows consistent IP addresses in logs between `nginx`
and Django.
Proxied calls into Tornado (which don't use UWSGI) already passed this
header, as Tornado logging respects it.
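A sketch of the corresponding `nginx` configuration (the trusted-proxy
range is illustrative):
```
set_real_ip_from 10.0.0.0/8;              # allowlist of known proxies
real_ip_header X-Forwarded-For;
real_ip_recursive on;                     # walk right-to-left past trusted hops
proxy_set_header X-Real-IP $remote_addr;  # pass the computed client IP down
```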
This verifies that the proxy is working by accessing a
highly-available website through it. Since failure of this equates to
failures of Sentry notifications and Android mobile push
notifications, this is a paging service.
All of `/var/log/nginx/` is chown'd to `zulip` and the nginx processes
themselves run as `zulip`, and would thus (on their own) create new
logfiles as `zulip`. Having `logrotate` create them as the package
default of `www-data` means that they are momentarily unreadable by
the `zulip` user just after rotation, which can cause problems with
logtail scripts.
Commit the standard `nginx` logrotate configuration, but with the
`zulip` user instead of the `www-data` user.
0663b23d54 changed zulip-puppet-apply to
use the venv, because it began using `yaml` to parse the output of
puppet to determine if changes would happen.
However, not every install ends with a venv; notably, non-frontend
servers do not have one. Attempting to run zulip-puppet-apply on them
hence now fails.
Remove this dependency on the venv, by installing a system
python3-yaml package -- though in reality, this package is already an
indirect dependency of the system. Since pyyaml is quite stable,
we're not using it in any interesting way, and it does not actually
add to the dependencies, this is preferable to parsing the YAML by
hand in this instance.
This reverts commit 211232978f. The
`rabbitmq` user does not exist yet on first install, and the goal is
to create the `rabbitmq-env.conf` file before the package is
installed.
In production, the `wildcard-zulipchat.com.combined-chain.crt` file is
just a symlink to the snakeoil certificates; but we do not puppet that
symlink, which makes new hosts fail to start cleanly. Instead, point
explicitly to the snakeoil certificate, and explain why.
Directives in `location` blocks may or may not inherit from
surrounding `location` blocks; specifically, `add_header` directives
do not[1]:
> There could be several add_header directives. These directives are
> inherited from the previous configuration level if and only if there
> are no add_header directives defined on the current level.
In order to maintain the same headers (including, critically,
`Access-Control-Allow-Origin`) as the surrounding block, all
`add_header` directives must thus be repeated (which includes the
`include`).
For clarity, un-nest and repeat the entire `location` block as was
used for `/static/`, but with the additional `add_header`. This is
preferred to the use of an `if $request_uri` statement to add the header,
as those can have unexpected or undefined results[2].
[1] http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header
[2] https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/
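A sketch of the resulting structure (paths and the additional header
are illustrative):
```
location /static/ {
    include /etc/nginx/zulip-include/headers;
}
location /static/some-subpath/ {
    # Once any add_header appears at this level (including via the
    # include), nothing is inherited, so all of them must be repeated.
    include /etc/nginx/zulip-include/headers;
    add_header X-Extra-Header value;   # the additional header
}
```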
Redis is not nagios, and this only leads to confusion as to why there
is a nagios domain setting on frontend servers; it also leaves the
`redis0` part of the name buried in the template.
Switch to an explicit variable for the redis hostname.
This is more broadly useful than for just Kandra; provide
documentation and means to install Smokescreen for stand-alone
servers, and motivate its use somewhat more.
This means that in steady-state, `zulip-puppet-apply` is expected to
produce no changes or commands to execute. The verification step of
`setup-apt-repo` is quite fast, so this cleans up the output for very
little cost.
These optimizations only make sense when all connections at a TCP
level are coming from the same host or set of hosts; as such, they
are only enabled if `loadbalancer.ips` is set in the `zulip.conf`.
This is required for unattended upgrades to actually run regularly.
In some distributions, it may be found in 20auto-upgrades, but placing
it here makes it more discoverable.
We haven't actively used this plugin in years, and so it was never
converted from the 2014-era monitoring to detect the hostname.
This seems worth fixing since we may want to migrate this logic to a
more modern monitoring system, and it's helpful to have it correct.
79931051bd allows outgoing emails from
localhost, but outgoing recipients are still subjected to virtualmaps.
This caused all outgoing email from Zulip with destination addresses
containing `.`, `+`, or starting with `mm`, to be redirected back
through the email gateway.
Bracket the virtualmap addresses used for local delivery to the mail
gateway with a restriction on the domain matching the
`postfix.mailname` configuration, regex-escaped, so those only apply
to email destined for that domain.
The hostname is _not_ moved from `mydestination` to
`virtual_alias_domains`, as that would preclude delivery to
actually-local addresses, like `postmaster@`.
We run this tool at DEBUG log level in production, so we will still
see the notice on startup there; this avoids a spammy line in the
development environment output.
`wal-g wal-push` has a known bug with occasionally hanging after file
upload to S3[1]; set a rather long timeout on the upload process, so
that we don't simply stall forever when archiving WAL segments.
[1] https://github.com/wal-g/wal-g/issues/656
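One way to bound it, assuming WAL archiving goes through the
`env-wal-g` wrapper (path and timeout value illustrative):
```
archive_command = 'timeout 10m /usr/local/bin/env-wal-g wal-push %p'
```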
Logging `Host` is useful for determining access patterns to realms,
especially if ROOT_DOMAIN_LANDING_PAGE is set. Total response time is
useful in debugging access and performance patterns.
These are respected by `urllib`, and thus also `requests`. We set
`HTTP_proxy`, not `HTTP_PROXY`, because the latter is ignored in
situations which might be running under CGI -- in such cases it may be
coming from the `Proxy:` header in the request.
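For example, as a supervisor `environment=` setting (the proxy address
is assumed):
```
environment=HTTP_proxy="http://127.0.0.1:4750",HTTPS_proxy="http://127.0.0.1:4750"
```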
This provides a single reference point for all zulip.conf settings;
these mostly link out to the more complete documentation about each
setting, elsewhere.
Fixes #12490.
There is only one PostgreSQL database; the "appdb" is irrelevant.
Also use "postgresql," as it is the name of the software, whereas
"postgres" the name of the binary and colloquial name. This is minor
cleanup, but enabled by the other renames in the previous commit.
The "voyager" name is non-intuitive and not significant.
`zulip::voyager` and `zulip::dockervoyager` stubs are kept for
back-compatibility with existing `zulip.conf` files.
This moves the puppet configuration closer to the "roles and profiles
method"[1] which is suggested for organizing puppet classes. Notably,
here it makes clear which classes are meant to be able to stand alone
as deployments.
Shims are left behind at the previous names, for compatibility with
existing `zulip.conf` files when upgrading.
[1] https://puppet.com/docs/pe/2019.8/the_roles_and_profiles_method
This also removes direct includes of `zulip::common`, making
`zulip::base` gatekeep the inclusion of it. This helps enforce that
any top-level deploy only needs to include a single class, and that any
configuration which is not meant to be deployed by itself will not
apply, due to lack of `zulip::common` include.
The following commit will better differentiate these top-level deploys
by moving them into a subdirectory.
Relying on `defined(Class['...'])` makes the class sensitive to
resource evaluation ordering, and thus brittle. It is also only
functional for a single service (thumbor).
Generalize by using `purge => true` for the directory to automatically
remove all un-managed files. This is more general than the previous
form, and may result in additional not-managed services being removed.
Restarting servers is what can cause service interruptions and
increased risk. Add all of the servers that we use to the list of
ignored packages, and uncomment the default allowed-origins in order
to enable unattended upgrades.
d2aa81858c replaced the `apt::source` to set up debathena with
`Exec['setup-apt-repo-debathena']`, but mistakenly left the
`apt::source` in place in `zmirror` (but not `zmirror_personals`).
The `apt::source` resource type was later removed in c9d54f7854,
making the manifest fail to apply on `zmirror`.
Remove the broken and unnecessary `apt::source` resource.
This property is not related to the base zulip install; move it to
zulip::postgres_common, which is already used as a namespace for
various postgres variables.
There was likely more dependency complexity prior to 97766102df, but
there is now no reason to require that consumers explicitly include
zulip::apt_repository.
Use https://github.com/stripe/smokescreen to provide a server for an
outgoing proxy, run under supervisor. This will allow centralized
blocking of internal metadata IPs, localhost, and so forth, as well as
providing request timeouts (10s by default).
We should eventually add templating for the set of hosts here, but
it's worth merging this change to remove the deleted hostname and
replace it with the current one.
Disabled on webservers in 047817b6b0, it has since lingered in
configuration, as well as running (to no effect) every minute on the
loadbalancer.
Remove the vestiges of its configuration.
This banner shows on lb1, advertising itself as lb0. There is no
compelling reason for a custom motd, especially one which needs to
be reconfigured for each host.
Since this was using repeated individual get() calls previously, it
could not be monitored for having a consumer. Add it in by marking it
as queue type "consumer" (the default), and adding Nagios lines for
it.
Also adjust missedmessage_emails to be monitored; it stopped using
LoopQueueProcessingWorker in 5cec566cb9, but was never added back
into the set of monitored consumers.
The rabbitmq cron jobs exist in order to call rabbitmqctl as root and
write the output to files that nagios can consume, since nagios is not
allowed to run rabbitmqctl.
In systems which do not have nagios configured, these every-minute
cron jobs add not-insignificant load, to no effect. Move their
installation into `zulip_ops`. In doing so, also combine the cron.d
files into a single file; this allows us to `ensure => absent` the old
filenames, removing them from existing systems. Leave the resulting
combined cron.d file in `zulip`, since it is still of general utility
and note.
The configuration change made in 1c17583ad5 only allowed delivery to
those specific Zulip addresses. However, it also prevented the
mailserver from being used as an outgoing email relay from Zulip,
since all mail that passed through the mailserver (from any
originator) was required to have a `RCPT TO` that matched those
regexes.
Allow mail originating from `mynetworks` to have arbitrary addresses
in `RCPT TO`.
Use the validation of the tornado sharding config that
`stage_updated_sharding` does, by depending on it. This ensures that
we don't write out a supervisor or nginx config based on a
bad (e.g. non-sequential) list of tornado ports.
Fingerprinting the config is somewhat brittle -- it requires custom
bootstrapping for old (fingerprint-less) configs, and may have
false positives.
Since generating the config is lightweight, do so into the .tmp files,
and compare the output to the originals to determine if there are
changes to apply.
In order to both surface errors, as well as notify the user in case a
restart is necessary, we must run it twice. The `onlyif`
functionality cannot show configuration errors to the user, only
determine if the command runs or not. We thus run the command once,
judging errors as "interesting" enough to run the actual command,
whose failure will be verbose in Puppet and halt any steps that depend
on it.
Removing the `onlyif` would result in `stage_updated_sharding` showing
up in the output of every Puppet run, which obscures the important
messages it displays when an update to sharding is necessary.
Removing the `command` (e.g. making it an `echo`) would result in
removing the ability to report configuration errors. We thus have no
choice but to run it twice; this is thankfully low-overhead.
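A rough Puppet sketch of the double invocation (the path variable and
the gating flag are hypothetical stand-ins):
```
exec { 'stage_updated_sharding':
  command => "${zulip_scripts_path}/lib/sharding.py",
  # The same check, run first as a gate: exit 0 ("interesting": changes
  # needed, or a config error) triggers the real run, which fails loudly.
  onlyif  => "${zulip_scripts_path}/lib/sharding.py --check",  # hypothetical flag
}
```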
We can compute the intended number of processes from the sharding
configuration. In doing so, also validate that all of the ports are
contiguous.
This removes a discrepancy between `scripts/lib/sharding.py` and other
parts of the codebase about if merely having a `[tornado_sharding]`
section is sufficient to enable sharding. Having behaviour which
changes merely based on whether an empty section exists is surprising.
This does require that a (presumably empty) `9800` configuration line
exist, but making that default explicit is useful.
After this commit, configuring sharding can be done by adding to
`zulip.conf`:
```
[tornado_sharding]
9800 = # default
9801 = other_realm
```
Followed by running `./scripts/refresh-sharding-and-restart`.
In development and test, we keep the Tornado ports at 9993 and 9983,
respectively; this allows tests to run while a dev instance is
running.
In production, moving to port 9800 consistently removes an odd edge
case, when just one worker is on an entirely different port than if
two workers are used.
Without an explicit port number, the `stdout_logfile` values for each
port are identical. Supervisor apparently decides that it will
de-conflict this by appending an arbitrary number to the end:
```
/var/log/zulip/tornado.log
/var/log/zulip/tornado.log.1
/var/log/zulip/tornado.log.10
/var/log/zulip/tornado.log.2
/var/log/zulip/tornado.log.3
/var/log/zulip/tornado.log.7
/var/log/zulip/tornado.log.8
/var/log/zulip/tornado.log.9
```
This is quite confusing, since most other files in `/var/log/zulip/`
use `.1` to mean logrotate was used. Also note that these are not all
sequential -- 4, 5, and 6 are mysteriously missing, though they were
used in previous restarts. This can make it extremely hard to debug
logs from a particular Tornado shard.
Give the logfiles a consistent name, and set them up to logrotate.
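Illustratively, each per-port stanza gets a port-qualified logfile
(names assumed):
```
[program:port-9800]
stdout_logfile=/var/log/zulip/tornado-9800.log
```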
Making this include "zulip-tornado" makes it clearer in supervisor
logs. Without this, one only sees:
```
2020-09-14 03:43:13,788 INFO waiting for port-9807 to stop
2020-09-14 03:43:14,466 INFO stopped: port-9807 (exit status 1)
2020-09-14 03:43:14,469 INFO spawned: 'port-9807' with pid 24289
2020-09-14 03:43:15,470 INFO success: port-9807 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
```
This supports running puppet to pick up new sharding changes, which
will warn of the need to finalize them via
`refresh-sharding-and-restart`, or simply running that directly.
Clients that close their socket to nginx suddenly also cause nginx to close
its connection to uwsgi. When uwsgi finishes computing the response,
it thus tries to write to a closed socket, and generates either
IOError or SIGPIPE failures.
Since these are caused by the _client_ closing the connection
suddenly, they are not actionable by the server. At particularly high
volumes, this could represent some sort of server-side failure;
however, this is better detected by examining status codes at the
loadbalancer. nginx uses the error code 499 for this occurrence:
https://httpstatuses.com/499
Stop uwsgi from generating this family of exception entirely, using
configuration for uwsgi[1]; it documents these errors as
"(annoying)", hinting at their general utility.
[1] https://uwsgi-docs.readthedocs.io/en/latest/Options.html#ignore-sigpipe
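Concretely, the uwsgi options in question (names as documented in
[1]):
```
ignore-sigpipe = true
ignore-write-errors = true
disable-write-exception = true
```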
Increasing the uwsgi listen backlog is intended to allow it to handle
higher connection rates during server restart, when many clients may
be trying to connect. The kernel, in turn, needs to have a
proportionally increased somaxconn so as to not refuse the connection.
Set somaxconn to 2x the uwsgi backlog, but no lower than the
default (128).
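For example, with an assumed uwsgi listen backlog of 512:
```
net.core.somaxconn = 1024    # max(2 * 512, 128)
```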
Prior to PostgreSQL 12, the `recovery_target_timeline` setting is only
valid in a `recovery.conf` file, as that file has its own
configuration parser. As such, including it in `postgresql.conf`
results in an error, and PostgreSQL will fail to start.
Remove the setting, reverting bff3b540b1. This fixes PostgreSQL 9.5,
9.6, 10, and 11; while the setting is not an error in a PostgreSQL 12
configuration file, it is unnecessary since `latest` is the default.
7d4a370a57 attempted to move the replication check to on the
PostgreSQL hosts. While it updated the _check_ to assume it was
running and talking to a local PostgreSQL instance, the configuration
and installation for the check were not updated. As such, the check
ran on the nagios host for each DB host, and produced no output.
Start distributing the check to all appdb hosts, and configure nagios
to use the SSH tunnel to get there.
wal-g was used in `puppet/zulip` by env-wal-g, but only installed in
`puppet/zulip_ops`.
Merge all of the dependencies of doing backups using wal-g (wal-g
installation, the pg_backup_and_purge job, the nagios plugin that
verifies it happens) into a common base class in `puppet/zulip`, since
it is generally useful.
No plugins are installed inside the /usr/local/munin/lib this creates
in munin-node, nor are they symlinked into /etc/munin/plugins, so no
non-default plugins are added by this.
The one complexity is that hosts_fullstack are treated differently, as
they are not currently found in the manual `hosts` list, and as such
do not get munin monitoring.
check_memcached does not support memcached authentication even in its
latest release (it’s in a TODO item comment, and that’s it), and was
never particularly useful.
When supervisor is first installed, it is started automatically, and
creates the socket, owned by root. Subsequent reconfiguration in
puppet only calls `reread + update`, which is insufficient to apply
the `chown = zulip:zulip` line in `supervisord.conf`, leaving the
socket owned by `root` and the last part of the installation unable to
restart `supervisor` services as the `zulip` user. The `chown` line
in `scripts/lib/install` exists to paper over this.
Add a separate exec target for changes to `supervisord.conf` itself,
which restarts the full service. This leaves the default `restart`
action on the service for the lightweight `reread + update` action,
which is more common.
We use `systemctl` only on redhat-esque builds, because CI runs
Ubuntu, but init is not systemd in that context. `systemctl reload`
is sufficient to re-apply the socket ownership, but a full `restart`
and not `reload` is necessary under `/etc/init.d/supervisor`.
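A sketch of the split (resource names assumed; the command shown is
the sysvinit form):
```
file { '/etc/supervisor/supervisord.conf':
  # ...
  notify => Exec['supervisor-full-restart'],
}
exec { 'supervisor-full-restart':
  command     => '/etc/init.d/supervisor restart',
  refreshonly => true,
}
```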
wal-g has a slightly different format than wal-e in its `backup-list`
output; it only contains three columns:
- `name`
- `last_modified`
- `wal_segment_backup_start`
...rather than wal-e's plethora, most of which were blank:
- `name`
- `last_modified`
- `expanded_size_bytes`
- `wal_segment_backup_start`
- `wal_segment_offset_backup_start`
- `wal_segment_backup_stop`
- `wal_segment_offset_backup_stop`
Remove one argument from the split.
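The parse, after the change (variable names assumed):
```
# wal-g's backup-list has exactly three columns, so unpack only those:
(name, last_modified, wal_segment_backup_start) = line.split()
```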
In Bionic, nagios-plugins-basic is a transitional package which
depends on monitoring-plugins-basic. In Focal, it is a virtual
package, which means that every time puppet runs, it tries to
re-install the nagios-plugins-basic package.
Switch all instances to referring to `$zulip::common::nagios_plugins`,
and repoint that to monitoring-plugins-basic.
Frontend hosts in multiple-host configurations (including docker
hosts) need a `psql` binary installed. ca9d27175b switched to not
setting `postgresql.version` in `zulip.conf`, which in turn means that
`$zulip::base::postgres_version` is unset. This, in turn, led to the
frontend hosts installing `postgresql-client-`, whose trailing dash
causes apt to _uninstall_ that package.
Unconditionally install `postgresql-client` with no explicit version
attached. This is a metapackage which depends on the latest client
package, which currently means it will install `postgresql-client-12`.
On single-host installs which have configured `postgresql.version` in
`zulip.conf` to be a lower version, this will result in
`postgresql-client-12` existing alongside another
version (e.g. `postgresql-client-10`); `psql` will give the most
recent. This is acceptable because the semantic meaning of the
postgresql version in `zulip.conf` is about the database engine
itself, not the command-line client.
Support for Xenial and Stretch was removed (5154ddafca, 0f4b1076ad,
8944e0ad53, 79acd5ae40, 1219a2e854), but not all codepaths were
updated to remove their conditionals on it.
Remove all code predicated on Xenial or Stretch. debathena support
was migrated to Bionic, since that appears to be the current state of
existing debathena servers.
This prevents memcached from automatically appending the hostname to
the username, which was a source of problems on servers where the
hostname was changed.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We would prefer to use the postgres packages from Postgres themselves,
if available. However, this requires ensuring that, for existing
installs, we preserve the same version of postgres as their base
distribution installed.
Move the version-determination logic from being computed at puppet
interpolation time, to being computed at install time and pinned into
zulip.conf.
Since 9e8f1aacb3, zulip_ops machines
might have two Package declarations for `certbot`, which doesn't work
in puppet.
The fix is, as usual, to use our `zulip::safepackage` wrapper instead.
The style guide for Zulip is to always use "primary" and "replica"
when describing database replication. Adjust a few comments under
`puppet/` that do not adhere to this.
Unfortunately, some references still remain to the insensitive and
inaccurate "master" / "slave" terminology. However, these are only in
files which we are attempting to preserve as close to the upstream
versions they are derived from (e.g. postgresql.conf,
postfix/master.cf).