This is required for unattended upgrades to actually run regularly.
In some distributions, it may be found in `20auto-upgrades`, but placing
it here makes it more discoverable.
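As a rough sketch of what enabling this entails (the filename here is illustrative, not necessarily the one this commit uses), the periodic settings amount to:
```
# APT::Periodic settings are what make unattended-upgrades run on a
# schedule; without them the package is installed but never triggered.
file { '/etc/apt/apt.conf.d/02periodic':
  ensure  => file,
  content => 'APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
',
}
```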
We haven't actively used this plugin in years, and so it was never
converted from the 2014-era monitoring to detect the hostname.
This seems worth fixing since we may want to migrate this logic to a
more modern monitoring system, and it's helpful to have it correct.
79931051bd allows outgoing emails from
localhost, but outgoing recipients are still subjected to virtualmaps.
This caused all outgoing email from Zulip with destination addresses
containing `.`, `+`, or starting with `mm`, to be redirected back
through the email gateway.
Bracket the virtualmap addresses used for local delivery to the mail
gateway with a restriction on the domain matching the
`postfix.mailname` configuration, regex-escaped, so those only apply
to email destined for that domain.
The hostname is _not_ moved from `mydestination` to
`virtual_alias_domains`, as that would preclude delivery to
actually-local addresses, like `postmaster@`.
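As a minimal sketch of the shape of such an entry (the domain, address pattern, and gateway recipient below are hypothetical):
```
# Each pcre virtual-map entry is anchored to the regex-escaped mailname,
# so mail addressed to any other domain is never rerouted to the gateway.
file { '/etc/postfix/virtual.pcre':
  ensure  => file,
  content => '/^mm[^@]+@zulip\.example\.com$/  emailgateway@localhost
',
}
```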
We run this tool at DEBUG log level in production, so we will still
see the notice on startup there; this avoids a spammy line in the
development environment output.
`wal-g wal-push` has a known bug with occasionally hanging after file
upload to S3[1]; set a rather long timeout on the upload process, so
that we don't simply stall forever when archiving WAL segments.
[1] https://github.com/wal-g/wal-g/issues/656
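One way to express the timeout (the wrapper path and the 600-second value are assumptions, not necessarily what ships here) is a small wrapper around the archive command:
```
# Wrap wal-push in coreutils `timeout` so a hung S3 upload fails and the
# segment is retried, instead of stalling WAL archiving indefinitely.
file { '/usr/local/bin/zulip-archive-wal':
  ensure  => file,
  mode    => '0755',
  content => '#!/bin/sh
exec timeout 600 /usr/local/bin/wal-g wal-push "$1"
',
}
# postgresql.conf would then use:
#   archive_command = '/usr/local/bin/zulip-archive-wal %p'
```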
Logging `Host` is useful for determining access patterns to realms,
especially if ROOT_DOMAIN_LANDING_PAGE is set. Total response time is
useful in debugging access and performance patterns.
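For illustration only (the conf.d path and format name are hypothetical), a log_format carrying both fields looks like:
```
# Extends the standard combined format with $host and $request_time.
file { '/etc/nginx/conf.d/zulip-log-format.conf':
  ensure  => file,
  content => 'log_format zulip_combined \'$host $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_time\';
',
}
```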
These are respected by `urllib`, and thus also `requests`. We set
`HTTP_proxy`, not `HTTP_PROXY`, because the latter is ignored in
situations which might be running under CGI -- in such cases it may be
coming from the `Proxy:` header in the request.
This provides a single reference point for all zulip.conf settings;
these mostly link out to the more complete documentation about each
setting, elsewhere.
Fixes #12490.
There is only one PostgreSQL database; the "appdb" is irrelevant.
Also use "postgresql", as it is the name of the software, whereas
"postgres" is the name of the binary and its colloquial name. This is minor
cleanup, but enabled by the other renames in the previous commit.
The "voyager" name is non-intuitive and not significant.
`zulip::voyager` and `zulip::dockervoyager` stubs are kept for
back-compatibility with existing `zulip.conf` files.
This moves the puppet configuration closer to the "roles and profiles
method"[1] which is suggested for organizing puppet classes. Notably,
here it makes clear which classes are meant to be able to stand alone
as deployments.
Shims are left behind at the previous names, for compatibility with
existing `zulip.conf` files when upgrading.
[1] https://puppet.com/docs/pe/2019.8/the_roles_and_profiles_method
This also removes direct includes of `zulip::common`, making
`zulip::base` gatekeep the inclusion of it. This helps enforce that
any top-level deploy only needs to include a single class, and that any
configuration which is not meant to be deployed by itself will not
apply, due to lack of `zulip::common` include.
The following commit will better differentiate these top-level deploys
by moving them into a subdirectory.
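As an illustration of the intended shape (the class names below are hypothetical, not necessarily the final ones):
```
# A top-level deploy includes exactly one profile-style class; only
# zulip::base pulls in zulip::common.
class zulip::profile::standalone {
  include zulip::base
  include zulip::app_frontend
  include zulip::postgresql
}
```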
Relying on `defined(Class['...'])` makes the class sensitive to
resource evaluation ordering, and thus brittle. It is also only
functional for a single service (thumbor).
Generalize by using `purge => true` for the directory to automatically
remove all un-managed files. This is more general than the previous
form, and may result in additional not-managed services being removed.
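A minimal sketch of the pattern (the directory path is assumed):
```
# Puppet deletes any file in this directory that it does not itself manage.
file { '/etc/supervisor/conf.d':
  ensure  => directory,
  recurse => true,
  purge   => true,
}
```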
Restarting servers is what can cause service interruptions and increased
risk. Add all of the servers that we use to the list of
ignored packages, and uncomment the default allowed-origins in order
to enable unattended upgrades.
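Sketch of the resulting configuration (the filename and the exact package list are illustrative):
```
# Security origins stay enabled; long-running servers are blacklisted so
# that they are only restarted deliberately.
file { '/etc/apt/apt.conf.d/50unattended-upgrades':
  ensure  => file,
  content => 'Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
};
Unattended-Upgrade::Package-Blacklist {
    "memcached";
    "nginx-full";
    "postgresql-.*";
    "rabbitmq-server";
    "redis-server";
    "supervisor";
};
',
}
```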
d2aa81858c replaced the `apt::source` to set up debathena with
`Exec['setup-apt-repo-debathena']`, but mistakenly left the
`apt::source` in place in `zmirror` (but not `zmirror_personals`).
The `apt::source` resource type was later removed in c9d54f7854,
making the manifest fail to apply on `zmirror`.
Remove the broken and unnecessary `apt::source` resource.
This property is not related to the base zulip install; move it to
zulip::postgres_common, which is already used as a namespace for
various postgres variables.
There was likely more dependency complexity prior to 97766102df, but
there is now no reason to require that consumers explicitly include
zulip::apt_repository.
Use https://github.com/stripe/smokescreen to provide a server for an
outgoing proxy, run under supervisor. This will allow centralized
blocking of internal metadata IPs, localhost, and so forth, as well as
providing a default request timeout (10s).
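A sketch of the supervisor wiring (the binary path is assumed, and 4750 is assumed to be smokescreen's default listen port):
```
file { '/etc/supervisor/conf.d/smokescreen.conf':
  ensure  => file,
  content => '[program:smokescreen]
command=/usr/local/bin/smokescreen
autorestart=true
stdout_logfile=/var/log/zulip/smokescreen.log
redirect_stderr=true
',
}
```
Outgoing requests are then pointed at `http://localhost:4750`.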
We should eventually add templating for the set of hosts here, but
it's worth merging this change to remove the deleted hostname and
replace it with the current one.
Disabled on webservers in 047817b6b0, it has since lingered in
configuration, as well as running (to no effect) every minute on the
loadbalancer.
Remove the vestiges of its configuration.
This banner shows on lb1, advertising itself as lb0. There is no
compelling reason for a custom motd, especially one which needs to
be reconfigured for each host.
Since this was previously using repeated individual get() calls, it
could not be monitored for having a consumer. Add it in, by marking
its queue type as "consumer" (the default), and adding Nagios lines
for it.
Also adjust missedmessage_emails to be monitored; it stopped using
LoopQueueProcessingWorker in 5cec566cb9, but was never added back
into the set of monitored consumers.
The rabbitmq cron jobs exist in order to call rabbitmqctl as root and
write the output to files that nagios can consume, since nagios is not
allowed to run rabbitmqctl.
In systems which do not have nagios configured, these every-minute
cron jobs add non-trivial load, to no effect. Move their
installation into `zulip_ops`. In doing so, also combine the cron.d
files into a single file; this allows us to `ensure => absent` the old
filenames, removing them from existing systems. Leave the resulting
combined cron.d file in `zulip`, since it is still of general utility
and note.
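Roughly (old and new filenames, the state path, and the exact rabbitmqctl invocation are illustrative):
```
# One combined cron.d file, plus explicit removal of the old ones so they
# disappear from systems that already have them installed.
file { '/etc/cron.d/rabbitmq-monitoring':
  ensure  => file,
  content => '* * * * * root /usr/sbin/rabbitmqctl list_consumers > /var/lib/nagios_state/rabbitmq-consumers 2>&1
',
}
file { ['/etc/cron.d/rabbitmq-numconsumers', '/etc/cron.d/rabbitmq-queuesize']:
  ensure => absent,
}
```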
The configuration change made in 1c17583ad5 only allowed delivery to
those specific Zulip addresses. However, it also prevented the
mailserver from being used as an outgoing email relay from Zulip,
since all mail that passed through the mailserver (from any
originator) was required to have a `RCPT TO` that matched those
regexes.
Allow mail originating from `mynetworks` to have arbitrary
addresses in `RCPT TO`.
Use the validation of the tornado sharding config that
`stage_updated_sharding` does, by depending on it. This ensures that
we don't write out a supervisor or nginx config based on a
bad (e.g. non-sequential) list of tornado ports.
Fingerprinting the config is somewhat brittle -- it requires
custom bootstrapping for old (fingerprint-less) configs, and may have
false positives.
Since generating the config is lightweight, do so into the .tmp files,
and compare the output to the originals to determine if there are
changes to apply.
In order to both surface errors and notify the user in case a
restart is necessary, we must run it twice. The `onlyif`
functionality cannot show configuration errors to the user, only
determine if the command runs or not. We thus run the command once,
judging errors as "interesting" enough to run the actual command,
whose failure will be verbose in Puppet and halt any steps that depend
on it.
Removing the `onlyif` would result in `stage_updated_sharding` showing
up in the output of every Puppet run, which obscures the important
messages it displays when an update to sharding is necessary.
Removing the `command` (e.g. making it an `echo`) would result in
removing the ability to report configuration errors. We thus have no
choice but to run it twice; this is thankfully low-overhead.
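The resulting resource looks roughly like the following (the paths are illustrative, and `--check-pending` is a hypothetical stand-in for "exit 0 if there are errors or pending changes"):
```
exec { 'stage_updated_sharding':
  # Run once to decide whether anything interesting would happen...
  onlyif    => '/home/zulip/deployments/current/scripts/lib/sharding.py --check-pending',
  # ...then for real, so errors are verbose and halt dependent resources.
  command   => '/home/zulip/deployments/current/scripts/lib/sharding.py',
  logoutput => true,
}
```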
We can compute the intended number of processes from the sharding
configuration. In doing so, also validate that all of the ports are
contiguous.
This removes a discrepancy between `scripts/lib/sharding.py` and other
parts of the codebase about if merely having a `[tornado_sharding]`
section is sufficient to enable sharding. Having behaviour which
changes merely based on whether an empty section exists is surprising.
This does require that a (presumably empty) `9800` configuration line
exist, but making that default explicit is useful.
After this commit, configuring sharding can be done by adding to
`zulip.conf`:
```
[tornado_sharding]
9800 = # default
9801 = other_realm
```
Followed by running `./scripts/refresh-sharding-and-restart`.
In development and test, we keep the Tornado ports at 9993 and 9983,
respectively; this allows tests to run while a dev instance is
running.
In production, moving consistently to port 9800 removes an odd edge
case, where a single worker would run on an entirely different port
than it would if two workers were used.
Without an explicit port number, the `stdout_logfile` values for each
port are identical. Supervisor apparently decides that it will
de-conflict this by appending an arbitrary number to the end:
```
/var/log/zulip/tornado.log
/var/log/zulip/tornado.log.1
/var/log/zulip/tornado.log.10
/var/log/zulip/tornado.log.2
/var/log/zulip/tornado.log.3
/var/log/zulip/tornado.log.7
/var/log/zulip/tornado.log.8
/var/log/zulip/tornado.log.9
```
This is quite confusing, since most other files in `/var/log/zulip/`
use `.1` to mean logrotate was used. Also note that these are not all
sequential -- 4, 5, and 6 are mysteriously missing, though they were
used in previous restarts. This can make it extremely hard to debug
logs from a particular Tornado shard.
Give the logfiles a consistent name, and set them up to logrotate.
Including "zulip-tornado" in the program name makes supervisor logs
clearer. Without this, one only sees:
```
2020-09-14 03:43:13,788 INFO waiting for port-9807 to stop
2020-09-14 03:43:14,466 INFO stopped: port-9807 (exit status 1)
2020-09-14 03:43:14,469 INFO spawned: 'port-9807' with pid 24289
2020-09-14 03:43:15,470 INFO success: port-9807 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
```
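Taken together, a single shard's stanza might look something like this (the program name, command, and paths are illustrative):
```
file { '/etc/supervisor/conf.d/zulip-tornado-9800.conf':
  ensure  => file,
  content => '[program:zulip-tornado-9800]
command=/home/zulip/deployments/current/manage.py runtornado 127.0.0.1:9800
stdout_logfile=/var/log/zulip/tornado-9800.log
redirect_stderr=true
autorestart=true
',
}
```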
This supports running puppet to pick up new sharding changes, which
will warn of the need to finalize them via
`refresh-sharding-and-restart`, or simply running that directly.
Clients that close their socket to nginx suddenly also cause nginx to close
its connection to uwsgi. When uwsgi finishes computing the response,
it thus tries to write to a closed socket, and generates either
IOError or SIGPIPE failures.
Since these are caused by the _client_ closing the connection
suddenly, they are not actionable by the server. At particularly high
volumes, this could represent some sort of server-side failure;
however, this is better detected by examining status codes at the
loadbalancer. nginx uses the error code 499 for this occurrence:
https://httpstatuses.com/499
Stop uwsgi from generating this family of exceptions entirely, using
configuration for uwsgi[1]; the documentation describes these errors as
"(annoying)", hinting at their general utility.
[1] https://uwsgi-docs.readthedocs.io/en/latest/Options.html#ignore-sigpipe
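A sketch of the relevant ini fragment, wrapped for Puppet (the file path is hypothetical; the option names are taken from the uwsgi documentation linked above):
```
file { '/etc/zulip/uwsgi-quiet-disconnects.ini':
  ensure  => file,
  content => '[uwsgi]
ignore-sigpipe = true
ignore-write-errors = true
disable-write-exception = true
',
}
```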
Increasing the uwsgi listen backlog is intended to allow it to handle
higher connection rates during server restart, when many clients may
be trying to connect. The kernel, in turn, needs to have a
proportionally increased somaxconn so as to not refuse the connection.
Set somaxconn to 2x the uwsgi backlog, but no lower than the
default (128).
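A minimal sketch, assuming a uwsgi backlog of 1024 and hence a somaxconn of 2048:
```
file { '/etc/sysctl.d/40-uwsgi.conf':
  ensure  => file,
  content => "net.core.somaxconn = 2048\n",
  notify  => Exec['reload-sysctl-uwsgi'],
}
exec { 'reload-sysctl-uwsgi':
  command     => '/sbin/sysctl -p /etc/sysctl.d/40-uwsgi.conf',
  refreshonly => true,
}
```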
Prior to PostgreSQL 12, the `recovery_target_timeline` setting is only
valid in a `recovery.conf` file, as that file has its own
configuration parser. As such, including it in `postgresql.conf`
results in an error, and PostgreSQL will fail to start.
Remove the setting, reverting bff3b540b1. This fixes PostgreSQL 9.5,
9.6, 10, and 11; while the setting is not an error in a PostgreSQL 12
configuration file, it is unnecessary since `latest` is the default.
7d4a370a57 attempted to move the replication check onto the
PostgreSQL hosts. While it updated the _check_ to assume it was
running and talking to a local PostgreSQL instance, the configuration
and installation for the check were not updated. As such, the check
ran on the nagios host for each DB host, and produced no output.
Start distributing the check to all appdb hosts, and configure nagios
to use the SSH tunnel to get there.