The `needrestart` tool added in 22.04 is useful for listing which
services may need to be restarted to pick up updated libraries.
However, it prompts about the current state of services needing
restart on *every* subsequent `apt-get upgrade`, and because it
defaults core services to restarting, they must be carefully excluded
by hand every time, at the risk of causing an unscheduled outage.
Build a list of default-off services based on the list in
unattended-upgrades.
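As a sketch, the resulting needrestart drop-in looks something like
the following; the exact service patterns here are illustrative, not
the actual list derived from unattended-upgrades:
```
# /etc/needrestart/conf.d/zulip.conf (illustrative)
# Restart services automatically, rather than prompting...
$nrconf{restart} = 'a';
# ...except these core services, which default to "do not restart".
$nrconf{override_rc}{qr(^postgresql)} = 0;
$nrconf{override_rc}{qr(^redis-server)} = 0;
$nrconf{override_rc}{qr(^memcached)} = 0;
```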
The default value in uwsgi is 4k; receiving more than this amount from
nginx leads to a 502 response (though, happily, the backend uwsgi does not
terminate).
ab18dbfde5 originally increased it from the unstated uwsgi default
of 4096, to 8192; b1da797955 made it configurable, in order to allow
requests from clients with many cookies, without causing 502s[1].
nginx defaults to a limit of 1k, with 4 additional 8k header
lines allowed[2]; any request larger than that returns a response of
`400 Request Header Or Cookie Too Large`. The largest header size
theoretically possible from nginx, by default, is thus 33k, though
that would require packing four separate headers to exactly 8k each.
Remove the gap between nginx's limit and uwsgi's, which could trigger
502s, by removing the uwsgi configurability, and setting a 64k size in
uwsgi (the max allowable), which is larger than nginx's default limit.
uWSGI's documentation of `buffer-size` ([3], [4]) also notes that "It
is a security measure too, so adapt to your app needs instead of
maxing it out." Python has no security issues with buffers of 64k,
and there is no appreciable memory footprint difference from having a
larger buffer available in uwsgi.
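In uwsgi's ini configuration, the resulting setting is simply (a
sketch; uwsgi's documentation caps this option at 65535):
```
[uwsgi]
# 64k is the largest buffer uwsgi allows, comfortably above nginx's
# theoretical default maximum of ~33k of request headers.
buffer-size = 65535
```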
[1]: https://chat.zulip.org/#narrow/stream/31-production-help/topic/works.20in.20Edge.20not.20Chrome/near/719523
[2]: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size
[3]: https://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
[4]: https://uwsgi-docs.readthedocs.io/en/latest/Options.html#buffer-size
Support for this header was removed in Chrome 78, Safari 15.4, and
Edge 17. It was never supported in Firefox.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This check loads Django, and as such must be run as the zulip user.
Repeat the same pattern used elsewhere in nagios, of writing a state
file, which is read by `check_cron_file`.
Replication checks should only run on primary and replicas, not
standalone hosts; while `autovac_freeze` currently only runs on
primary hosts, it functions identically on replicas, and is fine to
run there.
Make `autovac_freeze` run on all `postgresql` hosts, and make
standalone hosts no longer `postgres_primary`, so they do not fail the
replication tests.
This style of check just looks for matching process names using
`check_remote_arg_string`, which dates to 8edbd64bb8. These were
added because the original two (`missedmessage_emails` and
`slow_queries`) did not create consumers, instead polling for events.
Switch these to checking the queue consumer counts that the
`check-rabbitmq-consumers` check is already writing out. Since
`missedmessage_emails` was _already_ checked via the consumer check, a
duplicate is not added.
Even the `pageable_servers` group did not page for high load -- in
part because what counts as "high" depends on the server. Set slightly
better limits based on server role.
`zmirror` itself was `zmirror_main` + `zmirrorp` but was unused; we
consistently just use the term `zmirror` for the non-personals server,
so use it as the hostgroup name.
The Redis nagios checks themselves are done against `redis` +
`frontends` groups, so there is no need to misleadingly place
`frontends` in the `redis` hostgroup.
5abf4dee92 made this distinction, but multitornado_frontends was
never used; the singletornado_frontends alerting worked even for the
multiple-Tornado instances.
Remove the useless and misleading distinction.
Even if Django and PostgreSQL are on the same host, the `nagios` user
may lack permissions to read accessory configuration files needed to
load the Django configuration (e.g. authentication keys).
Catch those failures, and switch to loading the required settings from
`/etc/zulip/zulip.conf`.
Without this, uwsgi does not release the GIL before going back into
`epoll_wait` to wait for the next request. This results in any
background threads languishing, unserviced.[1]
Practically, this results in Sentry background reporter threads timing
out when attempting to post results -- but only in situations with low
traffic, since in those, significant time is spent in `epoll_wait`. This
is seen in logs as:
WARN [urllib3.connectionpool] Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1131)'))': /api/123456789/envelope/
Or:
WARN [urllib3.connectionpool] Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', RemoteDisconnected('Remote end closed connection without response'))': /api/123456789/envelope/
Sentry attempts to detect this and warn, but due to startup ordering,
the warning is not printed without lazy-loading.
Enable threads, at a minuscule performance cost, in order to support
background workers like Sentry[2].
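In uwsgi's ini configuration, this is the `enable-threads` flag; a
minimal sketch:
```
[uwsgi]
# Initialize (and release) the GIL so that application-spawned
# threads, such as Sentry's background reporter, actually run.
enable-threads = true
```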
[1] https://github.com/unbit/uwsgi/issues/1141#issuecomment-169042767
[2] https://docs.sentry.io/clients/python/advanced/#a-note-on-uwsgi
This is a reprise of c97162e485, but for the case where certbot
certs are no longer in use by way of enabling `http_only` and letting
another server handle TLS termination.
Fixes: #22034.
This allows system-level configuration to be done by `apt-get install`
of nginx modules, which place their load statements in this directory.
The initial import in ed0cb0a5f8 of the stock nginx config omitted
this include -- one potential explanation is that it was an effort to
reduce the memory footprint of the server.
The default nginx install enables:
50-mod-http-auth-pam.conf
50-mod-http-dav-ext.conf
50-mod-http-echo.conf
50-mod-http-geoip2.conf
50-mod-http-geoip.conf
50-mod-http-image-filter.conf
50-mod-http-subs-filter.conf
50-mod-http-upstream-fair.conf
50-mod-http-xslt-filter.conf
50-mod-mail.conf
50-mod-stream.conf
While Zulip doesn't actively use any of these, they likely don't do
any harm to simply be loaded -- they are loaded into every nginx by
default.
Having the `modules-enabled` include allows easier extension of the
server, as neither of the existing wildcard
includes (`/etc/nginx/conf.d/*.conf` and
`/etc/nginx/zulip-include/app.d/*.conf`) is in the top context, and
thus able to load modules.
54b6a83412 fixed the typo introduced in 49ad188449, but that does
not clean up existing installs which had the file with the wrong name
already.
Remove the file with the typo'd name, so two jobs do not race, and fix
the typo in the comment.
The top-level `chdir` setting only does the chdir once, at initial
`uwsgi` startup time. Rolling restarts, however, require
that `uwsgi` pick up the _new_ value of the `current` directory, and
start new workers in that directory -- as currently implemented,
rolling restarts cannot restart into newer versions of the code, only
the same one in which they were started.
Use [configurable hooks][1] to execute the `chdir` after every fork.
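Concretely, the hook is a single ini line, matching the post-fork
chdir visible in the log below (the deploy path is Zulip's standard
one):
```
[uwsgi]
# Re-evaluate the `current` symlink in each freshly-forked worker,
# so chain reloads start workers in the newly-deployed code.
hook-post-fork = chdir:/home/zulip/deployments/current
```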
This causes the following behaviour:
```
Thu May 12 18:56:55 2022 - chain reload starting...
Thu May 12 18:56:55 2022 - chain next victim is worker 1
Gracefully killing worker 1 (pid: 1757689)...
worker 1 killed successfully (pid: 1757689)
Respawned uWSGI worker 1 (new pid: 1757969)
Thu May 12 18:56:56 2022 - chain is still waiting for worker 1...
running "chdir:/home/zulip/deployments/current" (post-fork)...
Thu May 12 18:56:57 2022 - chain is still waiting for worker 1...
Thu May 12 18:56:58 2022 - chain is still waiting for worker 1...
Thu May 12 18:56:59 2022 - chain is still waiting for worker 1...
WSGI app 0 (mountpoint='') ready in 3 seconds on interpreter 0x55dfca409170 pid: 1757969 (default app)
Thu May 12 18:57:00 2022 - chain next victim is worker 2
[...]
```
...and so forth down the line of processes. Each process is correctly
started in the _current_ value of `current`, and thus picks up the
correct code.
[1]: https://uwsgi-docs.readthedocs.io/en/latest/Hooks.html
Our current EC2 systems don’t have an interface named ‘eth0’, and if
they did, this script would do nothing but crash with ImportError
because we have never installed boto.utils for Python 3.
(The message of commit 2a4d851a7c made
an effort to document for future researchers why this script should
not have been blindly converted to Python 3. However, commit
2dc6d09c2a (#14278) was evidently
unresearched and untested.)
Signed-off-by: Anders Kaseorg <anders@zulip.com>
6f5ae8d13d removed the `$replication` variable from the
configurations of PostgreSQL 12 and higher, but left it in the
templates for PostgreSQL 10 and 11. Because `undef != ''`,
deployments on PostgreSQL 10 and 11 started trying to push to S3
backups, regardless of whether they were configured, leaving frequent log
messages like:
```
2022-04-30 12:45:47.805 UTC [626d24ec.1f8db0]: [107-1] LOG: archiver process (PID 2086106) exited with exit code 1
2022-04-30 12:45:49.680 UTC [626d24ee.1f8dc3]: [18-1] LOG: checkpoint complete: wrote 19 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=1.910 s, sync=0.022 s, total=1.950 s; sync files=16, longest=0.018 s, average=0.002 s; distance=49 kB, estimate=373 kB
/usr/bin/timeout: failed to run command "/usr/local/bin/env-wal-g": No such file or directory
2022-04-30 12:46:17.852 UTC [626d2f99.1fd4e9]: [1-1] FATAL: archive command failed with exit code 127
2022-04-30 12:46:17.852 UTC [626d2f99.1fd4e9]: [2-1] DETAIL: The failed archive command was: /usr/bin/timeout 10m /usr/local/bin/env-wal-g wal-push pg_wal/000000010000000300000080
```
Switch the PostgreSQL 10 and 11 configuration to check
`s3_backups_bucket`, like the other versions.
It is possible to have previously installed certbot, but switched back
to using self-signed certificates -- in which case renewing them using
certbot may fail.
Verify that the certificate is a symlink into certbot's output
directory before running `fix-standalone-certbot`.
Commit f6d27562fa (#21564) tried to
ensure Chrony is running, which fails in containers where Chrony
doesn’t have permission to update the host clock.
The Debian package should still attempt to start it, and Puppet should
still restart it when chrony.conf is modified.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Since wal-g does not provide binaries for aarch64, build them from
source. While building from source on amd64 as well would better
ensure that the build process is tested, the build takes 7min and
700M of temp files, which is an unacceptable cost; we thus only build
from source on aarch64.
Since the wal-g build process uses submodules, which are not in the
GitHub export, we clone the full wal-g repository. Because the
repository is relatively small, we clone it anew on each new version,
rather than attempt to manage the remotes.
Fixes #21070.
The default timeout for `exec` commands in Puppet is 5 minutes[1]. On
slow connections, this may not be sufficient for larger downloads,
such as the ~135MB golang tarball.
Increase the timeout to 10 minutes; this requires a minimum download
speed of ~225kB/s.
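A sketch of the Puppet change; the resource name, URL, and command
here are illustrative:
```
exec { 'download golang tarball':
  command => '/usr/bin/curl -fLo /srv/golang.tar.gz https://golang.org/dl/...',
  creates => '/srv/golang.tar.gz',
  # Puppet's default of 300 seconds is too short on slow links.
  timeout => 600,
}
```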
Fixes #21449.
[1]: https://puppet.com/docs/puppet/5.5/types/exec.html#exec-attribute-timeout
This commit adds a cron job which runs every hour to add users to
the full members system group once they qualify as full members.
This should ensure that full member status is available no more than
an hour after configuration suggests it should be.
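In `/etc/cron.d` form, this is something like the following; the
management command name is an assumption:
```
0 * * * * zulip /home/zulip/deployments/current/manage.py promote_new_full_members
```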
Previously, it was possible to configure `wal-g` backups without
replication enabled; this resulted in only daily backups, not
streaming backups. It was also possible to enable replication without
configuring the `wal-g` backups bucket; this simply failed to work.
Make `wal-g` backups always streaming, and warn loudly if replication
is enabled but `wal-g` is not configured.
It would confuse a future Debian 15.10 release with Ubuntu 15.10; it
relies on the legacy fact $::operatingsystemrelease, while the modern
fact $::os provides this information without extra logic; and it’s
unused as of commit 03bffd3938.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Zulip writes a `rabbitmq.config` configuration file which locks down
RabbitMQ to listen only on localhost:5672, as well as the RabbitMQ
distribution port, on localhost:25672.
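In RabbitMQ's classic Erlang-term configuration format, that lockdown
is roughly:
```
[{rabbit,
  %% AMQP listener only on localhost.
  [{tcp_listeners, [{"127.0.0.1", 5672}]}]},
 {kernel,
  %% Bind the Erlang distribution port to localhost as well.
  [{inet_dist_use_interface, {127,0,0,1}}]}].
```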
The "distribution port" is part of Erlang's clustering configuration;
while it is documented that the protocol is fundamentally
insecure ([1], [2]) and can result in arbitrary remote code
execution, by default the RabbitMQ configuration on Debian and Ubuntu
leaves it publicly accessible, with weak credentials.
The configuration file that Zulip writes, while effective, is only
written _after_ the package has been installed and the service
started, which leaves the port exposed until RabbitMQ or system
restart.
Ensure that rabbitmq's `/etc/rabbitmq/rabbitmq.config` is written
before rabbitmq is installed or starts, and that changes to that file
trigger a restart of the service, such that the ports are only ever
bound to localhost. This does not mitigate existing installs, since
it does not force a rabbitmq restart.
[1] https://www.erlang.org/doc/apps/erts/erl_dist_protocol.html
[2] https://www.erlang.org/doc/reference_manual/distributed.html#distributed-erlang-system
This is required in order to lock down the RabbitMQ port to only
listen on localhost. If the nodename is `rabbit@hostname`, in most
circumstances the hostname will resolve to an external IP, which the
rabbitmq port will not be bound to.
Installs which used `rabbit@hostname`, due to RabbitMQ having been
installed before Zulip, would not have functioned if the host or
RabbitMQ service was restarted, as the localhost restrictions in the
RabbitMQ configuration would have made rabbitmqctl (and Zulip cron
jobs that call it) unable to find the rabbitmq server.
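The resulting `/etc/rabbitmq/rabbitmq-env.conf` is, as a sketch:
```
# rabbit@hostname would resolve to an external IP; a localhost
# nodename keeps both rabbitmq and rabbitmqctl on 127.0.0.1.
NODENAME=zulip@localhost
```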
The previous commit ensures that configure-rabbitmq is re-run after
the nodename has changed. However, rabbitmq needs to be stopped
before `rabbitmq-env.conf` is changed; we use an `onlyif` on an `exec`
to print the warning about the node change, and let the subsequent
config change, which notifies both the service and
configure-rabbitmq, complete the re-configuration.
`/etc/rabbitmq/rabbitmq-env.conf` sets the nodename; anytime the
nodename changes, the backing database changes, and this requires
re-creating the rabbitmq users and permissions.
Trigger this in puppet by running configure-rabbitmq after the file
changes.
The Erlang `epmd` daemon listens on port 4369, and provides
information (without authentication) about which Erlang processes are
listening on what ports. This information is not itself a
vulnerability, but may provide information for remote attackers about
what local Erlang services (such as `rabbitmq-server`) are running,
and where.
`epmd` supports an `ERL_EPMD_ADDRESS` environment variable to limit
which interfaces it binds on. While this environment variable is set
in `/etc/default/rabbitmq-server`, Zulip unfortunately attempts to
start `epmd` using an explicit `exec` block, which ignores those
settings.
Regardless, the `ERL_EPMD_ADDRESS` variable only controls
`epmd`'s startup upon first installation. Upon reboot, there are two
ways in which `epmd` might be started, neither of which respect
`ERL_EPMD_ADDRESS`:
- On Focal, an `epmd` service exists and is activated, which uses
systemd's configuration to choose which interfaces to bind on, and
thus `ERL_EPMD_ADDRESS` is irrelevant.
- On Bionic (and Focal, due to a broken dependency from
`rabbitmq-server` to `epmd@` instead of `epmd`, which may lead to
the explicit `epmd` service losing a race), `epmd` is started by
`rabbitmq-server` when it does not detect a running instance.
Unfortunately, only `/etc/init.d/rabbitmq-server` would respect
`/etc/default/rabbitmq-server` -- and it defers the actual startup
to using systemd, which does not pass the environment variable
down. Thus, `ERL_EPMD_ADDRESS` is also irrelevant here.
We unfortunately cannot limit `epmd` to only listening on localhost,
due to a number of overlapping bugs and limitations:
- Manually starting `epmd` with `-address 127.0.0.1` silently fails
to start on hosts with IPv6 disabled, due to an Erlang bug ([1],
[2]).
- The dependencies of the systemd `rabbitmq-server` service can be
fixed to include the `epmd` service, and systemd can be made to
bind to `127.0.0.1:4369` and pass that socket to `epmd`, bypassing
the above bug. However, the startup of this service is not
guaranteed, because it races with other sources of `epmd` (see
below).
- Any process that runs `rabbitmqctl` results in `epmd` being started
if one is not currently running; these instances do not respect any
environment variables as to which addresses to bind on. This is
also triggered by `service rabbitmq-server status`, as well as
various Zulip cron jobs which inspect the rabbitmq queues. As
such, it is difficult-to-impossible to ensure that some other
`epmd` process will not win the race and open the port on all
interfaces.
Since the only known exposure from leaving port 4369 open is
information that rabbitmq is running on the host, and the complexity
of adjusting this to only bind on localhost is high, we remove the
setting which does not address the problem, and document that the port
is left open, and should be protected via system-level or
network-level firewalls.
[1]: https://bugs.launchpad.net/ubuntu/+source/erlang/+bug/1374109
[2]: https://github.com/erlang/otp/issues/4820
mochiweb was renamed to web_dispatch in RabbitMQ 3.8.0, and the plugin
is not enabled. Nor does this control the management interface, which
would listen on port 15672.
This addresses the problems mentioned in the previous commit, but for
existing installations which have `authenticator = standalone` in
their configurations.
This reconfigures all hostnames in certbot to use the webroot
authenticator, and attempts to force-renew their certificates.
Force-renewal is necessary because certbot contains no way to merely
update the configuration. Let's Encrypt allows for multiple extra
renewals per week, so this is a reasonable cost.
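Per certificate, the reconfiguration is roughly equivalent to the
following; the hostname and webroot path are illustrative:
```
certbot renew --cert-name zulip.example.com \
    --authenticator webroot --webroot-path /var/lib/zulip/certbot-webroot \
    --force-renewal
```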
Because the certbot configuration is `configobj`, and not
`configparser`, we have no way to easily parse it to determine if webroot
is in use; additionally, `certbot certificates` does not provide this
information. We use `grep`, on the assumption that this will catch
nearly all cases.
It is possible that this will find `authenticator = standalone`
certificates which are managed by Certbot, but not Zulip certificates.
These certificates would also fail to renew while Zulip is running, so
switching them to use the Zulip webroot would still be an improvement.
Fixes #20593.
As a consequence:
• Bump minimum supported Python version to 3.7.
• Move Vagrant environment to Debian 10, which has Python 3.7.
• Move CI frontend tests to Debian 10.
• Move production build test to Debian 10.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Doing so requires protecting /metrics from direct access when proxied
through nginx. If camo is placed on a separate host, the equivalent
/metrics URL may need to be protected.
See https://github.com/cactus/go-camo#metrics for details on the
statistics so reported. Note that 5xx responses are _expected_ from
go-camo's statistics, as it returns a 502 status code when the remote
server responds with 500/502/503/504, or 504 when the remote host
times out.
Because Camo includes logic to deny access to private subnets, routing
its requests through Smokescreen is generally not necessary. However,
it may be necessary if Zulip has configured a non-Smokescreen exit
proxy.
Default Camo to using the proxy only if it is not Smokescreen, with a
new `proxy.enable_for_camo` setting to override this behaviour if need
be. Note that that setting is in `zulip.conf` on the host with Camo
installed -- not the Zulip frontend host, if they are different.
Fixes: #20550.
For `no_serve_uploads` and `http_only`, which previously treated any
non-empty value as enabled, this tightens which values count as true.
For `pgroonga` and `queue_workers_multiprocess`, this broadens the
accepted values from only `enabled` and `true`, respectively.
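A minimal sketch of the tightened parsing; the function name and the
exact set of accepted values are assumptions, not the actual
implementation:
```
from typing import Optional

def get_config_bool(value: Optional[str]) -> bool:
    # Previously, any non-empty string enabled some settings; now only
    # an explicit affirmative value counts as true.
    if value is None:
        return False
    return value.strip().lower() in ("1", "y", "t", "yes", "true", "enabled")
```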
Restarting the uwsgi processes by way of supervisor opens a window
during which nginx 502's all responses. uwsgi has a configuration
called "chain reloading" which allows for rolling restart of the uwsgi
processes, such that only one process at a time is unavailable; see
uwsgi documentation ([1]).
The tradeoff is that this requires that the uwsgi processes load the
libraries after forking, rather than before ("lazy apps"); in theory
this can lead to larger memory footprints, since they are not shared.
In practice, as Django defers much of the loading, this is not as much
of an issue. In a very basic test of memory consumption (measured by
total memory - free - caches - buffers; 6 uwsgi workers), both
immediately after restarting Django, and after requesting `/` 60 times
with 6 concurrent requests:
                  |  Non-lazy  |  Lazy app  |  Difference
------------------+------------+------------+-------------
Fresh             |  2,827,216 |  2,870,480 |     +43,264
After 60 requests |  3,332,284 |  3,409,608 |     +77,324
..................|............|............|.............
Difference        |   +505,068 |   +539,128 |     +34,060
That is, "lazy app" loading increased the footprint pre-requests by
43MB, and after 60 requests grew the memory footprint by 539MB, as
opposed to non-lazy loading, which grew it by 505MB. Using uwsgi "lazy
app" loading does increase the memory footprint, but not by a large
percentage.
The other effect is that requests may be served by either old or new
code during the restart window. This may cause transient failures
when new frontend code talks to old backend code.
Enable chain-reloading during graceful, puppetless restarts, but only
if enabled via a zulip.conf configuration flag.
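In uwsgi terms, the pieces are `lazy-apps` plus the master FIFO,
whose `c` command requests a chain reload; a sketch, with an assumed
FIFO path:
```
[uwsgi]
lazy-apps = true
master-fifo = /var/run/zulip-uwsgi.fifo

# A rolling restart is then requested with:
#   echo c > /var/run/zulip-uwsgi.fifo
```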
Fixes #2559.
[1]: https://uwsgi-docs.readthedocs.io/en/latest/articles/TheArtOfGracefulReloading.html#chain-reloading-lazy-apps
Fix another tidy error caused by 1e4e6a09af23; as also noted in
f9a39b6703, these resources are necessary so that tidy does not
clean up smokescreen, and then force a recompilation of it again.
1e4e6a09af removed the resources for the unpacked directory, on the
argument that they were unnecessary. However, the directory (or file,
see below) that is unpacked must be managed, or it will be tidied on
the next puppet apply.
Add back the resource for `$dir`, but mark it `ensure => present`, to
support tarballs which only unpack to a single file (e.g. wal-g).
The homedir of a user cannot be changed if any processes are running
as them, so having it change over time as upgrades happen will break
puppet application, as the old grafana process under supervisor will
effectively lock changes to the user's homedir.
Unfortunately, that means that this change will thus fail to
puppet-apply unless `supervisorctl stop grafana` is run first, but
there's no way around that.
In the event that extracting doesn't produce the binary we expected it
to, all this will do is create an _empty_ file where we expect the
binary to be. This will likely muddle debugging.
Since the only reason the resource was made in the first place was to
make dependencies clear, switch to depending on the External_Dep
itself, when such a dependency is needed.
ab130ceb35 added a dependency on scripts.lib.zulip_tools; however,
check_postgresql_replication_lag is run on hosts which do not have a
zulip tree installed.
Inline the simple functions that were imported.
It should not use the configured zulip username, but should instead
pull from the login user (likely `nagios`), or an explicit alternate
provided PostgreSQL username. Failure to do so results in Nagios
failures because the `nagios` login does not have permissions to
authenticate as the `zulip` PostgreSQL user.
This requires CI changes, as the install tests install as the `zulip`
login username, which allowed Nagios tests to pass previously; with
the custom database and username, however, they must be passed to
process_fts_updates explicitly when validating the install.
The Redis configuration, and the systemd file for it, assumes there
will be a pid file written to `/var/run/redis/redis.pid`, but
`/var/run/redis` is not created during installation.
Create `/run/redis`; as `/var/run` is a symlink to `/run` on systemd
systems, this is equivalent to `/var/run/redis`.
The systemd config file installed by the `memcached` package assumes
there will be a PID written to `/run/memcached/memcached.pid`. Since we
override `memcached.conf`, we have omitted the line that writes out the
PID to this file.
Systemd is smart enough to not _need_ the PID file to start up the
service correctly, but we make the configuration match. We create the
directory since the package does not do so. It is created as
`/run/memcached` and not `/var/run/memcached` because `/var/run` is a
symlink to `/run`.
The certbot package installs its own systemd timer (and cron job,
which disables itself if systemd is enabled) which updates
certificates. This process races with the cron job which Zulip
installs -- the only difference being that Zulip respects the
`certbot.auto_renew` setting, and that it passes the deploy hook.
This means that occasionally nginx would not be reloaded, when the
systemd timer caught the expiration first.
Remove the custom cron job and `certbot-maybe-renew` script, and
reconfigure certbot to always reload nginx after deploying, using
certbot directory hooks.
Since `certbot.auto_renew` can't have an effect, remove the setting.
In turn, this removes the need for `--no-zulip-conf` to
`setup-certbot`. `--deploy-hook` is similarly removed, as running
deploy hooks to restart nginx is now the default; pass
`--no-directory-hooks` in standalone mode to not attempt to reload
nginx. The other property of `--deploy-hook`, of skipping symlinking
into place, is given its own flag.
PostgreSQL 11 and below used a configuration file named
`recovery.conf` to manage replicas and standbys; support for this was
removed in PostgreSQL 12[1], and the configuration parameters were
moved into the main `postgresql.conf`.
Add `zulip.conf` settings for the primary server hostname and
replication username, so that the complete `postgresql.conf`
configuration on PostgreSQL 14 can continue to be managed, even when
replication is enabled. For consistency, also begin writing out the
`recovery.conf` for PostgreSQL 11 and below.
In PostgreSQL 12 and later, the `wal_level =
hot_standby` setting is removed, as `hot_standby` is equivalent to
`replica`, which is the default value[2]. Similarly, the
`hot_standby = on` setting is also the default[3].
Documentation is added for these features, and the commentary on the
"Export and Import" page referencing files under `puppet/zulip_ops/`
is removed, as those files no longer have any replication-specific
configuration.
[1]: https://www.postgresql.org/docs/current/recovery-config.html
[2]: https://www.postgresql.org/docs/12/runtime-config-wal.html#GUC-WAL-LEVEL
[3]: https://www.postgresql.org/docs/12/runtime-config-replication.html#GUC-HOT-STANDBY
These are both unsupported by PostgreSQL itself, as well as by Zulip;
the removal of Ubuntu Xenial and Debian Stretch support in Zulip 3.0
removed the requirement for PostgreSQL 9.6, and the previous versions
date back yet farther.
Writing the secret to the supervisor configuration file means changes
to the secret require a zulip-puppet-apply to take hold. The Docker
image is constructed to avoid having to run zulip-puppet-apply on
startup, and indeed cannot run zulip-puppet-apply after having
configured secrets, as it has replaced the zulip.conf file with a
symlink, for example. This means that camo gets the static secret
that was built into the image, and not the one regenerated on first
startup.
Read the camo secret at process startup time. Because this pattern is
likely common with "12-factor" applications which can read from
environment variables, write a generic tool to map secrets to
environment variables before exec'ing a binary, and use that for Camo.
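A minimal Python sketch of such a wrapper; the tool name, argument
syntax, and secret names are assumptions, not the actual
implementation:
```
#!/usr/bin/env python3
# Usage sketch: secret-env-wrapper CAMO_KEY=camo_key -- /usr/local/bin/go-camo ...
import configparser
import os
import sys

def main() -> None:
    config = configparser.ConfigParser()
    config.read("/etc/zulip/zulip-secrets.conf")
    args = sys.argv[1:]
    split = args.index("--")
    # Each VAR=secret_name mapping becomes an environment variable,
    # read at process startup rather than baked into a config file.
    for mapping in args[:split]:
        env_var, secret_name = mapping.split("=", 1)
        os.environ[env_var] = config["secrets"][secret_name]
    # Replace this process with the real binary, secrets in hand.
    os.execvp(args[split + 1], args[split + 1 :])

if __name__ == "__main__":
    main()
```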
The default in the previous commit, inherited from camo, was to bind
to 0.0.0.0:9292. In standalone deployments, camo is deployed on the
same host as the nginx reverse proxy, and as such there is no need to
open it up to other IPs.
Make `zulip::camo` take an optional parameter, which allows overriding
it in puppet, but skips a `zulip.conf` setting for it, since it is
unlikely to be adjusted by most users.
The upstream of the `camo` repository[1] has been unmaintained for
several years, and is now archived by the owner. Additionally, it has
a number of limitations:
- It is installed as a sysinit service, which does not run under
Docker
- It does not prevent access to internal IPs, like 127.0.0.1
- It does not respect standard `HTTP_proxy` environment variables,
making it unable to use Smokescreen to prevent the prior flaw
- It occasionally just crashes, and thus must have a cron job to
restart it.
Swap camo out for the drop-in replacement go-camo[2], which has the
same external API, requiring no changes to Django code, but is more
maintained. Additionally, it resolves all of the above complaints.
go-camo is not configured to use Smokescreen as a proxy, because its
own private-IP filtering prevents using a proxy which lies within that
IP space. It is also unclear if the addition of Smokescreen would
provide any additional protection over the existing IP address
restrictions in go-camo.
go-camo has a subset of the security headers that our nginx reverse
proxy sets, and which camo set; provide the missing headers with `-H`
to ensure that go-camo, if exposed from behind some other non-nginx
load-balancer, still provides the necessary security headers.
Fixes #18351 by moving to supervisor.
Fixes zulip/docker-zulip#298, also by moving to supervisor.
[1] https://github.com/atmos/camo
[2] https://github.com/cactus/go-camo
This is an additional security hardening step, to make Zulip default
to preventing SSRF attacks. The overhead of running Smokescreen is
minimal, and there is no reason to force deployments to take
additional steps in order to secure themselves against SSRF attacks.
Deployments which already have a different external proxy configured
will not gain a local Smokescreen installation, and running without
Smokescreen is supported by explicitly unsetting the `host` or `port`
values in `/etc/zulip/zulip.conf`.
In a subsequent commit, we intend to include it from
`zulip::app_frontend_base`, which is a layering violation if it only
exists in the form of a profile.
This will cause the output binary path to be sensitive to the golang
version, causing it to be rebuilt on new golang versions, an updated
supervisor config file to be written out, and thus supervisor to be
restarted as well.
As with the previous commit for `/srv/golang`, we have the custom of
namespacing things under `/srv` with `zulip-` to help ensure that we
play nice with anything else that happens to be on the host.
We have the custom of namespacing things under `/srv` with `zulip-`
to help ensure that we play nice with anything else that happens
to be on the host.
It is possible to be in recovery, and downloading WAL logs from
archives, and not yet be replicating. If one only checks the
streaming log status, it reports as "no replicas" which is technically
accurate but not a useful summation of the state of the replica.
The `cron` resource places its contents in the user's crontab, which
makes it unlike every other cron job that Zulip installs.
Switch to using `/etc/cron.d` files, like all other cron jobs.
TOR users are legitimate users of the system; however, that system can
also be used for abuse -- specifically, by evading IP-based
rate-limiting.
For the purposes of IP-based rate-limiting, add a
RATE_LIMIT_TOR_TOGETHER flag, defaulting to false, which lumps all
requests from TOR exit nodes into the same bucket. This may allow a
TOR user to deny other TOR users access to the find-my-account and
new-realm endpoints, but this is a low cost for cutting off a
significant potential abuse vector.
If enabled, the list of TOR exit nodes is fetched from their public
endpoint once per hour, via a cron job, and cached on disk. Django
processes load this data from disk, and cache it in memcached.
Requests are spared from the burden of checking disk on failure via a
circuitbreaker, which trips if there are two failures in a row, and
only begins trying again after 10 minutes.
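A sketch of the Django-side read, using the `circuitbreaker` package;
the helper name and cache path are illustrative:
```
import json
from typing import Set

from circuitbreaker import circuit

@circuit(failure_threshold=2, recovery_timeout=60 * 10)
def get_tor_ips() -> Set[str]:
    # The hourly cron job writes the exit-node list here; if reading
    # it fails twice in a row, the breaker opens for 10 minutes and
    # requests skip the disk check entirely.
    with open("/var/lib/zulip/tor-exit-nodes.json") as f:
        return set(json.load(f))
```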
Since Supervisor 4, which is installed on Ubuntu 20.04 and Debian 11,
`supervisorctl status` returns exit code 3 if any of the
supervisor-controlled processes are not running.
Using `supervisorctl status` as the Puppet `status` command for
Supervisor leads to unnecessarily trying to "start" a Supervisor
process which is already started, but happens to have one or more of
its managed processes stopped. This is an unnecessary no-op in
production environments, but in docker-init environments, such as in
CI, attempting to start the process a second time is an error.
Switch to checking if supervisor is running by way of sysv init. This
fixes the potential error in CI, as well as eliminates unnecessary
"starts" of supervisor when it was already running -- a situation
which made zulip-puppet-apply not idempotent:
```
root@alexmv-prod:~# supervisorctl status
process-fts-updates STOPPED Nov 10 12:33 AM
smokescreen RUNNING pid 1287280, uptime 0:35:32
zulip-django STOPPED Nov 10 12:33 AM
zulip-tornado STOPPED Nov 10 12:33 AM
[...]
root@alexmv-prod:~# ~zulip/deployments/current/scripts/zulip-puppet-apply --force
Notice: Compiled catalog for alexmv-prod.zulipdev.org in environment production in 2.32 seconds
Notice: /Stage[main]/Zulip::Supervisor/Service[supervisor]/ensure: ensure changed 'stopped' to 'running'
Notice: Applied catalog in 0.91 seconds
root@alexmv-prod:~# ~zulip/deployments/current/scripts/zulip-puppet-apply --force
Notice: Compiled catalog for alexmv-prod.zulipdev.org in environment production in 2.35 seconds
Notice: /Stage[main]/Zulip::Supervisor/Service[supervisor]/ensure: ensure changed 'stopped' to 'running'
Notice: Applied catalog in 0.92 seconds
```
In the series of migrations to this tool's configuration to support
specifying an arbitrary database name
(e.g. c17f502bb0), we broke support for
running process_fts_updates on the application server, connected to a
remote database server. That workflow is used by docker-zulip and
presumably other settings like Amazon RDS.
The fix is to import the Zulip virtualenv (if available) when running
on an application server. This is better than just supporting this
case, since both docker-zulip and an Amazon RDS database are settings
where it would be inconvenient to run process-fts-updates directly on
the database server. (In the former case, because we want to avoid
having a strong version dependency on the postgres container).
Details are available in this conversation:
https://chat.zulip.org/#narrow/stream/49-development-help/topic/Logic.20in.20process_fts_updates.20seems.20to.20be.20broken/near/1251894
Thanks to Erik Tews for reporting and help in debugging this issue.
We previously used `zulip-puppet-apply` with a custom config file,
with an updated PostgreSQL version but more limited set of
`puppet_classes`, to pre-create the basic settings for the new cluster
before running `pg_upgradecluster`.
Unfortunately, the supervisor config uses `purge => true` to remove
all supervisor configuration files that are not included in the puppet
configuration; this leads to it removing all other supervisor
processes during the upgrade, only to add them back and start them
during the second `zulip-puppet-apply`.
It also leads to `process-fts-updates` not being started after the
upgrade completes; this is the one supervisor config file which was
not removed and re-added, and thus the one that is not re-started,
since it is the re-adding which triggers the restart. This was not
detected in CI because CI added
a `start-server` command which was not in the upgrade documentation.
Set a custom facter fact that prevents the `purge` behaviour of the
supervisor configuration. We want to preserve that behaviour in
general, and using `zulip-puppet-apply` continues to be the best way
to pre-set-up the PostgreSQL configuration -- but we wish to avoid
that behaviour when we know we are applying a subset of the puppet
classes.
Since supervisor configs are no longer removed and re-added, this
requires an explicit start-server step in the instructions after the
upgrades complete. This brings the documentation into alignment with
what CI is testing.
These checks suffer from a couple notable problems:
- They are only enabled on staging hosts -- where they should never
be run. Since ef6d0ec5ca, these supervisor processes are only
run on one host, and never on the staging host.
- They run as the `nagios` user, which does not have appropriate
permissions, and thus the checks always fail. Specifically,
`nagios` does not have permissions to run `supervisorctl`, since
the socket is owned by the `zulip` user, and mode 0700; and the
`nagios` user does not have permission to access Zulip secrets to
run `./manage.py print_email_delivery_backlog`.
Rather than rewrite these checks to run on a cron as zulip, and check
those file contents as the nagios user, drop these checks -- they can
be rewritten at a later point, or replaced with Prometheus alerting,
and currently serve only to cause always-failing Nagios checks, which
normalizes alert failures.
Leave the files installed if they currently exist, rather than
cluttering puppet with `ensure => absent`; they do no harm if they are
left installed.
In an initial install, the following is a potential rule ordering:
```
Notice: /Stage[main]/Zulip::Supervisor/File[/etc/supervisor/conf.d/zulip]/ensure: created
Notice: /Stage[main]/Zulip::Supervisor/File[/etc/supervisor/supervisord.conf]/content: content changed '{md5}99dc7e8a1178ede9ae9794aaecbca436' to '{md5}7ef9771d2c476c246a3ebd95fab784cb'
Notice: /Stage[main]/Zulip::Supervisor/Exec[supervisor-restart]: Triggered 'refresh' from 1 event
[...]
Notice: /Stage[main]/Zulip::App_frontend_base/File[/etc/supervisor/conf.d/zulip/zulip.conf]/ensure: defined content as '{md5}d98ac8a974d44efb1d1bb2ef8b9c3dee'
[...]
Notice: /Stage[main]/Zulip::App_frontend_once/File[/etc/supervisor/conf.d/zulip/zulip-once.conf]/ensure: defined content as '{md5}53f56ae4b95413bfd7a117e3113082dc'
[...]
Notice: /Stage[main]/Zulip::Process_fts_updates/File[/etc/supervisor/conf.d/zulip/zulip_db.conf]/ensure: defined content as '{md5}96092d7f27d76f48178a53b51f80b0f0'
Notice: /Stage[main]/Zulip::Supervisor/Service[supervisor]/ensure: ensure changed 'stopped' to 'running'
```
The last line is misleading -- supervisor was already started by the
`supervisor-restart` process on the third line. As can be shown with
`zulip-puppet-apply --debug`, the last line just installs supervisor
to run on startup, using `systemctl`:
```
Debug: Executing: 'supervisorctl status'
Debug: Executing: '/usr/bin/systemctl unmask supervisor'
Debug: Executing: '/usr/bin/systemctl start supervisor'
```
This means the list of processes started by supervisor depends
entirely on which configuration files were successfully written out by
puppet before the initial `supervisor-restart` ran. Since
`zulip_db.conf` is written later than the rest, the initial install
often fails to start the `process-fts-updates` process. In this
state, an explicit `supervisorctl restart` or `supervisorctl reread &&
supervisorctl update` is required for the service to be found and
started.
Reorder the `supervisor-restart` exec to only run after the service is
started. Because all supervisor configuration files have a `notify`
of the service, this forces the ordering of:
```
(package) -> (config files) -> (service) -> (optional restart)
```
On first startup, this will start and then immediately restart
supervisor, which is unfortunate but unavoidable -- and not terribly
relevant, since the database will not have been created yet, and thus
most processes will be in a restart loop for failing to connect to it.
The sysvinit script for supervisor has a long-standing bug where
`/etc/init.d/supervisor restart` stops but does not then start the
supervisor process.
Work around this by making `restart` also attempt a `start`, which
returns without action if the process is already running.
Not having the package installed will cause startup failures in
`process_fts_updates`; ensure that we've installed the package before
we potentially start the service.
93f62b999e removed the last file in
puppet/zulip/files/nagios_plugins/zulip_nagios_server, which means the
singular rule in zulip::nagios no longer applies cleanly.
Remove the `zulip::nagios` class, as it is no longer needed.
An organization with at most 5 users that is behind on payments isn't
worth spending time investigating.
For larger organizations, we likely want somewhat different logic that
at least does not void invoices.
Staging and other hosts that are `zulip::app_frontend_base` but not
`zulip::app_frontend_once` do not have a
/etc/supervisor/conf.d/zulip/zulip-once.conf and as such do not have
`zulip_deliver_scheduled_emails` or `zulip_deliver_scheduled_messages`
and thus supervisor will fail to reload.
Making the contents of `zulip-workers` contingent on if the server is
_also_ a `-once` server is complicated, and would involve using Concat
fragments, which severely limit readability.
Instead, expel those two from `zulip-workers`; this is somewhat
reasonable, since they use an entirely different codepath from
zulip_events_*, using the database rather than RabbitMQ for their
queuing.
This is similar cleanup to 3ab9b31d2f, but only affects zulip_ops
services; it serves to ensure that any of these services which are no
longer enabled are automatically removed from supervisor.
Note that this will cause a supervisor restart on all affected hosts,
which will restart all supervisor services.
Failure to do this results in:
```
psql: error: failed to connect to `host=localhost user=zulip database=zulip`: failed to write startup message (x509: certificate is valid for [redacted], not localhost)
```
Host-based md5 auth for 127.0.0.1 must be removed from `pg_hba.conf`,
otherwise password authentication is preferred over certificate-based
authentication for localhost.
Nagios refuses to allow any modifications with use_authentication off;
re-enable "authentication", but set a default user, which (by way of
the `*` permissions in 359f37389a) is allowed to take all actions.
This requires switching to a reverse tunnel for the auth connection,
with the side effect that the `zulip_ops::teleport::node` manifest can
be applied on servers anywhere in the Internet; they do not need to
have any publicly-available open ports.
This means that services will only open their ports if they are
actually run, without having to clutter rules.v4 with a lot of `if`
statements.
This does not go as far as using `puppetlabs/firewall`[1] because that
would represent an additional DSL to learn; raw IPtables sections can
easily be inserted into the generated iptables file via
`concat::fragment` (either inline, or as a separate file), but config
can be centralized next to the appropriate service.
[1] https://forge.puppet.com/modules/puppetlabs/firewall
Using puppet modules from the puppet forge judiciously will allow us
to simplify the configuration somewhat; this specifically pulls in the
stdlib module, which we were already using parts of.
This moves the `.asc` files into subdirectories, and writes out the
according `.list` files into them. It moves from templates to
written-out `.list` files for clarity and ease of
implementation (Debian and Ubuntu need different templates for
`zulip`), and as a way of making explicit which releases are supported
for each list. For the special-case of the PGroonga signing key, we
source an additional file within the directory.
This simplifies the process for adding another class of `.list` file.
Rather than duplicate logic from `computed_settings`, use the values
that were computed therein.
Co-authored-by: Adam Birds <adam.birds@adbwebdesigns.co.uk>
Using the second branch _only_ for case (3), of a PostgreSQL server on
a different host, leaves it untested in CI. It also brings in an
unnecessary Django dependency.
Co-authored-by: Adam Birds <adam.birds@adbwebdesigns.co.uk>
We only need to read the `zulip.conf` file to determine if we're
using PGroonga when we are on the PostgreSQL machine, with no access
to Django.
Co-authored-by: Adam Birds <adam.birds@adbwebdesigns.co.uk>
The only way in which "host" could be set is in cases (1) or (2), when
it was potentially read from Django's settings. In case (3), we
already know we are on the same host as the PostgreSQL server.
This unifies the two separated checks, which are actually the same
check.
Co-authored-by: Adam Birds <adam.birds@adbwebdesigns.co.uk>
`deliver_scheduled_emails` and `deliver_scheduled_messages` use the
`ScheduledEmail` and `ScheduledMessage` tables as a queue,
effectively, pulling values off of them. As noted in their comments,
this is not safe to run on multiple hosts at once. As such, split out
the supervisor files for them.
These thresholds are in relationship to the
`autovacuum_freeze_max_age`, *not* the XID wraparound, which happens
at 2^31-1. As such, it is *perfectly normal* that they hit 100%, and
then autovacuum kicks in and brings it back down. The unusual
condition is that PostgreSQL pushes past the point where an autovacuum
would be triggered -- therein lies the XID wraparound danger.
With the `autovacuum_freeze_max_age` set to 2000000000 in
`postgresql.conf`, XID wraparound happens at 107.3%. Set the warning
and error thresholds to below this, but above 100% so this does not
trigger constantly.
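With `check_postgres`'s `autovac_freeze` action, that is something
like the following; the exact percentages are a judgment call:
```
check_postgres --action=autovac_freeze --warning=103% --critical=105%
```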
This makes it parallel with deliver_scheduled_messages, and clarifies
that it is not used for simply sending outgoing emails (e.g. the
`email_senders` queue).
This also renames the supervisor job to match.
Matching the full process name (-x without -f) or full command
line (-xf) is less prone to mistakes like matching a random substring
of some other command line or pgrep matching itself.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Thumbor and tc-aws have been dragging their feet on Python 3 support
for years, and even the alphas and unofficial forks we’ve been running
don’t seem to be maintained anymore. Depending on these projects is
no longer viable for us.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The `en_US.UTF-8` locale may not be configured or generated on all
installs; it also requires that the `locales` package be installed.
If users generate the `en_US.UTF-8` locale without adding it to the
permanent set of system locales, the generated `en_US.UTF-8` stops
working when the `locales` package is updated.
Switch to using `C.UTF-8` in all cases, which is guaranteed to be
installed.
Fixes #15819.
In puppet, we use pgrep in the collection stage, to see if rabbitmq is
running. Sufficiently bare-bones systems will not have
`procps` (which provides `pgrep`) installed yet, which makes the
install abort when running `puppet` for the first time.
Just installing the `procps` package in Puppet is insufficient,
because the check in the `unless` block runs when Puppet is
determining which resources it needs to instantiate, and in what
order; any package installation has yet to happen. As
`erlang-base` (which provides `epmd`) happens to have a dependency of
`procps`, any system without `pgrep` will also not have `epmd`
installed or running. Regardless, it is safe to run `epmd -daemon`
even if one is already running, as the comment above notes.
Using `pgrep -f epmd` to determine if `epmd` is running is a race
condition with itself, since the pgrep is attempting to match the
"full process name" and its own full process name contains "epmd".
This leads to epmd not being started when it should be, which in turn
leads to rabbitmq-server failing to start.
Use the standard trick for this, namely a one-character character
class, to prevent self-matching.
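That is, the check becomes something like:
```
# "[e]pmd" still matches the epmd command line, but pgrep's own
# command line contains "[e]pmd", not "epmd", so it cannot match itself.
pgrep -f '[e]pmd'
```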
We use the snakeoil TLS certificate for PostgreSQL and Postfix; some
VMs install the `ssl-cert` package but (reasonably) don't build the
snakeoil certs into the image.
Build them as needed.
Fixes #14955.
`uploads-route.noserve` and `uploads-route.internal` contained
identical location blocks for `/upload`, since differentiation was
necessary for Trusty until 33c941407b72; move the now-common sections
into `app`.
This leaves the only difference between internal and S3 serving as a
single block which should be included or not based on config; move it
to a
file which may or may not be placed in `app.d/`.
07779ea879 added an additional `proxy_set_header` of `X-Real-IP` to
`puppet/zulip/files/nginx/zulip-include-common/proxy`; as noted in
that commit, Tornado longpoll proxies already included such a line.
Unfortunately, this equates to setting that header _twice_ for Tornado
ports, like so:
```
X-Real-Ip: 198.199.116.58
X-Real-Ip: 198.199.116.58
```
...which is represented, once parsed by Django, as an IP of
`198.199.116.58, 198.199.116.58`. For IPv4, this odd "IP address" has
no problems, and appears in the access logs accordingly; for IPv6
addresses, however, its length is such that it overflows a call to
`getaddrinfo` when attempting to determine the validity of the IP.
Remove the now-duplicated inclusion of the header.
The `X-Forwarded-For` header is a list of proxies' IP addresses; each
proxy appends the remote address of the host it received its request
from to the list, as it passes the request down. A naïve parsing, as
SetRemoteAddrFromForwardedFor did, would thus interpret the first
address in the list as the client's IP.
However, clients can pass in arbitrary `X-Forwarded-For` headers,
which would allow them to spoof their IP address. `nginx`'s behavior
is to treat the addresses as untrusted unless they match an allowlist
of known proxies. By setting `real_ip_recursive on`, it also allows
this behavior to be applied repeatedly, moving from right to left down
the `X-Forwarded-For` list, stopping at the right-most that is
untrusted.
Rather than re-implement this logic in Django, pass the first
untrusted value that `nginx` computes down into Django, via the
`X-Real-Ip` header. This allows consistent IP addresses in logs
between `nginx` and Django.
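On the nginx side, this is the realip module plus the single header
pass-through; a sketch, with an assumed proxy allowlist:
```
# Trust X-Forwarded-For only from known proxies (address illustrative).
set_real_ip_from 10.0.0.0/8;
real_ip_header X-Forwarded-For;
real_ip_recursive on;

# $remote_addr is now the right-most untrusted address in the list;
# pass only that single value down to Django.
proxy_set_header X-Real-IP $remote_addr;
```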
Proxied calls into Tornado (which don't use UWSGI) already passed this
header, as Tornado logging respects it.
This verifies that the proxy is working by accessing a
highly-available website through it. Since failure of this equates to
failures of Sentry notifications and Android mobile push
notifications, this is a paging service.
All of `/var/log/nginx/` is chown'd to `zulip` and the nginx processes
themselves run as `nginx`, and would thus (on their own) create new
logfiles as `zulip`. Having `logrotate` create them as the package
default of `www-data` means that they are momentarily unreadable by
the `zulip` user just after rotation, which can cause problems with
logtail scripts.
Commit the standard `nginx` logrotate configuration, but with the
`zulip` user instead of the `www-data` user.
0663b23d54 changed zulip-puppet-apply to
use the venv, because it began using `yaml` to parse the output of
puppet to determine if changes would happen.
However, not every install ends up with a venv; notably, non-frontend
servers do not have one. Attempting to run zulip-puppet-apply on them
hence now fails.
Remove this dependency on the venv, by installing a system
python3-yaml package -- though in reality, this package is already an
indirect dependency of the system. Especially since pyyaml is quite
stable, we're not using it in any interesting way, and it does not
actually add to the dependencies, using it is preferable to parsing
the YAML by hand in this instance.
This reverts commit 211232978f. The
`rabbitmq` user does not exist yet on first install, and the goal is
to create the `rabbitmq-env.conf` file before the package is
installed.
In production, the `wildcard-zulipchat.com.combined-chain.crt` file is
just a symlink to the snakeoil certificates; but we do not puppet that
symlink, which makes new hosts fail to start cleanly. Instead, point
explicitly to the snakeoil certificate, and explain why.
Directives in `location` blocks may or may not inherit from
surrounding `location` blocks; specifically, `add_header` directives
do not[1]:
> There could be several add_header directives. These directives are
> inherited from the previous configuration level if and only if there
> are no add_header directives defined on the current level.
In order to maintain the same headers (including, critically,
`Access-Control-Allow-Origin`) as the surrounding block, all
`add_header` directives must thus be repeated (which includes the
`include`).
For clarity, un-nest and repeat the entire `location` block as was
used for `/static/`, but with the additional `add_header`. This is
preferred to the use of an `if $request_uri` statement to add the
header, as those can have unexpected or undefined results[2].
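Schematically, the result is two sibling blocks rather than a nested
one (paths and headers simplified):
```
location /static/ {
    include /etc/nginx/zulip-include/headers;
    add_header Access-Control-Allow-Origin *;
}

# Repeated, not nested: defining any add_header in a nested block
# would silently drop every header inherited from /static/.
location /static/webpack-bundles/ {
    include /etc/nginx/zulip-include/headers;
    add_header Access-Control-Allow-Origin *;
    add_header Cache-Control "public, immutable";
}
```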
[1] http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header
[2] https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/
Redis is not nagios, and this only leads to confusion as to why there
is a nagios domain setting on frontend servers; it also leaves the
`redis0` part of the name buried in the template.
Switch to an explicit variable for the redis hostname.
This is more broadly useful than for just Kandra; provide
documentation and means to install Smokescreen for stand-alone
servers, and motivate its use somewhat more.
This means that in steady-state, `zulip-puppet-apply` is expected to
produce no changes or commands to execute. The verification step of
`setup-apt-repo` is quite fast, so this cleans up the output for very
little cost.
These optimizations only make sense when all connections at a TCP
level are coming from the same host or set of hosts; as such, they
are only enabled if `loadbalancer.ips` is set in the `zulip.conf`.
This is required for unattended upgrades to actually run regularly.
In some distributions, it may be found in 20auto-upgrades, but placing
it here makes it more discoverable.
We haven't actively used this plugin in years, and so it was never
converted from the 2014-era monitoring to detect the hostname.
This seems worth fixing since we may want to migrate this logic to a
more modern monitoring system, and it's helpful to have it correct.
79931051bd allows outgoing emails from
localhost, but outgoing recipients are still subjected to virtualmaps.
This caused all outgoing email from Zulip with destination addresses
containing `.`, `+`, or starting with `mm`, to be redirected back
through the email gateway.
Bracket the virtualmap addresses used for local delivery to the mail
gateway with a restriction on the domain matching the
`postfix.mailname` configuration, regex-escaped, so those only apply
to email destined for that domain.
The hostname is _not_ moved from `mydestination` to
`virtual_alias_domains`, as that would preclude delivery to
actually-local addresses, like `postmaster@`.
We run this tool at DEBUG log level in production, so we will still
see the notice on startup there; this avoids a spammy line in the
development environment output.
`wal-g wal-push` has a known bug with occasionally hanging after file
upload to S3[1]; set a rather long timeout on the upload process, so
that we don't simply stall forever when archiving WAL segments.
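The resulting `archive_command` mirrors the one visible in the log
excerpt earlier in this history:
```
# Stall at most 10 minutes per WAL segment, instead of hanging forever.
archive_command = '/usr/bin/timeout 10m /usr/local/bin/env-wal-g wal-push %p'
```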
[1] https://github.com/wal-g/wal-g/issues/656
Logging `Host` is useful for determining access patterns to realms,
especially if ROOT_DOMAIN_LANDING_PAGE is set. Total response time is
useful in debugging access and performance patterns.
These are respected by `urllib`, and thus also `requests`. We set
`HTTP_proxy`, not `HTTP_PROXY`, because the latter is ignored in
situations which might be running under CGI -- in such cases it may be
coming from the `Proxy:` header in the request.
This provides a single reference point for all zulip.conf settings;
these mostly link out to the more complete documentation about each
setting, elsewhere.
Fixes #12490.