572443edc6 removed the callsite that triggered the exec in
`zulip::systemd_daemon_reload`, making its inclusion and ordering via
`require` moot.
Remove the call.
This should give some more room for systems that are still below 4GB
of RAM to use the lower-memory multithreaded mode, which is less
likely to have OOM kills (a very bad experience).
There should be little cost, as few systems are likely allocated with
memory in this range.
The only relevant changes are `PasswordAuthentication no` (which
is now the default) and `MaxStartups 40:50:60` (which is now
unnecessary due to autossh tunnels).
This was changed midway through the implementation from reading it
out of `zulip-secrets.conf`, and a couple of locations still reference
the secrets path.
Under heavy request load, it is possible for the conntrack kernel
table to fill up (by default, 256k connections). This leads to DNS
requests failing because they cannot make a new conntrack entry.
Allow all port-53 UDP traffic in and out without connection tracking.
This means that unbound port-53 traffic is no longer filtered out by
the on-host firewall -- but it is already filtered out at the border
firewall, so this does not change the external network posture.
`systemd-resolve` also only binds to 127.0.0.53 on the loopback
interface, so there is no server to attack on inbound port 53.
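In iptables terms, the change amounts to raw-table `NOTRACK` rules along
these lines (illustrative only; the actual rules are managed via Puppet):
```
# Exempt DNS traffic from conntrack in both directions.  Untracked
# packets no longer match ESTABLISHED/RELATED rules, so matching ACCEPT
# rules are also needed in the filter table.
iptables -t raw -A PREROUTING -p udp --dport 53 -j NOTRACK
iptables -t raw -A PREROUTING -p udp --sport 53 -j NOTRACK
iptables -t raw -A OUTPUT -p udp --dport 53 -j NOTRACK
iptables -t raw -A OUTPUT -p udp --sport 53 -j NOTRACK
```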
fcf096c52e removed the callsite which would have notified this
contact. Note that the source config file was presumably installed via the
python-zulip-api package.
149bea8309 added a separate config file
for smokescreen (which is necessary because it can be installed
separately) but failed to notice that `zulip.template.erb` already had
a config line for it. This leads to failures starting the logrotate
service:
```
logrotate[4158688]: error: zulip:1 duplicate log entry for /var/log/zulip/smokescreen.log
logrotate[4158688]: error: found error in file zulip, skipping
```
Remove the duplicate line.
While the Tornado server supports POST requests, those are only used
by internal endpoints. We only support OPTIONS, GET, and DELETE
methods from clients, so filter everything else out at the nginx
level.
We set the `Allow` header on both `OPTIONS` requests and 405 responses,
and the CORS headers on `OPTIONS` requests.
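A sketch of the method filter (the location and upstream names here are
illustrative, not the actual Zulip config):
```
location /json/events {
    if ($request_method !~ ^(OPTIONS|GET|DELETE)$) {
        add_header Allow "OPTIONS, GET, DELETE" always;
        return 405;
    }
    proxy_pass http://tornado;
}
```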
Limiting only by client_name and query leads to a very poorly-indexed
lookup on `query` which throws out nearly all of its rows:
```
Nested Loop (cost=50885.64..60522.96 rows=821 width=8)
-> Index Scan using zerver_client_name_key on zerver_client (cost=0.28..2.49 rows=1 width=4)
Index Cond: ((name)::text = 'zephyr_mirror'::text)
-> Bitmap Heap Scan on zerver_useractivity (cost=50885.37..60429.95 rows=9052 width=12)
Recheck Cond: ((client_id = zerver_client.id) AND ((query)::text = ANY ('{get_events,/api/v1/events}'::text[])))
-> BitmapAnd (cost=50885.37..50885.37 rows=9052 width=0)
-> Bitmap Index Scan on zerver_useractivity_2bfe9d72 (cost=0.00..16631.82 rows=..large.. width=0)
Index Cond: (client_id = zerver_client.id)
-> Bitmap Index Scan on zerver_useractivity_1b1cc7f0 (cost=0.00..34103.95 rows=..large.. width=0)
Index Cond: ((query)::text = ANY ('{get_events,/api/v1/events}'::text[]))
```
A partial index on the client and query list is extremely effective
here in reducing PostgreSQL's workload; however, we cannot easily
write it as a migration, since it depends on the value of the ID of
the `zephyr_mirror` client.
Since this is only relevant for Zulip Cloud, we manually create the
index:
```sql
CREATE INDEX CONCURRENTLY zerver_useractivity_zehpyr_liveness
ON zerver_useractivity(last_visit)
WHERE client_id = 1005
AND query IN ('get_events', '/api/v1/events');
```
We rewrite the query to do the time limit, distinct, and count in SQL,
instead of Python, and make use of this index. This turns a 20-second
query into two 10ms queries.
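The reworked lookup is, in spirit, something like the following (the
exact interval and columns are assumptions, not the actual query):
```sql
SELECT COUNT(DISTINCT user_profile_id)
FROM zerver_useractivity
WHERE client_id = 1005
  AND query IN ('get_events', '/api/v1/events')
  AND last_visit >= now() - interval '5 minutes';
```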
The Ubuntu and Debian package installation scripts for
`rabbitmq-server` install `/etc/rabbitmq` (and its contents) owned by
the `rabbitmq` user -- not `root` as Puppet does. This means that
Puppet and `rabbitmq-server` unnecessarily fight over the ownership.
Create the `rabbitmq` user and group, to the same specifications that
the Debian package install scripts do, so that we can properly declare
the ownership of `/etc/rabbitmq`.
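Paraphrasing the Debian maintainer scripts, the user creation is shaped
roughly like this (flags are approximate, not verbatim):
```
addgroup --system rabbitmq
adduser --system --ingroup rabbitmq --home /var/lib/rabbitmq \
  --no-create-home --gecos "RabbitMQ messaging server" \
  --disabled-password rabbitmq
```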
Without this, the browser refused to play the video. To reproduce,
press `open` on an uploaded video on CZO. Chrome gives us the following
error in the console:
```
Refused to load media from '<source>' because it violates the
following Content Security Policy directive: "default-src 'none'".
Note that 'media-src' was not explicitly set, so 'default-src' is
used as a fallback.
```
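The fix is to set an explicit `media-src`, so the `default-src 'none'`
fallback no longer applies to media; the resulting policy is shaped
like this (illustrative, not the exact header Zulip sends):
```
Content-Security-Policy: default-src 'none'; media-src 'self';
```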
The `unless` step errors out if /usr/bin/psql does not exist at
first evaluation time -- protect that with a `test -f` check, and
protect the actual `createuser` with a dependency on `postgresql-client`.
To work around `Zulip::Safepackage` not actually being safe to
instantiate more than once, we move the instantiation of
`Package[postgresql-client]` into a class which can be safely
included one or more times.
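A minimal sketch of the resulting shape (resource names and the check
command are hypothetical, not the actual manifest):
```puppet
exec { 'create-zulip-db-user':
  command  => 'createuser zulip',
  path     => ['/usr/bin', '/bin'],
  user     => 'postgres',
  provider => shell,
  # `test -f` keeps the guard from failing hard when psql is absent:
  unless   => 'test -f /usr/bin/psql && psql -tAc "SELECT 1 FROM pg_roles WHERE rolname = \'zulip\'" | grep -q 1',
  require  => Package['postgresql-client'],
}
```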
This endpoint verifies that the services that Zulip needs to function
are running, and that Django can talk to them. It is designed to be used
as a readiness probe[^1] for Zulip, either by Kubernetes or some other
reverse-proxy load-balancer in front of Zulip. Because of this, it
limits access to only localhost and the IP addresses of configured
reverse proxies.
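For illustration, a load-balancer or human can probe it with a plain
request from localhost (assuming the endpoint is served at `/health`):
```console
$ curl -sS http://127.0.0.1/health
```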
Tests are limited because we cannot stop running services (which would
impact other concurrent tests), and there would be extremely limited
utility in mocking the very specific methods we're calling to raise
the exceptions that we're looking for.
[^1]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
The rolling restart configuration of uwsgi attempted to re-chdir the
CWD to the new `/home/zulip/deployments/current` before `lazy-apps`
loaded the application in the forked child. It successfully did so --
however, the "main" process was still running in the original
`/home/zulip/deployments/current`, which somehow (?) tainted the
search path of the child processes.
Set the parent uwsgi process to start in `/`, so that the old deploy
directory cannot taint the load order of later child processes.
This is common in cases where the reverse proxy itself is making
health-check requests to the Zulip server; these requests have no
X-Forwarded-* headers, so would normally hit the error case of
"request through the proxy, but no X-Forwarded-Proto header".
Add an additional special-case for when the request's originating IP
address is resolved to be the reverse proxy itself; in these cases,
HTTP requests with no X-Forwarded-Proto are acceptable.
All `X-amz-*` headers must be included in the signed request to S3;
since Django did not take those headers into account (it constructed a
request from scratch, while nginx's request inherits them from the
end-user's request), the proxied request fails to be signed correctly.
Strip off the `X-amz-cf-id` header added by CloudFront. While we
would ideally strip off all `X-amz-*` headers, this requires a
third-party module[^1].
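nginx can, however, clear one known header without extra modules, by
blanking it before the proxied request is built:
```
proxy_set_header X-amz-cf-id "";
```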
[^1]: https://github.com/openresty/headers-more-nginx-module#more_clear_input_headers
Restore the default django.utils.log.AdminEmailHandler when
ERROR_REPORTING is enabled. Those with more sophisticated needs can
turn it off and use Sentry or a Sentry-compatible system.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The `time` field is based on the file metadata in S3, which means that
touching the file contents in S3 can move backups around in the list.
Switch to using `start_time` as the sort key, which is based on the
contents of the JSON file stored as part of the backup, so is not
affected by changes in S3 metadata.
supervisord's log rotation is only "every x bytes", which is not a good
enough policy for tracking audit logs. The default is also 10 logs
of 50MB, which is very much not enough for active instances.
Switch to tracking 14 days of daily logs.
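If done via logrotate, that policy is roughly (the path is hypothetical):
```
/var/log/zulip/audit.log {
    daily
    rotate 14
    missingok
    compress
}
```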
Combine nginx and Django middleware to stop putting misleading warnings
about `CSRF_TRUSTED_ORIGINS` when the issue is untrusted proxies.
This attempts to, in the error logs, diagnose and suggest next steps
to fix common proxy misconfigurations.
See also #24599 and zulip/docker-zulip#403.
Updating the pgroonga package is not sufficient to upgrade the
extension in PostgreSQL -- an `ALTER EXTENSION pgroonga UPDATE` must
explicitly be run[^1]. Failure to do so can lead to unexpected behavior,
including crashes of PostgreSQL.
Expand on the existing `pgroonga_setup.sql.applied` file, to track
which version of the PostgreSQL extension has been configured. If the
file exists but is empty, we run `ALTER EXTENSION pgroonga UPDATE`
regardless -- if it is a no-op, it still succeeds with a `NOTICE`:
```
zulip=# ALTER EXTENSION pgroonga UPDATE;
NOTICE: version "3.0.8" of extension "pgroonga" is already installed
ALTER EXTENSION
```
The simple `ALTER EXTENSION` is sufficient for the
backwards-compatible case[^1] -- which, for our usage, is every
upgrade since 0.9 -> 1.0. Since version 1.0 was released in 2015,
before pgroonga support was added to Zulip in 2016, we can assume for
the moment that all pgroonga upgrades are backwards-compatible, and
not bother regenerating indexes.
Fixes: #25989.
[^1]: https://pgroonga.github.io/upgrade/
This was only necessary for PGroonga 1.x, and the `pgroonga` schema
will most likely be removed at some point in the future, which will
make this statement error out.
Drop the unnecessary statement.
The syntax in `/etc/resolv.conf` does not include any brackets:
```
nameserver 2001:db8::a3
```
However, the format of the nginx `resolver` directive[^1] requires that
IPv6 addresses be enclosed in brackets.
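For the nameserver above, the directive must thus be written as:
```
resolver [2001:db8::a3];
```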
Adjust the `resolver_ip` puppet function to surround any IPv6
addresses extracted from `/etc/resolv.conf` with square brackets, and
any addresses from `application_server.resolver` to gain brackets if
necessary.
Fixes: #26013.
[^1]: http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver
04cf68b45e made nginx responsible for downloading (and caching)
files from S3. As noted in that commit, nginx implements its own
non-blocking DNS resolver, since the base syscall is blocking, so
requires an explicit nameserver configuration. That commit used
127.0.0.53, which is provided by systemd-resolved, as the resolver.
However, that service may not always be enabled and running, and may
in fact not even be installed (e.g. on Docker). Switch to parsing
`/etc/resolv.conf` and using the first-provided nameserver. In many
deployments, this will still be `127.0.0.53`, but for others it will
provide a working DNS server which is external to the host.
In the event that a server is misconfigured and has no resolvers in
`/etc/resolv.conf`, it will error out:
```console
Error: Evaluation Error: Error while evaluating a Function Call, No nameservers found in /etc/resolv.conf! Configure one by setting application_server.nameserver in /etc/zulip/zulip.conf (file: /home/zulip/deployments/current/puppet/zulip/manifests/app_frontend_base.pp, line: 76, column: 70) on node example.zulipdev.org
```
Django has a `SECURE_PROXY_SSL_HEADER` setting[^1] which controls if
it examines a header, usually provided by upstream proxies, to allow
it to treat requests as "secure" even if the proximal HTTP connection
was not encrypted. This header is usually the `X-Forwarded-Proto`
header, and the Django configuration has large warnings about ensuring
that this setting is not enabled unless `X-Forwarded-Proto` is
explicitly controlled by the proxy, and cannot be supplied by the
end-user.
In the absence of this setting, Django checks the `wsgi.url_scheme`
property of the WSGI environment[^2].
Zulip did not control the value of the `X-Forwarded-Proto` header,
because it did not set the `SECURE_PROXY_SSL_HEADER` setting (though
see below). However, uwsgi has undocumented code which silently
overrides the `wsgi.url_scheme` property based on the
`HTTP_X_FORWARDED_PROTO` property[^3] (and hence the
`X-Forwarded-Proto` header), thus doing the same as enabling the
Django `SECURE_PROXY_SSL_HEADER` setting, but in a way that cannot be
disabled. It also sets `wsgi.url_scheme` to `https` if the
`X-Forwarded-SSL` header is set to `on` or `1`[^4], providing an
alternate route to deceive Django.
These combine to make Zulip always trust `X-Forwarded-Proto` or
`X-Forwarded-SSL` headers from external sources, and thus able to
trick Django into thinking a request is "secure" when it is not.
However, Zulip is not accessible via unencrypted channels, since it
redirects all `http` requests to `https` at the nginx level; this
mitigates the vulnerability.
Regardless, we harden Zulip against this vulnerability provided by the
undocumented uwsgi feature, by stripping off `X-Forwarded-SSL` headers
before they reach uwsgi, and setting `X-Forwarded-Proto` only if the
request was received directly from a trusted proxy.
Tornado, because it does not use uwsgi, is an entirely separate
codepath. It uses the `proxy_set_header` values from
`puppet/zulip/files/nginx/zulip-include-common/proxy`, which set
`X-Forwarded-Proto` to the scheme that nginx received the request
over. As such, `SECURE_PROXY_SSL_HEADER` was set in Tornado, and only
Tornado; since the header was always set in nginx, this was safe.
However, it was also _incorrect_ in cases where nginx did not do SSL
termination, but an upstream proxy did -- it would mark those requests
as insecure when they were actually secure. We adjust the
`proxy_set_header X-Forwarded-Proto` used to talk to Tornado to
respect the proxy if it is trusted, or the local scheme if not.
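Illustratively, the nginx side of that policy can be expressed with a
`map` (the proxy address and variable name here are placeholders, not
Zulip's actual configuration):
```
# In the http block: trust X-Forwarded-Proto only from the known proxy.
map $remote_addr $proxied_proto {
    default    $scheme;
    10.0.0.1   $http_x_forwarded_proto;
}
# In the proxied locations: never forward the client's X-Forwarded-SSL.
proxy_set_header X-Forwarded-SSL   "";
proxy_set_header X-Forwarded-Proto $proxied_proto;
```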
[^1]: https://docs.djangoproject.com/en/4.2/ref/settings/#secure-proxy-ssl-header
[^2]: https://wsgi.readthedocs.io/en/latest/definitions.html#envvar-wsgi.url_scheme
[^3]: 73efb013e9/core/protocol.c (L558-L561)
[^4]: 73efb013e9/core/protocol.c (L531-L534)
1c76036c61 raised the number of `minfds` in Supervisor from 40k to
1M. If Supervisor cannot guarantee that number of available file
descriptors, it will fail to start; `/etc/security/limits.conf` was
hence adjusted upwards as well. However, on some virtualized
environments, including Proxmox LXC, setting
`/etc/security/limits.conf` may not be enough to raise the
system-level limits. This causes `supervisord` with the larger
`minfds` to fail to start.
The limit of 1000000 was chosen to be arbitrarily high, assuming it
came without cost; it is not expected to ever be reached on any
deployment. 262b19346e already lowered one aspect of that
changeset, upon determining it did come with a cost. Potentially
breaking virtualized deployments during upgrade is another cost of
that change.
Lower `minfds` back down to 40k, partially reverting
1c76036c61, but allow adjusting it upwards for extremely large
deployments. We do not expect any except the largest deployments to
ever hit the 40k limit, and a frictionless deployment for the
vanishingly small number of huge deployments is not worth the
potential upgrade hiccups for the much more frequent smaller
deployments.
When upgraded, the `erlang-base` package automatically stops all
services which depend on the Erlang runtime; for Zulip, this is the
`rabbitmq-server` service. This results in an unexpected outage of
Zulip.
Block unattended upgrades of the `erlang-base` package.
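With apt's `unattended-upgrades`, this amounts to a blacklist entry,
along the lines of:
```
Unattended-Upgrade::Package-Blacklist {
    "erlang-base";
};
```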
a522ad1d9a mistakenly deleted this variable assignment, which made
the `zulip.conf` configuration setting not work -- uwsgi's `lazy-apps`
mode was not enabled, which is required for rolling restarts.
Instead of copying over a mostly-unchanged `postgresql.conf`, we
transition to deploying a `conf.d/zulip.conf` which contains the
only material changes we made to the file, which were previously
appended to the end.
While shipping separate `postgresql.conf` files for each supported
version is useful if there is wide variation in supported options
between versions, there is no such variation at present, and the burden
of overriding the entire default configuration is that it must be kept
up to date with the package's version.
Otherwise, this output goes into `/var/spool/mail/postgres`, which is
not terribly helpful. We do not write to `/var/log/zulip` because the
backup runs as the `postgres` user, and `/var/log/zulip` is owned by
zulip and chmod 750.
Since backups may now be taken on arbitrary hosts, we need a blackbox
monitor that _some_ backup was produced.
Add a Prometheus exporter which calls `wal-g backup-list` and reports
statistics about the backups.
This could be extended to include `wal-g wal-verify`, but that
requires a connection to the PostgreSQL server.
Taking backups on the database primary adds additional disk load,
which can impact the performance of the application.
Switch to taking backups on replicas, if they exist. Some deployments
may have multiple replicas, and taking backups on all of them is
wasteful and potentially confusing; add a flag to inhibit taking
nightly snapshots on the host.
If the deployment is a single instance of PostgreSQL, with no
replicas, it takes backups as before, modulo the extra flag to allow
skipping taking them.
7c023042cf moved the logrotate configuration to being a templated
file, from a static file, but missed that the static file was still
referenced from `zulip_ops::app_frontend`; it only updated
`zulip::profile::app_frontend`. This caused errors in applying puppet
on any `zulip_ops::app_frontend` host.
Prior to 7c023042cf, the Puppet role was identical between those two
classes; deduplicate the rule by moving the updated template
definition into `zulip::app_frontend_base` which is common to those
two classes and not used in any other classes.
Since logrotate runs in a daily cron, this practically means "daily,
but only if it's larger than 500M." For large installs with large
traffic, this is effectively daily for 10 days; for small installs, it
is an unknown amount of time.
Switch to daily logfiles, defaulting to 14 days to match nginx; this
can be overridden using a zulip.conf setting. This makes it easier to
ensure that access logs are only kept for a bounded period of time.
Following zulip/python-zulip-api/pull/758/, we're no longer using
python-zephyr, and don't need to build it from source. Additionally,
we no longer need to build a forked Zephyr package, since ZLoadSession
and ZDumpSession were merged in
e6a545e759.
To not change the `supervisor.conf` file, which requires a restart of
supervisor (and thus all services running under it, which is extremely
disruptive), we carefully leave the contents unchanged for most
installs, and append a new piece to the file, only for the zmirror
configuration, using `concat`.
We see connection timeouts and other access issues when the jobs run
exactly on the hour, either due to load on their servers from similar
cron jobs, or from operational processes of theirs.
Move the jobs to :17 past the hour to avoid these access issues.
Increasing worker_connections has a memory cost, unlike the rest of
the changes in 1c76036c61d8; setting it to 1 million caused nginx to
consume several GB of memory.
Reduce the default down to 10k, and allow deploys to configure it up
if necessary. `worker_rlimit_nofile` is left at 1M, since it has no
impact on memory consumption.
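The resulting defaults are thus roughly:
```
worker_rlimit_nofile 1000000;
events {
    worker_connections 10000;
}
```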
There is no reason that the base node access method should be run
under supervisor, which exists primarily to give access to the `zulip`
user to restart its managed services. This access is unnecessary for
Teleport, and also causes unwanted restarts of Teleport services when
the `supervisor` base configuration changes. Additionally,
supervisor does not support the in-place upgrade process that Teleport
uses, as it replaces its core process with a new one.
Switch to installing a systemd configuration file (as generated by
`teleport install systemd`) for each part of Teleport, customized to
pass a `--config` path. As such, we explicitly disable the `teleport`
service provided by the package.
The supervisor process is shut down by dint of no longer installing
the file, which purges it from the managed directory, and reloads
Supervisor to pick up the removed service.
Zulip already has server-side Sentry integration; however, it has
historically used the Zulip-specific `blueslip` library for monitoring
browser-side errors, which sends errors to email, as well as optionally
to an internal `#errors` stream. While this is sufficient for low
volumes of users, and useful in that it does not rely on outside
services, at higher volumes it is very difficult to do any analysis or
filtering of the errors. Client-side errors are exceptionally noisy,
with many false positives due to browser extensions or similar, so
picking real errors out of un-grouped emails or messages in a stream is
quite difficult.
Add a client-side JavaScript Sentry integration. To provide useful
backtraces, this requires extending the pre-deploy hooks to upload the
source-maps to Sentry. Additional keys are added to the non-public
API of `page_params` to control the DSN, realm identifier, and sample
rates.
The current threshold of 40k descriptors was set in 2016, chosen to be
"at least 40x our current scale." At present, that only provides a
50% safety margin. Increase to 1 million to provide the same 40x
buffer as previously.
The highest value currently allowed by the kernels in
production (linux 5.3.0) is 1048576. This is set as the hard limit.
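That cap is the kernel's default `fs.nr_open`, which bounds the
per-process file descriptor hard limit:
```console
$ cat /proc/sys/fs/nr_open
1048576
```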
The 1 million limit is likely far above what the system can handle for
other reasons (memory, cpu, etc). While this removes a potential
safeguard on overload due to too many connections, due to the longpoll
architecture we would generally prefer to service more connections at
lower quality (due to CPU limitations) rather than randomly reject
additional connections.
Relevant prior commits:
- 836f313e69
- f2f97dd335
- ec23996538
- 8806ec698a
- e4fce10f46
After reflecting a bit on the last commit, I think it's substantially
easier to understand what's happening for these two tasks to be
defined in the same file, because we want the timing to be different
to avoid potential races.