This disables immediate reloads by default for runtornado, matching
the production configuration, and changes the development incantation
to be the one that specifies the departure from the norm, via
`--immediate-reloads`.
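In development, immediate reloads must now be requested explicitly;
roughly (the invocation form is assumed, only the flag is from this
change):
```
./manage.py runtornado --immediate-reloads
```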
Decouple the sending of client restart events from the restarting of
the servers. Restarts use the new Tornado restart-clients endpoint to
inject "restart" events into queues of clients which were loaded from
the previous Tornado process. The rate is controlled by the
`application_server.client_restart_rate` setting, in clients per
minute, or by a flag to `restart-clients` which overrides it. Note
that a web client also spreads its own restart over 5 minutes, so
artificially slow client restarts are generally unnecessary.
Restarts of clients are deferred until after post-deploy hooks are
run, such that the pre- and post-deploy hooks bracket the actual
server restarts, even if pushing restart events to clients takes
significant time.
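A minimal sketch of the configuration knob, assuming the ini-style
`/etc/zulip/zulip.conf` (the value is illustrative):
```ini
[application_server]
# Clients to send restart events to, per minute (illustrative value):
client_restart_rate = 100
```
A one-off run of `restart-clients` can instead pass the overriding
flag directly.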
Note that this uses `ssh-keyscan` to write in the currently-observed
host fingerprint; if DNS or network is untrusted during initial puppet
apply, this can allow attackers to write their own host key, obviating
the utility of known_hosts.
We do not view this as a likely attack mechanism, since in our
deployment the network and DNS are almost certainly trusted, and if
not, timing the attack to catch only the initial configuration is
likely impossible.
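The trust-on-first-use step is roughly of this shape (host, key type,
and destination file are illustrative):
```
ssh-keyscan -t ed25519 repo.example.com >> /etc/ssh/ssh_known_hosts
```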
572443edc6 removed the callsite that triggered the exec in
`zulip::systemd_daemon_reload`, making its inclusion and ordering via
`require` moot.
Remove the call.
This should give some more room for systems that are still below 4GB
of RAM to use the lower-memory multithreaded mode, which is less
likely to have OOM kills (a very bad experience).
There should be little cost, as few systems are likely allocated with
memory in this range.
The only relevant changes are `PasswordAuthentication no` (which
is now the default) and `MaxStartups 40:50:60` (which is now
unnecessary due to autossh tunnels).
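For reference, those were the only deltas from the stock
`sshd_config`, and both are now moot:
```
# Now the upstream default:
PasswordAuthentication no
# No longer needed once connections arrive over autossh tunnels:
MaxStartups 40:50:60
```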
This was changed midway through the implementation from reading it
out of `zulip-secrets.conf`, and a couple of locations still reference
the secrets path.
Under heavy request load, it is possible for the conntrack kernel
table to fill up (by default, 256k connections). This leads to DNS
requests failing because they cannot make a new conntrack entry.
Allow all port-53 UDP traffic in and out without connection tracking.
This means that unbound port-53 traffic is no longer filtered out by
the on-host firewall -- but it is already filtered out at the border
firewall, so this does not change the external network posture.
`systemd-resolved` also only binds to 127.0.0.53 on the loopback
interface, so there is no server to attack on inbound port 53.
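A minimal sketch with iptables, assuming the `raw` table's NOTRACK
target (the exact rules may differ):
```
# Skip conntrack for DNS queries and their replies:
iptables -t raw -A OUTPUT     -p udp --dport 53 -j NOTRACK
iptables -t raw -A PREROUTING -p udp --sport 53 -j NOTRACK
# Untracked packets never match conntrack states, so accept them explicitly:
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
iptables -A INPUT  -p udp --sport 53 -j ACCEPT
```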
fcf096c52e removed the callsite which would have notified this
contact. Note that the source config file was presumably installed via the
python-zulip-api package.
149bea8309 added a separate config file
for smokescreen (which is necessary because it can be installed
separately) but failed to notice that `zulip.template.erb` already had
a config line for it. This leads to failures starting the logrotate
service:
```
logrotate[4158688]: error: zulip:1 duplicate log entry for /var/log/zulip/smokescreen.log
logrotate[4158688]: error: found error in file zulip, skipping
```
Remove the duplicate line.
While the Tornado server supports POST requests, those are only used
by internal endpoints. We only support OPTIONS, GET, and DELETE
methods from clients, so filter everything else out at the nginx
level.
We set the `Allow` header on both `OPTIONS` requests and 405
responses, and the CORS headers on `OPTIONS` requests.
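One plausible shape for the nginx-level filter (the location and
exact directives here are illustrative, not the commit's config):
```nginx
location /json/events {
    if ($request_method !~ ^(OPTIONS|GET|DELETE)$) {
        add_header Allow "OPTIONS, GET, DELETE" always;
        return 405;
    }
    # ... proxy_pass to Tornado as before ...
}
```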
Limiting only by `client_name` and `query` leads to a very poorly-indexed
lookup on `query` which throws out nearly all of its rows:
```
Nested Loop (cost=50885.64..60522.96 rows=821 width=8)
-> Index Scan using zerver_client_name_key on zerver_client (cost=0.28..2.49 rows=1 width=4)
Index Cond: ((name)::text = 'zephyr_mirror'::text)
-> Bitmap Heap Scan on zerver_useractivity (cost=50885.37..60429.95 rows=9052 width=12)
Recheck Cond: ((client_id = zerver_client.id) AND ((query)::text = ANY ('{get_events,/api/v1/events}'::text[])))
-> BitmapAnd (cost=50885.37..50885.37 rows=9052 width=0)
-> Bitmap Index Scan on zerver_useractivity_2bfe9d72 (cost=0.00..16631.82 rows=..large.. width=0)
Index Cond: (client_id = zerver_client.id)
-> Bitmap Index Scan on zerver_useractivity_1b1cc7f0 (cost=0.00..34103.95 rows=..large.. width=0)
Index Cond: ((query)::text = ANY ('{get_events,/api/v1/events}'::text[]))
```
A partial index on the client and query list is extremely effective
here in reducing PostgreSQL's workload; however, we cannot easily
write it as a migration, since it depends on the value of the ID of
the `zephyr_mirror` client.
Since this is only relevant for Zulip Cloud, we manually create the
index:
```sql
CREATE INDEX CONCURRENTLY zerver_useractivity_zehpyr_liveness
ON zerver_useractivity(last_visit)
WHERE client_id = 1005
AND query IN ('get_events', '/api/v1/events');
```
We rewrite the query to do the time limit, distinct, and count in SQL,
instead of Python, and make use of this index. This turns a 20-second
query into two 10ms queries.
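The rewritten query is roughly of this shape (the liveness window and
counted column are assumptions), which the partial index serves
directly:
```sql
SELECT COUNT(DISTINCT user_profile_id)
FROM zerver_useractivity
WHERE client_id = 1005
  AND query IN ('get_events', '/api/v1/events')
  AND last_visit > now() - interval '5 minutes';
```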
The Ubuntu and Debian package installation scripts for
`rabbitmq-server` install `/etc/rabbitmq` (and its contents) owned by
the `rabbitmq` user -- not `root` as Puppet does. This means that
Puppet and `rabbitmq-server` unnecessarily fight over the ownership.
Create the `rabbitmq` user and group, to the same specifications that
the Debian package install scripts do, so that we can properly declare
the ownership of `/etc/rabbitmq`.
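A sketch of the Puppet declaration, mirroring the Debian postinst
(the exact attribute values are assumptions):
```puppet
group { 'rabbitmq':
  ensure => present,
  system => true,
}

user { 'rabbitmq':
  ensure  => present,
  gid     => 'rabbitmq',
  home    => '/var/lib/rabbitmq',
  shell   => '/usr/sbin/nologin',
  system  => true,
  require => Group['rabbitmq'],
}
```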
Without this, the browser refuses to play the video. To reproduce,
press `open` on an uploaded video on CZO. Chrome gives the following
error in the console:

    Refused to load media from '<source>' because it violates the
    following Content Security Policy directive: "default-src 'none'".
    Note that 'media-src' was not explicitly set, so 'default-src' is
    used as a fallback.
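The fix amounts to adding an explicit `media-src` to the policy served
with uploads; as an nginx-style sketch (where the header is actually
set, and the full policy value, may differ):
```nginx
add_header Content-Security-Policy "default-src 'none'; media-src 'self'";
```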
The `unless` step errors out if `/usr/bin/psql` does not exist at
first evaluation time -- protect that with a `test -f` check, and
protect the actual `createuser` with a dependency on `postgresql-client`.
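A sketch of the guarded resource (the resource name, user, and check
command are illustrative):
```puppet
exec { 'create_postgres_user':
  command  => 'createuser zulip',
  # The `test -f` keeps the guard from erroring before psql is installed:
  unless   => "test -f /usr/bin/psql && psql -tAc \"SELECT 1 FROM pg_roles WHERE rolname = 'zulip'\" | grep -q 1",
  provider => shell,
  path     => ['/usr/bin', '/bin'],
  require  => Package['postgresql-client'],
}
```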
To work around `Zulip::Safepackage` not actually being safe to
instantiate more than once, we move the instantiation of
`Package[postgresql-client]` into a class which can be safely
included one or more times.
This endpoint verifies that the services that Zulip needs to function
are running and that Django can talk to them. It is designed to be used
as a readiness probe[^1] for Zulip, either by Kubernetes, or some other
reverse-proxy load-balancer in front of Zulip. Because of this, it
limits access to only localhost and the IP addresses of configured
reverse proxies.
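As an example of the intended consumer, a hypothetical Kubernetes
probe definition (the path and port are assumptions, not from this
change):
```yaml
readinessProbe:
  httpGet:
    path: /health
    port: 80
  periodSeconds: 10
  failureThreshold: 3
```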
Tests are limited because we cannot stop running services (which
would impact other concurrent tests), and there would be extremely
limited utility in mocking the very specific methods we're calling to
raise the exceptions that we're looking for.
[^1]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
The rolling restart configuration of uwsgi attempted to re-chdir the
CWD to the new `/home/zulip/deployments/current` before `lazy-apps`
loaded the application in the forked child. It successfully did so --
however, the "main" process was still running in the original
`/home/zulip/deployments/current`, which somehow (?) tainted the
search path of the children processes.
Set the parent uwsgi process to start in `/`, so that the old deploy
directory cannot taint the load order of later children processes.
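The relevant uwsgi options, sketched (other settings omitted):
```ini
[uwsgi]
# The parent process starts outside any deploy directory:
chdir = /
# Workers import the application after fork, from the new deploy:
lazy-apps = true
```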
This is common in cases where the reverse proxy itself is making
health-check requests to the Zulip server; these requests have no
X-Forwarded-* headers, so would normally hit the error case of
"request through the proxy, but no X-Forwarded-Proto header".
Add an additional special-case for when the request's originating IP
address is resolved to be the reverse proxy itself; in these cases,
HTTP requests with no X-Forwarded-Proto are acceptable.
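In illustrative pseudocode (none of these names are Zulip's):
```python
# Sketch of the added special case, not the actual middleware code.
REVERSE_PROXY_IPS = {"10.0.0.5"}  # the configured proxies (illustrative)

def is_trusted_plain_http(remote_ip: str, headers: dict) -> bool:
    # Requests originating from the proxy itself (e.g. health checks)
    # legitimately lack X-Forwarded-Proto.
    return remote_ip in REVERSE_PROXY_IPS and "X-Forwarded-Proto" not in headers
```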
All `X-amz-*` headers must be included in the signed request to S3;
since Django does not take those headers into account (it constructs a
request from scratch, while nginx's request inherits them from the
end-user's request), the proxied request fails to be signed correctly.
Strip off the `X-amz-cf-id` header added by CloudFront. While we
would ideally strip off all `X-amz-*` headers, this requires a
third-party module[^1].
[^1]: https://github.com/openresty/headers-more-nginx-module#more_clear_input_headers
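Blanking a header in nginx removes it from the proxied request, which
is why stripping the single known CloudFront header needs no extra
module:
```nginx
# An empty value omits the header from the upstream request:
proxy_set_header X-amz-cf-id "";
```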