This only defaults to on for local-disk backups, since they are more
disk-size-sensitive, and local accesses are quite cheap compared to
loading multiple incremental backups from S3.
(cherry picked from commit 323e1e92b7)
This should give some more room for systems that are still below 4GB
of RAM to use the lower-memory multithreaded mode, which is less
likely to have OOM kills (a very bad experience).
There should be little cost, as few systems are likely allocated with
memory in this range.
(cherry picked from commit a22f418827)
The defaults for how many uwsgi processes to run no longer depend on
the queue processor mode, but instead on the total memory of the system.
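As a rough illustration (not the actual Puppet logic; the thresholds and
helper names below are assumptions), a memory-based default might be
computed like this:
```python
# Illustrative sketch only: the real defaults are computed in the Puppet
# configuration; these thresholds and names are assumptions.
def total_memory_mb() -> int:
    """Return total system memory in MiB, read from /proc/meminfo."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) // 1024
    raise RuntimeError("MemTotal not found in /proc/meminfo")


def default_uwsgi_processes() -> int:
    """Scale the uwsgi worker count with total RAM, not queue-processor mode."""
    memory_mb = total_memory_mb()
    if memory_mb < 4 * 1024:
        return 4
    if memory_mb < 8 * 1024:
        return 6
    return 8
```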
(cherry picked from commit 62dbe2298e)
149bea8309 added a separate config file
for smokescreen (which is necessary because it can be installed
separately) but failed to notice that `zulip.template.erb` already had
a config line for it. This leads to failures starting the logrotate
service:
```
logrotate[4158688]: error: zulip:1 duplicate log entry for /var/log/zulip/smokescreen.log
logrotate[4158688]: error: found error in file zulip, skipping
```
Remove the duplicate line.
(cherry picked from commit 725affcb5a)
While the Tornado server supports POST requests, those are only used
by internal endpoints. We only support OPTIONS, GET, and DELETE
methods from clients, so filter everything else out at the nginx
level.
We set the `Accepts` header on both `OPTIONS` requests and 405 responses,
and the CORS headers on `OPTIONS` requests.
Limiting only by client_name and query leads to a very poorly-indexed
lookup on `query` which throws out nearly all of its rows:
```
Nested Loop (cost=50885.64..60522.96 rows=821 width=8)
-> Index Scan using zerver_client_name_key on zerver_client (cost=0.28..2.49 rows=1 width=4)
Index Cond: ((name)::text = 'zephyr_mirror'::text)
-> Bitmap Heap Scan on zerver_useractivity (cost=50885.37..60429.95 rows=9052 width=12)
Recheck Cond: ((client_id = zerver_client.id) AND ((query)::text = ANY ('{get_events,/api/v1/events}'::text[])))
-> BitmapAnd (cost=50885.37..50885.37 rows=9052 width=0)
-> Bitmap Index Scan on zerver_useractivity_2bfe9d72 (cost=0.00..16631.82 rows=..large.. width=0)
Index Cond: (client_id = zerver_client.id)
-> Bitmap Index Scan on zerver_useractivity_1b1cc7f0 (cost=0.00..34103.95 rows=..large.. width=0)
Index Cond: ((query)::text = ANY ('{get_events,/api/v1/events}'::text[]))
```
A partial index on the client and query list is extremely effective
here in reducing PostgreSQL's workload; however, we cannot easily
write it as a migration, since it depends on the value of the ID of
the `zephyr_mirror` client.
Since this is only relevant for Zulip Cloud, we manually create the
index:
```sql
CREATE INDEX CONCURRENTLY zerver_useractivity_zehpyr_liveness
ON zerver_useractivity(last_visit)
WHERE client_id = 1005
AND query IN ('get_events', '/api/v1/events');
```
We rewrite the query to do the time limit, distinct, and count in SQL,
instead of Python, and make use of this index. This turns a 20-second
query into two 10ms queries.
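A hedged Django ORM sketch of this kind of rewrite (the function name,
realm filter, and two-week window are illustrative assumptions, not the
exact query):
```python
from datetime import timedelta

from django.utils.timezone import now

from zerver.models import UserActivity


def recent_zephyr_mirror_user_count(realm, client_ids: list[int]) -> int:
    # Push the time cutoff, DISTINCT, and COUNT into a single SQL query so
    # the partial index on last_visit can be used, instead of fetching rows
    # and deduplicating them in Python.
    cutoff = now() - timedelta(weeks=2)  # assumed liveness window
    return (
        UserActivity.objects.filter(
            user_profile__realm=realm,
            client_id__in=client_ids,
            query__in=["get_events", "/api/v1/events"],
            last_visit__gte=cutoff,
        )
        .values("user_profile_id")
        .distinct()
        .count()
    )
```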
The Ubuntu and Debian package installation scripts for
`rabbitmq-server` install `/etc/rabbitmq` (and its contents) owned by
the `rabbitmq` user -- not `root` as Puppet does. This means that
Puppet and `rabbitmq-server` unnecessarily fight over the ownership.
Create the `rabbitmq` user and group to the same specifications as the
Debian package install scripts, so that we can properly declare the
ownership of `/etc/rabbitmq`.
Without this, the browser refuses to play the video. To reproduce, press
`open` on an uploaded video on CZO. Chrome gives the following error in
the console:
Refused to load media from '<source>' because it violates the
following Content Security Policy directive: "default-src 'none'".
Note that 'media-src' was not explicitly set, so 'default-src' is
used as a fallback.
The `unless` step errors out if `/usr/bin/psql` does not exist at
first evaluation time -- protect that with a `test -f` check, and
protect the actual `createuser` with a dependency on `postgresql-client`.
To work around `Zulip::Safepackage` not actually being safe to
instantiate more than once, we move the instantiation of
`Package[postgresql-client]` into a class which can be safely
included one or more times.
This endpoint verifies that the services that Zulip needs to function
are running, and Django can talk to them. It is designed to be used
as a readiness probe[^1] for Zulip, either by Kubernetes or some other
reverse-proxy load balancer in front of Zulip. Because of this, it
limits access to only localhost and the IP addresses of configured
reverse proxies.
Tests are limited because we cannot stop running services (which would
impact other concurrent tests), and there would be extremely limited
utility in mocking the very specific methods we're calling to raise the
exceptions that we're looking for.
[^1]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
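A minimal sketch of what such a readiness view can look like (the checks
and response shape here are illustrative, not the endpoint's actual
implementation):
```python
from django.core.cache import cache
from django.db import connection
from django.http import HttpRequest, HttpResponse, JsonResponse


def readiness_check(request: HttpRequest) -> HttpResponse:
    # Illustrative only: probe the backing services Django talks to and
    # return 503 if any is unreachable, so the load balancer stops
    # routing traffic to this host.
    try:
        with connection.cursor() as cursor:
            cursor.execute("SELECT 1")  # database reachable?
        cache.set("readiness_check", "ok", 10)  # cache backend reachable?
    except Exception:
        return JsonResponse({"status": "unavailable"}, status=503)
    return JsonResponse({"status": "ok"})
```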
The rolling restart configuration of uwsgi attempted to re-chdir the
CWD to the new `/home/zulip/deployments/current` before `lazy-apps`
loaded the application in the forked child. It successfully did so --
however, the "main" process was still running in the original
`/home/zulip/deployments/current`, which somehow (?) tainted the
search path of the child processes.
Set the parent uwsgi process to start in `/`, so that the old deploy
directory cannot taint the load order of later child processes.
This is common in cases where the reverse proxy itself is making
health-check requests to the Zulip server; these requests have no
X-Forwarded-* headers, so would normally hit the error case of
"request through the proxy, but no X-Forwarded-Proto header".
Add an additional special case for when the request's originating IP
address is resolved to be the reverse proxy itself; in these cases,
HTTP requests with no X-Forwarded-Proto are acceptable.
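In rough Python terms, the check described above behaves something like
the following sketch (the helper name and network list are assumptions,
not the actual middleware):
```python
from ipaddress import ip_address, ip_network

# Assumed stand-in for the configured reverse-proxy networks.
PROXY_NETWORKS = [ip_network("10.0.0.0/8")]


def proxy_misconfigured(remote_addr: str, headers: dict[str, str]) -> bool:
    """Return True if the request came through the proxy without a
    usable X-Forwarded-Proto header."""
    via_proxy = any(ip_address(remote_addr) in net for net in PROXY_NETWORKS)
    if not via_proxy or "X-Forwarded-Proto" in headers:
        return False
    if "X-Forwarded-For" not in headers:
        # The proxy itself is the client (e.g. its own health checks);
        # no X-Forwarded-* headers is expected, not a misconfiguration.
        return False
    return True
```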
All `X-amz-*` headers must be included in the signed request to S3;
since Django does not take those headers into account (it constructs a
request from scratch, while nginx's request inherits them from the
end-user's request), the proxied request fails to be signed correctly.
Strip off the `X-amz-cf-id` header added by CloudFront. While we
would ideally strip off all `X-amz-*` headers, this requires a
third-party module[^1].
[^1]: https://github.com/openresty/headers-more-nginx-module#more_clear_input_headers
Restore the default django.utils.log.AdminEmailHandler when
ERROR_REPORTING is enabled. Those with more sophisticated needs can
turn it off and use Sentry or a Sentry-compatible system.
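For reference, enabling Django's stock handler looks roughly like this in
a `LOGGING` dict (the `ERROR_REPORTING` gate shown is a sketch, not
Zulip's exact settings):
```python
# Sketch of a Django LOGGING configuration using the stock handler; the
# ERROR_REPORTING toggle and logger wiring here are illustrative.
ERROR_REPORTING = True

LOGGING = {
    "version": 1,
    "handlers": {
        "mail_admins": {
            "level": "ERROR",
            "class": "django.utils.log.AdminEmailHandler",
        },
    },
    "loggers": {
        "django": {
            "handlers": ["mail_admins"] if ERROR_REPORTING else [],
            "level": "ERROR",
        },
    },
}
```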
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The `time` field is based on the file metadata in S3, which means that
touching the file contents in S3 can move backups around in the list.
Switch to using `start_time` as the sort key, which is based on the
contents of the JSON file stored as part of the backup, so is not
affected by changes in S3 metadata.
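A minimal sketch of the difference, assuming each backup's parsed JSON
metadata is available as a dict with the fields described above:
```python
def sort_backups(backups: list[dict]) -> list[dict]:
    # Order by the start_time recorded inside each backup's own JSON
    # metadata, rather than by the S3 object's modification time, which
    # changes whenever the object is touched.
    return sorted(backups, key=lambda backup: backup["start_time"])
```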