Limiting only by client_name and query leads to a very poorly-indexed
lookup on `query`, which discards nearly all of the rows it matches:
```
Nested Loop (cost=50885.64..60522.96 rows=821 width=8)
-> Index Scan using zerver_client_name_key on zerver_client (cost=0.28..2.49 rows=1 width=4)
Index Cond: ((name)::text = 'zephyr_mirror'::text)
-> Bitmap Heap Scan on zerver_useractivity (cost=50885.37..60429.95 rows=9052 width=12)
Recheck Cond: ((client_id = zerver_client.id) AND ((query)::text = ANY ('{get_events,/api/v1/events}'::text[])))
-> BitmapAnd (cost=50885.37..50885.37 rows=9052 width=0)
-> Bitmap Index Scan on zerver_useractivity_2bfe9d72 (cost=0.00..16631.82 rows=..large.. width=0)
Index Cond: (client_id = zerver_client.id)
-> Bitmap Index Scan on zerver_useractivity_1b1cc7f0 (cost=0.00..34103.95 rows=..large.. width=0)
Index Cond: ((query)::text = ANY ('{get_events,/api/v1/events}'::text[]))
```
A partial index on the client and query list is extremely effective
here in reducing PostgreSQL's workload; however, we cannot easily
write it as a migration, since it depends on the ID of the
`zephyr_mirror` client.
Since this is only relevant for Zulip Cloud, we manually create the
index:
```sql
CREATE INDEX CONCURRENTLY zerver_useractivity_zehpyr_liveness
ON zerver_useractivity(last_visit)
WHERE client_id = 1005
AND query IN ('get_events', '/api/v1/events');
```
We rewrite the query to do the time limit, distinct, and count in SQL,
instead of Python, and make use of this index. This turns a 20-second
query into two 10ms queries.
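The rewritten query pushes the recency filter, `DISTINCT`, and `COUNT`
into PostgreSQL so that the partial index can satisfy it. A hypothetical
sketch of its shape (the actual query and time window may differ):
```sql
-- Illustrative only: count users recently active via the Zephyr mirror,
-- using the partial index created above.
SELECT COUNT(DISTINCT user_profile_id)
  FROM zerver_useractivity
 WHERE client_id = 1005
   AND query IN ('get_events', '/api/v1/events')
   AND last_visit > now() - interval '5 minutes';
```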
Restore the default django.utils.log.AdminEmailHandler when
ERROR_REPORTING is enabled. Those with more sophisticated needs can
turn it off and use Sentry or a Sentry-compatible system.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
When upgraded, the `erlang-base` package automatically stops all
services which depend on the Erlang runtime; for Zulip, this is the
`rabbitmq-server` service. This results in an unexpected outage of
Zulip.
Block unattended upgrades of the `erlang-base` package.
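One way to express such a block (the exact mechanism in the
Puppet-managed configuration may differ) is via unattended-upgrades'
package blacklist, e.g. in an `/etc/apt/apt.conf.d/` snippet:
```
// Illustrative only: never let unattended-upgrades touch erlang-base.
Unattended-Upgrade::Package-Blacklist {
    "erlang-base";
};
```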
There is no reason that the base node access method should be run
under supervisor, which exists primarily to give the `zulip` user the
ability to restart its managed services. This access is unnecessary for
Teleport, and also causes unwanted restarts of Teleport services when
the `supervisor` base configuration changes. Additionally,
supervisor does not support the in-place upgrade process that Teleport
uses, in which Teleport replaces its core process with a new one.
Switch to installing a systemd configuration file (as generated by
`teleport install systemd`) for each part of Teleport, customized to
pass a `--config` path. As such, we explicitly disable the `teleport`
service provided by the package.
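The installed unit files look roughly like the following sketch (the
binary path, config path, and unit metadata here are illustrative, not
the exact output of `teleport install systemd`):
```
[Unit]
Description=Teleport node service
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/teleport start --config=/etc/teleport_node.yaml
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure

[Install]
WantedBy=multi-user.target
```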
The supervisor-managed process is shut down simply by no longer
installing its configuration file, which purges the file from the
managed directory and reloads supervisor to pick up the removed
service.
This was last really used in d7a3570c7e, in 2013, when it was
`/home/humbug/logs`.
Repoint the one obscure piece of tooling that writes there, and remove
the places that created it.
When file uploads are stored in S3, Zulip serves each upload as a 302
redirect to S3. Because browsers do not cache redirects, this means
that no image contents can be cached -- and upon every page load or
reload, every recently-posted image must be re-fetched. This incurs
extra load on the Zulip server, as well as potentially excessive
bandwidth usage from S3 and on the client's connection.
Switch to fetching the content from S3 in nginx, and serving the
content from nginx. These responses have `Cache-control: private,
immutable` headers set, allowing browsers to cache them locally.
Because nginx fetching from S3 can be slow, and requests for uploads
will generally be bunched around the time a message containing them is
first posted, we instruct nginx to cache the contents locally. This
is safe because uploaded file contents are immutable; access control
is still mediated by Django. The nginx cache key is the URL without
query parameters, as those parameters include a time-limited signed
authentication parameter which lets nginx fetch the non-public file.
This adds a number of nginx-level configuration parameters to control
the caching which nginx performs, including the size of the in-memory
index for the cache, the maximum on-disk storage of the cache, and how
long data is retained in the cache. The currently-chosen figures are
reasonable for small to medium deployments.
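A sketch of the nginx directives involved (the path, zone name, sizes,
and location are illustrative; the real values come from the new
configuration parameters):
```nginx
# Illustrative only; proxy_cache_path lives at the http level, the
# location inside the relevant server block.
proxy_cache_path /srv/zulip-uploaded-files-cache
                 keys_zone=uploads:32m max_size=50g inactive=30d;

location /internal/s3/ {
    internal;
    proxy_cache uploads;
    # Key on the URL without its query string, since the query string
    # carries the time-limited signed authentication parameter.
    proxy_cache_key $scheme$proxy_host$uri;
    # Uploaded-file contents are immutable, so successful fetches can be
    # cached for a long time.
    proxy_cache_valid 200 30d;
    proxy_pass https://example-bucket.s3.amazonaws.com/;
}
```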
The most notable effect of this change is in allowing browsers to
cache uploaded image content; however, beyond the large reduction in
the number of requests, it also improves request latency. The
following tests were done with a non-AWS client in SFO, a server and
S3 storage in us-east-1, and with 100 requests after 10 requests of
warm-up (to fill the nginx cache). The mean and standard deviation
are shown.
| | Redirect to S3 | Caching proxy, hot | Caching proxy, cold |
| ----------------- | ------------------- | ------------------- | ------------------- |
| Time in Django | 263.0 ms ± 28.3 ms | 258.0 ms ± 12.3 ms | 258.0 ms ± 12.3 ms |
| Small file (842 B)  | 586.1 ms ± 21.1 ms  | 266.1 ms ± 67.4 ms  | 288.6 ms ± 17.7 ms  |
| Large file (660 KB) | 959.6 ms ± 137.9 ms | 609.5 ms ± 13.0 ms  | 648.1 ms ± 43.2 ms  |
The hot-cache performance is faster for both large and small files,
since it saves the client from having to make a second request to a
separate host. This performance improvement remains at least 100ms
even if the client is on the same coast as the server.
Cold nginx caches are only slightly slower than hot caches, because
VPC access to S3 endpoints is extremely fast (assuming it is in the
same region as the host), and nginx can pool connections to S3 and
reuse them.
However, all of the 648ms taken to serve a cold-cache large file is
spent in nginx, as opposed to the mere 263ms spent in nginx when using
redirects to S3. This means that, for nginx to spend less total time
responding to uploaded-file requests, clients will need to find files
in their local cache, and skip making an uploaded-file request, at
least 60% of the time (263 ms is roughly 40% of 648 ms). Modeling shows
a reduction in the number of client requests by about 70%-80%.
The `Content-Disposition` header logic can now also be entirely shared
with the local-file codepath, as can the `url_only` path used by
mobile clients. While we could provide the direct-to-S3 temporary
signed URL to mobile clients, we choose to provide the
served-from-Zulip signed URL, to better control caching headers on it
and for greater consistency. In doing so, we adjust the salt used for the
URL; since these URLs are only valid for 60s, the effect of this salt
change is minimal.
‘exit’ is pulled in for the interactive interpreter as a side effect
of the site module; this can be disabled with python -S and shouldn’t
be relied on.
Also, use the NoReturn type where appropriate.
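A small illustrative example of the pattern (not an actual call site
changed by this commit):
```python
import sys
from typing import NoReturn

def die(message: str) -> NoReturn:
    # Use sys.exit rather than the `exit` builtin injected by the site
    # module, which is absent under `python -S`.
    print(message, file=sys.stderr)
    sys.exit(1)
```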
Signed-off-by: Anders Kaseorg <anders@zulip.com>
A number of autossh connections are already left open for
port-forwarding Munin ports; autossh starts the connections and
ensures that they are automatically restarted if they are severed.
However, this represents a missed opportunity. Nagios's monitoring
uses a large number of SSH connections to the remote hosts to run
commands on them; each of these connections requires doing a complete
SSH handshake and authentication, which can have non-trivial network
latency, particularly for hosts which may be located far away, in a
network topology sense (up to 1s for a no-op command!).
Use OpenSSH's ability to multiplex multiple connections over a single
socket, to reuse the already-established connection. We leave an
explicit `ControlMaster no` in the general configuration, and not
`auto`, as we do not wish any of the short-lived Nagios connections to
be promoted to a control socket if the autossh connection is not
running for some reason.
We enable protocol-level keepalives, to give a better chance of the
socket being kept open.
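The relevant ssh_config fragment looks roughly like the following
sketch (the ControlPath, host pattern, and keepalive values are
illustrative):
```
Host *
    # Never promote short-lived Nagios connections to a control master;
    # only the autossh-owned connection should own the socket.
    ControlMaster no
    # Reuse the autossh-established socket when it exists.
    ControlPath ~/.ssh/mux-%r@%h:%p
    # Protocol-level keepalives, to keep the multiplexed socket open.
    ServerAliveInterval 30
    ServerAliveCountMax 3
```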
https://nginx.org/en/docs/http/ngx_http_map_module.html
Since Puppet doesn’t manage the contents of nginx_sharding.conf after
its initial creation, it needs to be renamed so we can give it
different default contents.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The `needrestart` tool added in 22.04 is useful in terms of listing
which services may need to be restarted to pick up updated libraries.
However, it prompts about the current state of services needing
restart on *every* subsequent `apt-get upgrade`, and defaulting core
services to being restarted requires carefully excluding them by hand
every time, at the risk of causing an unscheduled outage.
Build a list of default-off services based on the list in
unattended-upgrades.
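needrestart reads Perl configuration fragments; a hypothetical sketch
of the default-off list (the service names here are examples):
```perl
# Illustrative /etc/needrestart/conf.d/ snippet; 0 means "do not restart
# by default", mirroring the services excluded in unattended-upgrades.
$nrconf{override_rc}{qr(^rabbitmq-server)} = 0;
$nrconf{override_rc}{qr(^postgresql)} = 0;
```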
This check loads Django, and as such must be run as the zulip user.
Repeat the same pattern used elsewhere in Nagios: write out a state
file, which is then read by `check_cron_file`.
Replication checks should only run on primary and replicas, not
standalone hosts; while `autovac_freeze` currently only runs on
primary hosts, it functions identically on replicas, and is fine to
run there.
Make `autovac_freeze` run on all `postgresql` hosts, and make
standalone hosts no longer `postgres_primary`, so they do not fail the
replication tests.
This style of check just looks for matching process names, using
`check_remote_arg_string`, and dates to 8edbd64bb8. These checks were
added because the original two (`missedmessage_emails` and
`slow_queries`) did not create queue consumers, instead polling for
events.
Switch these to checking the queue consumer counts that the
`check-rabbitmq-consumers` check is already writing out. Since the
`missedmessage_emails` worker was _already_ checked via the consumer
check, no duplicate check is added.
Even the `pageable_servers` group did not page for high load -- in
part because what counts as "high" depends on the server. Set slightly
better limits based on server role.
`zmirror` itself was `zmirror_main` + `zmirrorp` but was unused; we
consistently just use the term `zmirror` for the non-personals server,
so use it as the hostgroup name.
The Redis nagios checks themselves are done against `redis` +
`frontends` groups, so there is no need to misleadingly place
`frontends` in the `redis` hostgroup.
5abf4dee92 made this distinction, but multitornado_frontends was then
never used; the singletornado_frontends alerting worked even for
multiple-Tornado instances.
Remove the useless and misleading distinction.
Our current EC2 systems don’t have an interface named ‘eth0’, and if
they did, this script would do nothing but crash with ImportError
because we have never installed boto.utils for Python 3.
(The message of commit 2a4d851a7c made
an effort to document for future researchers why this script should
not have been blindly converted to Python 3. However, commit
2dc6d09c2a (#14278) was evidently
unresearched and untested.)
Signed-off-by: Anders Kaseorg <anders@zulip.com>