Currently, it handles two hook types: 'pre-create' (which verifies that
the user is authenticated and the file size is within the limit) and
'pre-finish' (which creates an attachment row).
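For illustration only, a Django view handling these hooks might look
roughly like the sketch below; the payload field names, view shape, and
size limit are assumptions, not the actual implementation:
```
# Hypothetical sketch only: field names and the size limit are
# illustrative assumptions, not the real Zulip code.
import json

from django.http import HttpRequest, HttpResponse, JsonResponse

MAX_FILE_SIZE = 25 * 1024 * 1024  # assumed per-upload limit

def tusd_hook(request: HttpRequest) -> HttpResponse:
    payload = json.loads(request.body)
    hook_type = payload.get("Type")  # e.g. "pre-create" or "pre-finish"

    if hook_type == "pre-create":
        # A non-2xx response here makes tusd reject the upload outright.
        if not request.user.is_authenticated:
            return JsonResponse({"reason": "unauthenticated"}, status=401)
        if payload["Event"]["Upload"]["Size"] > MAX_FILE_SIZE:
            return JsonResponse({"reason": "file too large"}, status=413)
        return HttpResponse(status=200)

    if hook_type == "pre-finish":
        # The upload has fully landed on disk; record an attachment row
        # for it (database details omitted).
        return HttpResponse(status=200)

    return HttpResponse(status=400)
```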
No secret is shared between Django and tusd for authentication of the
hooks endpoints, because none is necessary -- tusd forwards the
end-user's credentials, and the hook checks them like it would any
end-user request. An end-user gaining access to the endpoint would be
able to do no more harm than via tusd or the normal file upload API.
Regardless, the previous commit has restricted access to the endpoint
at the nginx layer.
Co-authored-by: Brijmohan Siyag <brijsiyag@gmail.com>
We should not proceed and send client reload events until we know that
all of the server processes have updated to the latest version, or
clients may reload into the old server version if they hit a Django
worker which has not yet restarted.
Because the logic controlling the number of workers is mildly complex,
and lives in Puppet, use the `uwsgi` Python bindings to know when the
process being reloaded is the last one, and use that to write out a
file signifying the success of the chain reload. `restart-server`
awaits the creation of this file before proceeding.
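A minimal sketch of the "last worker" detection, assuming a marker file
path (the path, and the exact placement of this logic, are assumptions
for illustration):
```
# Illustrative sketch: with lazy apps, each worker imports the Django
# application after forking, so module-level code runs once per
# reloaded worker.  The marker path is an assumption.
from pathlib import Path

CHAIN_RELOAD_MARKER = Path("/var/lib/zulip/chain-reload-complete")  # assumed

try:
    import uwsgi  # only importable when running under uwsgi
except ImportError:
    uwsgi = None

if uwsgi is not None and uwsgi.worker_id() == uwsgi.numproc:
    # Chain reloads restart workers in order, so the highest-numbered
    # worker loading the new code means the whole reload has finished.
    CHAIN_RELOAD_MARKER.touch()
```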
This adds `--automated` and `--no-automated` flags to all Zulip
management commands, whose default is based on whether STDIN is a TTY.
This enables cron jobs and supervisor commands to continue to report
to Sentry, and manually-run commands to not, since reporting to Sentry
provides no value when the user can see the output directly.
Note that this only applies to Zulip commands -- core Django
commands (e.g. `./manage.py`) do not grow support for `--automated`
and will always report exceptions to Sentry.
`manage.py` subcommands in the `upgrade` and `restart-server` paths
are marked as `--automated`, since those may be run semi-unattended,
and they are useful to log to Sentry.
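A rough sketch of the flag wiring (the class name and help text are
illustrative, not necessarily the real ones):
```
# Sketch: default to "automated" when stdin is not a TTY, e.g. under
# cron or supervisor; interactive runs default to not reporting.
import argparse
import sys

from django.core.management.base import BaseCommand, CommandParser

class ZulipBaseCommand(BaseCommand):
    def add_arguments(self, parser: CommandParser) -> None:
        # BooleanOptionalAction provides both --automated and
        # --no-automated from a single argument definition.
        parser.add_argument(
            "--automated",
            action=argparse.BooleanOptionalAction,
            default=not sys.stdin.isatty(),
            help="Report any exceptions from this command to Sentry",
        )
```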
Replace a separate call to subprocess, starting `node` from scratch,
with an optional standalone node Express service which performs the
rendering. In benchmarking, this reduces the overhead of a KaTeX call
from 120ms to 2.8ms. This is notable because enough calls to KaTeX in
a single message would previously time out the whole message
rendering.
The service is optional because the majority of deployments do not use
enough LaTeX to merit the additional memory usage (60MB).
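For illustration, the rendering call on the Python side might look
roughly like this; the port, path, and payload shape are assumptions,
not the service's actual API:
```
# Hypothetical client-side sketch: render KaTeX via a long-lived local
# HTTP service instead of spawning `node` per call.
import requests

KATEX_SERVER = "http://localhost:9700/render"  # assumed address

def render_tex(tex: str, is_inline: bool = True) -> str:
    response = requests.post(
        KATEX_SERVER,
        data={"content": tex, "is_display": not is_inline},
        # A single HTTP round-trip replaces a ~120ms node startup.
        timeout=1,
    )
    response.raise_for_status()
    return response.text
```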
Fixes: #17425.
Decouple the sending of client restart events from the restarting of
the servers. Restarts use the new Tornado restart-clients endpoint to
inject "restart" events into queues of clients which were loaded from
the previous Tornado process. The rate is controlled by
`application_server.client_restart_rate`, in clients per minute, or by
a flag to `restart-clients` which overrides it. Note that a web client
will also spread its restart over 5 minutes, so artificially slow
client restarts are generally not necessary.
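Purely for illustration, the rate limit amounts to something like the
following; the `enqueue_restart_event` helper and the loop structure
are assumptions, not the real Tornado code:
```
# Illustrative sketch only: push "restart" events into reloaded client
# event queues, spread out at a configured clients-per-minute rate.
import time
from typing import Iterable

def send_restart_events(queue_ids: Iterable[str], clients_per_minute: int) -> None:
    delay = 60.0 / clients_per_minute
    for queue_id in queue_ids:
        # Hypothetical helper wrapping Tornado's event queue machinery.
        enqueue_restart_event(queue_id)
        # Spread the reloads so clients do not stampede the server.
        time.sleep(delay)
```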
Restarts of clients are deferred until after post-deploy hooks are
run, such that the pre- and post-deploy hooks bracket the actual
server restarts, even if pushing restart events to clients takes
significant time.
This flag was generally used not because we wanted to avoid
restarting Tornado, but because we wanted to avoid increasing load on
the server when all of the clients were told to reload.
Since we have laid the groundwork for separately telling Tornado to
tell clients to restart, we remove the --skip-tornado flag; the next
commit will add the ability to skip client restarts.
While the previous commit handles the common case of all of the
services already being started, it still produces ERROR output lines
from supervisorctl when most of the services are already running.
Take the case where one worker is stopped:
```
$ supervisorctl stop zulip-workers:zulip_events_deferred_work
zulip-workers:zulip_events_deferred_work: stopped
$ ./scripts/start-server
2023-04-04 15:50:28,505 start-server: Running syntax and database checks
System check identified no issues (15 silenced).
2023-04-04 15:50:31,977 start-server: Starting Tornado process on port 9800
zulip-tornado:zulip-tornado-port-9800: ERROR (already started)
2023-04-04 15:50:32,283 start-server: Starting Tornado process on port 9801
zulip-tornado:zulip-tornado-port-9801: ERROR (already started)
2023-04-04 15:50:32,592 start-server: Starting django server
zulip-django: ERROR (already started)
2023-04-04 15:50:33,340 start-server: Starting workers
zulip-workers:zulip_events_deferred_work: started
zulip_deliver_scheduled_emails: ERROR (already started)
zulip_deliver_scheduled_messages: ERROR (already started)
process-fts-updates: ERROR (already started)
2023-04-04 15:50:34,659 start-server: Done!
Zulip started successfully!
```
More gracefully handle these cases:
```
$ ./scripts/start-server
2023-04-04 15:52:39,815 start-server: Running syntax and database checks
System check identified no issues (15 silenced).
2023-04-04 15:52:43,270 start-server: Starting Tornado process on port 9800
2023-04-04 15:52:43,287 start-server: zulip-tornado:zulip-tornado-port-9800 already started!
2023-04-04 15:52:43,287 start-server: Starting Tornado process on port 9801
2023-04-04 15:52:43,300 start-server: zulip-tornado:zulip-tornado-port-9801 already started!
2023-04-04 15:52:43,300 start-server: Starting django server
2023-04-04 15:52:43,316 start-server: zulip-django already started!
2023-04-04 15:52:43,793 start-server: Starting workers
zulip-workers:zulip_events_deferred_work: started
2023-04-04 15:52:45,111 start-server: Done!
Zulip started successfully!
```
Currently, the output from `start-server` if the server is already
running is potentially confusing, since it says ERROR several times:
```
$ ./scripts/start-server
2023-04-04 15:35:12,737 start-server: Running syntax and database checks
System check identified no issues (15 silenced).
2023-04-04 15:35:16,211 start-server: Starting Tornado process on port 9800
zulip-tornado:zulip-tornado-port-9800: ERROR (already started)
2023-04-04 15:35:16,528 start-server: Starting Tornado process on port 9801
zulip-tornado:zulip-tornado-port-9801: ERROR (already started)
2023-04-04 15:35:16,844 start-server: Starting django server
zulip-django: ERROR (already started)
2023-04-04 15:35:17,605 start-server: Starting workers
zulip_deliver_scheduled_emails: ERROR (already started)
zulip_deliver_scheduled_messages: ERROR (already started)
process-fts-updates: ERROR (already started)
2023-04-04 15:35:18,923 start-server: Done!
```
Catch the simple common case where all of the services are already
running, and output a clearer success message:
```
$ ./scripts/start-server
2023-04-04 15:39:52,367 start-server: Running syntax and database checks
System check identified no issues (15 silenced).
2023-04-04 15:39:55,857 start-server: Zulip is already started; nothing to do!
```
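A simplified sketch of how such an "already running" check might be
implemented; the parsing and the list of expected services are
illustrative, not the exact script code:
```
# Sketch: treat the server as already started if every expected
# supervisor process is reported RUNNING.
import subprocess

def all_services_running(expected_services: list[str]) -> bool:
    # `supervisorctl status` exits non-zero when anything is stopped,
    # so ignore the return code and inspect the per-process lines.
    output = subprocess.run(
        ["supervisorctl", "status"], capture_output=True, text=True
    ).stdout
    running = {
        line.split()[0] for line in output.splitlines() if " RUNNING " in line
    }
    return all(service in running for service in expected_services)
```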
If there is a syntax error in `settings.py`, `restart-server` should
provide a reasonable message about this. It did so prior to
af08bcdb3f, because any invocation of `./manage.py` without
`--skip-checks` will verify `settings.py`, among several other checks.
After af08bcdb3f, most restarts make no `./manage.py` calls at all, a
change which fa77be6e6c took further.
Add an explicit `./manage.py check` in the default case.
upgrade-zulip-stage-2 overrides this by passing `--skip-checks`, for
performance. This also means that `upgrade-zulip-from-git` itself
picks up the same `--skip-checks` flag, since it inherits the same
flag parsing, though that is perhaps of dubious utility.
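In outline, the default path gains something like the following (the
helper name and argument plumbing are simplified for illustration):
```
# Simplified sketch: run Django's system checks up front unless the
# caller (e.g. upgrade-zulip-stage-2) asked to skip them for speed.
import subprocess
import sys

def maybe_run_django_checks(skip_checks: bool) -> None:
    if skip_checks:
        return
    # `./manage.py check` imports settings.py and runs Django's system
    # checks, so a syntax error in settings surfaces here with a
    # readable error instead of an opaque failure later in the restart.
    subprocess.check_call([sys.executable, "./manage.py", "check"])
```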
These are expensive, and moving them to one explicit call early has
considerable time savings in the critical period:
```
$ hyperfine './manage.py fill_memcached_caches' './manage.py fill_memcached_caches --skip-checks'
Benchmark #1: ./manage.py fill_memcached_caches
Time (mean ± σ): 5.264 s ± 0.146 s [User: 4.885 s, System: 0.344 s]
Range (min … max): 5.119 s … 5.569 s 10 runs
Benchmark #2: ./manage.py fill_memcached_caches --skip-checks
Time (mean ± σ): 3.090 s ± 0.089 s [User: 2.853 s, System: 0.214 s]
Range (min … max): 2.950 s … 3.204 s 10 runs
Summary
'./manage.py fill_memcached_caches --skip-checks' ran
1.70 ± 0.07 times faster than './manage.py fill_memcached_caches'
```
Treating the restart as a start is important in reducing the critical
period during upgrades -- we call restart even when we suspect the
services are stopped, because puppet has a small possibility of
placing them in an indeterminate state. However, restart orders the
workers first, then tornado/django, which prolongs the outage.
Recognize when no services are currently started, and switch to acting
like a start, not a restart, which places tornado/django first.
In some instances (e.g. during upgrades) we run `restart-server` and
not `start-server`, even though we expect the server to most likely
already be stopped. Running `supervisorctl restart servicename` when
the service is stopped produces the perhaps-alarming message:
```
restart-server: Restarting servicename
servicename: ERROR (not running)
servicename: started
```
This may cause operators to worry that something is broken, when it is
not.
Check if the service is already running, and switch from "restart" to
"start" in cases where it is not.
The race condition here is safe -- if the service transitions from
stopped to started between the check and the `start` call, it will
merely output:
```
servicename: ERROR (already started)
```
...and continue, as that has exit status 0.
If the service transitions from started to stopped between the check
and the `restart` call, we are merely back in the current case, where
it outputs:
```
servicename: ERROR (not running)
servicename: started
```
In none of these cases does a call to "restart" fail to result in the
service being stopped and then started.
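In outline, the per-service logic becomes something like the sketch
below (names and parsing simplified for illustration):
```
# Sketch: prefer `start` over `restart` for services that supervisor
# reports as not running, avoiding the "ERROR (not running)" output.
import subprocess

def start_or_restart(service: str) -> None:
    # `status` prints one line per process, e.g.
    # "zulip-django  RUNNING  pid 1234, uptime 0:01:23".
    status = subprocess.run(
        ["supervisorctl", "status", service], capture_output=True, text=True
    ).stdout
    action = "restart" if " RUNNING " in status else "start"
    # The race in either direction is harmless (see above): the service
    # still ends up running, at worst with an extra ERROR line printed.
    subprocess.check_call(["supervisorctl", action, service])
```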
Services like go-camo and smokescreen are not stopped in stop-server,
since they are upgraded and restarted when puppet is applied. As such,
they also do not appear in start-server, despite the server relying on
them to be running in order to function properly.
Ensure those services are started, by starting them in start-server,
if they are configured in supervisor on the host.
Restarting the uwsgi processes by way of supervisor opens a window
during which nginx 502's all responses. uwsgi has a configuration
called "chain reloading" which allows for a rolling restart of the
uwsgi processes, such that only one process at a time is unavailable;
see the uwsgi documentation [1].
The tradeoff is that this requires that the uwsgi processes load the
libraries after forking, rather than before ("lazy apps"); in theory
this can lead to larger memory footprints, since they are not shared.
In practice, as Django defers much of the loading, this is not as much
of an issue. In a very basic test of memory consumption (measured by
total memory - free - caches - buffers; 6 uwsgi workers), both
immediately after restarting Django, and after requesting `/` 60 times
with 6 concurrent requests:
                  | Non-lazy   | Lazy app   | Difference
------------------+------------+------------+------------
Fresh             |  2,827,216 |  2,870,480 |    +43,264
After 60 requests |  3,332,284 |  3,409,608 |    +77,324
..................|............|............|............
Difference        |   +505,068 |   +539,128 |    +34,060
That is, "lazy app" loading increased the footprint pre-requests by
43MB, and after 60 requests grew the memory footprint by 539MB, as
opposed to non-lazy loading, which grew it by 505MB. Using uwsgi
"lazy app" loading does increase the memory footprint, but not by a
large percentage.
The other effect is that processes may be served by either old or new
code during the restart window. This may cause transient failures
when new frontend code talks to old backend code.
Enable chain-reloading during graceful, puppetless restarts, but only
if enabled via a zulip.conf configuration flag.
Fixes #2559.
[1]: https://uwsgi-docs.readthedocs.io/en/latest/articles/TheArtOfGracefulReloading.html#chain-reloading-lazy-apps
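One way to trigger such a rolling restart from restart-server is via
uwsgi's master FIFO; a sketch, assuming a FIFO has been configured in
the uwsgi settings (the path below is an assumption, and the actual
implementation may use a different trigger mechanism):
```
# Sketch of triggering a chain reload, assuming a configured master FIFO.
UWSGI_MASTER_FIFO = "/home/zulip/deployments/uwsgi-control"  # assumed path

def chain_reload_uwsgi() -> None:
    # Per the uwsgi docs, writing "c" to the master FIFO starts a chain
    # reload: workers restart one at a time, so at most one is down.
    with open(UWSGI_MASTER_FIFO, "wb", buffering=0) as fifo:
        fifo.write(b"c")
```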
Stopping both `zulip-tornado` and `zulip-tornado:*` causes errors on
deploys with tornado sharding, as the plain `zulip-tornado` service
does not exist.
Pass `zulip-tornado:*`, which matches both the plain `zulip-tornado`
case and the sharded `zulip-tornado:zulip-tornado-port-9800` cases.
Running `supervisorctl stop` or `supervisorctl restart` on a process
name which is not known is an error:
```
$ supervisorctl stop nonexistent-process
nonexistent-process: ERROR (no such process)
$ echo $?
1
```
ef6d0ec5ca moved
zulip_deliver_scheduled_* out of the `workers:` group. Since upgrades
run `stop-server` before applying puppet, the list of processes at
that time is from the previous version of Zulip, so may not have the
new `zulip_deliver_scheduled_*` names -- and the `stop-server` will
hence fail.
If the upgrade is not applying puppet, it will `restart-server`. At
that point, the old names will still be in the configuration, so
relying on the current `supervisorctl status` is the best gauge of what
exists to restart.
In short, only ever stop/start/restart the `zulip_deliver_scheduled_*`
processes if `supervisorctl status` knows about them already.
This command is part of a statsd infrastructure that we stopped
supporting years ago. Its only purpose for some time has been to
provide sample code for how the restart script might trigger a
notification to a graphing system, which doesn't justify maintaining
it.
Fixes part of #18898.
Staging and other hosts that are `zulip::app_frontend_base` but not
`zulip::app_frontend_once` do not have a
`/etc/supervisor/conf.d/zulip/zulip-once.conf`, and as such do not have
`zulip_deliver_scheduled_emails` or `zulip_deliver_scheduled_messages`;
supervisor will thus fail to reload.
Making the contents of `zulip-workers` contingent on if the server is
_also_ a `-once` server is complicated, and would involve using Concat
fragments, which severely limit readability.
Instead, expel those two from `zulip-workers`; this is somewhat
reasonable, since they use an entirely different codepath from
zulip_events_*, using the database rather than RabbitMQ for their
queuing.
When upgrading from a pre-4.0 release, scripts/stop-server logic would
check whether supervisord configuration files were present to
determine what it needed to restart, but only considered paths to
those files that were introduced in Zulip 4.0.
Fixes #18493.
During the upgrade process of a postgresql-only Zulip installation
(`puppet_classes = zulip::profile::postgresql` in
`/etc/zulip/zulip.conf`), either `scripts/start-server` or
`scripts/stop-server` fails, because they try to handle supervisor
services that are not available (e.g. Tornado), since only
`/etc/supervisor/conf.d/zulip/zulip_db.conf` is present and not
`/etc/supervisor/conf.d/zulip/zulip.conf`.
While this wasn't previously supported, it's a pretty reasonable thing
to do, and can be readily supported by just adding a few conditionals.
Matching the full process name (-x without -f) or full command
line (-xf) is less prone to mistakes like matching a random substring
of some other command line or pgrep matching itself.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Thumbor and tc-aws have been dragging their feet on Python 3 support
for years, and even the alphas and unofficial forks we’ve been running
don’t seem to be maintained anymore. Depending on these projects is
no longer viable for us.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Instead of taking the "onion" approach, where all services are
stopped, and then started back up again, default to a rolling restart
across all processes. This draws out how long the overall "restart"
takes, but minimizes the time that any of the services are down. This
minimizes user-visible impact and queue buildup.
In cases where speed is more important than minimal impact (for
example, there is already a current outage), a --less-graceful flag is
provided, which brings the services down more suddenly, and back up in
a still-correct order.
In general, `./scripts/restart-server` will already work in any
circumstance where the server is already stopped and needs to be
started. However, it will output a couple minor warnings, and it is
not readily obvious that it *will* work correctly.
Add an alias for `restart-server` named `start-server`, for
parallelism with `stop-server`, which omits the steps of
`restart-server` which would stop the server first.
The path which contains all of the Zulip supervisor files changed in
3ab9b31d2f to make it easier to purge
now-unwanted supervisor configuration files. However, the paths that
the Zulip upgrade process and restart-server look at were not
adjusted.
Fix the supervisor configuration file paths.
We can compute the intended number of processes from the sharding
configuration. In doing so, also validate that all of the ports are
contiguous.
This removes a discrepancy between `scripts/lib/sharding.py` and other
parts of the codebase about whether merely having a `[tornado_sharding]`
section is sufficient to enable sharding. Having behaviour which
changes merely based on whether an empty section exists is surprising.
This does require that a (presumably empty) `9800` configuration line
exist, but making that default explicit is useful.
After this commit, configuring sharding can be done by adding to
`zulip.conf`:
```
[tornado_sharding]
9800 = # default
9801 = other_realm
```
Followed by running `./scripts/refresh-sharding-and-restart`.
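A simplified sketch of how the ports and hence the process count might
be derived and validated (parsing details are illustrative, not the
exact `sharding.py` code):
```
# Simplified sketch: derive the Tornado ports from [tornado_sharding]
# and insist that they form one contiguous range.
import configparser

def get_tornado_ports(zulip_conf_path: str = "/etc/zulip/zulip.conf") -> list[int]:
    config = configparser.RawConfigParser()
    config.read(zulip_conf_path)
    if not config.has_section("tornado_sharding"):
        return [9800]  # unsharded default: a single Tornado on 9800
    # Each option name under [tornado_sharding] is a port; the number of
    # Tornado processes follows directly from this list.
    ports = sorted(int(port) for port in config.options("tornado_sharding"))
    if ports != list(range(ports[0], ports[0] + len(ports))):
        raise ValueError(f"Tornado ports are not contiguous: {ports}")
    return ports
```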
Making this include "zulip-tornado" makes it clearer in supervisor
logs. Without this, one only sees:
```
2020-09-14 03:43:13,788 INFO waiting for port-9807 to stop
2020-09-14 03:43:14,466 INFO stopped: port-9807 (exit status 1)
2020-09-14 03:43:14,469 INFO spawned: 'port-9807' with pid 24289
2020-09-14 03:43:15,470 INFO success: port-9807 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
```
`supervisorctl` starts and stops its arguments sequentially, in the
order they are passed[1]. Start them in the opposite order from the
order in which they were stopped -- this puts the dependencies first,
and the most core services (`zulip-django`) last.
While the only "dependency" here is currently thumbor, this sets us up
in case others are added later.
[1] https://github.com/Supervisor/supervisor/blob/master/supervisor/supervisorctl.py#L782
Fixes #2665.
Regenerated by tabbott with `lint --fix` after a rebase and change in
parameters.
Note from tabbott: In a few cases, this converts technical debt in the
form of unsorted imports into different technical debt in the form of
our largest files having very long, ugly import sequences at the
start. I expect this change will increase pressure for us to split
those files, which isn't a bad thing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>