This adds a --skip-restart flag, which puts `deployments/next` into a
state where it can be restarted into, but holds off on performing that
restart.
This requires many of the same guarantees as `--skip-tornado`, in
terms of there being no Puppet or database schema changes between the
versions. Enforce those guarantees for `--skip-restart` as well, and
broaden both flags to also prevent other, less common changes which
might nonetheless affect the other deploy.
Because Tornado and Django use memcached as a shared cache for
checking session information, they must agree on the prefix used to
store those values.
Subsequent commits will work to ensure that it is always _safe_ to
share that cache.
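To illustrate the shared-prefix requirement, here is a generic
Django-style sketch -- not necessarily how Zulip wires this up, and
the file path shown is hypothetical -- in which both processes read
the prefix from one shared location so they construct identical cache
keys:
```
# Hypothetical sketch: the Django and Tornado processes read the same
# prefix from a shared file, so session lookups hit identical
# memcached keys regardless of which process performs them.
from pathlib import Path

PREFIX_FILE = Path("/home/zulip/deployments/current/var/cache_prefix")  # hypothetical path
KEY_PREFIX = PREFIX_FILE.read_text().strip() if PREFIX_FILE.exists() else ""

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "127.0.0.1:11211",
        "KEY_PREFIX": KEY_PREFIX,  # must match across Tornado and Django
    }
}
```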
These system checks are expensive, and moving them to one explicit
call, made early, yields considerable time savings in the critical
period:
```
$ hyperfine './manage.py fill_memcached_caches' './manage.py fill_memcached_caches --skip-checks'
Benchmark #1: ./manage.py fill_memcached_caches
Time (mean ± σ): 5.264 s ± 0.146 s [User: 4.885 s, System: 0.344 s]
Range (min … max): 5.119 s … 5.569 s 10 runs
Benchmark #2: ./manage.py fill_memcached_caches --skip-checks
Time (mean ± σ): 3.090 s ± 0.089 s [User: 2.853 s, System: 0.214 s]
Range (min … max): 2.950 s … 3.204 s 10 runs
Summary
'./manage.py fill_memcached_caches --skip-checks' ran
1.70 ± 0.07 times faster than './manage.py fill_memcached_caches'
```
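A sketch of how a deploy step might take advantage of this -- run the
checks once, explicitly and early, then pass `--skip-checks` to the
management commands invoked during the critical window (the commands
shown are just the ones from the benchmark above):
```
import subprocess

MANAGE = "./manage.py"

# Run Django's system checks exactly once, early, while the old
# version is still serving traffic.
subprocess.check_call([MANAGE, "check"])

# Later, inside the critical restart window, skip the now-redundant
# checks on each individual command.
subprocess.check_call([MANAGE, "fill_memcached_caches", "--skip-checks"])
```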
Treating the restart as a start is important in reducing the critical
period during upgrades -- we call restart even when we suspect the
services are stopped, because puppet has a small chance of having left
them in an indeterminate state. However, a restart brings up the
workers first, then tornado/django, which prolongs the outage.
Recognize when no services are currently started, and switch to acting
like a start, not a restart, which places tornado/django first.
This hides ugly output if the services were already stopped:
```
2022-03-25 23:26:04,165 upgrade-zulip-stage-2: Stopping Zulip...
process-fts-updates: ERROR (not running)
zulip-django: ERROR (not running)
zulip_deliver_scheduled_emails: ERROR (not running)
zulip_deliver_scheduled_messages: ERROR (not running)
Zulip stopped successfully!
```
Being able to skip shelling out to `supervisorctl` entirely, when all
services are already stopped, is also a significant performance
improvement.
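Roughly, the decision looks like the following sketch -- the
`start_order`/`restart_order` helpers and the `ALL_SERVICES` list are
hypothetical stand-ins for Zulip's actual restart tooling:
```
import subprocess

def any_services_running(services: list[str]) -> bool:
    # `supervisorctl status` exits non-zero when some processes are
    # not RUNNING, so inspect the output rather than the return code.
    output = subprocess.run(
        ["supervisorctl", "status", *services],
        capture_output=True, text=True, check=False,
    ).stdout
    return any(" RUNNING " in line for line in output.splitlines())

# ALL_SERVICES, restart_order, and start_order are hypothetical names.
if any_services_running(ALL_SERVICES):
    order = restart_order(ALL_SERVICES)  # workers first, then tornado/django
else:
    # Nothing is running: behave like a start, bringing tornado/django
    # up first to shorten the outage, and skip the supervisorctl stops.
    order = start_order(ALL_SERVICES)
```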
It’s only used by jsonschema >= 4.2.0, but current semgrep holds
jsonschema ~= 3.2:
https://github.com/returntocorp/semgrep/issues/4739
Not bothering to bump PROVISION_VERSION because it’s not important
whether this backport is installed.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Adds `create_web_public_stream_policy` to the `get-events` API
documentation for the `realm op:update` event.
Also fixes the changelog entries for feature levels 103 and 104, which
either relate to these API documentation changes or fix an error in
references to the undocumented endpoint `PATCH /realm`.
The production CI image starts `rabbitmq-server` but does not stop it,
which leaves a stale `/var/run/rabbitmq/pid` file in the image.
`rabbitmqctl wait --timeout 600 /var/run/rabbitmq/pid`, which is run
after starting the rabbitmq node, reads the PID file and waits for the
PID to be running, and for rabbitmq's port to be responding to pings.
If it reads an old PID file before the new PID is written, it
aborts (all but the first and last lines are output from `rabbitmqctl
wait` that is hidden by `/etc/init.d/rabbitmq-server`):
```
* Starting RabbitMQ Messaging Server rabbitmq-server
Waiting for pid file '/var/run/rabbitmq/pid' to appear
pid is 341
Waiting for erlang distribution on node 'rabbit@fc8f64d6acdb' while OS process '341' is running
Error:
process_not_running
* FAILED - check /var/log/rabbitmq/startup_{log, _err}
```
If it failed, the `production-upgrade` script tried to start
`rabbitmq` again -- despite it still starting in the
background. These two attempts conflicted, and often one or both
failed.
Stop `rabbitmq-server` when building the image, which removes the
stale PID file.
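The failure boils down to checking a stale PID against the process
table; a simplified Python illustration of the behaviour described
above (not rabbitmqctl's actual implementation):
```
import os

def wait_for_rabbitmq(pid_file: str) -> None:
    # If the PID file still holds the PID baked into the image, that
    # process no longer exists, and the wait aborts with
    # process_not_running before the new PID is ever written.
    pid = int(open(pid_file).read())
    try:
        os.kill(pid, 0)  # signal 0: existence check only, sends nothing
    except ProcessLookupError:
        raise RuntimeError("process_not_running")
    # ...otherwise, keep polling until the node responds to pings...
```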
We remove the call to get_occupied_streams that fetched the occupied
streams before unsubscribing, because we already know which streams
can become vacant: the ones from which users are being unsubscribed.
We can instead take that list of streams directly and compute the
newly vacant ones by checking which of them are absent from
get_occupied_streams called after unsubscribing the users.
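Roughly, the new approach looks like this (variable and helper names
are illustrative, not the exact ones in zerver):
```
# Streams the users are being unsubscribed from are the only
# candidates for becoming vacant, so a "before" snapshot is not needed.
candidate_stream_ids = {stream.id for stream in streams_being_unsubscribed}

# ...perform the unsubscription...

occupied_ids_after = {stream.id for stream in get_occupied_streams(realm)}
vacant_stream_ids = candidate_stream_ids - occupied_ids_after
```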
This is a reprise of c97162e485, but for the case where certbot
certs are no longer in use by way of enabling `http_only` and letting
another server handle TLS termination.
Fixes: #22034.
This allows system-level configuration to be done by `apt-get install`
of nginx modules, which place their load statements in this directory.
The initial import of the stock nginx config in ed0cb0a5f8 omitted
this include -- potentially in an effort to reduce the memory
footprint of the server.
The default nginx install enables:
50-mod-http-auth-pam.conf
50-mod-http-dav-ext.conf
50-mod-http-echo.conf
50-mod-http-geoip2.conf
50-mod-http-geoip.conf
50-mod-http-image-filter.conf
50-mod-http-subs-filter.conf
50-mod-http-upstream-fair.conf
50-mod-http-xslt-filter.conf
50-mod-mail.conf
50-mod-stream.conf
While Zulip doesn't actively use any of these, simply loading them
likely does no harm -- they are loaded into every default nginx
install.
Having the `modules-enabled` include allows easier extension of the
server, as neither of the existing wildcard
includes (`/etc/nginx/conf.d/*.conf` and
`/etc/nginx/zulip-include/app.d/*.conf`) is in the top-level context,
and thus neither is able to load modules.
We directly pass the user group object to get_recursive_subgroups, as
we already have the object in the caller. We can add a separate
function which accepts an id as a parameter in the future, if
required.
This commit renames the existing_subgroups variable to
existing_direct_subgroup_ids in the add_subgroups_to_group_backend and
remove_subgroups_from_group_backend functions, for better readability.
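Schematically, the signature change is (a hedged sketch; the actual
definitions in zerver may differ):
```
# Before (illustrative): the caller held the object but passed its id,
# forcing the helper to re-fetch the row.
subgroups = get_recursive_subgroups(user_group.id)

# After (illustrative): pass the object we already have; an id-based
# variant can be added later if a caller ever needs it.
subgroups = get_recursive_subgroups(user_group)
```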
Initializing the Zulip client opens a long-lived TCP connection due to
connection pooling in urllib3. In GitHub Actions, the network kills
such connections after ~270s, making the later `send_message` call
fail.
Use a single call to `zulip.Client()` early on to verify the
credentials, and do not cache the resulting client object. Instead,
re-create it during the final step when it is needed, so we do not run
afoul of bad TCP connection state.
This would ideally be fixed via connection keepalive or retry at the
level of the Zulip module.
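A sketch of the resulting pattern, assuming the standard
python-zulip-api bindings (the two wrapper functions here are
hypothetical names for the script's steps):
```
import zulip

def verify_credentials() -> None:
    # Early step: construct a client only to confirm the credentials
    # work, then discard it along with its pooled TCP connection.
    zulip.Client(config_file="~/.zuliprc").get_profile()

def announce(message: str) -> None:
    # Final step, possibly many minutes later: build a fresh client so
    # we never reuse a connection the network may have killed (~270s).
    client = zulip.Client(config_file="~/.zuliprc")
    client.send_message(
        {"type": "stream", "to": "announce", "topic": "releases", "content": message}
    )
```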
54b6a83412 fixed the typo introduced in 49ad188449, but that did not
clean up existing installs which already had the file with the wrong
name.
Remove the file with the typo'd name, so two jobs do not race, and fix
the typo in the comment.
Django caches some information on HttpRequest objects, including the
headers dict, under the assumption that requests won’t be reused.
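For instance, on a plain Django HttpRequest (an illustration of the
cached-property behaviour, not code from this commit):
```
from django.http import HttpRequest

request = HttpRequest()
request.META["HTTP_X_EXAMPLE"] = "1"
request.headers["X-Example"]  # computed from META and cached

request.META["HTTP_X_EXAMPLE"] = "2"
request.headers["X-Example"]  # still "1": the headers dict was cached
```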
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The top-level `chdir` setting only does the chdir once, at initial
`uwsgi` startup time. Rolling restarts, however, require
that `uwsgi` pick up the _new_ value of the `current` directory, and
start new workers in that directory -- as currently implemented,
rolling restarts cannot restart into newer versions of the code, only
the same one in which they were started.
Use [configurable hooks][1] to execute the `chdir` after every fork.
This causes the following behaviour:
```
Thu May 12 18:56:55 2022 - chain reload starting...
Thu May 12 18:56:55 2022 - chain next victim is worker 1
Gracefully killing worker 1 (pid: 1757689)...
worker 1 killed successfully (pid: 1757689)
Respawned uWSGI worker 1 (new pid: 1757969)
Thu May 12 18:56:56 2022 - chain is still waiting for worker 1...
running "chdir:/home/zulip/deployments/current" (post-fork)...
Thu May 12 18:56:57 2022 - chain is still waiting for worker 1...
Thu May 12 18:56:58 2022 - chain is still waiting for worker 1...
Thu May 12 18:56:59 2022 - chain is still waiting for worker 1...
WSGI app 0 (mountpoint='') ready in 3 seconds on interpreter 0x55dfca409170 pid: 1757969 (default app)
Thu May 12 18:57:00 2022 - chain next victim is worker 2
[...]
```
...and so forth down the line of processes. Each process is correctly
started in the _current_ value of `current`, and thus picks up the
correct code.
[1]: https://uwsgi-docs.readthedocs.io/en/latest/Hooks.html
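The same idea can be illustrated with uWSGI's Python post-fork
decorator (a sketch only; the commit itself uses the ini-level
configurable hooks above, whose `chdir:` action is visible in the
log):
```
import os

from uwsgidecorators import postfork

DEPLOY_CURRENT = "/home/zulip/deployments/current"

@postfork
def chdir_to_current_deploy():
    # Runs in each newly forked worker, so a chain-reloaded worker
    # starts in whatever `current` points to now, not wherever the
    # master process was started.
    os.chdir(os.path.realpath(DEPLOY_CURRENT))
```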
Previously, we were marking messages as read for all of the streams
passed to bulk_remove_subscriptions, even if the user was not
subscribed to some of them, and those streams would ideally not have
any unread messages. This code was added in 766511e519.
This commit changes the code to only mark messages of actually
unsubscribed streams as read.
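Conceptually (names are approximate; the real logic lives in
bulk_remove_subscriptions):
```
# Only streams the user was actually subscribed to can hold unread
# messages, so restrict the mark-as-read pass to those.
# user_subscribed_streams and streams_passed_in are illustrative names.
subscribed_stream_ids = {stream.id for stream in user_subscribed_streams}
streams_to_mark_read = [
    stream for stream in streams_passed_in if stream.id in subscribed_stream_ids
]
# ...then mark messages as read only in streams_to_mark_read...
```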