Black 23 enforces some slightly more specific rules about empty line
counts and redundant parenthesis removal, but the result is still
compatible with Black 22.
(This does not actually upgrade our Python environment to Black 23
yet.)
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The `pg_upgrade` tool uses `pg_dump` as an internal step, and verifies
that the version of `pg_upgrade` is exactly the same as the
version of the PostgreSQL server it is upgrading to. A mismatch (even
in packaging versions) leads to it aborting:
```
/usr/lib/postgresql/14/bin/pg_upgrade -b /usr/lib/postgresql/13/bin -B /usr/lib/postgresql/14/bin -p 5432 -P 5435 -d /etc/postgresql/13/main -D /etc/postgresql/14/main --link
Finding the real data directory for the source cluster ok
Finding the real data directory for the target cluster ok
check for "/usr/lib/postgresql/14/bin/pg_dump" failed: incorrect version: found "pg_dump (PostgreSQL) 14.6 (Ubuntu 14.6-0ubuntu0.22.04.1)", expected "pg_dump (PostgreSQL) 14.6 (Ubuntu 14.6-1.pgdg22.04+1)"
Failure, exiting
```
Explicitly upgrade `postgresql-client` at the same time we upgrade
`postgresql` itself, so their versions match.
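A minimal sketch of the idea (package names for PostgreSQL 14 are shown for illustration; the real script derives them from the target version):
```
# Upgrade the server and client packages in one step, so their
# packaging versions cannot diverge.
apt-get install --only-upgrade -y postgresql-14 postgresql-client-14
```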
Fixes: #24192
This was last really used in d7a3570c7e, in 2013, when it was
`/home/humbug/logs`.
Repoint the one obscure piece of tooling that writes there, and remove
the places that created it.
`check_version` in `install-yarn` had the rather careful check that
the yarn it installed into `/usr/bin/yarn` was the yarn which was
first in the user's `$PATH`. This caused problems when the user had a
pre-existing `/usr/local/bin/yarn`; however, those problems are
limited to the `install-yarn` script itself, since nearly all
calls to yarn from Zulip's code already hardcode the `/srv/zulip-yarn`
location, and do not depend on what is in `$PATH`.
Remove the checks in `install-yarn` that depend on the local `$PATH`,
and stop installing our `yarn` into it. We also adjust the two
callsites which did not specify the full path to `yarn`, to use
`/srv/zulip-yarn`.
Fixes: #23993
Co-authored-by: Alex Vandiver <alexmv@zulip.com>
During installation on a new host, `create-database` attempts to
verify that there isn't a bunch of data already in the database which
it is about to drop and recreate. In the most common case, this
statement emits a scary-looking warning, since the database does not
exist yet:
```
+ /home/zulip/deployments/current/scripts/setup/create-database
+ POSTGRES_USER=postgres
++ crudini --get /etc/zulip/zulip.conf postgresql database_name
++ echo zulip
+ DATABASE_NAME=zulip
++ crudini --get /etc/zulip/zulip.conf postgresql database_user
++ echo zulip
+ DATABASE_USER=zulip
++ cd /
++ su postgres -c 'psql -v ON_ERROR_STOP=1 -Atc '\''SELECT COUNT(*) FROM zulip.zerver_message;'\'' zulip'
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: database "zulip" does not exist
```
Because we are attempting to gracefully handle the case where the
database does not exist yet, we also continue (and drop the database)
in other, less expected cases -- for instance, if the database contains a
schema we do not expect.
Explicitly check for the database existence first, and once we verify
that, allow any further failures in the `SELECT COUNT(*)` to abort
`create-database`. This serves the dual purpose of hiding the "FATAL"
error for the common case when the database does not exist, as well as
preventing the database from being dropped if anything else goes awry.
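A rough sketch of the shape of that check (the exact queries and quoting in `create-database` may differ):
```
# The database not existing is the common, expected case; check for it
# explicitly rather than letting the row count query print a FATAL error.
if [ "$(su postgres -c "psql -Atc \"SELECT COUNT(*) FROM pg_database WHERE datname = 'zulip'\"")" -gt 0 ]; then
    # Once we know the database exists, any failure here should abort,
    # rather than being treated as "no data, safe to drop".
    su postgres -c "psql -v ON_ERROR_STOP=1 -Atc 'SELECT COUNT(*) FROM zulip.zerver_message;' zulip"
fi
```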
If a previous attempt at an upgrade failed for some reason, the new
PostgreSQL may be installed, and the conversion will succeed, but the
new PostgreSQL daemon will not be running (Puppet does not force it to
start). This causes the upgrade to fail when analyzing statistics,
since the daemon isn't running.
Explicitly start the new PostgreSQL; this does nothing in most cases,
but will provide better resiliency when recovering from previous
partial upgrades.
Some terminals (e.g. ssh from OS X) set an invalid locale, which
causes the `pg_upgradecluster` call late in the upgrade to fail.
Force a known locale, for consistency. This mirrors the settings in
upgrade-zulip-stage-2, set in 11ab545f3b, and its subsequent
cleanups in 64c608a51a, ee0f4ca330, and eda9ce2364.
‘exit’ is pulled in for the interactive interpreter as a side effect
of the site module; this can be disabled with python -S and shouldn’t
be relied on.
Also, use the NoReturn type where appropriate.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Starting with wal-g 2.0.1, they provide `aarch64` assets[^1].
Effectively revert d7b59c86ce, and use
the pre-built binary for `aarch64` rather than spend a bunch of space
and time having to build it from source.
[^1]: https://github.com/wal-g/wal-g/releases/tag/v2.0.1
If there is a syntax error in `settings.py`, `restart-server` should
provide a reasonable message about this. It did so prior to
af08bcdb3f, because any invocation of `./manage.py` without
`--skip-checks` will verify `settings.py`, among several other checks.
After af08bcdb3f, there are no `./manage.py` calls in most restarts,
which fa77be6e6c took further.
Add an explicit `./manage.py check` in the default case.
upgrade-zulip-stage-2 overrides this by passing `--skip-checks`, for
performance. This also means that `upgrade-zulip-from-git` itself
picks up the same `--skip-checks` flag, since it inherits the same
flag parsing, though that is perhaps of dubious utility.
Although Node.js 18 is not the active LTS release for another 3 weeks,
the Node.js 16 end-of-life date was moved forward to September 2023
(https://nodejs.org/en/blog/announcements/nodejs16-eol/), so it seems
prudent to switch now.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes “E713 Test for membership should be `not in`” found by ruff (now
that I’ve fixed it not to ignore scripts lacking a .py extension).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes “E713 Test for membership should be `not in`” found by
ruff (https://github.com/charliermarsh/ruff).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
One should now be able to configure a regex by appending _regex to the
port number:
[tornado_sharding]
9802_regex = ^[l-p].*\.zulipchat\.com$
Signed-off-by: Anders Kaseorg <anders@zulip.com>
https://nginx.org/en/docs/http/ngx_http_map_module.html
Since Puppet doesn’t manage the contents of nginx_sharding.conf after
its initial creation, it needs to be renamed so we can give it
different default contents.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This implements `get_mandatory_secret`, which ensures SHARED_SECRET is
set when we hit zerver.decorator.authenticate_notify. To avoid getting
ZulipSettingsError when setting up the secrets, we set an environment
variable DISABLE_MANDATORY_SECRET_CHECK to skip the check and default
its value to an empty string.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This has never actually been used -- and does not make sense with the
check-all-queues-at-once model switched to in 88a123d5e0. The
Tornado processes are the only ones we expect to be non-1, and since
they were added in 3f03dcdf5e the right number has been read from
config, not passed as an argument.
`postgresql-14.4` is a notable upgrade in the PostgreSQL series, as it
fixes potential database corruption from `CREATE INDEX CONCURRENTLY`
statements which are run while rows are modified[1]. However, it also
requires an upgrade from `libllvm9` to `libllvm10`, which means it is
not installed by a mere `apt-get upgrade`.
Add the `--with-new-pkgs` flag to all of the potentially relevant
`apt-get upgrade` calls, so that this (and similar) packages are
upgraded successfully.
[1]: https://www.postgresql.org/docs/release/14.4/
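For example (the exact invocation in the scripts may differ slightly):
```
# Without --with-new-pkgs, apt-get holds postgresql-14 back rather than
# installing its new libllvm10 dependency.
apt-get -y upgrade --with-new-pkgs
```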
30457ecd02 removed the `--mirror` from
initial clones, but did not add back `--bare`, which `--mirror`
implies. This leads to `/srv/zulip.git` having a working tree in it,
with a `/srv/zulip.git/.git` directory.
This is mostly harmless, and since the bug was recent, not worth
introducing additional complexity into the upgrade process to handle.
Calling `git clone --bare`, however, would clone the refs into
`refs/heads/`, not the `refs/remotes/origin/` we want. Instead, use
`git init --bare`, followed by `git remote add origin`. The remote
will be fetched by the usual `git fetch --all --prune` which is below.
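Roughly, the new-clone path looks like this (the remote URL is illustrative; it is configurable in practice):
```
# A bare repository whose fetches land in refs/remotes/origin/, rather
# than the refs/heads/ layout that `git clone --bare` would produce.
git init --bare /srv/zulip.git
git -C /srv/zulip.git remote add origin https://github.com/zulip/zulip.git
git -C /srv/zulip.git fetch --all --prune
```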
While the `remote.origin.mirror` boolean being set is a very good
proxy for having been cloned with `--mirror`, it is technically only used
when pushing into the remote[1]. What we care about is if fetches
from this remote will overwrite `refs/heads/`, or all of `refs/` --
the latter of which is most likely, from having run `git clone
--bare`.
Detect either of these fetch refspecs, and not the mirror flag. We
let the upgrade process error out if `remote.origin.fetch` is unset,
as that represents an unexpected state. We ignore failures to unset
the `remote.origin.mirror` flag, in case it is not set already.
[1]: https://git-scm.com/docs/git-config#Documentation/git-config.txt-remoteltnamegtmirror
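A sketch of that detection (the refspec patterns shown are for illustration):
```
# Fail (under `set -e`) if no fetch refspec is configured at all.
fetch_refspec="$(git -C /srv/zulip.git config --get remote.origin.fetch)"
case "$fetch_refspec" in
    "+refs/*:refs/*" | "+refs/heads/*:refs/heads/*")
        echo "Fetches from origin clobber local refs; repairing" ;;
esac
# The mirror flag may well not be set; ignore a failure to unset it.
git -C /srv/zulip.git config --unset remote.origin.mirror || true
```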
The local `/srv/zulip.git` directory has been cloned with `--mirror`
since it was first created as a local cache in dc4b89fb08. This
made some sense at the time, since it was purely a cache of the
remote, and not a home to local branches of its own.
That changed in 3f83b843c2, when we began using `git worktree`,
which caused the `deployment-...` branches to begin being stored in
`/srv/zulip.git`. This caused intermixing of local and remote
branches.
When 02582c6956 landed, the addition of `--prune` caused all but the
most recent deployment branch to be deleted upon every fetch --
leaving previous deployments with non-existent branches checked out:
```
zulip@example-prod-host:~/deployments/last$ git status
On branch deployment-2022-04-15-23-07-55
No commits yet
Changes to be committed:
(use "git rm --cached <file>..." to unstage)
new file: .browserslistrc
new file: .codecov.yml
new file: .codespellignore
new file: .editorconfig
[...snip list of every file in repo...]
```
Switch `/srv/zulip.git` to no longer be a `--mirror` cache of the
origin. We reconfigure the remote to drop `remote.origin.mirror`, and
delete all refs under `refs/pulls/` and `refs/heads/`, while
preserving any checked-out branches. `refs/pulls/`, if the remote is
the canonical upstream, contains _tens of thousands_ of refs, so
pruning those refs trims off 20% of the repository size.
Those savings require a `git gc --prune=now`, otherwise the dangling
objects are ejected from the packfiles, which would balloon the
repository up to more than three times its previous size. Repacking
the repository is reasonable, in general, after removing such a large
number of refs -- and the `--prune=now` is safe and will not lose
data, as the `--mirror` was good at ensuring that the repository could
not be used for any local state.
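In outline, the repair looks something like the following (the handling that preserves checked-out deployment branches is omitted for brevity):
```
git -C /srv/zulip.git config --unset remote.origin.mirror || true
git -C /srv/zulip.git config remote.origin.fetch '+refs/heads/*:refs/remotes/origin/*'
# Drop the mirrored refs; refs/pulls/ alone is tens of thousands of entries.
git -C /srv/zulip.git for-each-ref --format='%(refname)' 'refs/pulls/' 'refs/heads/' |
    while read -r ref; do git -C /srv/zulip.git update-ref -d "$ref"; done
# Repack immediately, so the now-dangling objects are not ejected from
# the packfiles as loose objects.
git -C /srv/zulip.git gc --prune=now
```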
The refname in the upgrade process was previously resolved from the
union of local and remote refs, since they were in the same namespace.
We instead now only resolve arguments as tags, then origin branches;
this means that stale local branches will be skipped. Users who want
to deploy from local branches can use `--remote-url=.`.
Because the `scripts/lib/upgrade-zulip-from-git` file is "stage 1" and
run from the old version's code, this will take two invocations of
`upgrade-zulip-from-git` to take effect.
Fixes #21901.
This adds a `--skip-restart` flag which leaves `deployments/next` in a state
where it can be restarted into, but holds off on conducting that
restart.
This requires many of the same guarantees as `--skip-tornado`, in
terms of there being no Puppet or database schema changes between the
versions. Enforce those with `--skip-restart`, and also broaden both
flags to prevent other, less common changes which nonetheless
potentially might affect the other deploy.
Because Tornado and Django use memcached as a shared cache for
checking session information, they must agree on the prefix used to
store those values.
Subsequent commits will work to ensure that it is always _safe_ to
share that cache.
These are expensive, and moving them to one explicit call early has
considerable time savings in the critical period:
```
$ hyperfine './manage.py fill_memcached_caches' './manage.py fill_memcached_caches --skip-checks'
Benchmark #1: ./manage.py fill_memcached_caches
Time (mean ± σ): 5.264 s ± 0.146 s [User: 4.885 s, System: 0.344 s]
Range (min … max): 5.119 s … 5.569 s 10 runs
Benchmark #2: ./manage.py fill_memcached_caches --skip-checks
Time (mean ± σ): 3.090 s ± 0.089 s [User: 2.853 s, System: 0.214 s]
Range (min … max): 2.950 s … 3.204 s 10 runs
Summary
'./manage.py fill_memcached_caches --skip-checks' ran
1.70 ± 0.07 times faster than './manage.py fill_memcached_caches'
```
Treating the restart as a start is important in reducing the critical
period during upgrades -- we call restart even when we suspect the
services are stopped, because puppet has a small possibility of
placing them in indeterminate state. However, restart orders the
workers first, then tornado/django, which prolongs the outage.
Recognize when no services are currently started, and switch to acting
like a start, not a restart, which places tornado/django first.
This hides ugly output if the services were already stopped:
```
2022-03-25 23:26:04,165 upgrade-zulip-stage-2: Stopping Zulip...
process-fts-updates: ERROR (not running)
zulip-django: ERROR (not running)
zulip_deliver_scheduled_emails: ERROR (not running)
zulip_deliver_scheduled_messages: ERROR (not running)
Zulip stopped successfully!
```
Being able to skip shelling out to `supervisorctl` when all services
are already stopped is also a significant performance improvement.
These have more accurate timestamps, and have user information --
but are harder to parse, and will not show requests when Django or
Tornado is stopped.
This is a script to search nginx log files by server hostname or
client IP address, and output matching lines, all while skipping
common and less-interesting request lines.
As a consequence:
• Bump minimum supported Python version to 3.8.
• Move Vagrant environment to Ubuntu 20.04, which has Python 3.8.
• Move CI frontend tests to Ubuntu 20.04.
• Move production build test to Ubuntu 20.04.
• Move 3.4 upgrade test to Ubuntu 20.04.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We previously used restart-server if puppet was run, as a nod to the
fact that `supervisorctl reread && supervisorctl update` will _start_
service groups that were modified, even if they were previously
stopped; this is because they are marked as `autostart=true`, which is
honored on service change.
However, upgrades want to run while there are no services running. If
puppet is run, explicitly set the server as potentially being "up", so
that a `shutdown_server()` before migrations, if they exist, will stop
services.
7c4293a7d3 switched to checking if the
service was already running, and using `supervisorctl start` if it was
not.
Unfortunately, `list_supervisor_processes("zulip-tornado:*")` did not
include `zulip-tornado`, and as such a non-sharded process was always
considered to _not_ be running, and was thus started, not restarted.
Starting an already-started service is a no-op, and thus non-sharded
tornado processes were never restarted.
The observed behaviour is that requests to the tornado process attempt
to load the user from the cache, with a different prefix from Django,
and immediately invalidate the session and eject the user back to the
login page.
Fix the `list_supervisor_processes` logic to match without the
trailing `:*`.
We had skipped these in #14693 so we could keep generating a friendly
error on Python 3.5, but we gave that up in #19801.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Since wal-g does not provide binaries for aarch64, build them from
source. While also building them from source for amd64 would better
ensure that the build process is tested, the build process takes 7min and 700M of
temp files, which is an unacceptable cost; we thus only build on
aarch64.
Since the wal-g build process uses submodules, which are not in the
GitHub export, we clone the full wal-g repository. Because the
repository is relatively small, we clone it anew on each new version,
rather than attempt to manage the remotes.
Fixes #21070.
stop-server and restart-server address all services which talk to the
database, and are thus more correct than restarting or stopping
everything in supervisor.
This is possible now that the previous commit ensures that the zulip
user can read the zulip installation directory during
`create-database`; previously, that directory was still owned by root
when `create-database` was run, whereas now it is in
`~zulip/deployments/`.
Move database creation to immediately before database initialization;
this means it happens in a directory readable by the `zulip` user, as
well as placing it alongside similar operations. It removes the check
for the `zulip::postgresql_common` Puppet class; instead it keeps the
check for `--no-init-db`, and switches to require
`zulip::app_frontend_base`. This is a behavior change for any install
of `zulip::postgresql_common`-only classes, but that is not a common
form -- and such installs likely already pass `--no-init-db` because
they are warm spare replicas.
As a result, all non-`zulip::app_frontend_base` installs now skip
database initialization, even without `--no-init-db`. This is clearly
correct for, e.g. Redis-only hosts, and makes clearer that the
frontend, not the database host, is responsible for database
initialization.
rabbitmqctl ping only checks that the Erlang process is registered
with epmd. There’s a window after that where the rabbit app is still
starting inside it.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This was added in c770bdaa3a, and we
have not added any realm-internal bots since
c770bdaa3a.
Speed up the critical period during upgrades by skipping this step.
Using an absolute `ZULIP_SCRIPTS` path when computing sha256sums
results in a set of hashes which varies based on the path that the
script is called as. This means that each deploy _always_ has
`setup-apt-repo --verify` fail, since it is a different base path.
Make all paths passed to sha256sum be relative to the repository root,
ensuring they can be compared across runs.
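A sketch of the difference (the file path is illustrative):
```
# Hash paths relative to the repository root, rather than absolute ones,
# so the recorded digests do not depend on where the checkout lives.
cd "$ZULIP_SCRIPTS/.."
sha256sum scripts/lib/setup-apt-repo
```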
In some instances (e.g. during upgrades) we run `restart-server` and
not `start-server`, even though we expect the server to most likely
already be stopped. `supervisorctl restart servicename` if the
service is stopped produces the perhaps-alarming message:
```
restart-server: Restarting servicename
servicename: ERROR (not running)
servicename: started
```
This may cause operators to worry that something is broken, when it is
not.
Check if the service is already running, and switch from "restart" to
"start" in cases where it is not.
The race condition here is safe -- if the service transitions from
stopped to started between the check and the `start` call, it will
merely output:
```
servicename: ERROR (already started)
```
...and continue, as that has exit status 0.
If the service transitions from started to stopped between the check
and the `restart` call, we are merely back in the current case, where
it outputs:
```
servicename: ERROR (not running)
servicename: started
```
In none of these cases does a call to "restart" fail to result in the
service being stopped and then started.
Ubuntu 22.04 pushed a post-feature-freeze update to Python 3.10,
breaking virtual environments in a Debian patch
(https://bugs.launchpad.net/ubuntu/+source/python3.10/+bug/1962791).
Also, our antique version of Tornado doesn’t work in 3.10, and we’ll
need to do some work to upgrade that.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We can safely ignore the presence of the extra tables that could be
left behind in the database from when we had these installed (before
Zulip 1.7.0 and 2.0.0, respectively).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This was not needed for OpenSSL ≥ 1.1.1 (all our supported platforms),
and breaks with OpenSSL ≥ 3.0.0 (Ubuntu 22.04). It was removed from
the upstream configuration file too: https://bugs.debian.org/990228.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This was only used for upgrading from Zulip < 1.9.0, which is no
longer possible because Zulip < 2.1.0 had no common supported
platforms with current main.
If we ever want this optimization for a future migration, it would be
better implemented using Django merge migrations.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Attempting to "upgrade" from `main` to 4.x should abort; Django does
not prevent running old code against the new database (though it
likely errors at runtime), and `./manage.py migrate` from the old
version during the "upgrade" does not downgrade the database, since
the migrations are entirely missing in that directory and thus do not
get reversed.
Compare the list of applied migrations to the list of on-disk
migrations, and abort if there are applied migrations which are not
found on disk.
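Conceptually (a simplified sketch, not the actual implementation), the check amounts to:
```
# Every migration recorded as applied in django_migrations must still
# exist on disk; otherwise we are about to run older code than the schema.
su zulip -c "psql -Atc \"SELECT app || ' ' || name FROM django_migrations\" zulip" |
while read -r app name; do
    if ! compgen -G "*/migrations/${name}.py" > /dev/null; then
        echo "Applied migration $app.$name not found on disk; aborting" >&2
        exit 1
    fi
done
```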
Fixes: #19284.
Services like go-camo and smokescreen are not stopped in stop-server,
since they are upgraded and restarted by puppet application. As such,
they also do not appear in start-server, despite the server relying on
them to be running to function properly.
Ensure those services are started, by starting them in start-server,
if they are configured in supervisor on the host.
For many uses, shelling out to `supervisorctl` is going to produce
better error messages. However, for instances where we wish to parse
the output of `supervisorctl`, using the API directly is less brittle.
The RabbitMQ docs state ([1]):
    RabbitMQ nodes and CLI tools (e.g. rabbitmqctl) use a cookie to
    determine whether they are allowed to communicate with each
    other. [...] The cookie is just a string of alphanumeric
    characters up to 255 characters in size. It is usually stored in a
    local file.
...and goes on to state (emphasis ours):
    If the file does not exist, Erlang VM will try to create one with
    a randomly generated value when the RabbitMQ server starts
    up. Using such generated cookie files are **appropriate in
    development environments only.**
The auto-generated cookie does not use cryptographic sources of
randomness, and generates 20 characters of `[A-Z]`. Because of a
semi-predictable seed, the entropy of this password is thus less than
the idealized 26^20 = 94 bits of entropy; in actuality, it is 36 bits
of entropy, or potentially as low as 20 if the performance of the
server is known.
These sizes are well within the scope of remote brute-force attacks.
On provision, install, and upgrade, replace the default insecure
20-character Erlang cookie with a cryptographically secure
255-character string (the max length allowed).
[1] https://www.rabbitmq.com/clustering.html#erlang-cookie
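A sketch of generating and installing such a cookie (the path is the Debian/Ubuntu default; the real scripts manage this through their own helpers):
```
cookie="$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 255)"
printf '%s' "$cookie" > /var/lib/rabbitmq/.erlang.cookie
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 600 /var/lib/rabbitmq/.erlang.cookie
# RabbitMQ must be restarted for the new cookie to take effect.
```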
5c450afd2d, in ancient history, switched from `check_call` to
`check_output`, throwing away its result.
Use `check_call`, so that we show the steps to (re)starting the server.
This is required in order to lock down the RabbitMQ port to only
listen on localhost. If the nodename is `rabbit@hostname`, in most
circumstances the hostname will resolve to an external IP, which the
rabbitmq port will not be bound to.
Installs which used `rabbit@hostname`, due to RabbitMQ having been
installed before Zulip, would not have functioned if the host or
RabbitMQ service was restarted, as the localhost restrictions in the
RabbitMQ configuration would have made rabbitmqctl (and Zulip cron
jobs that call it) unable to find the rabbitmq server.
The previous commit ensures that configure-rabbitmq is re-run after
the nodename has changed. However, rabbitmq needs to be stopped
before `rabbitmq-env.conf` is changed; we use an `onlyif` on an `exec`
to print the warning about the node change, and let the subsequent
config change and notify of the service and configure-rabbitmq to
complete the re-configuration.
`/etc/rabbitmq/rabbitmq-env.conf` sets the nodename; anytime the
nodename changes, the backing database changes, and this requires
re-creating the rabbitmq users and permissions.
Trigger this in puppet by running configure-rabbitmq after the file
changes.
This reverts commit 889547ff5e. It is
unused in the Docker container, as the configuration of the `zulip`
user in the rabbitmq node is done via environment variables. The
Zulip host in that context does not have `rabbitmqctl` installed, and
would have needed to know the Erlang cookie to be able to run these
commands.
This addresses the problems mentioned in the previous commit, but for
existing installations which have `authenticator = standalone` in
their configurations.
This reconfigures all hostnames in certbot to use the webroot
authenticator, and attempts to force-renew their certificates.
Force-renewal is necessary because certbot contains no way to merely
update the configuration. Let's Encrypt allows for multiple extra
renewals per week, so this is a reasonable cost.
Because the certbot configuration is `configobj`, and not
`configparser`, we have no way to easily parse to determine if webroot
is in use; additionally, `certbot certificates` does not provide this
information. We use `grep`, on the assumption that this will catch
nearly all cases.
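Roughly (the webroot path and flags are illustrative):
```
for conf in $(grep -l "authenticator = standalone" /etc/letsencrypt/renewal/*.conf); do
    name="$(basename "$conf" .conf)"
    # Force-renew via webroot; this also rewrites the stored authenticator.
    certbot certonly --force-renewal --webroot -w /var/lib/zulip/certbot-webroot \
        --cert-name "$name"
done
```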
It is possible that this will find `authenticator = standalone`
certificates which are managed by Certbot, but not Zulip certificates.
These certificates would also fail to renew while Zulip is running, so
switching them to use the Zulip webroot would still be an improvement.
Fixes #20593.
Installing certbot with --method=standalone means that the
configuration file will be written to assume that the standalone
method will be used going forward. Since nginx will be running,
attempts to renew the certificate will fail.
Install a temporary self-signed certificate, just to allow nginx to
start, and then follow up (after applying puppet to start nginx) with
the call to setup-certbot, which will use the webroot authenticator.
The `setup-certbot --method=standalone` option is left intact, for use
in development environments.
Fixes part of #20593; it does not address installs which were
previously improperly configured with `authenticator = standalone`.
As a consequence:
• Bump minimum supported Python version to 3.7.
• Move Vagrant environment to Debian 10, which has Python 3.7.
• Move CI frontend tests to Debian 10.
• Move production build test to Debian 10.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This catches nine functional indexes that the previous query didn’t:
upper_preregistration_email_idx
upper_stream_name_idx
upper_subject_idx
upper_userprofile_email_idx
zerver_message_recipient_upper_subject
zerver_mutedtopic_stream_topic
zerver_stream_realm_id_name_uniq
zerver_userprofile_realm_id_delivery_email_uniq
zerver_userprofile_realm_id_email_uniq
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Restarting the uwsgi processes by way of supervisor opens a window
during which nginx 502's all responses. uwsgi has a configuration
called "chain reloading" which allows for rolling restart of the uwsgi
processes, such that only one process at a time is unavailable; see
uwsgi documentation ([1]).
The tradeoff is that this requires that the uwsgi processes load the
libraries after forking, rather than before ("lazy apps"); in theory
this can lead to larger memory footprints, since they are not shared.
In practice, as Django defers much of the loading, this is not as much
of an issue. In a very basic test of memory consumption (measured by
total memory - free - caches - buffers; 6 uwsgi workers), both
immediately after restarting Django, and after requesting `/` 60 times
with 6 concurrent requests:
                  | Non-lazy   | Lazy app   | Difference
------------------+------------+------------+-------------
Fresh             | 2,827,216  | 2,870,480  | +43,264
After 60 requests | 3,332,284  | 3,409,608  | +77,324
..................|............|............|.............
Difference        | +505,068   | +539,128   | +34,060
That is, "lazy app" loading increased the footprint pre-requests by
43MB, and after 60 requests grew the memory footprint by 539MB, as
opposed to non-lazy loading, which grew it by 505MB. Using uwsgi "lazy
app" loading does increase the memory footprint, but not by a large
percentage.
The other effect is that requests may be served by either old or new
code during the restart window. This may cause transient failures
when new frontend code talks to old backend code.
Enable chain-reloading during graceful, puppetless restarts, but only
if enabled via a zulip.conf configuration flag.
Fixes #2559.
[1]: https://uwsgi-docs.readthedocs.io/en/latest/articles/TheArtOfGracefulReloading.html#chain-reloading-lazy-apps
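For reference, the relevant uwsgi knobs are roughly as follows (how they are wired to the `zulip.conf` flag is not shown here):
```
# uwsgi.ini excerpt (illustrative):
#   lazy-apps = true                           ; load the app after fork()
#   touch-chain-reload = /path/to/trigger-file
# A rolling, one-worker-at-a-time reload is then requested with:
touch /path/to/trigger-file
```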
On a system where ‘apt-get update’ has never been run, ‘apt-cache
policy’ may show no repositories at all. Try to correct this with
‘apt-get update’ before giving up.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
If nginx was already installed, and we're using the webroot method of
initializing certbot, nginx needs to be reloaded. Hooks in
`/etc/letsencrypt/renewal-hooks/deploy/` do not run during initial
`certbot certonly`, so an explicit reload is required.
The certbot package installs its own systemd timer (and cron job,
which disables itself if systemd is enabled) which updates
certificates. This process races with the cron job which Zulip
installs -- the only difference being that Zulip respects the
`certbot.auto_renew` setting, and that it passes the deploy hook.
This means that occasionally nginx would not be reloaded, when the
systemd timer caught the expiration first.
Remove the custom cron job and `certbot-maybe-renew` script, and
reconfigure certbot to always reload nginx after deploying, using
certbot directory hooks.
Since `certbot.auto_renew` can't have an effect, remove the setting.
In turn, this removes the need for `--no-zulip-conf` to
`setup-certbot`. `--deploy-hook` is similarly removed, as running
deploy hooks to restart nginx is now the default; pass
`--no-directory-hooks` in standalone mode to not attempt to reload
nginx. The other property of `--deploy-hook`, of skipping symlinking
into place, is given its own flag.
We've had a number of unhappy reports of upgrades failing due to
webpack requiring too much memory. While the previous commit will
likely fix this issue for everyone, it's worth improving the error
message for failures here.
We avoid doing the stop+retry ourselves, because that could cause an
outage in a production system if webpack fails for another reason.
Fixes#20105.
Since the upgrade to Webpack 5, we've been seeing occasional reports
that servers with roughly 4GiB of RAM were getting OOM kills while
running webpack.
Since we can't readily optimize the memory requirements for webpack
itself, we should raise the RAM requirements for doing the
lower-downtime upgrade strategy.
Fixes#20231.
Stopping both `zulip-tornado` and `zulip-tornado:*` causes errors on
deploys with tornado sharding, as the plain `zulip-tornado` service
does not exist.
Pass `zulip-tornado:*`, which matches both plain `zulip-tornado`, as
well as the sharded `zulip-tornado:zulip-tornado-port-9800` cases.
The `analyze_new_cluster.sh` script output by `pg_upgrade` just runs
`vacuumdb --all --analyze-in-stages`, which runs three passes over the
database, getting better stats each time. Each of these passes is
independent; the third pass does not require the first two.
`--analyze-in-stages` is only provided to get "something" into the
database, on the theory that it could then be started and used. Since
we wait for all three passes to complete before starting the database,
the first two passes add no value.
Additionally, PostgreSQL 14 and up stop writing the
`analyze_new_cluster.sh` script as part of `pg_upgrade`, suggesting
the equivalent `vacuumdb --all --analyze-in-stages` call instead.
Switch to explicitly call `vacuumdb --all --analyze-only`, since we do
not gain any benefit from `--analyze-in-stages`. We also enable
parallelism, with `--jobs 10`, in order to analyze up to 10 tables in
parallel. This may increase load, but will accelerate the upgrade
process.
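The resulting single pass is essentially (illustrative invocation):
```
# One analyze pass over all databases, up to 10 tables in parallel.
su postgres -c 'vacuumdb --all --analyze-only --jobs=10'
```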
scripts.lib.node_cache expects Yarn to be in /srv/zulip-yarn, so if
it’s installed somewhere else, even if it’s the right version, we need
to reinstall it.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
It recently started failing on Debian 10 (buster). We immediately
follow this by replacing these packages with our own versions from
pip.txt, anyway.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Support for bullseye was added in #17951, but it was not documented at
the time, since bullseye was frozen and did not yet have proper
configuration files.
Now that bullseye has been released as a stable version, its support
can be documented.