Filling caches needs to happen close to when the server is restarted,
as the gap opens us up to race conditions with user modifications. If
there are migrations, however, it must happen within the critical
period after the migrations are applied.
Move the call to fill the caches to within the `shutdown_server`
function, so that we push it as close to the server shutdown as
possible.
This can happen if `machine.pgroonga` is set during initial
installation. We cannot run `CREATE EXTENSION PGROONGA` because the
database that we need to run that statement in does not exist yet;
make the command a silent no-op that does not create the
`pgroonga_setup.sql.applied` flag file, so that a later
`zulip-puppet-apply`, once the database exists, can pick up and
install the extension.
Tweaked the provision script to run successfully on Fedora 38, and
included a script to build the Groonga libraries from source, because
the packages in the Fedora repos are outdated.
There is a major version jump from the last supported version (F34),
which is EOL, so references to and support for older versions were
removed.
Fixes: #20635
nginx sets the value of the `$http_host` variable to the empty string
when using http/3, as there is technically no `Host:` header sent:
https://github.com/nginx-quic/nginx-quic/issues/3
Users with a browser that supports http/3 will send their first request
to nginx with http/2, and get the expected HTTP 200 -- but any
subsequent requests will fail with an HTTP 400, since the browser will
have upgraded to http/3, which has an empty `Host` header, which Zulip
rejects.
Switch to the `$host` variable, which works for all HTTP versions.
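For illustration, the shape of the fix in a typical proxy stanza (the
exact directive and its location are assumptions; per the nginx docs,
`$host` is taken from the request line, else the `Host` header, else
the matching server name):
```
# Before: empty under http/3, yielding the HTTP 400s described above
proxy_set_header Host $http_host;

# After: populated for all HTTP versions
proxy_set_header Host $host;
```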
Co-authored-by: Alex Vandiver <alexmv@zulip.com>
Restore the default django.utils.log.AdminEmailHandler when
ERROR_REPORTING is enabled. Those with more sophisticated needs can
turn it off and use Sentry or a Sentry-compatible system.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The claim in the comment from c8ec3dfcf6, that we can and should use
the current deploy's venv, misses one key case -- when upgrading the
operating system, the current deploy's venv is unworkable, since it
was configured for a previous version of Python. As such, any attempt
to load Django to verify the version of PostgreSQL it is talking to
must happen after the venv is configured.
Move the database version check into
`scripts/lib/check-database-compatibility`, which also moves it after
the new venv is configured.
Because we no longer reliably know, at `apt-get upgrade` time, what
version of PostgreSQL is installed, we hold all versions of the
pgroonga packages.
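A sketch of the hold (the package names illustrate the per-server-version
naming):
```
# Prevent a blanket apt-get upgrade from advancing any pgroonga build
# automatically, regardless of which PostgreSQL server is installed.
apt-mark hold postgresql-13-pgroonga postgresql-14-pgroonga
```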
This ensures that the next `upgrade-zulip-from-git` has access to the
commit history of the initial install, if it was from a forked
repository. `/home/zulip/deployments/current` and `/srv/zulip.git`
are not quite organized into the steady-state that they will have
after one `upgrade-zulip-from-git`:
- `/home/zulip/deployments/current` is its own clone, not a worktree
- `/srv/zulip.git` has an origin of `/home/zulip/deployments/current`
- `remote.origin.mirror` is set on `/srv/zulip.git`
- `remote.origin.fetch` is `+refs/*:refs/*`
All but the first are automatically cleaned up by
`upgrade-zulip-from-git` when it is next run, using the code added in
30457ecd02. The additional complexity of converting an existing
independent clone into a worktree does not seem worth it, just to
solve the first point.
Updating the pgroonga package is not sufficient to upgrade the
extension in PostgreSQL -- an `ALTER EXTENSION pgroonga UPDATE` must
explicitly be run[^1]. Failure to do so can lead to unexpected behavior,
including crashes of PostgreSQL.
Expand on the existing `pgroonga_setup.sql.applied` file, to track
which version of the PostgreSQL extension has been configured. If the
file exists but is empty, we run `ALTER EXTENSION pgroonga UPDATE`
regardless -- if it is a no-op, it still succeeds with a `NOTICE`:
```
zulip=# ALTER EXTENSION pgroonga UPDATE;
NOTICE: version "3.0.8" of extension "pgroonga" is already installed
ALTER EXTENSION
```
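The tracking logic is roughly the following (a sketch; the flag-file
path and the exact queries are assumptions):
```
FLAG=/var/lib/zulip/pgroonga_setup.sql.applied
WANT=$(su postgres -c "psql -Atc \"SELECT default_version FROM pg_available_extensions WHERE name = 'pgroonga'\" zulip")
HAVE=$(cat "$FLAG" 2>/dev/null)
if [ "$HAVE" != "$WANT" ]; then
    # Harmless when already current; it reports a NOTICE and succeeds.
    su postgres -c "psql -c 'ALTER EXTENSION pgroonga UPDATE' zulip"
    echo "$WANT" > "$FLAG"
fi
```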
The simple `ALTER EXTENSION` is sufficient for the
backwards-compatible case[^1] -- which, for our usage, is every
upgrade since 0.9 -> 1.0. Since version 1.0 was released in 2015,
before pgroonga support was added to Zulip in 2016, we can assume for
the moment that all pgroonga upgrades are backwards-compatible, and
not bother regenerating indexes.
Fixes: #25989.
[^1]: https://pgroonga.github.io/upgrade/
This was only necessary for PGroonga 1.x, and the `pgroonga` schema
will most likely be removed at some point in the future, which will
make this statement error out.
Drop the unnecessary statement.
If the `postgresql.version` in `/etc/zulip/zulip.conf` is out of date
or wrong, upgrading to the actual current version would drop your
production database without prompting. While we do document taking a
Zulip backup (which includes a database backup) before running
`upgrade-postgresql`[^1], not everyone does so, with possibly
catastrophic consequences.
Do a true end-to-end check of the version in `/etc/zulip/zulip.conf`
by asking Django to query the database for its version, checking that
against the configured value, and aborting if there is any
disagreement.
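Schematically, the check is as follows (a minimal sketch, assuming
Django's settings are already loaded; the real logic lives in
`scripts/lib/check-database-compatibility` and differs in detail):
```
import configparser
import sys

from django.db import connection

config = configparser.RawConfigParser()
config.read("/etc/zulip/zulip.conf")
configured = int(config.get("postgresql", "version"))

# Ask the server itself; server_version_num is e.g. 140006 for 14.6.
with connection.cursor() as cursor:
    cursor.execute("SHOW server_version_num")
    actual = int(cursor.fetchone()[0]) // 10000

if actual != configured:
    print(f"zulip.conf claims PostgreSQL {configured}; the server is running {actual}.")
    sys.exit(1)
```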
[^1]: https://zulip.readthedocs.io/en/latest/production/upgrade.html#upgrading-postgresql
If `zulip-puppet-apply` is run during an upgrade, it will immediately
try to re-`stop-server` before running migrations; if the last step in
the puppet application was to restart `supervisor`, it may not be
listening on its UNIX socket yet. In such cases, `socket.connect()`
throws a `FileNotFoundError`:
```
Traceback (most recent call last):
  File "./scripts/stop-server", line 53, in <module>
    services = list_supervisor_processes(services, only_running=True)
  File "./scripts/lib/supervisor.py", line 34, in list_supervisor_processes
    processes = rpc().supervisor.getAllProcessInfo()
  File "/usr/lib/python3.9/xmlrpc/client.py", line 1116, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib/python3.9/xmlrpc/client.py", line 1456, in __request
    response = self.__transport.request(
  File "/usr/lib/python3.9/xmlrpc/client.py", line 1160, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib/python3.9/xmlrpc/client.py", line 1172, in single_request
    http_conn = self.send_request(host, handler, request_body, verbose)
  File "/usr/lib/python3.9/xmlrpc/client.py", line 1285, in send_request
    self.send_content(connection, request_body)
  File "/usr/lib/python3.9/xmlrpc/client.py", line 1315, in send_content
    connection.endheaders(request_body)
  File "/usr/lib/python3.9/http/client.py", line 1250, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.9/http/client.py", line 1010, in _send_output
    self.send(msg)
  File "/usr/lib/python3.9/http/client.py", line 950, in send
    self.connect()
  File "./scripts/lib/supervisor.py", line 10, in connect
    self.sock.connect(self.host)
FileNotFoundError: [Errno 2] No such file or directory
```
Catch the `FileNotFoundError` and retry twice more, with backoff. If
it fails repeatedly, point to `service supervisor status` for further
debugging, as `FileNotFoundError` is rather misleading -- the file
exists, it simply is not accepting connections.
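The retry loop is, schematically (`rpc()` is the helper from
`scripts/lib/supervisor.py` seen in the traceback; the delays are
illustrative):
```
import time

for attempt in range(3):
    try:
        processes = rpc().supervisor.getAllProcessInfo()
        break
    except FileNotFoundError:
        if attempt == 2:
            print("supervisor unreachable; check `service supervisor status`")
            raise
        # The UNIX socket is not accepting connections yet; back off and retry.
        time.sleep(2**attempt)
```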
Instead of copying over a mostly-unchanged `postgresql.conf`, we
transition to deploying a `conf.d/zulip.conf` which contains the
only material changes we made to the file, which were previously
appended to the end.
While shipping a separate `postgresql.conf` file for each supported
version would be useful if there were a large variety in supported
options between versions, there is no such variation at present, and
the burden of overriding the entire default configuration is that it
must be kept up to date with the package's version.
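For reference, the mechanism this leans on: Debian's stock
`postgresql.conf` ends with an `include_dir` directive, so only the
material overrides need shipping (the override path shown is
illustrative):
```
# In the package's postgresql.conf, unmodified:
include_dir = 'conf.d'

# Zulip's overrides then live in, e.g.:
#   /etc/postgresql/15/main/conf.d/zulip.conf
```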
pg_upgradecluster will start the cluster if the old cluster was
started before it ran, or if there are post-upgrade scripts to run.
Because neither of those is fully under our control, only attempt to
start the new cluster if it isn't already running.
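A sketch of the guard, using postgresql-common's tooling (the version
is hard-coded for illustration):
```
status=$(pg_lsclusters --no-header | awk '$1 == "15" && $2 == "main" {print $4}')
if [ "$status" != "online" ]; then
    pg_ctlcluster 15 main start
fi
```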
Installs which are upgrading to current `main`, and are upgrading for
the very first time from an install which was originally from git,
have a `/home/zulip/deployments/current` which, unlike all later
upgrades, is not a `git worktree` of `/srv/zulip.git`, but rather a
direct `git clone` of some arbitrary URL. As such, it does not have
an `upstream` remote, nor a cached `zulip-git-version` file.
This makes later attempts to determine the pre-upgrade revision of
git (for pre-deploy hooks) fail, as without a `zulip-git-version`
file, `ZULIP_VERSION` is insufficiently-specific (e.g. `6.1+git`), and
there is no guarantee the necessary tags exist either.
While we can make fresh git installs set up an `upstream` and run
`./tools/cache-zulip-git-version` going forward (see subsequent
commit), that does not address the issue for deploys which already
exist. For those, we must configure and fetch a `remote` in the old
checkout, followed by re-generating a cached `zulip-git-version`.
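Roughly the remediation performed in the old checkout (the remote name
and URL here are assumptions):
```
cd /home/zulip/deployments/current
git remote add upstream https://github.com/zulip/zulip.git
git fetch upstream --tags --prune
./tools/cache-zulip-git-version
```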
Fixes: #25076.
The existing `except subprocess.CalledProcessError` only catches if
there are syntax errors which prevent the `lastrun` file from being
written; it does not handle if there are properly-defined resources
which fail to evaluate (e.g. due to a missing dependency or file).
Check the `failed` resource count, and exit 2 if there are any such
resources. This will cause `zulip-puppet-apply --force --noop` (which
is used as a pre-flight check during upgrades) to properly detect and
signal on more types of invalid puppet configurations. In turn, this
will cause `upgrade-zulip` to not attempt to power through upgrades it
knows are destined to fail.
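A sketch of the added check; Puppet records per-resource counts in its
`last_run_summary.yaml` (the path varies by Puppet version):
```
import sys

import yaml

with open("/var/lib/puppet/state/last_run_summary.yaml") as f:
    summary = yaml.safe_load(f)

# Syntax was fine, but a resource failed to evaluate; exit 2 so callers
# can distinguish this from a clean run.
if summary.get("resources", {}).get("failed", 0) > 0:
    sys.exit(2)
```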
Since logrotate runs in a daily cron, this practically means "daily,
but only if it's larger than 500M." For large installs with large
traffic, this effectively rotates daily and keeps 10 days of logs; for
small installs, the kept logfiles cover an unpredictable span of time.
Switch to daily logfiles, defaulting to 14 days to match nginx; this
can be overridden using a zulip.conf setting. This makes it easier to
ensure that access logs are only kept for a bounded period of time.
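An illustrative stanza for the new policy (the path and ancillary
directives are assumptions):
```
/var/log/zulip/access.log {
    daily
    rotate 14
    compress
    missingok
}
```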
Previously, we had an architecture where CSS inlining for emails was
done at provision time in inline_email_css.py. This was necessary
because the library we were using for this, Premailer, was extremely
slow, and doing the inlining for every outgoing email would have been
prohibitively expensive.
Now that we've migrated to a more modern library that inlines the
small amount of CSS we have into emails nearly instantly, we are able
to remove the complex architecture built to work around Premailer
being slow and just do the CSS inlining as the final step in sending
each individual email.
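A minimal sketch of send-time inlining; the commit does not name the
new library, so `css_inline` here is an assumption:
```
import css_inline  # assumed stand-in for the "more modern library"

def finalize_email_html(rendered: str) -> str:
    # Inline <style> rules into style= attributes as the final step
    # before handing the message to the email backend.
    return css_inline.inline(rendered)
```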
This has several significant benefits:
* Removes a fiddly provisioning step that made the edit/refresh cycle
for modifying email templates confusing; there's no longer a CSS
inlining step that, if you forget to do it, results in your testing a
stale variant of the email templates.
* Fixes internationalization problems related to translators working
with pre-CSS-inlined emails, and then Django trying to apply the
translators to the post-CSS-inlined version.
* Makes the send_custom_email pipeline simpler and easier to improve.
Signed-off-by: Daniil Fadeev <fadeevd@zulip.com>
While the previous commit handles the common case of the server
already being fully started, supervisorctl still produces ERROR output
lines when most, but not all, of the server is already running. Take
the case where one worker is stopped:
```
$ supervisorctl stop zulip-workers:zulip_events_deferred_work
zulip-workers:zulip_events_deferred_work: stopped
$ ./scripts/start-server
2023-04-04 15:50:28,505 start-server: Running syntax and database checks
System check identified no issues (15 silenced).
2023-04-04 15:50:31,977 start-server: Starting Tornado process on port 9800
zulip-tornado:zulip-tornado-port-9800: ERROR (already started)
2023-04-04 15:50:32,283 start-server: Starting Tornado process on port 9801
zulip-tornado:zulip-tornado-port-9801: ERROR (already started)
2023-04-04 15:50:32,592 start-server: Starting django server
zulip-django: ERROR (already started)
2023-04-04 15:50:33,340 start-server: Starting workers
zulip-workers:zulip_events_deferred_work: started
zulip_deliver_scheduled_emails: ERROR (already started)
zulip_deliver_scheduled_messages: ERROR (already started)
process-fts-updates: ERROR (already started)
2023-04-04 15:50:34,659 start-server: Done!
Zulip started successfully!
```
More gracefully handle these cases:
```
$ ./scripts/start-server
2023-04-04 15:52:39,815 start-server: Running syntax and database checks
System check identified no issues (15 silenced).
2023-04-04 15:52:43,270 start-server: Starting Tornado process on port 9800
2023-04-04 15:52:43,287 start-server: zulip-tornado:zulip-tornado-port-9800 already started!
2023-04-04 15:52:43,287 start-server: Starting Tornado process on port 9801
2023-04-04 15:52:43,300 start-server: zulip-tornado:zulip-tornado-port-9801 already started!
2023-04-04 15:52:43,300 start-server: Starting django server
2023-04-04 15:52:43,316 start-server: zulip-django already started!
2023-04-04 15:52:43,793 start-server: Starting workers
zulip-workers:zulip_events_deferred_work: started
2023-04-04 15:52:45,111 start-server: Done!
Zulip started successfully!
```
Currently, the output from `start-server` if the server is already
running is potentially confusing, since it says ERROR several times:
```
$ ./scripts/start-server
2023-04-04 15:35:12,737 start-server: Running syntax and database checks
System check identified no issues (15 silenced).
2023-04-04 15:35:16,211 start-server: Starting Tornado process on port 9800
zulip-tornado:zulip-tornado-port-9800: ERROR (already started)
2023-04-04 15:35:16,528 start-server: Starting Tornado process on port 9801
zulip-tornado:zulip-tornado-port-9801: ERROR (already started)
2023-04-04 15:35:16,844 start-server: Starting django server
zulip-django: ERROR (already started)
2023-04-04 15:35:17,605 start-server: Starting workers
zulip_deliver_scheduled_emails: ERROR (already started)
zulip_deliver_scheduled_messages: ERROR (already started)
process-fts-updates: ERROR (already started)
2023-04-04 15:35:18,923 start-server: Done!
```
Catch the simple common case where all of the services are already
running, and output a clearer success message:
```
$ ./scripts/start-server
2023-04-04 15:39:52,367 start-server: Running syntax and database checks
System check identified no issues (15 silenced).
2023-04-04 15:39:55,857 start-server: Zulip is already started; nothing to do!
```
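The fast path is, schematically (`list_supervisor_processes` is the
helper from `scripts/lib/supervisor.py`; the surrounding details are
illustrative):
```
import logging
import sys

running = set(list_supervisor_processes(services, only_running=True))
if running == set(services):
    logging.info("Zulip is already started; nothing to do!")
    sys.exit(0)
```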
The primary goal of the library replacement is improving execution
speed. This commit should not affect the functionality of the system
or change any behavior.
On Docker for Mac with the gRPC FUSE or VirtioFS file sharing
implementations, we nondeterministically get errors like this from
pnpm install:
```
pnpm: ENOENT: no such file or directory, copyfile '/srv/zulip/.pnpm-store/v3/files/7d/6b44bb658625281b48194e5a3d3a07452bea1f256506dd16f7a21941ef3f0d259e1bcd0cc6202642bf1fd129bc187e6a3921d382d568d312bd83f3023979a0' -> '/srv/zulip/node_modules/.pnpm/regexpu-core@5.3.2/node_modules/_tmp_3227_7f867a9c510832f5f82601784e21e7be/LICENSE-MIT.txt'
Subcommand of ./lib/provision.py failed with exit status 1: /usr/local/bin/pnpm install --frozen-lockfile
Actual error output for the subcommand is just above this.
```
Work around this using --package-import-method=copy.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Ever since we started bundling the app with webpack, there’s been less
and less overlap between our ‘static’ directory (files belonging to
the frontend app) and Django’s interpretation of the ‘static’
directory (files served directly to the web).
Split the app out to its own ‘web’ directory outside of ‘static’, and
remove all the custom collectstatic --ignore rules. This makes it
much clearer what’s actually being served to the web, and what’s being
bundled by webpack. It also shrinks the release tarball by 3%.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
These hooks are run immediately around the critical section of the
upgrade. If the upgrade fails for preparatory reasons, the pre-deploy
hook may not be run; if it fails during the upgrade, the post-deploy
hook will not be run. Hooks are called from the CWD of the new
deploy, with arguments of the old version and the new version. If
they exit with non-0 exit code, the deploy aborts.
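An illustrative hook, following the calling convention described
above:
```
#!/usr/bin/env python3
# Pre-deploy hook: argv[1] is the old version, argv[2] the new one; a
# non-zero exit aborts the deploy.
import sys

old_version, new_version = sys.argv[1], sys.argv[2]
print(f"Upgrading {old_version} -> {new_version}")
sys.exit(0)
```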
Corepack manages multiple per-project versions of Yarn and PNPM, which
means we have to maintain less installation code, and could help us
switch away from Yarn 1 without making the system unusable for
development of other Yarn 1 projects.
https://nodejs.org/api/corepack.html
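Corepack ships with Node.js, so enabling it is a single step; the
shims it installs then resolve each project's pinned package-manager
version:
```
corepack enable
```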
The Unicode spaces in the timerender test resulted from an ICU
upgrade: https://github.com/nodejs/node/pull/45068.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Black 23 enforces some slightly more specific rules about empty line
counts and redundant parenthesis removal, but the result is still
compatible with Black 22.
(This does not actually upgrade our Python environment to Black 23
yet.)
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The `pg_upgrade` tool uses `pg_dump` as an internal step, and verifies
that the version of `pg_dump` is exactly the same as the version of
the PostgreSQL server it is upgrading to. A mismatch (even in
packaging versions) leads to it aborting:
```
/usr/lib/postgresql/14/bin/pg_upgrade -b /usr/lib/postgresql/13/bin -B /usr/lib/postgresql/14/bin -p 5432 -P 5435 -d /etc/postgresql/13/main -D /etc/postgresql/14/main --link
Finding the real data directory for the source cluster ok
Finding the real data directory for the target cluster ok
check for "/usr/lib/postgresql/14/bin/pg_dump" failed: incorrect version: found "pg_dump (PostgreSQL) 14.6 (Ubuntu 14.6-0ubuntu0.22.04.1)", expected "pg_dump (PostgreSQL) 14.6 (Ubuntu 14.6-1.pgdg22.04+1)"
Failure, exiting
```
Explicitly upgrade `postgresql-client` at the same time we upgrade
`postgresql` itself, so their versions match.
Fixes: #24192
This was last really used in d7a3570c7e, in 2013, when it was
`/home/humbug/logs`.
Repoint the one obscure piece of tooling that writes there, and remove
the places that created it.
`check_version` in `install-yarn` had the rather careful check that
the yarn it installed into `/usr/bin/yarn` was the yarn which was
first in the user's `$PATH`. This caused problems when the user had a
pre-existing `/usr/local/bin/yarn`; however, those problems are
limited to the `install-yarn` script itself, since nearly all calls to
yarn from Zulip's code already hardcode the `/srv/zulip-yarn`
location, and do not depend on what is in `$PATH`.
Remove the checks in `install-yarn` that depend on the local `$PATH`,
and stop installing our `yarn` into it. We also adjust the two
callsites which did not specify the full path to `yarn`, to use
`/srv/zulip-yarn`.
Fixes: #23993
Co-authored-by: Alex Vandiver <alexmv@zulip.com>
During installation on a new host, `create-database` attempts to
verify that there isn't a bunch of data already in the database which
it is about to drop and recreate. In the most common case, this
statement emits a scary-looking warning, since the database does not
exist yet:
```
+ /home/zulip/deployments/current/scripts/setup/create-database
+ POSTGRES_USER=postgres
++ crudini --get /etc/zulip/zulip.conf postgresql database_name
++ echo zulip
+ DATABASE_NAME=zulip
++ crudini --get /etc/zulip/zulip.conf postgresql database_user
++ echo zulip
+ DATABASE_USER=zulip
++ cd /
++ su postgres -c 'psql -v ON_ERROR_STOP=1 -Atc '\''SELECT COUNT(*) FROM zulip.zerver_message;'\'' zulip'
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: database "zulip" does not exist
```
Because we are attempting to gracefully handle the case where the
database does not exist yet, we also continue (and drop the database)
in other, less expected cases -- for instance, if the database
contains a schema we do not expect.
Explicitly check for the database existence first, and once we verify
that, allow any further failures in the `SELECT COUNT(*)` to abort
`create-database`. This serves the dual purpose of hiding the "FATAL"
error for the common case when the database does not exist, as well as
preventing dropping the database if anything else goes awry.
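A sketch of the reordering (quoting simplified; the count query is the
one from the transcript above):
```
exists=$(su postgres -c "psql -Atc \"SELECT 1 FROM pg_database WHERE datname = '$DATABASE_NAME'\"")
if [ "$exists" = "1" ]; then
    # Only now check for data; any failure here should abort, not continue.
    su postgres -c "psql -v ON_ERROR_STOP=1 -Atc 'SELECT COUNT(*) FROM zulip.zerver_message;' $DATABASE_NAME"
fi
```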
If a previous attempt at an upgrade failed for some reason, the new
PostgreSQL may be installed, and the conversion will succeed, but the
new PostgreSQL daemon will not be running (Puppet does not force it to
start). This causes the upgrade to fail when analyzing statistics,
since the daemon isn't running.
Explicitly start the new PostgreSQL; this does nothing in most cases,
but will provide better resiliency when recovering from previous
partial upgrades.
Some terminals (e.g. ssh from OS X) set an invalid locale, which
causes the `pg_upgradecluster` call late in the upgrade to fail.
Force a known locale, for consistency. This mirrors the settings in
upgrade-zulip-stage-2, set in 11ab545f3b, and its subsequent
cleanups in 64c608a51a, ee0f4ca330, and eda9ce2364.
‘exit’ is pulled in for the interactive interpreter as a side effect
of the site module; this can be disabled with python -S and shouldn’t
be relied on.
Also, use the NoReturn type where appropriate.
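For instance (an illustrative function):
```
import sys
from typing import NoReturn

def abort(message: str) -> NoReturn:
    # Use sys.exit, not the site-module `exit` helper, which `python -S`
    # does not provide.
    sys.exit(message)
```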
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Starting with wal-g 2.0.1, they provide `aarch64` assets[^1].
Effectively revert d7b59c86ce, and use
the pre-built binary for `aarch64` rather than spend a bunch of space
and time having to build it from source.
[^1]: https://github.com/wal-g/wal-g/releases/tag/v2.0.1
If there is a syntax error in `settings.py`, `restart-server` should
provide a reasonable message about this. It did so prior to
af08bcdb3f, because any invocation of `./manage.py` without
`--skip-checks` will verify `settings.py`, among several other checks.
After af08bcdb3f, there are no `./manage.py` calls in most restarts,
which fa77be6e6c took further.
Add an explicit `./manage.py check` in the default case.
upgrade-zulip-stage-2 overrides this by passing `--skip-checks`, for
performance. This also means that `upgrade-zulip-from-git` itself
picks up the same `--skip-checks` flag, since it inherits the same
flag parsing, though that is perhaps of dubious utility.
Although Node.js 18 is not the active LTS release for another 3 weeks,
the Node.js 16 end-of-life date was moved forward to September 2023
(https://nodejs.org/en/blog/announcements/nodejs16-eol/), so it seems
prudent to switch now.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes “E713 Test for membership should be `not in`” found by ruff (now
that I’ve fixed it not to ignore scripts lacking a .py extension).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes “E713 Test for membership should be `not in`” found by
ruff (https://github.com/charliermarsh/ruff).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
One should now be able to configure a regex by appending _regex to the
port number:
```
[tornado_sharding]
9802_regex = ^[l-p].*\.zulipchat\.com$
```
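Under nginx this presumably becomes a `map` entry (an illustrative
sketch; the variable and upstream names are assumptions):
```
map $host $tornado_server {
    default http://tornado9800;
    ~^[l-p].*\.zulipchat\.com$ http://tornado9802;
}
```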
Signed-off-by: Anders Kaseorg <anders@zulip.com>
https://nginx.org/en/docs/http/ngx_http_map_module.html
Since Puppet doesn’t manage the contents of nginx_sharding.conf after
its initial creation, it needs to be renamed so we can give it
different default contents.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This implements `get_mandatory_secret`, which ensures that
`SHARED_SECRET` is set when we hit `zerver.decorator.authenticate_notify`.
To avoid getting `ZulipSettingsError` when setting up the secrets, we
set the `DISABLE_MANDATORY_SECRET_CHECK` environment variable to skip
the check, defaulting the secret's value to an empty string.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This has never actually been used -- and does not make sense with the
check-all-queues-at-once model switched to in 88a123d5e0. The
Tornado processes are the only ones we expect to be non-1, and since
they were added in 3f03dcdf5e the right number has been read from
config, not passed as an argument.
`postgresql-14.4` is a notable upgrade in the PostgreSQL series, as it
fixes potential database corruption from `CREATE INDEX CONCURRENTLY`
statements which are run while rows are modified[1]. However, it also
requires an upgrade from `libllvm9` to `libllvm10`, which means it is
not installed by a mere `apt-get upgrade`.
Add the `--with-new-pkgs` flag to all of the potentially relevant
`apt-get upgrade` calls, so that this package (and others like it) are
upgraded successfully.
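That is (an illustrative invocation):
```
apt-get -y upgrade --with-new-pkgs
```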
[1]: https://www.postgresql.org/docs/release/14.4/
30457ecd02 removed the `--mirror` from
initial clones, but did not add back `--bare`, which `--mirror`
implies. This leads to `/srv/zulip.git` having a working tree in it,
with a `/srv/zulip.git/.git` directory.
This is mostly harmless, and since the bug was recent, not worth
introducing additional complexity into the upgrade process to handle.
Calling `git clone --bare`, however, would clone the refs into
`refs/heads/`, not the `refs/remotes/origin/` we want. Instead, use
`git init --bare`, followed by `git remote add origin`. The remote
will be fetched by the usual `git fetch --all --prune` which is below.
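That is, roughly (the remote URL is illustrative):
```
git init --bare /srv/zulip.git
git -C /srv/zulip.git remote add origin https://github.com/zulip/zulip.git
# Refs now land under refs/remotes/origin/ via the usual fetch:
git -C /srv/zulip.git fetch --all --prune
```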
While the `remote.origin.mirror` boolean being set is a very good
proxy for having been cloned with `--mirror`, it is technically only
used when pushing into the remote[1]. What we care about is whether
fetches from this remote will overwrite `refs/heads/`, or all of
`refs/` -- the latter of which is most likely, from having run `git
clone --bare`.
Detect either of these fetch refspecs, and not the mirror flag. We
let the upgrade process error out if `remote.origin.fetch` is unset,
as that represents an unexpected state. We ignore failures to unset
the `remote.origin.mirror` flag, in case it is not set already.
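A sketch of the detection and cleanup:
```
# Errors out (aborting the upgrade) if the refspec is unset:
fetchspec=$(git -C /srv/zulip.git config --get remote.origin.fetch)
case "$fetchspec" in
    "+refs/*:refs/*" | "+refs/heads/*:refs/heads/*")
        git -C /srv/zulip.git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"
        ;;
esac
# May already be absent; ignore failures:
git -C /srv/zulip.git config --unset remote.origin.mirror || true
```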
[1]: https://git-scm.com/docs/git-config#Documentation/git-config.txt-remoteltnamegtmirror
The local `/srv/zulip.git` directory has been cloned with `--mirror`
since it was first created as a local cache in dc4b89fb08. This
made some sense at the time, since it was purely a cache of the
remote, and not a home to local branches of its own.
That changed in 3f83b843c2, when we began using `git worktree`,
which caused the `deployment-...` branches to begin being stored in
`/srv/zulip.git`. This caused intermixing of local and remote
branches.
When 02582c6956 landed, the addition of `--prune` caused all but the
most recent deployment branch to be deleted upon every fetch --
leaving previous deployments with non-existent branches checked out:
```
zulip@example-prod-host:~/deployments/last$ git status
On branch deployment-2022-04-15-23-07-55
No commits yet
Changes to be committed:
(use "git rm --cached <file>..." to unstage)
new file: .browserslistrc
new file: .codecov.yml
new file: .codespellignore
new file: .editorconfig
[...snip list of every file in repo...]
```
Switch `/srv/zulip.git` to no longer be a `--mirror` cache of the
origin. We reconfigure the remote to drop `remote.origin.mirror`, and
delete all refs under `refs/pulls/` and `refs/heads/`, while
preserving any checked-out branches. `refs/pulls/`, if the remote is
the canonical upstream, contains _tens of thousands_ of refs, so
pruning those refs trims off 20% of the repository size.
Those savings require a `git gc --prune=now`, otherwise the dangling
objects are ejected from the packfiles, which would balloon the
repository up to more than three times its previous size. Repacking
the repository is reasonable, in general, after removing such a large
number of refs -- and the `--prune=now` is safe and will not lose
data, as the `--mirror` was good at ensuring that the repository could
not be used for any local state.
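Roughly (simplified; the real cleanup also preserves any checked-out
deployment branches):
```
git -C /srv/zulip.git for-each-ref --format='%(refname)' 'refs/heads/*' 'refs/pulls/*' |
    xargs -r -n1 git -C /srv/zulip.git update-ref -d
git -C /srv/zulip.git gc --prune=now
```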
The refname in the upgrade process was previously resolved from the
union of local and remote refs, since they were in the same namespace.
We instead now only resolve arguments as tags, then origin branches;
this means that stale local branches will be skipped. Users who want
to deploy from local branches can use `--remote-url=.`.
Because the `scripts/lib/upgrade-zulip-from-git` file is "stage 1" and
run from the old version's code, this will take two invocations of
`upgrade-zulip-from-git` to take effect.
Fixes: #21901.
This adds a `--skip-restart` flag, which leaves `deployments/next` in
a state where it can be restarted into, but holds off on conducting
that restart.
This requires many of the same guarantees as `--skip-tornado`, in
terms of there being no Puppet or database schema changes between the
versions. Enforce those with `--skip-restart`, and also broaden both
flags to prevent other, less common changes which might nonetheless
affect the other deploy.