The `analyze_new_cluster.sh` script output by `pg_upgrade` simply runs
`vacuumdb --all --analyze-in-stages`, which makes three passes over the
database, gathering better statistics each time. Each of these passes
is independent; the third pass does not require the first two.
`--analyze-in-stages` is only provided to get *some* statistics into
the database quickly, on the theory that the cluster could then be
started and used while better statistics are gathered. Since we wait
for all three passes to complete before starting the database, the
first two passes add no value.
Additionally, PostgreSQL 14 and up stop writing the
`analyze_new_cluster.sh` script as part of `pg_upgrade`, suggesting
the equivalent `vacuumdb --all --analyze-in-stages` call instead.
Switch to explicitly calling `vacuumdb --all --analyze-only`, since we
do not gain any benefit from `--analyze-in-stages`. We also enable
parallelism with `--jobs 10`, in order to analyze up to 10 tables in
parallel. This may increase load, but will accelerate the upgrade
process.
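Concretely, the switch amounts to replacing the one invocation with the
other (shown standalone here, outside whatever wrapper runs it):
```
# Before: three staged passes, all finished before the server is started.
vacuumdb --all --analyze-in-stages

# After: a single full-statistics pass, analyzing up to 10 tables at once.
vacuumdb --all --analyze-only --jobs 10
```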
scripts.lib.node_cache expects Yarn to be in /srv/zulip-yarn, so if
it’s installed somewhere else, even if it’s the right version, we need
to reinstall it.
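A minimal sketch of the idea (not the actual node_cache code; the
`bin/yarn` path is an assumption):
```
# Reinstall unless the yarn on PATH is the /srv/zulip-yarn install that
# scripts.lib.node_cache expects, even if its version is correct.
if [ "$(command -v yarn)" != "/srv/zulip-yarn/bin/yarn" ]; then
    echo "Yarn is not the /srv/zulip-yarn install; reinstalling"
fi
```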
Signed-off-by: Anders Kaseorg <anders@zulip.com>
It recently started failing on Debian 10 (buster). We immediately
follow this by replacing these packages with our own versions from
pip.txt, anyway.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Support for bullseye was added in #17951, but it was not documented,
since bullseye was frozen at the time and did not have proper
configuration files. Now that bullseye has been released as a stable
version, its support can be documented.
The usual output from this command looks like
```
Notice: Compiled catalog for localhost in environment production in 2.33 seconds
Notice: /Stage[main]/Zulip::Apt_repository/Exec[setup_apt_repo]/returns: current_value 'notrun', should be ['0'] (noop)
Notice: Class[Zulip::Apt_repository]: Would have triggered 'refresh' from 1 event
Notice: Stage[main]: Would have triggered 'refresh' from 1 event
Notice: Applied catalog in 1.20 seconds
```
which doesn’t seem particularly alarming, and hiding it makes failures
harder to diagnose.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We previously used `zulip-puppet-apply` with a custom config file,
with an updated PostgreSQL version but more limited set of
`puppet_classes`, to pre-create the basic settings for the new cluster
before running `pg_upgradecluster`.
Unfortunately, the supervisor config uses `purge => true` to remove
all supervisor configuration files that are not included in the puppet
configuration; this leads to it removing all other supervisor
processes during the upgrade, only to add them back and start them
during the second `zulip-puppet-apply`.
It also leads to `process-fts-updates` not being started after the
upgrade completes; it is the one supervisor config file which was not
removed and re-added, and thus the one process that never gets started
as a side effect of being re-added. This was not detected in CI
because CI added a `start-server` command which was not in the upgrade
documentation.
Set a custom Facter fact that prevents the `purge` behaviour of the
supervisor configuration. We want to preserve that behaviour in
general, and using `zulip-puppet-apply` continues to be the best way
to pre-configure the basic settings for the new cluster -- but we wish
to avoid the purge when we know we are applying only a subset of the
puppet classes.
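A minimal sketch of how such a fact can be injected for the limited
apply; the fact name below is a placeholder, not necessarily the one
the manifests actually check:
```
# Facter picks up facts from FACTER_-prefixed environment variables, so
# the limited pre-upgrade apply can opt out of the purge behaviour:
env FACTER_leave_supervisor=true \
    /home/zulip/deployments/current/scripts/zulip-puppet-apply
```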
Since supervisor configs are no longer removed and re-added, this
requires an explicit start-server step in the instructions after the
upgrades complete. This brings the documentation into alignment with
what CI is testing.
These changes are all independent of each other; I just didn’t feel
like making dozens of commits for them.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
For our marketing emails, we want a width that's more appropriate for
newsletter context, vs. the narrow emails we use for transactional
content.
I haven't figured out a cleaner way to do this than duplicating most
of email_base_default.source.html. But it's not a big deal to
duplicate, since we've been changing that base template only about
once a year.
The script is added to upgrade steps for 20.04 and Buster because
those are the upgrades that cross glibc 2.28, which is the most
problematic. It will also be called out in the upgrade notes, to catch
installations that have already done that upgrade.
Running `supervisorctl stop` or `supervisorctl restart` on a process
name which is not known is an error:
```
$ supervisorctl stop nonexistent-process
nonexistent-process: ERROR (no such process)
$ echo $?
1
```
ef6d0ec5ca moved
`zulip_deliver_scheduled_*` out of the `workers:` group. Since upgrades
run `stop-server` before applying puppet, the list of processes at
that time comes from the previous version of Zulip, and so may not have
the new `zulip_deliver_scheduled_*` names -- the `stop-server` will
hence fail.
If the upgrade is not applying puppet, it will run `restart-server`.
At that point, the old names will still be in the configuration, so
the current `supervisorctl status` output is the best gauge of what
exists to restart.
In short, only ever stop/start/restart the `zulip_deliver_scheduled_*`
processes if `supervisorctl status` knows about them already.
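A rough illustration of that guard (not the exact stop-server code); it
keys off the status output rather than the exit code, since supervisor
3.x exits 0 even for unknown names:
```
# Act only on the zulip_deliver_scheduled_* names that supervisor
# actually reports, whether or not they are nested in a group.
scheduled=$(supervisorctl status | awk '$1 ~ /zulip_deliver_scheduled_/ { print $1 }')
if [ -n "$scheduled" ]; then
    # Intentionally unquoted so each name becomes its own argument.
    supervisorctl stop $scheduled
fi
```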
Nonexistent processes and groups passed to `supervisorctl status` are
printed to STDOUT as follows:
```
$ supervisorctl status zulip-django nonexistent-process nonexistent-group:*
nonexistent-process: ERROR (no such process)
nonexistent-group: ERROR (no such group)
zulip-django RUNNING pid 16043, uptime 17:31:31
```
On supervisor 4 and above, this exits with an exit code of 4;
previously, it returned exit code 0. Ubuntu 18.04 has version 3.3.1,
and Ubuntu 20.04 has version 4.1.0.
Skip any lines with `ERROR (no such ...)`, and accept exit code 4 from
`supervisorctl status`.
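One way to express that handling in shell, as an illustrative sketch
(the process names and downstream use of the output are placeholders):
```
# Temporarily disable -e so we can inspect the exit status ourselves.
set +e
output=$(supervisorctl status zulip-django nonexistent-process)
status=$?
set -e
# supervisor >= 4 exits with status 4 when some names are unknown;
# supervisor 3.x exits 0, so both are treated as success here.
if [ "$status" -ne 0 ] && [ "$status" -ne 4 ]; then
    echo "supervisorctl status failed with status $status" >&2
    exit 1
fi
# Drop the lines for nonexistent processes/groups before further use.
printf '%s\n' "$output" | grep -v 'ERROR (no such' || true
```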
Mypy can’t follow absolute imports based on directories other than the
root. This was hiding some type errors due to ignore_missing_imports.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
With two space-separated classes in `puppet_classes`, the second one
is silently ignored. With three or more, puppet generates the
following very opaque error message:
```
Error: Could not parse for environment production: This
Name has no effect. A value was produced and then forgotten (one or
more preceding expressions may have the wrong form)
```
Catch when this has happened, and give an error message to the user.
Fixes #18992.
This command is part of a statsd infrastructure that we stopped
supporting years ago. Its only purpose for some time has been to
provide sample code for how the restart script might trigger a
notification to a graphing system, which doesn't justify maintaining
it.
Fixes part of #18898.
The current `upgrade-zulip` and `upgrade-zulip-from-git`
bash scripts exit with a zero status even if the
upgrade commands they run exit with a non-zero status.
Hence add `set -e`, which makes the script exit with
the status of the first failing command.
For pipelines, however, the exit status is that of the last command,
so failures in the earlier parts of the pipeline are masked.
This is the case with our main /lib/upgrade-zulip* invocation
in these scripts, whose status is determined by the trailing `tee`
command instead. Hence add a small condition to get the status of the
actual upgrade command and exit the script if that status is non-zero.
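One way to do this in bash, as a sketch; the `ZULIP_PATH` and
`LOG_FILE` variable names are assumptions, not the scripts' actual
names:
```
set -e
# PIPESTATUS[0] is the upgrade command's own exit status, not tee's.
"$ZULIP_PATH/scripts/lib/upgrade-zulip" "$@" 2>&1 | tee -a "$LOG_FILE"
upgrade_status="${PIPESTATUS[0]}"
if [ "$upgrade_status" -ne 0 ]; then
    echo "Upgrade failed with status $upgrade_status; see $LOG_FILE" >&2
    exit "$upgrade_status"
fi
```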
We also check whether the script is being run as root, matching the
install script logic.
This parameter is somewhat useful, and adding it also fixes a
regression where `purge-old-deployments` would crash since the changes
around c5580607a7, because of inconsistent supported argument lists.
Fixes #16659.
If the server is behind a reverse proxy with `http_only=True`, the
requests made by `email-mirror-postfix` need to use http, as https
doesn't work.
Staging and other hosts that are `zulip::app_frontend_base` but not
`zulip::app_frontend_once` do not have a
/etc/supervisor/conf.d/zulip/zulip-once.conf, and as such do not have
the `zulip_deliver_scheduled_emails` or `zulip_deliver_scheduled_messages`
processes; supervisor will thus fail to reload.
Making the contents of `zulip-workers` contingent on whether the server
is _also_ a `-once` server is complicated, and would involve using
Concat fragments, which severely limit readability.
Instead, expel those two from `zulip-workers`; this is somewhat
reasonable, since they use an entirely different codepath from
zulip_events_*, using the database rather than RabbitMQ for their
queuing.
We currently run `clean_unused_caches.py` as a script to clean the
unused caches. This commit replaces that with a call to the
`clean_unused_caches.main` function, as it is faster.
This also allows us to pass the arguments for the 'clean...' functions
when calling `main` (in `provision`). The argument-parsing code moves
into the `if __name__ == "__main__"` block, since we do not need to
parse arguments when calling the function directly.
We convert the `clean-unused-caches` script to a Python file so we can
run it in provision by importing it instead of launching a separate
process, which saves some time.