Fixes #12868.
We now also include the Python version, in the format
'major.minor.patchlevel', when generating the hash for a
requirements file. This is necessary since packages tend to
break on different versions of Python, so it is important to
track the version on which the venv was set up.
WARN: This commit will force all Zulip venvs to be recreated.
We were already using package names along with their versions
to generate the hash for a requirements file, since we were passing
the `.txt` files to the `hash_reqs` script instead of the intended
`.in` files for which the functions in that file were originally
designed. Changed the `expand_reqs_helper` function to adapt to the
`.txt` files.
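As a rough sketch of the idea (`hash_deps` is an illustrative name, not
the actual `hash_reqs` function):
```
# Minimal sketch, assuming the hash input is the sorted dependency
# lines: mixing the interpreter version into the input forces venv
# recreation whenever Python itself changes.
import hashlib
import sys


def hash_deps(deps):
    python_version = "%d.%d.%d" % sys.version_info[:3]  # major.minor.patchlevel
    sha1 = hashlib.sha1()
    sha1.update((python_version + "\n").encode())
    for dep in sorted(deps):  # e.g. pinned lines from the .txt file
        sha1.update((dep + "\n").encode())
    return sha1.hexdigest()
```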
The contents of the database are unchanged across the PostgreSQL
restart; as such, there is no reason to invalidate the caches.
This step was inherited from the general operating system upgrade
documentation. When Python versions change, such as during OS
upgrades, we must ensure that memcached is cleared. However, the
`do-release-upgrade` process uninstalls memcached and installs the new
version, and likely restarts the system as well; a separate step to
restart memcached during OS upgrades is thus unnecessary.
pg_upgradecluster has two possibilities for `--method`: `dump` and
`upgrade`. The former is the default, and does a `pg_dump` of all of
the databases in the old cluster and feeds them into the new cluster.
This is a sure-fire way of getting the same information in both
databases, but may be extremely slow on large databases, and is
guaranteed to fail on servers whose databases take up >50% of their
disk.
The `--method=upgrade` method, by contrast, uses pg_upgrade to copy
the raw database data files over to the new cluster, and then fiddles
with their internal structure as needed to make them correct
for the new version[1]. This is slightly faster than the
dump/load method, since it skips the serialization step, but still
requires that there be enough space on disk for both old and new
versions at once. `pg_upgrade` is currently supported for all
versions of PostgreSQL from 8.4 to 12.
Using `pg_upgrade` incurs slightly more risk, but since it is
widely used by now, using it in the relatively-controlled Zulip server
environment is reasonable. The expected worst failure is failure to
upgrade, not corruption or data loss.
Additionally, passing `--link` uses hardlinks to link the data files
into both the old and new directories simultaneously. This reduces
both the runtime of the operation and its disk space usage.
The only potential downside to this is that as soon as writes have
occurred on the upgraded cluster, the old cluster can no longer be
started. Since this tooling intends to remove the old cluster
immediately after the upgrade completes successfully, this is not a
significant drawback.
Switch to using `--method=upgrade --link`. This technique spits out
two shell scripts which are expected to be run after completion of the
upgrade; one re-analyzes the statistics, the other does an `rm -rf` of
the data where it is still hardlinked in the old cluster. Extract the
location of these scripts by parsing the `pg_upgradecluster` output;
since the path is not static, we must rely on it being relatively easy
to parse. The risk of the path format changing is lower, and its
failure modes more obvious, than those of inlining the current
contents of these upgrade steps into the overall `upgrade-postgres`.
[1] https://www.postgresql.org/docs/12/pgupgrade.html
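A hedged sketch of the output parsing described above; the script names
(`analyze_new_cluster.sh`, `delete_old_cluster.sh`) are what pg_upgrade
conventionally generates, but the exact output format is an assumption:
```
# Sketch: extract the post-upgrade script paths from pg_upgradecluster's
# output.  Only the directory is dynamic; the filenames are assumed stable.
import re


def find_upgrade_scripts(output):
    analyze = re.search(r"(\S+/analyze_new_cluster\.sh)", output)
    delete = re.search(r"(\S+/delete_old_cluster\.sh)", output)
    if analyze is None or delete is None:
        raise RuntimeError("pg_upgradecluster output format changed?")
    return analyze.group(1), delete.group(1)
```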
Although mktemp is deprecated due to security issues, this is not a
security issue.
The security problems with mktemp happen when you open the resulting
filename (without O_EXCL) in a publicly writable directory, because
then someone else might have predicted the filename and created or
symlinked or hardlinked something there between the mktemp and the
open, causing you to write to a file you didn’t expect.
Here we don’t open the resulting filename, we symlink to it. symlink
will refuse to clobber an existing file, and we handle the error that
arises from this case. This is the normal way to atomically create a
symlink.
We should still replace mktemp because it’s deprecated, but we can’t
replace it with a function that creates the temporary file. Instead
we build a random filename ourselves.
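A minimal sketch of that pattern (illustrative, not the exact code in
this commit):
```
# Build a random name ourselves, symlink (which refuses to clobber an
# existing path), then rename, which atomically replaces the real link.
import os
import secrets


def atomic_symlink(target, link_name):
    while True:
        tmp = "%s.%s.tmp" % (link_name, secrets.token_hex(8))
        try:
            os.symlink(target, tmp)
            break
        except FileExistsError:
            pass  # astronomically unlikely collision; pick a new name
    os.rename(tmp, link_name)  # rename(2) atomically replaces link_name
```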
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Running `pg_upgradecluster` runs the `CREATE TEXT SEARCH DICTIONARY`
and `CREATE TEXT SEARCH CONFIGURATION` from
`zerver/migrations/0001_initial.py` on the new PostgreSQL cluster;
this requires that the stopwords file and dictionary exist _prior_
to `pg_upgradecluster` being run.
This causes a minor dependency conflict -- we do not wish to duplicate
the functionality from `zulip::postgres_appdb_base` which configures
those files, but installing all of `zulip::postgres_appdb_tuned` will
attempt to restart PostgreSQL -- whose cluster for the new version
has not yet been configured.
In order to split out configuration of the prerequisites for the
application database, and the steps required to run it, we need to be
able to apply only part of the puppet configuration. Use the
newly-added `--config` argument to provide a more limited `zulip.conf`
which only applies `zulip::postgres_appdb_base` to the new version of
Postgres, creating the required tsearch data files.
This also preserves the property that a failure at any point prior to
the `pg_upgradecluster` is easily recoverable, by re-running
`zulip-puppet-apply`.
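A hedged sketch of generating such a limited configuration; the section
and key names follow `zulip.conf` conventions mentioned in this series,
and the helper name is hypothetical:
```
# Sketch: write a minimal zulip.conf that applies only
# zulip::postgres_appdb_base, for use with `zulip-puppet-apply --config`.
import configparser


def write_limited_config(path, postgres_version):
    config = configparser.ConfigParser()
    config["machine"] = {"puppet_classes": "zulip::postgres_appdb_base"}
    config["postgresql"] = {"version": postgres_version}
    with open(path, "w") as f:
        config.write(f)
```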
Otherwise, the useradd command will fail during the DigitalOcean
1-Click App installation, because the install script is called
twice during the whole process. Moreover, the Zulip install script
is designed to be idempotent, and this bug compromises that.
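The idempotency guard, sketched in Python for illustration (the install
script itself is shell, and the real useradd invocation has more flags):
```
# Sketch: only create the user if it does not already exist, so a second
# run of the install script is a no-op for this step.
import pwd
import subprocess


def ensure_user(username):
    try:
        pwd.getpwnam(username)  # raises KeyError if the user is missing
    except KeyError:
        subprocess.check_call(["useradd", "--system", username])
```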
The value is a holdover from when it controlled runtime behavior,
which it no longer does.
Stop taking a DEPLOYMENT_TYPE, which is unused; the Python code only
cares about whether the option exists, not its value.
These names better capture the sense of "is this a service we
configured for Zulip", and remove potential confusion around the 0/1
values being backwards from how binary values are usually interpreted.
Using checks of `,$PUPPET_CLASSES,` is repetitive and error-prone; it
does not properly deal with `zulip_ops::` classes, for instance, which
include the `zulip::` classes.
As alluded to in ca9d27175b, this can be fixed by inspecting the
classes that would be applied, using `puppet --write-catalog-summary`.
We work around the chicken-and-egg problem alluded to therein by
writing out as complete a `zulip.conf` as would be necessary, before
running puppet, and then removing the sections we know not to be
needed.
Unfortunately, there are two checks for `$PUPPET_CLASSES` which cannot
be switched to this technique, as they concern errors that we wish to
catch quite early, and thus before we have puppet installed. Since we
expect failures of those checks to concern only warnings, and to be
mistakenly omitted only for internal `zulip_ops::` classes, this seems
a reasonable risk to accept in exchange for catching common errors
early.
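A sketch of the catalog-summary inspection, assuming puppet's behavior
of writing a `classes.txt` state file; the statedir path is
illustrative:
```
# Sketch: do a no-op puppet run that writes the catalog summary, then
# read the fully-resolved class list (inheritance unrolled) from
# classes.txt.
import subprocess


def applied_puppet_classes(manifest, statedir="/var/lib/puppet/state"):
    subprocess.check_call(
        ["puppet", "apply", "--noop", "--write-catalog-summary", manifest]
    )
    with open(statedir + "/classes.txt") as f:
        return {line.strip() for line in f if line.strip()}
```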
When supervisor is first installed, it is started automatically, and
creates the socket, owned by root. Subsequent reconfiguration in
puppet only calls `reread + update`, which is insufficient to apply
the `chown = zulip:zulip` line in `supervisord.conf`, leaving the
socket owned by `root` and the last part of the installation unable to
restart `supervisor` services as the `zulip` user. The `chown` line
in `scripts/lib/install` exists to paper over this.
Add a separate exec target for changes to `supervisord.conf` itself,
which restarts the full service. This leaves the default `restart`
action on the service for the lightweight `reread + update` action,
which is more common.
We use `systemctl` only on redhat-esque builds, because CI runs
Ubuntu, where init is not systemd. `systemctl reload`
is sufficient to re-apply the socket ownership, but a full `restart`
and not `reload` is necessary under `/etc/init.d/supervisor`.
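A sketch of the kind of check that exposes the original problem (the
socket path is Debian's default and the helper is hypothetical; the
actual fix is the puppet exec target above):
```
# Sketch: report who owns the supervisor control socket.  After a fresh
# install without the fix, this returns ("root", "root") rather than the
# ("zulip", "zulip") that supervisord.conf's `chown` line intends.
import grp
import os
import pwd


def socket_ownership(path="/var/run/supervisor.sock"):
    st = os.stat(path)
    return pwd.getpwuid(st.st_uid).pw_name, grp.getgrgid(st.st_gid).gr_name
```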
This is based on the existing steps in the documentation, with
additional changes now that the PostgreSQL version is stored in
`/etc/zulip/zulip.conf`.
49a7a66004 and immediately previous commits began installing
PostgreSQL 12 from the PostgreSQL project's apt repository. On
machines which already
have the distribution-provided version of PostgreSQL installed,
however, this leads to failure to apply puppet when restarting
PostgreSQL 12, as both attempt to claim the same port.
During installation, if we will be installing PostgreSQL, look for
versions other than the one we will install, and abort if they are
found. This is safer than attempting to automatically uninstall or
reconfigure existing databases.
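A hedged sketch of such a check, assuming Debian's `postgresql-NN`
server package naming:
```
# Sketch: abort if any postgresql-NN package other than the target
# version is installed, rather than trying to uninstall or reconfigure.
import re
import subprocess


def check_conflicting_postgres(target_version):
    try:
        output = subprocess.check_output(
            ["dpkg", "--get-selections", "postgresql-*"], text=True
        )
    except subprocess.CalledProcessError:
        return  # no postgresql packages installed at all
    for line in output.splitlines():
        match = re.match(r"postgresql-(\d+)\s+install", line)
        if match and match.group(1) != target_version:
            raise SystemExit(
                "Found existing PostgreSQL %s; aborting." % match.group(1)
            )
```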
This allows for installing from-scratch with a different pinned
version of PostgreSQL, and provides a single place to change when the
default should increase.
Using the existence of `/etc/init.d/postgresql` to detect whether
PostgreSQL is on the server is incorrect, because this check runs
_before_ puppet and any packages are installed. Thus, it cannot tell the difference
between a new Ubuntu one-host first-time-install without PostgreSQL
yet, and one which is merely a front-end and will never have
PostgreSQL. This leads to failures in first-time installs:
```
Error: Evaluation Error: Error while evaluating a Function Call,
Could not find template 'zulip/postgresql//postgresql.conf.template.erb'
```
The only way to detect if PostgreSQL will be present in the _end_
state of the install is to examine the puppet classes that are
applied.
To do this, we must inspect `PUPPET_CLASSES`. Unfortunately, this can
be fragile to subclassing (e.g. `zulip_ops::postgres_appdb`). We
might desire to use `puppet apply --write-catalog-summary` to deduce
the _applied_ classes, which would unroll the inheritance; however,
this causes a chicken-and-egg problem, because `zulip.conf` must
already be written out (including a value for `postgresql.version`, if
necessary!) before such a puppet run could successfully complete.
Switch to predicating the `postgresql.version` key on the puppet
classes that are known to install postgres.
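Sketched below; the class names are the ones mentioned in this series,
and any real list would need to include every class that installs
PostgreSQL:
```
# Sketch: decide whether to write a postgresql.version key into
# zulip.conf based on the puppet classes requested.
POSTGRES_CLASSES = {
    "zulip::postgres_appdb_tuned",
    "zulip_ops::postgres_appdb",  # subclass noted above
}


def will_have_postgres(puppet_classes):
    requested = {cls.strip() for cls in puppet_classes.split(",")}
    return bool(requested & POSTGRES_CLASSES)
```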
Support for Xenial and Stretch was removed (5154ddafca, 0f4b1076ad,
8944e0ad53, 79acd5ae40, 1219a2e854), but not all codepaths were
updated to remove their conditionals on them.
Remove all code predicated on Xenial or Stretch. debathena support
was migrated to Bionic, since that appears to be the current state of
existing debathena servers.
0f4b1076ad removed Ubuntu 16.04 "xenial" and Debian 9 "stretch" from
the printed list of supported operating systems, but left them in the
verification check that controls if that message is printed,
effectively continuing to support them.
Conversely, 439f0d3004 added Ubuntu 20.04 "focal" to the check, but
not to the printed list.
Synchronize the check and the printed list to the correct supported
distributions: Ubuntu 18.04 "bionic", Ubuntu 20.04 "focal", and
Debian 10 "buster".
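One way to keep the check and the message from drifting again is to
drive both from a single table, as in this sketch (OS detection details
elided):
```
# Sketch: one list feeds both the support check and the error message.
SUPPORTED_DISTROS = {
    ("ubuntu", "18.04"): 'Ubuntu 18.04 "bionic"',
    ("ubuntu", "20.04"): 'Ubuntu 20.04 "focal"',
    ("debian", "10"): 'Debian 10 "buster"',
}


def check_supported(os_id, os_version):
    if (os_id, os_version) not in SUPPORTED_DISTROS:
        raise SystemExit(
            "Unsupported OS; supported releases are: "
            + ", ".join(sorted(SUPPORTED_DISTROS.values()))
        )
```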
The previous commit removed the only behavior difference between the
two flags; both of them skip user/database creation, and the tables
therein.
Of the two options, `--no-init-db` is more explicit about what it
does, as opposed to describing just one facet of when it might be
used; remove `--remote-postgres`.
Since `--postgres-missing-dictionaries` edits `/etc/zulip/zulip.conf`,
it interferes with the intent of `--no-overwrite-settings`.
Make the two settings conflict, to prevent this unclear state.
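Sketched in Python for illustration (the install script's option
handling is shell):
```
# Sketch: refuse the unclear combination up front.
def validate_flags(postgres_missing_dictionaries, no_overwrite_settings):
    if postgres_missing_dictionaries and no_overwrite_settings:
        raise SystemExit(
            "--postgres-missing-dictionaries edits /etc/zulip/zulip.conf, "
            "which conflicts with --no-overwrite-settings."
        )
```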
The `--no-init-db` option previously controlled only whether
`initialize-database` was run, which sets up the tables inside the
database. If PostgreSQL was installed locally, it still attempted to
create the user and empty database.
This fails on hosts which are remote PostgreSQL hosts, and not
application hosts, as:
- They may already have a local database, and while
  `initialize-database` will detect and offer to abort if one is
  found, `--no-init-db` seems like it should be the option to not
  overwrite it
- `flush-memcached` requires that a local venv be installed, which it
  often is not on non-frontend machines.
Skip the database configuration when run with `--no-init-db`.
Since we now support Postgres versions from 10 to 12, we might as well
have new installations start on Postgres 12 to avoid unnecessary
migration/upgrade work.