zulip/scripts/setup/upgrade-postgresql

#!/usr/bin/env bash
set -eo pipefail
if [ "$EUID" -ne 0 ]; then
    echo "Error: This script must be run as root" >&2
    exit 1
fi
UPGRADE_TO=${1:-13}
UPGRADE_FROM=$(crudini --get /etc/zulip/zulip.conf postgresql version)
ZULIP_PATH="$(dirname "$0")/../.."
if [ "$UPGRADE_TO" = "$UPGRADE_FROM" ]; then
    echo "Already running PostgreSQL $UPGRADE_TO!"
    exit 1
fi
set -x
"$ZULIP_PATH"/scripts/lib/setup-apt-repo
apt-get install -y "postgresql-$UPGRADE_TO"
if pg_lsclusters -h | grep -qE "^$UPGRADE_TO\s+main\b"; then
    pg_dropcluster "$UPGRADE_TO" main --stop
fi
(
    # Two-stage application of Puppet; we apply the bare-bones
    # PostgreSQL configuration first, so that FTS will be configured
    # prior to the pg_upgradecluster.
    TEMP_CONF_DIR=$(mktemp -d)
    cp /etc/zulip/zulip.conf "$TEMP_CONF_DIR"
    ZULIP_CONF="${TEMP_CONF_DIR}/zulip.conf"
    crudini --set "$ZULIP_CONF" postgresql version "$UPGRADE_TO"
    crudini --set "$ZULIP_CONF" machine puppet_classes zulip::profile::base,zulip::postgresql_base
    touch "/usr/share/postgresql/$UPGRADE_TO/pgroonga_setup.sql.applied"
    # Setting this fact prevents the supervisor configuration's
    # purge behaviour from removing the configs of supervisor
    # processes that this limited set of puppet_classes does not
    # manage.
    FACTER_LEAVE_SUPERVISOR=true "$ZULIP_PATH"/scripts/zulip-puppet-apply -f --config "$ZULIP_CONF"
    rm -rf "$TEMP_CONF_DIR"
)
# --method=upgrade --link upgrades the cluster in-place via pg_upgrade
# with hardlinked data files, rather than a full dump/restore; this is
# faster and does not require enough free disk for a second copy of
# the data.  It emits two shell scripts to be run after the upgrade
# completes; capture the output so we know where those post-upgrade
# scripts are.
UPGRADE_LOG=$(mktemp "/var/log/zulip/upgrade-postgresql-$UPGRADE_FROM-$UPGRADE_TO.XXXXXXXXX.log")
pg_upgradecluster -v "$UPGRADE_TO" "$UPGRADE_FROM" main --method=upgrade --link | tee "$UPGRADE_LOG"
SCRIPTS_PATH=$(grep -o "/var/log/postgresql/pg_upgradecluster-$UPGRADE_FROM-$UPGRADE_TO-main.*" "$UPGRADE_LOG" || true)
# If the upgrade completed successfully, lock in the new version in
# our configuration immediately.
crudini --set /etc/zulip/zulip.conf postgresql version "$UPGRADE_TO"
# Update the statistics
[ -n "$SCRIPTS_PATH" ] && su postgres -c "$SCRIPTS_PATH/analyze_new_cluster.sh"
# Start the database up cleanly
"$ZULIP_PATH"/scripts/zulip-puppet-apply -f
# Drop the old data, binaries, and scripts
pg_dropcluster "$UPGRADE_FROM" main
apt-get remove -y "postgresql-$UPGRADE_FROM"
if [ -n "$SCRIPTS_PATH" ]; then
    su postgres -c "$SCRIPTS_PATH/delete_old_cluster.sh"
    rm -rf "$SCRIPTS_PATH"
else
    set +x
    echo
    echo
    echo ">>>>> pg_upgradecluster succeeded, but post-upgrade scripts path could not"
    echo "      be parsed out!  Please read the pg_upgradecluster output to understand"
    echo "      the current status of your cluster:"
    echo "          $UPGRADE_LOG"
    echo "      and report this bug with the PostgreSQL $UPGRADE_FROM -> $UPGRADE_TO upgrade to:"
    echo "          https://github.com/zulip/zulip/issues"
    echo
    echo
fi