Fingerprinting the config is somewhat brittle -- it requires custom
bootstrapping for old (fingerprint-less) configs, and may have false
positives.
Since generating the config is lightweight, do so into the .tmp files,
and compare the output to the originals to determine if there are
changes to apply.
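As a sketch of the approach (the helper and file names here are
illustrative, not the actual Zulip code):
```
import filecmp
import os
import shutil

def regenerate_if_changed(generate, target: str) -> bool:
    """Regenerate the config into a .tmp file; install it only if it differs."""
    tmp = target + ".tmp"
    generate(tmp)  # generation is cheap, so we can always regenerate
    if os.path.exists(target) and filecmp.cmp(tmp, target, shallow=False):
        os.unlink(tmp)
        return False  # no changes to apply
    shutil.move(tmp, target)
    return True
```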
In order both to surface errors and to notify the user when a restart
is necessary, we must run it twice. The `onlyif` functionality cannot
show configuration errors to the user, only determine whether the
command runs or not. We thus run the command once, treating errors as
"interesting" enough to run the actual command, whose failure will be
verbose in Puppet and halt any steps that depend on it.
Removing the `onlyif` would result in `stage_updated_sharding` showing
up in the output of every Puppet run, which obscures the important
messages it displays when an update to sharding is necessary.
Removing the `command` (e.g. making it an `echo`) would remove the
ability to report configuration errors. We thus have no choice but to
run it twice; this is thankfully low-overhead.
The reason higher expected_time_to_clear_backlog values were allowed
for queues during "bursts" was, in simpler terms, that the queues to
which this happens intrinsically have a higher acceptable "time until
cleared" for new events -- e.g. digests_email, where it's completely
fine to take a long time to send the digests out after they are put
in the queue. And that's already configurable without a normal/burst
distinction.
Thanks to this we can remove a bunch of overly complicated, and
ultimately useless, logic.
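For illustration, the remaining configuration might be no more than a
per-queue threshold table (the names and values below are made up, not
the actual settings):
```
# Illustrative only: one threshold per queue replaces the normal/burst
# pair; queues that are allowed to clear slowly simply get larger values.
MAX_SECONDS_TO_CLEAR_BACKLOG = {
    "digests_email": 1800,  # fine to take a long time after enqueueing
    "missedmessage_emails": 120,
}
DEFAULT_MAX_SECONDS_TO_CLEAR_BACKLOG = 30
```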
The race condition is described in the comment block removed by this
commit. This leaves one remaining race condition, which should be
virtually impossible; nevertheless, it seems worthwhile to document it
in the code, so we add a new comment describing it.
As a final note, this is not a new race condition; it was
hypothetically possible with the old code as well.
We can compute the intended number of processes from the sharding
configuration. In doing so, also validate that all of the ports are
contiguous.
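A minimal sketch of that computation, assuming a parsed `zulip.conf`
whose `[tornado_sharding]` keys are ports (the helper name is
invented):
```
import configparser

def get_tornado_ports(config: configparser.ConfigParser) -> list[int]:
    ports = sorted(int(port) for port in config.options("tornado_sharding"))
    # The number of processes is len(ports); validate that the ports
    # are contiguous, starting from the explicit 9800 default.
    if ports != list(range(9800, 9800 + len(ports))):
        raise ValueError(f"Tornado ports are not contiguous from 9800: {ports}")
    return ports
```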
This removes a discrepancy between `scripts/lib/sharding.py` and other
parts of the codebase about whether merely having a `[tornado_sharding]`
section is sufficient to enable sharding. Having behaviour which
changes merely based on whether an empty section exists is surprising.
This does require that a (presumably empty) `9800` configuration line
exist, but making that default explicit is useful.
After this commit, configuring sharding can be done by adding to
`zulip.conf`:
```
[tornado_sharding]
9800 = # default
9801 = other_realm
```
Followed by running `./scripts/refresh-sharding-and-restart`.
Making this include "zulip-tornado" makes it clearer in supervisor
logs. Without this, one only sees:
```
2020-09-14 03:43:13,788 INFO waiting for port-9807 to stop
2020-09-14 03:43:14,466 INFO stopped: port-9807 (exit status 1)
2020-09-14 03:43:14,469 INFO spawned: 'port-9807' with pid 24289
2020-09-14 03:43:15,470 INFO success: port-9807 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
```
`supervisorctl` starts and stops its arguments sequentially, in the
order they are passed[1]. Start them in the opposite order from the
order in which they were stopped -- this puts the dependencies first,
and the most core services (`zulip-django`) last.
While the only "dependency" here is currently thumbor, this sets us up
in case others are added later.
[1] https://github.com/Supervisor/supervisor/blob/master/supervisor/supervisorctl.py#L782
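Sketched in Python (the service list here is hypothetical):
```
import subprocess

# Stop the most core services first, dependencies (e.g. thumbor) last...
services = ["zulip-django", "zulip-tornado", "zulip-workers:*", "zulip-thumbor"]
subprocess.check_call(["supervisorctl", "stop", *services])
# ...and start in the reverse order, so that dependencies come up first
# and zulip-django comes up last.
subprocess.check_call(["supervisorctl", "start", *reversed(services)])
```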
This supports either running puppet to pick up new sharding changes
(which will warn of the need to finalize them via
`refresh-sharding-and-restart`), or simply running that script
directly.
The value in the stats file can get outdated if the queue hasn't done
enough iterations to update the stats file for a while. The queue size
output by `rabbitmqctl list_queues` is more up to date, and empirically
tends to agree with the value in the stats file (when the stats file is
fresh).
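A sketch of reading the live value (output format as printed by
`rabbitmqctl list_queues name messages`; error handling omitted):
```
import subprocess

def get_queue_sizes() -> dict[str, int]:
    """Ask RabbitMQ directly rather than trusting a possibly stale stats file."""
    output = subprocess.check_output(
        ["rabbitmqctl", "list_queues", "name", "messages"], text=True
    )
    sizes = {}
    for line in output.splitlines():
        parts = line.split("\t")
        if len(parts) == 2 and parts[1].isdigit():  # skip header/footer lines
            sizes[parts[0]] = int(parts[1])
    return sizes
```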
There are three functional side effects:
• Correct an insignificant but mathematically offensive bias toward
repeated characters in generate_api_key introduced in commit
47b4283c4b4c70ecde4d3c8de871c90ee2506d87; its entropy is increased
from 190.52864 bits to 190.53428 bits.
• Use the base32 alphabet in confirmation.models.generate_key; its
entropy is reduced from 124.07820 bits to the documented 120 bits, but
now it uses 1 syscall instead of 24.
• Use the base32 alphabet in get_bigbluebutton_url; its entropy is
reduced from 51.69925 bits to 50 bits, but now it uses 1 syscall
instead of 10.
(The base32 alphabet is A-Z 2-7. We could probably replace all of
these with plain secrets.token_urlsafe, since I expect most callers
can handle the full urlsafe_b64 alphabet A-Z a-z 0-9 - _ without
problems.)
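The unbiased entropy figures are straightforward to reproduce (a quick
check, not code from this commit):
```
import math

print(32 * math.log2(62))  # 190.53428... bits: 32 chars, 62-letter alphabet
print(24 * math.log2(36))  # 124.07820... bits: 24 chars, 36-letter alphabet
print(24 * math.log2(32))  # 120 bits: 24 chars of base32 (A-Z 2-7)
print(10 * math.log2(36))  # 51.69925... bits
print(10 * math.log2(32))  # 50 bits
```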
Signed-off-by: Anders Kaseorg <anders@zulip.com>
PostgreSQL packages for Ubuntu run "initdb" without specifying the
locale on installation. This means that the default template
database (template1) is created with the system default locale. If the
system default locale is a non-UTF-8-compatible encoding such as
en_US.ISO-8859-15, the "zulip" database is also created with a
non-UTF-8-compatible encoding such as LATIN9.
You can reproduce this case by running the following script:
```
apt update
apt install -y locales
locale-gen en_US.ISO-8859-15
update-locale LANG=en_US.ISO-8859-15 LANGUAGE=en_US:
apt install -y wget
wget https://www.zulip.org/dist/releases/zulip-server-latest.tar.gz
tar xf zulip-server-latest.tar.gz
zulip-server-*/scripts/setup/install \
  --hostname=zulip-test.example.com \
  --email=zulip-test-admin@example.com \
  --self-signed-cert
```
scripts/setup/install fails with the following error:
```
+ ./manage.py migrate --noinput
Operations to perform:
Apply all migrations: analytics, auth, confirmation, contenttypes, otp_static, otp_totp, sessions, social_django, two_factor, zerver
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying zerver.0001_initial...Traceback (most recent call last):
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 82, in _execute
return self.cursor.execute(sql)
File "/home/zulip/deployments/2020-08-19-05-57-10/zerver/lib/db.py", line 33, in execute
return wrapper_execute(self, super().execute, query, vars)
File "/home/zulip/deployments/2020-08-19-05-57-10/zerver/lib/db.py", line 20, in wrapper_execute
return action(sql, params)
psycopg2.errors.UntranslatableCharacter: character with byte sequence 0xe2 0x80 0x99 in encoding "UTF8" has no equivalent in encoding "LATIN9"
CONTEXT: line 4 of configuration file "/usr/share/postgresql/12/tsearch_data/en_us.affix"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "./manage.py", line 50, in <module>
execute_from_command_line(sys.argv)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/base.py", line 323, in run_from_argv
self.execute(*args, **cmd_options)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/base.py", line 364, in execute
output = self.handle(*args, **options)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/base.py", line 83, in wrapped
res = handle_func(*args, **kwargs)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/commands/migrate.py", line 232, in handle
post_migrate_state = executor.migrate(
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/migrations/executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/migrations/executor.py", line 245, in apply_migration
state = migration.apply(state, schema_editor)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/migrations/migration.py", line 124, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/migrations/operations/special.py", line 105, in database_forwards
self._run_sql(schema_editor, self.sql)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/migrations/operations/special.py", line 130, in _run_sql
schema_editor.execute(statement, params=None)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/backends/base/schema.py", line 137, in execute
cursor.execute(sql, params)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 82, in _execute
return self.cursor.execute(sql)
File "/home/zulip/deployments/2020-08-19-05-57-10/zerver/lib/db.py", line 33, in execute
return wrapper_execute(self, super().execute, query, vars)
File "/home/zulip/deployments/2020-08-19-05-57-10/zerver/lib/db.py", line 20, in wrapper_execute
return action(sql, params)
django.db.utils.DataError: character with byte sequence 0xe2 0x80 0x99 in encoding "UTF8" has no equivalent in encoding "LATIN9"
CONTEXT:  line 4 of configuration file "/usr/share/postgresql/12/tsearch_data/en_us.affix"
```
This will let PyYAML link against LibYAML when PyYAML is next
installed. Due to virtualenv-clone, that won’t happen until the next
Python package removal anyway, so we don’t bother bumping
PROVISION_VERSION.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The combination of `--force --noop` is potentially confusing, but
currently `--noop` makes no sense without `--force`, as it will prompt
and then not make changes.
Make `--noop` skip the prompt as well.
Fixes #12868.
We now also include the Python version, in the format
'major.minor.patchlevel', when generating the hash for a
requirements file. This was necessary since packages tend to
break on different versions of Python, so it is important to
track the version with which the venv was set up.
WARN: This commit will force all Zulip venvs to be recreated.
We were already using package names along with their versions
to generate the hash for the requirements file, as we were passing
the `.txt` files to hash_reqs instead of the intended `.in` files
for which the functions in this file were originally designed.
Changed the expand_reqs_helper function to adapt to the `.txt` files.
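The shape of the change, as a sketch (not the exact hash_reqs code):
```
import hashlib
import platform

def hash_deps(requirements_paths: list[str]) -> str:
    sha1 = hashlib.sha1()
    # 'major.minor.patchlevel', so venvs are rebuilt when Python changes.
    sha1.update(platform.python_version().encode())
    for path in requirements_paths:
        with open(path, "rb") as f:
            sha1.update(f.read())
    return sha1.hexdigest()
```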
The contents of the database are unchanged across the PostgreSQL
restart; as such, there is no reason to invalidate the caches.
This step was inherited from the general operating system upgrade
documentation. When Python versions change, such as during OS
upgrades, we must ensure that memcached is cleared. However, the
`do-release-upgrade` process already uninstalls and upgrades to a new
memcached, and likely restarts the system as well; a separate step for
OS upgrades to restart memcached is thus unnecessary.
pg_upgradecluster has two possibilities for `--method`: `dump` and
`upgrade`. The former is the default, and does a `pg_dump` of all of
the databases in the old cluster and feeds them into the new cluster.
This is a sure-fire way of getting the same information in both
databases, but may be extremely slow on large databases, and is
guaranteed to fail on servers whose databases take up >50% of their
disk.
The `--method=upgrade` method, by contrast, uses pg_upgrade to copy
the raw database data files over to the new cluster, and then fiddles
with their internal structure as needed to make them correct for the
new version[1]. This is slightly faster than the dump/load method,
since it skips the serialization step, but it still requires that
there be enough space on disk for both old and new versions at once.
`pg_upgrade` is currently supported for all versions of PostgreSQL
from 8.4 to 12.
Using `pg_upgrade` incurs slightly more risk, but since it is widely
used by now, using it in the relatively-controlled Zulip server
environment is reasonable. The expected worst failure is a failure to
upgrade, not corruption or data loss.
Additionally, passing `--link` uses hardlinks to link the data files
into both the old and new directories simultaneously. This addresses
both the runtime of the operation and its disk space usage. The only
potential downside is that as soon as writes have occurred on the
upgraded cluster, the old cluster can no longer be started. Since
this tooling intends to remove the old cluster immediately after the
upgrade completes successfully, this is not a significant drawback.
Switch to using `--method=upgrade --link`. This technique spits out
two shell scripts which are expected to be run after completion of the
upgrade; one re-analyzes the statistics, the other does an `rm -rf` of
the data where it is still hardlinked in the old cluster. Extract the
location of these scripts by parsing the `pg_upgradecluster` output;
since their path is not static, we must rely on it being relatively
easy to parse. This carries less risk, and has more obvious failure
modes, than inserting the current contents of those scripts directly
into the overall `upgrade-postgres`.
[1] https://www.postgresql.org/docs/12/pgupgrade.html
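A sketch of the extraction step (the version and cluster arguments are
examples, and the exact `pg_upgradecluster` output format varies,
hence the loose matching):
```
import re
import subprocess

output = subprocess.check_output(
    ["pg_upgradecluster", "--method=upgrade", "--link", "10", "main"], text=True
)
# pg_upgrade leaves behind two follow-up scripts whose directory is not
# static; scrape their paths out of the output instead of hardcoding them.
scripts = re.findall(r"\S*/(?:analyze_new_cluster|delete_old_cluster)\.sh", output)
for script in scripts:
    subprocess.check_call(["bash", script])  # run after a successful upgrade
```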
Although mktemp is deprecated due to security issues, this is not a
security issue.
The security problems with mktemp happen when you open the resulting
filename (without O_EXCL) in a publicly writable directory, because
then someone else might have predicted the filename and created or
symlinked or hardlinked something there between the mktemp and the
open, causing you to write to a file you didn’t expect.
Here we don’t open the resulting filename, we symlink to it. symlink
will refuse to clobber an existing file, and we handle the error that
arises from this case. This is the normal way to atomically create a
symlink.
We should still replace mktemp because it’s deprecated, but we can’t
replace it with a function that creates the temporary file. Instead
we build a random filename ourselves.
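A sketch of the pattern (the helper name is hypothetical):
```
import os
import secrets

def atomic_symlink(target: str, link_name: str) -> None:
    while True:
        # Build the random name ourselves (the mktemp replacement); we
        # never open() it, so the classic mktemp race does not apply.
        tmp = f"{link_name}.{secrets.token_hex(8)}.tmp"
        try:
            os.symlink(target, tmp)  # refuses to clobber an existing file
            break
        except FileExistsError:
            continue  # pick another name; astronomically unlikely
    os.rename(tmp, link_name)  # atomically replace any existing link
```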
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Running `pg_upgradecluster` runs the `CREATE TEXT SEARCH DICTIONARY`
and `CREATE TEXT SEARCH CONFIGURATION` from
`zerver/migrations/0001_initial.py` on the new PostgreSQL cluster;
this requires that the stopwords file and dictionary exist _prior_
to `pg_upgradecluster` being run.
This causes a minor dependency conflict -- we do not wish to duplicate
the functionality from `zulip::postgres_appdb_base` which configures
those files, but installing all of `zulip::postgres_appdb_tuned` would
attempt to restart PostgreSQL -- whose cluster has not yet been
configured for the new version.
In order to split out configuration of the prerequisites for the
application database, and the steps required to run it, we need to be
able to apply only part of the puppet configuration. Use the
newly-added `--config` argument to provide a more limited `zulip.conf`
which only applies `zulip::postgres_appdb_base` to the new version of
Postgres, creating the required tsearch data files.
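For example, the limited `zulip.conf` might contain just (a sketch,
assuming the `[machine]` `puppet_classes` setting is honored as in the
normal configuration):
```
[machine]
puppet_classes = zulip::postgres_appdb_base
```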
This also preserves the property that a failure at any point prior to
the `pg_upgradecluster` is easily recoverable, by re-running
`zulip-puppet-apply`.