The class names need to be renamed even if we are not about to run
puppet ourselves; otherwise, deployments which rely on running puppet
themselves will still have the wrong class names.
These are respected by `urllib`, and thus also `requests`. We set
`HTTP_proxy`, not `HTTP_PROXY`, because the latter is ignored when the
process might be running under CGI -- in that case it may have come
from the `Proxy:` header of the request.
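For illustration, urllib's proxy detection can be inspected directly;
the proxy URL below is a made-up placeholder, not a real setting:
```python
# Illustrative check only; the proxy URL is a placeholder.
import os
import urllib.request

os.environ["HTTP_proxy"] = "http://proxy.example.com:3128"
# urllib matches *_proxy variables case-insensitively, so this shows up
# under the lowercase "http" key; requests consults the same logic.
print(urllib.request.getproxies())  # {'http': 'http://proxy.example.com:3128'}
```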
Using `config_file.write()` only writes out what Python parsed from the
file; as such, it strips all comments and whitespace.
Use `crudini --set`, which only modifies the line whose contents are
changed.
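A minimal sketch of the approach from Python, with a hypothetical file,
section, and key standing in for the real ones:
```python
# Hypothetical example: change one key in an ini-style file in place,
# leaving comments and unrelated lines untouched (unlike configparser's
# write()).
import subprocess

subprocess.check_call(
    ["crudini", "--set", "/etc/example/example.conf", "section", "key", "new-value"]
)
```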
There is only one PostgreSQL database; the "appdb" is irrelevant.
Also use "postgresql," as it is the name of the software, whereas
"postgres" the name of the binary and colloquial name. This is minor
cleanup, but enabled by the other renames in the previous commit.
The "voyager" name is non-intuitive and not significant.
`zulip::voyager` and `zulip::dockervoyager` stubs are kept for
backwards compatibility with existing `zulip.conf` files.
This moves the puppet configuration closer to the "roles and profiles
method"[1] which is suggested for organizing puppet classes. Notably,
here it makes clear which classes are meant to be able to stand alone
as deployments.
Shims are left behind at the previous names, for compatibility with
existing `zulip.conf` files when upgrading.
[1] https://puppet.com/docs/pe/2019.8/the_roles_and_profiles_method
Fingerprinting the config is somewhat brittle -- it requires custom
bootstrapping for old (fingerprint-less) configs, and may produce
false positives.
Since generating the config is lightweight, do so into the .tmp files,
and compare the output to the originals to determine if there are
changes to apply.
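As a rough sketch of that comparison (the path here is illustrative,
and we assume the generator has just written the `.tmp` file alongside
the installed one):
```python
# Compare the freshly generated .tmp output against the installed
# config; only when they differ is there anything to apply.
import filecmp

def sharding_config_changed(config_path="/etc/zulip/nginx_sharding.conf"):
    return not filecmp.cmp(config_path, config_path + ".tmp", shallow=False)
```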
In order both to surface errors and to notify the user when a restart
is necessary, we must run the command twice. The `onlyif`
functionality cannot show configuration errors to the user, only
determine whether the command runs or not. We thus run the command
once in `onlyif`, treating any errors as "interesting" enough to run
the actual command, whose failure will be verbose in Puppet and will
halt any steps that depend on it.
Removing the `onlyif` would result in `stage_updated_sharding` showing
up in the output of every Puppet run, which obscures the important
messages it displays when an update to sharding is necessary.
Removing the `command` (e.g. making it an `echo`) would remove the
ability to report configuration errors. We thus have no choice but to
run it twice; thankfully, doing so is low-overhead.
The reason a higher expected_time_to_clear_backlog was allowed for
queues during "bursts" is, in simpler terms, that the queues where this
happens intrinsically have a higher acceptable "time until cleared" for
new events. E.g. digest_emails, where it's completely fine to take a
long time to send them out after they are put in the queue. And that's
already configurable without a normal/burst distinction.
Thanks to this we can remove a bunch of overly complicated, and
ultimately useless, logic.
The race condition is described in the comment block removed by this
commit. This leaves a different, remaining race condition that should
be virtually impossible, but it nevertheless seems worthwhile to
document it in the code, so we add a new comment describing it.
As a final note, this is not a new race condition; it was
hypothetically possible with the old code as well.
We can compute the intended number of processes from the sharding
configuration. In doing so, also validate that all of the ports are
contiguous.
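A simplified sketch of that computation, assuming a `[tornado_sharding]`
section shaped like the example below; the helper here is illustrative,
not the actual code in `scripts/lib/sharding.py`:
```python
import configparser

def get_tornado_ports(config):
    # Option names in [tornado_sharding] are the Tornado ports themselves.
    ports = sorted(int(port) for port in config.options("tornado_sharding"))
    # Validate that the ports form one contiguous block, e.g. 9800, 9801, 9802.
    if ports != list(range(ports[0], ports[0] + len(ports))):
        raise ValueError("Tornado sharding ports must be contiguous: %r" % ports)
    return ports

config = configparser.RawConfigParser()
config.read("/etc/zulip/zulip.conf")
print(len(get_tornado_ports(config)))  # intended number of Tornado processes
```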
This removes a discrepancy between `scripts/lib/sharding.py` and other
parts of the codebase about whether merely having a `[tornado_sharding]`
section is sufficient to enable sharding. Having behaviour which
changes based merely on whether an empty section exists is surprising.
This does require that a (presumably empty) `9800` configuration line
exist, but making that default explicit is useful.
After this commit, configuring sharding can be done by adding to
`zulip.conf`:
```
[tornado_sharding]
9800 = # default
9801 = other_realm
```
Followed by running `./scripts/refresh-sharding-and-restart`.
This supports either running puppet to pick up new sharding changes,
which will warn of the need to finalize them via
`refresh-sharding-and-restart`, or simply running that script directly.
The value in the stats file can get outdated if the queue hasn't done
enough iterations to update the stats file for a while. The queue size
output by `rabbitmqctl list_queues` is more up to date, and empirically
tends to agree with the value in the stats file (when the stats file is
fresh).
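For reference, a hedged sketch of reading live queue sizes this way
(the exact invocation and parsing in the monitoring code may differ):
```python
# Ask RabbitMQ directly for per-queue message counts (tab-separated
# output); this reflects live state rather than the last stats-file
# snapshot.
import subprocess

output = subprocess.check_output(
    ["rabbitmqctl", "-q", "list_queues", "name", "messages"], text=True
)
queue_sizes = {
    name: int(count)
    for name, count in (line.split("\t") for line in output.splitlines() if line)
}
```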
This will let PyYAML link against LibYAML when PyYAML is next
installed. Due to virtualenv-clone, that won’t happen until the next
Python package removal anyway, so we don’t bother bumping
PROVISION_VERSION.
Signed-off-by: Anders Kaseorg <anders@zulip.com>