This supports either running puppet to pick up new sharding changes,
which will warn of the need to finalize them via
`refresh-sharding-and-restart`, or simply running that script directly.
The value in the stats file can get outdated if the queue hasn't done
enough iterations to update the stats file for a while. The queue size
output by `rabbitmqctl list_queues` is more up to date, and empirically
tends to agree with the value in the stats file (when the stats file is
fresh).
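For illustration, a minimal sketch of reading the live queue size from
rabbitmqctl (the helper name and parsing are assumptions, not code from
this change):

```
import subprocess

def current_queue_size(queue_name):
    # `rabbitmqctl list_queues name messages` prints one
    # "name<TAB>count" line per queue; pick out ours.
    output = subprocess.check_output(
        ["rabbitmqctl", "list_queues", "name", "messages"],
        universal_newlines=True,
    )
    for line in output.splitlines():
        parts = line.split("\t")
        if len(parts) == 2 and parts[0] == queue_name:
            return int(parts[1])
    raise ValueError("queue not found: " + queue_name)
```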
This will let PyYAML link against LibYAML when PyYAML is next
installed. Due to virtualenv-clone, that won’t happen until the next
Python package removal anyway, so we don’t bother bumping
PROVISION_VERSION.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The combination of `--force --noop` is potentially confusing, but
currently `--noop` makes no sense without `--force`, as it will prompt
and then not make changes.
Make `--noop` skip the prompt as well.
Fixes #12868.
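A minimal sketch of the resulting prompt logic (the flag wiring here is
an assumption, not the script's actual code):

```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--force", action="store_true")
parser.add_argument("--noop", action="store_true")
args = parser.parse_args()

# After this change, either flag skips the confirmation prompt;
# prompting and then making no changes was never useful.
if not (args.force or args.noop):
    if input("Proceed? [y/N] ").strip().lower() != "y":
        raise SystemExit(1)

if not args.noop:
    pass  # ... apply the changes here ...
```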
We now also include the Python version, in the format
'major.minor.patchlevel', when generating the hash for a
requirements file. This was necessary since packages tend to
break on different versions of Python, so it is important to
track the version the venv was set up with.
WARN: This commit will force all zulip venvs to be recreated.
We were already using package names along with their versions
to generate the hash for the requirements file, as we were passing
the `.txt` files to hash_reqs instead of the intended `.in` files
for which the functions in this file were originally designed.
Changed the expand_reqs_helper function to adapt to the `.txt` files.
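For illustration, a hedged sketch of the idea (the function name and
hashing details are assumptions; the real logic lives in hash_reqs):

```
import hashlib
import platform

def hash_requirements(lines):
    # Mixing the interpreter version into the hash means a venv
    # built under a different Python is treated as stale and
    # gets recreated.
    sha = hashlib.sha1()
    sha.update(platform.python_version().encode())  # e.g. "3.8.10"
    for line in sorted(lines):
        sha.update(line.encode())
    return sha.hexdigest()
```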
Although mktemp is deprecated due to security issues, this is not a
security issue.
The security problems with mktemp happen when you open the resulting
filename (without O_EXCL) in a publicly writable directory, because
then someone else might have predicted the filename and created or
symlinked or hardlinked something there between the mktemp and the
open, causing you to write to a file you didn’t expect.
Here we don’t open the resulting filename, we symlink to it. symlink
will refuse to clobber an existing file, and we handle the error that
arises from this case. This is the normal way to atomically create a
symlink.
We should still replace mktemp because it’s deprecated, but we can’t
replace it with a function that creates the temporary file. Instead
we build a random filename ourselves.
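A minimal sketch of the pattern described above (the helper name is an
assumption; the real code differs in details):

```
import errno
import os
import secrets

def atomic_symlink(target, link_path):
    # Build our own random name instead of using mktemp.  We never
    # open() this path, we only symlink to it, and os.symlink refuses
    # to clobber an existing file, so a collision just costs a retry.
    while True:
        tmp = "%s.%s.tmp" % (link_path, secrets.token_hex(8))
        try:
            os.symlink(target, tmp)
        except OSError as e:
            if e.errno == errno.EEXIST:
                continue
            raise
        os.rename(tmp, link_path)  # atomically replace any old link
        return
```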
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Otherwise, the useradd command will fail during the DigitalOcean
1-Click App installation, because the install script is called
twice during that process. Moreover, the Zulip install script
is designed to be idempotent, and this bug compromises that.
The value is a holdover from when it controlled runtime behavior,
which it no longer does.
Stop taking a DEPLOYMENT_TYPE, which is unused; the Python code only
cares about whether the option exists, not its value.
These are closer to the sense of "is this a service we
configured for Zulip", and remove potential confusion around the 0/1
values being backwards from how binary values are usually interpreted.
Using checks of `,$PUPPET_CLASSES,` is repetitive and error-prone; it
does not properly deal with `zulip_ops::` classes, for instance, which
include the `zulip::` classes.
As alluded to in ca9d27175b, this can be fixed by inspecting the
classes that would be applied, using `puppet --write-catalog-summary`.
We work around the chicken-and-egg problem alluded to therein by
writing out as complete a `zulip.conf` as would be necessary before
running puppet, and then removing the sections we know not to be
needed.
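Hedged sketch of the technique: have puppet compute (but not apply) the
catalog and write a summary of it. The classes.txt path varies by
puppet version, the manifest path is illustrative, and the class name
is just an example:

```
import subprocess

subprocess.check_call(
    ["puppet", "apply", "--noop", "--write-catalog-summary", "site.pp"]
)
with open("/var/lib/puppet/state/classes.txt") as f:
    applied_classes = {line.strip() for line in f if line.strip()}

# Membership tests now see the fully-expanded class list, including
# classes pulled in indirectly by zulip_ops:: wrappers.
wants_postgres = "zulip::postgres_common" in applied_classes
```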
Unfortunately, there are two checks for `$PUPPET_CLASSES` which cannot
be switched to this technique, as they concern errors that we wish to
catch quite early, and thus before we have puppet installed. Since we
expect failures of those to only concern warnings, and only be
mistakenly omitted for internal `zulip_ops::` classes, this seems a
reasonable risk to admit in exchange for catching common errors early.
When supervisor is first installed, it is started automatically, and
creates the socket, owned by root. Subsequent reconfiguration in
puppet only calls `reread + update`, which is insufficient to apply
the `chown = zulip:zulip` line in `supervisord.conf`, leaving the
socket owned by `root` and the last part of the installation unable to
restart `supervisor` services as the `zulip` user. The `chown` line
in `scripts/lib/install` exists to paper over this.
Add a separate exec target for changes to `supervisord.conf` itself,
which restarts the full service. This leaves the default `restart`
action on the service for the lightweight `reread + update` action,
which is more common.
We use `systemctl` only on redhat-esque builds, because CI runs
Ubuntu, but init is not systemd in that context. `systemctl reload`
is sufficient to re-apply the socket ownership, but a full `restart`
and not `reload` is necessary under `/etc/init.d/supervisor`.
49a7a66004 and immediately previous commits began installing
PostgreSQL 12 from their apt repository. On machines which already
have the distribution-provided version of PostgreSQL installed,
however, this leads to failure to apply puppet when restarting
PostgreSQL 12, as both attempt to claim the same port.
During installation, if we will be installing PostgreSQL, look for
other versions than what we will install, and abort if they are
found. This is safer than attempting to automatically uninstall or
reconfigure existing databases.
This allows for installing from-scratch with a different pinned
version of PostgreSQL, and provides a single place to change when the
default should increase.
Using the presence of `/etc/init.d/postgresql` to detect whether
Postgres is on the server is incorrect, because this check runs
_before_ puppet and any packages are installed. Thus, it cannot tell
the difference between a new single-host Ubuntu first-time install
without PostgreSQL yet, and a host which is merely a front-end and will
never have PostgreSQL. This leads to failures in first-time installs:
```
Error: Evaluation Error: Error while evaluating a Function Call,
Could not find template 'zulip/postgresql//postgresql.conf.template.erb'
```
The only way to detect if PostgreSQL will be present in the _end_
state of the install is to examine the puppet classes that are
applied.
To do this, we must inspect `PUPPET_CLASSES`. Unfortunately, this can
be fragile to subclassing (e.g. `zulip_ops::postgres_appdb`). We
might desire to use `puppet apply --write-catalog-summary` to deduce
the _applied_ classes, which would unroll the inheritance; however,
this causes a chicken-and-egg problem, because `zulip.conf` must be
already written out (including a value for `postgresql.version`, if
necessary!) before such a puppet run could successfully complete.
Switch to predicating the `postgresql.version` key on the puppet
classes that are known to install postgres.
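For illustration, a sketch of such a predicate (the class names are
examples, not the actual list):

```
def installs_postgres(puppet_classes):
    # Classes known to install postgres; illustrative, not exhaustive.
    POSTGRES_CLASSES = {
        "zulip::postgres_appdb_tuned",
        "zulip_ops::postgres_appdb",
    }
    classes = {c.strip() for c in puppet_classes.split(",")}
    return bool(classes & POSTGRES_CLASSES)
```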
Support for Xenial and Stretch was removed (5154ddafca, 0f4b1076ad,
8944e0ad53, 79acd5ae40, 1219a2e854), but not all codepaths were
updated to remove their conditionals on them.
Remove all code predicated on Xenial or Stretch. debathena support
was migrated to Bionic, since that appears to be the current state of
existing debathena servers.
0f4b1076ad removed Ubuntu 16.04 "xenial" and Debian 9 "stretch" from
the printed list of supported operating systems, but left them in the
verification check that controls if that message is printed,
effectively continuing to support them.
Conversely, 439f0d3004 added Ubuntu 20.04 "focal" to the check, but
not to the printed list.
Synchronize the check and the printed list to the currently supported
distributions: Ubuntu 18.04 "bionic", Ubuntu 20.04 "focal", and
Debian 10 "buster".
The previous commit removed the only behavior difference between the
two flags; both of them skip user/database creation, and the tables
therein.
Of the two options, `--no-init-db` is more explicit about what it
does, rather than describing just one facet of when it might be used;
remove `--remote-postgres`.
Since `--postgres-missing-dictionaries` edits `/etc/zulip/zulip.conf`,
it interferes with the intent of `--no-overwrite-settings`.
Make the two settings conflict, to prevent this unclear state.
The `--no-init-db` option previously only controlled whether
`initialize-database` was run, which sets up the tables inside the
database. If PostgreSQL was installed locally, it still attempted to
create the user and empty database.
This fails on hosts which are remote PostgreSQL hosts rather than
application hosts, as:
- They may already have a local database, and while
  `initialize-database` will detect and offer to abort if one is
  found, `--no-init-db` seems like it should be the option to not
  overwrite it
- `flush-memcached` requires that a local venv be installed, which it
  often is not on non-frontend machines.
Skip the database configuration when run with `--no-init-db`.
Since we now support Postgres versions from 10 to 12, we might as well
have new installations start on Postgres 12 to avoid unnecessary
migration/upgrade work.
We would prefer to use the postgres packages from Postgres themselves,
if available. However, this requires ensuring that, for existing
installs, we preserve the same version of postgres as their base
distribution installed.
Move the version-determination logic from being computed at puppet
interpolation time, to being computed at install time and pinned into
zulip.conf.
These files can’t use f-strings yet because they need to run in Python
2 or Python 3.5.
Generated by pyupgrade.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Generated by pyupgrade --py36-plus --keep-percent-format.
Now including %d, %i, %u, and multi-line strings.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes #2665.
Regenerated by tabbott with `lint --fix` after a rebase and change in
parameters.
Note from tabbott: In a few cases, this converts technical debt in the
form of unsorted imports into different technical debt in the form of
our largest files having very long, ugly import sequences at the
start. I expect this change will increase pressure for us to split
those files, which isn't a bad thing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Automatically generated by the following script, based on the output
of lint with flake8-comma:
```
import re
import sys

last_filename = None
last_row = None
lines = []

for msg in sys.stdin:
    m = re.match(
        r"\x1b\[35mflake8 \|\x1b\[0m \x1b\[1;31m(.+):(\d+):(\d+): (\w+)", msg
    )
    if m:
        filename, row_str, col_str, err = m.groups()
        row, col = int(row_str), int(col_str)
        if filename == last_filename:
            assert last_row != row
        else:
            if last_filename is not None:
                with open(last_filename, "w") as f:
                    f.writelines(lines)
            with open(filename) as f:
                lines = f.readlines()
            last_filename = filename
        last_row = row
        line = lines[row - 1]
        if err in ["C812", "C815"]:
            lines[row - 1] = line[: col - 1] + "," + line[col - 1 :]
        elif err in ["C819"]:
            assert line[col - 2] == ","
            lines[row - 1] = line[: col - 2] + line[col - 1 :].lstrip(" ")

if last_filename is not None:
    with open(last_filename, "w") as f:
        f.writelines(lines)
```
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
certbot-auto doesn’t work on Ubuntu 20.04, and won’t be updated; we
migrate to using the certbot package shipped with the OS instead.
Also made sure that certbot gets installed when running
zulip-puppet-apply, to handle existing systems.
We already override the umask in upgrade-zulip-stage-2, but that’s too
late since we’ve already written a bunch of files in stage 1. I would
have removed the stage 2 override, but the OS upgrade documentation
references running stage 2 directly.
Fixes #15164. Note that an affected installation will need to upgrade
twice, because the first upgrade uses the old stage 1.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Generated by pyupgrade --py36-plus --keep-percent-format, but with the
NamedTuple changes reverted (see commit
ba7906a3c6, #15132).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This allows straight-forward configuration of realm-based Tornado
sharding through simply editing /etc/zulip/zulip.conf to configure
shards and running scripts/refresh-sharding-and-restart.
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
While this functionality to post slow queries to a Zulip stream was
very useful in the early days of Zulip, when there were only a few
hundred accounts, it's long since been useless given (1) the total
request volume on larger Zulip servers run by Zulip developers, and
(2) the fact that other server operators don't want real-time
notifications of slow backend queries. The right structure for this is
just a log file.
We get rid of the queue and replace it with a "zulip.slow_queries"
logger, which will still log to /var/log/zulip/slow_queries.log for
ease of access to this information and propagate to the other logging
handlers. Reducing the number of queues is good for lowering Zulip's
memory footprint and restart performance, since we run at least one
dedicated queue worker process for each one in most configurations.
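Hedged sketch of the replacement logger; handler and formatting
details are assumptions, not the actual logging configuration:

```
import logging

slow_query_logger = logging.getLogger("zulip.slow_queries")
slow_query_logger.setLevel(logging.INFO)
slow_query_logger.addHandler(
    logging.FileHandler("/var/log/zulip/slow_queries.log")
)

def log_slow_query(time_ms, query):
    slow_query_logger.info("%dms: %s", time_ms, query)
```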
Yes, it's slightly janky to create an argparse.Namespace object like
this, but it saves us from shelling out to a script whose only real
value-add is parsing a single `threshold_days` argument.
This saves about 130ms for a no-op provision.
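For illustration, a sketch of the idea (the consuming function is
hypothetical):

```
import argparse

def purge_old_caches(args):
    # Hypothetical consumer; the real cleanup logic lives elsewhere.
    print("would purge caches older than %d days" % args.threshold_days)

# Instead of shelling out just to parse --threshold-days, build the
# parsed-arguments object directly:
purge_old_caches(argparse.Namespace(threshold_days=14))
```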
In Travis we didn't have root access, so we used to add a different
srv path. Now that we've shifted our production suites to Circle CI,
we don't need that code, so it is removed.
We also had hacky code in commit-lint-message for Travis, which is
now of no use.
Now that we've cleaned up this tool's output, there's no reason to use
an awkward mechanism to hide its output; we can just print it out like
a normal program.
Fixes #14644; resolves #14701.
Generated by autopep8, with the setup.cfg configuration from #14532.
I’m not sure why pycodestyle didn’t already flag these.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Now that we want to use the production suites on Circle CI, there is
no need to set TRAVIS in the environment while running scripts.
CIRCLECI is set by default in the environment of Circle CI builds,
so we can use it directly.
Also, Travis CI had rabbitmq-server installed, so we had to add a
workaround in the install script to avoid an error; that workaround
is removed.
We now have two functions related to digests
for processes:

    is_digest_obsolete
    write_digest_file
In most cases we now **wait** to write the
digest file until after we've successfully
run a process with its new inputs.
In one place, for database migrations, we
continue to write the digest optimistically.
We'll want to fix this, but it requires a
little more code cleanup.
Here is the typical sequence of events:
NEVER RUN -
    is_digest_obsolete returns True
    quickly (we don't compute a hash)

    write_digest_file does a write (duh)

AFTER NO CHANGES -
    is_digest_obsolete returns False
    after reading one file for the old
    hash and multiple files to compute
    the new hash

    most callers skip write_digest_file
    (no files are changed)

AFTER SOME CHANGES -
    is_digest_obsolete returns True
    after doing full checks

    most callers call write_digest_file
    *after* running a process
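For illustration, a minimal sketch of what this pair of helpers might
look like (the storage location and hashing details are assumptions):

```
import hashlib
import os

DIGEST_DIR = "var/digests"  # hypothetical location for stored digests

def compute_digest(paths):
    sha = hashlib.sha1()
    for path in sorted(paths):
        with open(path, "rb") as f:
            sha.update(f.read())
    return sha.hexdigest()

def is_digest_obsolete(name, paths):
    digest_path = os.path.join(DIGEST_DIR, name)
    if not os.path.exists(digest_path):
        return True  # NEVER RUN: answer quickly, without hashing
    with open(digest_path) as f:
        old_digest = f.read().strip()
    return old_digest != compute_digest(paths)

def write_digest_file(name, paths):
    os.makedirs(DIGEST_DIR, exist_ok=True)
    with open(os.path.join(DIGEST_DIR, name), "w") as f:
        f.write(compute_digest(paths))
```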