This moves the puppet configuration closer to the "roles and profiles
method"[1] which is suggested for organizing puppet classes. Notably,
it makes clear which classes are meant to be able to stand alone
as deployments.
Shims are left behind at the previous names, for compatibility with
existing `zulip.conf` files when upgrading.
[1] https://puppet.com/docs/pe/2019.8/the_roles_and_profiles_method
Because the command is part of a pipe sequence, the exit code defaults
to that of the last command in the sequence, which is not the most
important one here.
Set pipefail, which sets the exit status to the exit code of the last
program in the sequence to exit non-zero, or 0 if all succeeded. This
prevents the upgrade from barreling onward and setting
`postgres.version` improperly if the database upgrade step failed.
Fingerprinting the config is somewhat brittle -- it requires custom
bootstrapping for old (fingerprint-less) configs, and may have
false positives.
Since generating the config is lightweight, do so into the .tmp files,
and compare the output to the originals to determine if there are
changes to apply.
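A minimal sketch of that compare-and-stage flow (paths and the staging behavior are illustrative):
```python
# Regenerating the config is cheap, so write it to a .tmp file and diff
# against the live file instead of trusting a fingerprint.
import filecmp
import os

def stage_if_changed(path: str, new_contents: str) -> bool:
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(new_contents)
    if os.path.exists(path) and filecmp.cmp(tmp, path, shallow=False):
        os.remove(tmp)  # identical: nothing to apply
        return False
    return True  # leave the .tmp file staged for the apply step
```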
In order both to surface errors and to notify the user in case a
restart is necessary, we must run it twice. The `onlyif`
functionality cannot show configuration errors to the user, only
determine if the command runs or not. We thus run the command once,
judging errors as "interesting" enough to run the actual command,
whose failure will be verbose in Puppet and halt any steps that depend
on it.
Removing the `onlyif` would result in `stage_updated_sharding` showing
up in the output of every Puppet run, which obscures the important
messages it displays when an update to sharding is necessary.
Removing the `command` (e.g. making it an `echo`) would result in
removing the ability to report configuration errors. We thus have no
choice but to run it twice; this is thankfully low-overhead.
The reason a higher expected_time_to_clear_backlog was allowed for
queues during "bursts" was, in simpler terms, that the queues to which
this happens intrinsically have a higher acceptable "time until
cleared" for new events. E.g. digest_emails, where it's completely
fine to take a long time to send them out after putting them in the
queue. And that's already configurable without a normal/burst
distinction.
Thanks to this we can remove a bunch of overly complicated, and
ultimately useless, logic.
The race condition is described in the comment block removed by this
commit. This leaves room for another, remaining race condition
that should be virtually impossible, but nevertheless it seems
worthwhile to have it documented in the code, so we add a new comment
describing it.
As a final note, this is not a new race condition,
it was hypothetically possible with the old code as well.
We can compute the intended number of processes from the sharding
configuration. In doing so, also validate that all of the ports are
contiguous.
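A sketch of the computation (assuming ports are listed under a `[tornado_sharding]` section of `zulip.conf`; the exact validation is illustrative):
```python
# Derive the Tornado process count from the sharding configuration,
# validating that the configured ports are contiguous from 9800.
import configparser

config = configparser.ConfigParser()
config.read("/etc/zulip/zulip.conf")
ports = sorted(int(port) for port in config["tornado_sharding"])
assert ports == list(range(9800, 9800 + len(ports))), "ports must be contiguous"
num_processes = len(ports)
```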
This removes a discrepancy between `scripts/lib/sharding.py` and other
parts of the codebase about whether merely having a `[tornado_sharding]`
section is sufficient to enable sharding. Having behaviour which
changes merely based on whether an empty section exists is surprising.
This does require that a (presumably empty) `9800` configuration line
exist, but making that default explicit is useful.
After this commit, configuring sharding can be done by adding to
`zulip.conf`:
```
[tornado_sharding]
9800 = # default
9801 = other_realm
```
Followed by running `./scripts/refresh-sharding-and-restart`.
Making this include "zulip-tornado" makes it clearer in supervisor
logs. Without this, one only sees:
```
2020-09-14 03:43:13,788 INFO waiting for port-9807 to stop
2020-09-14 03:43:14,466 INFO stopped: port-9807 (exit status 1)
2020-09-14 03:43:14,469 INFO spawned: 'port-9807' with pid 24289
2020-09-14 03:43:15,470 INFO success: port-9807 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
```
`supervisorctl` starts and stops its arguments sequentially, in the
order they are passed[1]. Start them in the opposite order from the
order in which they were stopped -- this puts the dependencies first,
and the most core services (`zulip-django`) last.
While the only "dependency" here is currently thumbor, this sets us up
in case others are added later.
[1] https://github.com/Supervisor/supervisor/blob/master/supervisor/supervisorctl.py#L782
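In a Python restart script, the ordering might be expressed as (service names illustrative):
```python
# Stop core services first and dependencies last; start in the reverse
# order, so dependencies (e.g. thumbor) are up before their dependents.
import subprocess

services = ["zulip-django", "zulip-tornado", "zulip-thumbor"]
subprocess.check_call(["supervisorctl", "stop"] + services)
subprocess.check_call(["supervisorctl", "start"] + list(reversed(services)))
```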
This supports running puppet to pick up new sharding changes, which
will warn of the need to finalize them via
`refresh-sharding-and-restart`, or simply running that directly.
The value in the stats file can get outdated if the queue hasn't done
enough iterations to update the stats file for a while. The queue size
output by rabbitmqctl list_queues is more up to date, and empirically
tends to agree with the value in the stats file (when the stats file is
fresh).
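A sketch of querying the live sizes (output parsing is illustrative):
```python
# Ask RabbitMQ directly for current queue depths rather than trusting a
# possibly-stale stats file.
import subprocess

output = subprocess.check_output(
    ["rabbitmqctl", "list_queues", "name", "messages"], universal_newlines=True
)
queue_sizes = {}
for line in output.splitlines():
    parts = line.split()
    if len(parts) == 2 and parts[1].isdigit():
        queue_sizes[parts[0]] = int(parts[1])
```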
There are three functional side effects:
• Correct an insignificant but mathematically offensive bias toward
repeated characters in generate_api_key introduced in commit
47b4283c4b4c70ecde4d3c8de871c90ee2506d87; its entropy is increased
from 190.52864 bits to 190.53428 bits.
• Use the base32 alphabet in confirmation.models.generate_key; its
entropy is reduced from 124.07820 bits to the documented 120 bits, but
now it uses 1 syscall instead of 24.
• Use the base32 alphabet in get_bigbluebutton_url; its entropy is
reduced from 51.69925 bits to 50 bits, but now it uses 1 syscall
instead of 10.
(The base32 alphabet is A-Z 2-7. We could probably replace all of
these with plain secrets.token_urlsafe, since I expect most callers
can handle the full urlsafe_b64 alphabet A-Z a-z 0-9 - _ without
problems.)
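For reference, a single-urandom-read base32 construction looks roughly like this (whether the result is lowercased is an implementation detail):
```python
# 15 random bytes = 120 bits, which base32-encodes to exactly 24
# characters from the A-Z 2-7 alphabet, with no padding.
import base64
import secrets

def generate_key() -> str:
    return base64.b32encode(secrets.token_bytes(15)).decode()
```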
Signed-off-by: Anders Kaseorg <anders@zulip.com>
PostgreSQL packages for Ubuntu run "initdb" without specifying the
locale on installation. This means that the default template
database (template1) is created with the system default locale. If the
system default locale is a non-UTF-8-compatible encoding such as
en_US.ISO-8859-15, the "zulip" database is also created with a
non-UTF-8-compatible encoding such as LATIN9.
You can reproduce this case by running the following script:
apt update
apt install -y locales
locale-gen en_US.ISO-8859-15
update-locale LANG=en_US.ISO-8859-15 LANGUAGE=en_US:
apt install -y wget
wget https://www.zulip.org/dist/releases/zulip-server-latest.tar.gz
tar xf zulip-server-latest.tar.gz
zulip-server-*/scripts/setup/install \
--hostname=zulip-test.example.com \
--email=zulip-test-admin@example.com \
--self-signed-cert
scripts/setup/install fails with the following error:
+ ./manage.py migrate --noinput
Operations to perform:
Apply all migrations: analytics, auth, confirmation, contenttypes, otp_static, otp_totp, sessions, social_django, two_factor, zerver
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying zerver.0001_initial...Traceback (most recent call last):
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 82, in _execute
return self.cursor.execute(sql)
File "/home/zulip/deployments/2020-08-19-05-57-10/zerver/lib/db.py", line 33, in execute
return wrapper_execute(self, super().execute, query, vars)
File "/home/zulip/deployments/2020-08-19-05-57-10/zerver/lib/db.py", line 20, in wrapper_execute
return action(sql, params)
psycopg2.errors.UntranslatableCharacter: character with byte sequence 0xe2 0x80 0x99 in encoding "UTF8" has no equivalent in encoding "LATIN9"
CONTEXT: line 4 of configuration file "/usr/share/postgresql/12/tsearch_data/en_us.affix"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "./manage.py", line 50, in <module>
execute_from_command_line(sys.argv)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/base.py", line 323, in run_from_argv
self.execute(*args, **cmd_options)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/base.py", line 364, in execute
output = self.handle(*args, **options)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/base.py", line 83, in wrapped
res = handle_func(*args, **kwargs)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/core/management/commands/migrate.py", line 232, in handle
post_migrate_state = executor.migrate(
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/migrations/executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/migrations/executor.py", line 245, in apply_migration
state = migration.apply(state, schema_editor)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/migrations/migration.py", line 124, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/migrations/operations/special.py", line 105, in database_forwards
self._run_sql(schema_editor, self.sql)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/migrations/operations/special.py", line 130, in _run_sql
schema_editor.execute(statement, params=None)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/backends/base/schema.py", line 137, in execute
cursor.execute(sql, params)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/srv/zulip-venv-cache/b4a27188142d80b2eeb64f5d5c05b1d94cc6b7b9/zulip-py3-venv/lib/python3.8/site-packages/django/db/backends/utils.py", line 82, in _execute
return self.cursor.execute(sql)
File "/home/zulip/deployments/2020-08-19-05-57-10/zerver/lib/db.py", line 33, in execute
return wrapper_execute(self, super().execute, query, vars)
File "/home/zulip/deployments/2020-08-19-05-57-10/zerver/lib/db.py", line 20, in wrapper_execute
return action(sql, params)
django.db.utils.DataError: character with byte sequence 0xe2 0x80 0x99 in encoding "UTF8" has no equivalent in encoding "LATIN9"
CONTEXT: line 4 of configuration file "/usr/share/postgresql/12/tsearch_data/en_us.affix"
This will let PyYAML link against LibYAML when PyYAML is next
installed. Due to virtualenv-clone, that won’t happen until the next
Python package removal anyway, so we don’t bother bumping
PROVISION_VERSION.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The combination of `--force --noop` is potentially confusing, but
currently `--noop` makes no sense without `--force`, as it will prompt
and then not make changes.
Make `--noop` skip the prompt as well.
Fixes #12868.
We now also include the Python version, in the format
'major.minor.patchlevel', when generating the hash for a
requirements file. This is necessary since packages tend to
break on different versions of Python, so it is important to
track the version on which the venv was set up.
WARN: This commit will force all zulip venvs to be recreated.
We were already using package names along with their versions
to generate the hash for a requirements file, as we were passing
the `.txt` files to the hash_reqs script instead of the intended `.in`
files for which the functions in this file were originally designed.
Changed the expand_reqs_helper function to adapt to the `.txt` files.
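The hashing change amounts to something like this sketch (function and file names are illustrative):
```python
# Mix the interpreter version into the digest so venvs are rebuilt when
# Python itself changes, not only when the requirements change.
import hashlib
import platform

def hash_requirements(path: str) -> str:
    sha1 = hashlib.sha1()
    sha1.update(platform.python_version().encode())  # 'major.minor.patchlevel'
    with open(path, "rb") as f:
        sha1.update(f.read())
    return sha1.hexdigest()
```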
The contents in the database are unchanged across the PostgreSQL
restart; as such, there is no reason to invalidate the caches.
This step was inherited from the general operating system upgrade
documentation. When Python versions change, such as during OS
upgrades, we must ensure that memcached is cleared. However, the
`do-release-upgrade` process uninstalls and upgrades to a new
memcached, and likely restarts the system as well; a separate step for
OS upgrades to restart memcached is thus unnecessary.
pg_upgradecluster has two possibilities for `--method`: `dump`, and
`upgrade`. The former is the default, and does a `pg_dump` of all of
the databases in the old cluster and feeds them into the new cluster.
This is a sure-fire way of getting the same information in both
databases, but may be extremely slow on large databases, and is
guaranteed to fail on servers whose databases take up >50% of their
disk.
The `--method=upgrade` method, by contrast, uses pg_upgrade to copy
the raw database data file over to the new cluster, and then fiddles
with their internal structure as needed by the upgrade to let them be
correct for the new version[1]. This is slightly faster than the
dump/load method, since it skips the serialization step, but still
requires that there be enough space on disk for both old and new
versions at once. `pg_upgrade` is currently supported for all
versions of PostgreSQL from 8.4 to 12.
Using `pg_upgrade` incurs slightly more risk, but since it is
widely used by now, using it in the relatively-controlled Zulip server
environment is reasonable. The expected worst failure is failure to
upgrade, not corruption or data loss.
Additionally, passing `--link` uses hardlinks to link the data files
into both the old and new directories simultaneously. This reduces
both the runtime of the operation and the disk space usage.
The only potential downside to this is that as soon as writes have
occurred on the upgraded cluster, the old cluster can no longer be
started. Since this tooling intends to remove the old cluster
immediately after the upgrade completes successfully, this is not a
significant drawback.
Switch to using `--method=upgrade --link`. This technique spits out
two shell scripts which are expected to be run after completion of the
upgrade; one re-analyzes the statistics, the other does an `rm -rf` of
the data where it is still hardlinked in the old cluster. Extract the
location of these scripts from parsing the `pg_upgradecluster` output;
since the path is not static, we must rely on it being relatively easy
to parse. The risk of the path changing is lower, and a failure to
parse it more obvious, than inserting the current contents of these
upgrade steps into the overall `upgrade-postgres`.
[1] https://www.postgresql.org/docs/12/pgupgrade.html
Although mktemp is deprecated due to security issues, this is not a
security issue.
The security problems with mktemp happen when you open the resulting
filename (without O_EXCL) in a publicly writable directory, because
then someone else might have predicted the filename and created or
symlinked or hardlinked something there between the mktemp and the
open, causing you to write to a file you didn’t expect.
Here we don’t open the resulting filename, we symlink to it. symlink
will refuse to clobber an existing file, and we handle the error that
arises from this case. This is the normal way to atomically create a
symlink.
We should still replace mktemp because it’s deprecated, but we can’t
replace it with a function that creates the temporary file. Instead
we build a random filename ourselves.
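The same pattern, expressed in Python for illustration:
```python
# Atomically install a symlink: symlink() refuses to clobber an existing
# path, and rename() over the target is atomic on POSIX.
import os
import secrets

def atomic_symlink(source: str, target: str) -> None:
    tmp = "{}.{}.tmp".format(target, secrets.token_hex(8))  # random name we build ourselves
    os.symlink(source, tmp)  # raises FileExistsError rather than clobbering
    os.rename(tmp, target)
```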
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Running `pg_upgradecluster` runs the `CREATE TEXT SEARCH DICTIONARY`
and `CREATE TEXT SEARCH CONFIGURATION` from
`zerver/migrations/0001_initial.py` on the new PostgreSQL cluster;
this requires that the stopwords file and dictionary exist _prior_
to `pg_upgradecluster` being run.
This causes a minor dependency conflict -- we do not wish to duplicate
the functionality from `zulip::postgres_appdb_base` which configures
those files, but installing all of `zulip::postgres_appdb_tuned` will
attempt to restart PostgreSQL -- which has not configured the cluster
for the new version yet.
In order to split out configuration of the prerequisites for the
application database, and the steps required to run it, we need to be
able to apply only part of the puppet configuration. Use the
newly-added `--config` argument to provide a more limited `zulip.conf`
which only applies `zulip::postgres_appdb_base` to the new version of
Postgres, creating the required tsearch data files.
This also preserves the property that a failure at any point prior to
the `pg_upgradecluster` is easily recoverable, by re-running
`zulip-puppet-apply`.
Otherwise, the useradd command will fail during the DigitalOcean
1-Click App installation because the install script is called
twice during the whole process. Moreover, the Zulip install script
is designed to be idempotent, and this bug compromises that.
The value is a holdover from when it controlled runtime behavior,
which it no longer does.
Stop taking a DEPLOYMENT_TYPE, which is unused; the Python code only
cares about whether the option exists, not its value.
These are more correct to the sense of "is this a service we
configured for Zulip", and remove potential confusion around the 0/1
values being backwards from how binary is usually interpreted.
Using checks of `,$PUPPET_CLASSES,` is repetitive and error-prone; it
does not properly deal with `zulip_ops::` classes, for instance, which
include the `zulip::` classes.
As alluded to in ca9d27175b, this can be fixed by inspecting the
classes that would be applied, using `puppet --write-catalog-summary`.
We work around the chicken-and-egg problem alluded to therein by
writing out as complete a `zulip.conf` as would be necessary, before
running puppet and removing the sections we then know to not be
needed.
Unfortunately, there are two checks for `$PUPPET_CLASSES` which cannot
be switched to this technique, as they concern errors that we wish to
catch quite early, and thus before we have puppet installed. Since we
expect failures of those to only concern warnings, and only be
mistakenly omitted for internal `zulip_ops::` classes, this seems a
reasonable risk to admit in exchange for catching common errors early.
When supervisor is first installed, it is started automatically, and
creates the socket, owned by root. Subsequent reconfiguration in
puppet only calls `reread + update`, which is insufficient to apply
the `chown = zulip:zulip` line in `supervisord.conf`, leaving the
socket owned by `root` and the last part of the installation unable to
restart `supervisor` services as the `zulip` user. The `chown` line
in `scripts/lib/install` exists to paper over this.
Add a separate exec target for changes to `supervisord.conf` itself,
which restarts the full service. This leaves the default `restart`
action on the service for the lightweight `reread + update` action,
which is more common.
We use `systemctl` only on redhat-esque builds, because CI runs
Ubuntu, where init is not systemd in that environment. `systemctl
reload` is sufficient to re-apply the socket ownership, but a full
`restart`, not just a `reload`, is necessary under
`/etc/init.d/supervisor`.
This is based on the existing steps in the documentation, with
additional changes now that the PostgreSQL version is stored in
`/etc/zulip/zulip.conf`.
49a7a66004 and immediately previous commits began installing
PostgreSQL 12 from their apt repository. On machines which already
have the distribution-provided version of PostgreSQL installed,
however, this leads to failure to apply puppet when restarting
PostgreSQL 12, as both attempt to claim the same port.
During installation, if we will be installing PostgreSQL, look for
other versions than what we will install, and abort if they are
found. This is safer than attempting to automatically uninstall or
reconfigure existing databases.
This allows for installing from-scratch with a different pinned
version of PostgreSQL, and provides a single place to change when the
default should increase.
Using `/etc/init.d/postgresql` to detect whether Postgres is on
the server is incorrect, because this line runs _before_ puppet and
any packages are installed. Thus, it cannot tell the difference
between a new Ubuntu one-host first-time install without PostgreSQL
yet, and one which is merely a front-end and will never have
PostgreSQL. This leads to failures in first-time installs:
```
Error: Evaluation Error: Error while evaluating a Function Call,
Could not find template 'zulip/postgresql//postgresql.conf.template.erb'
```
The only way to detect if PostgreSQL will be present in the _end_
state of the install is to examine the puppet classes that are
applied.
To do this, we must inspect `PUPPET_CLASSES`. Unfortunately, this can
be fragile to subclassing (e.g. `zulip_ops::postgres_appdb`). We
might desire to use `puppet apply --write-catalog-summary` to deduce
the _applied_ classes, which would unroll the inheritance; however,
this causes a chicken-and-egg problem, because `zulip.conf` must be
already written out (including a value for `postgresql.version`, if
necessary!) before such a puppet run could successfully complete.
Switch to predicating the `postgresql.version` key on the puppet
classes that are known to install postgres.
Support for Xenial and Stretch was removed (5154ddafca, 0f4b1076ad,
8944e0ad53, 79acd5ae40, 1219a2e854), but not all codepaths were
updated to remove their conditionals on them.
Remove all code predicated on Xenial or Stretch. debathena support
was migrated to Bionic, since that appears to be the current state of
existing debathena servers.
0f4b1076ad removed Ubuntu 16.04 "xenial" and Debian 9 "stretch" from
the printed list of supported operating systems, but left them in the
verification check that controls if that message is printed,
effectively continuing to support them.
Conversely, 439f0d3004 added Ubuntu 20.04 "focal" to the check, but
not to the printed list.
Synchronize the check and the printed list to the correct supported
distributions: Ubuntu 18.04 "bionic", Ubuntu 20.04 "focal", and
Debian 10 "buster".
The previous commit removed the only behavior difference between the
two flags; both of them skip user/database creation, and the tables
therein.
Of the two options `--no-init-db` is more explicit as to what it does,
as opposed to just one facet of when it might be used; remove
`--remote-postgres`.
Since `--postgres-missing-dictionaries` edits `/etc/zulip/zulip.conf`,
it interferes with the intent of `--no-overwrite-settings`.
Make the two settings conflict, to prevent this unclear state.
The `--no-init-db` option previously only controlled if
`initialize-database` was run, which sets up the tables inside the
database. If PostgreSQL was installed locally, it still attempted to
create the user and empty database.
This fails on hosts which are remote PostgreSQL hosts, and not
application hosts, as:
- They may already have a local database, and while
`initialize-database` will detect and offer to abort if one is
found, `--no-init-db` seems like it should be the option to not
overwrite it
- `flush-memcached` requires that a local venv be installed, which it
often is not on non-frontend machines.
Skip the database configuration when run with `--no-init-db`.
Since we now support Postgres versions from 10 to 12, we might as well
have new installations start on Postgres 12 to avoid unnecessary
migration/upgrade work.
We would prefer to use the postgres packages from Postgres themselves,
if available. However, this requires ensuring that, for existing
installs, we preserve the same version of postgres as their base
distribution installed.
Move the version-determination logic from being computed at puppet
interpolation time, to being computed at install time and pinned into
zulip.conf.
This is a prep commit. Running the terminate-psql-sessions command on
docker-zulip results in the script exiting with non-zero exit status
2, because the current session also gets terminated while
running the terminate-psql-sessions command. To prevent that from
happening, we don't terminate the session created by
terminate-psql-sessions.
These files can’t use f-strings yet because they need to run in Python
2 or Python 3.5.
Generated by pyupgrade.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Generated by pyupgrade --py36-plus --keep-percent-format.
Now including %d, %i, %u, and multi-line strings.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes #2665.
Regenerated by tabbott with `lint --fix` after a rebase and change in
parameters.
Note from tabbott: In a few cases, this converts technical debt in the
form of unsorted imports into different technical debt in the form of
our largest files having very long, ugly import sequences at the
start. I expect this change will increase pressure for us to split
those files, which isn't a bad thing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Automatically generated by the following script, based on the output
of lint with flake8-comma:
import re
import sys

last_filename = None
last_row = None
lines = []
for msg in sys.stdin:
    m = re.match(
        r"\x1b\[35mflake8 \|\x1b\[0m \x1b\[1;31m(.+):(\d+):(\d+): (\w+)", msg
    )
    if m:
        filename, row_str, col_str, err = m.groups()
        row, col = int(row_str), int(col_str)
        if filename == last_filename:
            assert last_row != row
        else:
            if last_filename is not None:
                with open(last_filename, "w") as f:
                    f.writelines(lines)
            with open(filename) as f:
                lines = f.readlines()
            last_filename = filename
        last_row = row
        line = lines[row - 1]
        if err in ["C812", "C815"]:
            lines[row - 1] = line[: col - 1] + "," + line[col - 1 :]
        elif err in ["C819"]:
            assert line[col - 2] == ","
            lines[row - 1] = line[: col - 2] + line[col - 1 :].lstrip(" ")
if last_filename is not None:
    with open(last_filename, "w") as f:
        f.writelines(lines)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
certbot-auto doesn’t work on Ubuntu 20.04, and won’t be updated; we
migrate to using the certbot package shipped with the OS
instead. Also make sure that certbot gets installed when running
zulip-puppet-apply, to handle existing systems.
We already override the umask in upgrade-zulip-stage-2, but that’s too
late since we’ve already written a bunch of files in stage 1. I would
have removed the stage 2 override, but the OS upgrade documentation
references running stage 2 directly.
Fixes #15164. Note that an affected installation will need to upgrade
twice, because the first upgrade uses the old stage 1.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Generated by pyupgrade --py36-plus --keep-percent-format, but with the
NamedTuple changes reverted (see commit
ba7906a3c6, #15132).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Ubuntu Focal comes with Ruby 2.7, and the latest Puppet has some
issues with it; to suppress Puppet warnings under Ruby 2.7, we add
RUBYOPT = "-W0" to the environment.
This allows straightforward configuration of realm-based Tornado
sharding through simply editing /etc/zulip/zulip.conf to configure
shards and running scripts/refresh-sharding-and-restart.
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
While this functionality to post slow queries to a Zulip stream was
very useful in the early days of Zulip, when there were only a few
hundred accounts, it's long since been useless, given (1) the total
request volume on larger Zulip servers run by Zulip developers, and
(2) the fact that other server operators don't want real-time
notifications of slow backend queries. The right structure for this
is just a log file.
We get rid of the queue and replace it with a "zulip.slow_queries"
logger, which will still log to /var/log/zulip/slow_queries.log for
ease of access to this information and propagate to the other logging
handlers. Reducing the number of queues is good for lowering Zulip's
memory footprint and restart performance, since we run at least one
dedicated queue worker process for each one in most configurations.
Yes, it's slightly janky to create an
argparse.Namespace object like this, but it
saves us from shelling out to a script whose
only real value-add is parsing a single
`threshold_days` argument.
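Concretely, something like (the value shown is illustrative):
```python
# Build the parsed-arguments object directly instead of shelling out to
# a script that exists only to parse this one flag.
import argparse

args = argparse.Namespace(threshold_days=14)
```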
This saves about 130ms for a no-op provision.
We try to avoid importing Django settings unless
we really need them, since we want this program
to run very quickly during `provision` (when
secrets have already been generated earlier).
Since we don't have root access in Travis, we used to add a different
srv path. Now that we have shifted our production suites to Circle CI,
we don't need that code, so it is removed.
Also, we used some hacky code in commit-lint-message for Travis, which
is now of no use.
Now that we've cleaned up this tool's output, there's no reason to use
an awkward mechanism to hide its output; we can just print it out like
a normal program.
Fixes #14644; resolves #14701.
Generated by autopep8, with the setup.cfg configuration from #14532.
I’m not sure why pycodestyle didn’t already flag these.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Previously, the send_custom_email code path leaked files in paths that
were not `.gitignored`, under templates/zerver/emails.
This became problematic when we added automated tests for this code
path, as it meant we leaked these files every time `test-backend` ran.
Fix this by ensuring all the files we generate are in this special
subdirectory.
Since we now want to use the production suites on Circle CI, there
is no need to set TRAVIS in the environment while running scripts.
CIRCLECI is set by default in the environment of Circle CI builds,
so we can use it directly.
Also, Travis CI had rabbitmq-server installed, so we had to add a
workaround in the install script to avoid an error. That workaround
is removed.
We now have two functions related to digests
for processes:
    is_digest_obsolete
    write_digest_file
In most cases we now **wait** to write the
digest file until after we've successfully
run a process with its new inputs.
In one place, for database migrations, we
continue to write the digest optimistically.
We'll want to fix this, but it requires a
little more code cleanup.
Here is the typical sequence of events:
    NEVER RUN -
        is_digest_obsolete returns True
        quickly (we don't compute a hash)
        write_digest_file does a write (duh)
    AFTER NO CHANGES -
        is_digest_obsolete returns False
        after reading one file for old
        hash and multiple files to compute
        hash
        most callers skip write_digest_file
        (no files are changed)
    AFTER SOME CHANGES -
        is_digest_obsolete returns True
        after doing full checks
        most callers call write_digest_file
        *after* running a process
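Schematically, the pair of helpers and the wait-to-write pattern look like this (signatures and the hashing scheme are illustrative, not the real code):
```python
import hashlib
import os
from typing import List

def current_hash(paths: List[str]) -> str:
    sha1 = hashlib.sha1()
    for path in paths:
        with open(path, "rb") as f:
            sha1.update(f.read())
    return sha1.hexdigest()

def is_digest_obsolete(digest_path: str, paths: List[str]) -> bool:
    if not os.path.exists(digest_path):
        return True  # never run: answer quickly, without computing a hash
    with open(digest_path) as f:
        return f.read() != current_hash(paths)

def write_digest_file(digest_path: str, paths: List[str]) -> None:
    with open(digest_path, "w") as f:
        f.write(current_hash(paths))

# Callers wait to record the digest until the process has succeeded:
#     if is_digest_obsolete(digest, inputs):
#         run_process()
#         write_digest_file(digest, inputs)
```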
I remove `is_force` from `file_or_package_hash_updated`
and modernize its mypy annotations.
If `is_force` is `True`, we now just run the thing
we want to force-run without having to call
`file_or_package_hash_updated` to expensively
and riskily return `True`.
Another nice outcome of this change is that if
`file_or_package_hash_updated` returns `True`,
you can know that the file or package has
indeed been updated.
For the case of `build_pygments_data` we also
skip an `os.path.exists` check when `is_force`
is `True`.
We will short-circuit more logic in the next
few commits, as well as cleaning up some of
the long/wrapper lines in the `if` statements.
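The short-circuit, sketched with stubbed helpers:
```python
# With is_force hoisted out of the helper, a True return from
# file_or_package_hash_updated now always means "inputs changed".
def file_or_package_hash_updated(paths):  # stub for illustration
    return False

def build_pygments_data():  # stub for illustration
    print("rebuilding pygments data")

is_force = True
if is_force or file_or_package_hash_updated(["pygments_data.json"]):
    build_pygments_data()
```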
We stopped using tsearch-extras in Zulip 2.1.0 after Anders figured
out how to achieve its goals with native postgres. However, we never
did a `DROP EXTENSION` on systems that had upgraded, which meant that
backups created on systems originally installed with Zulip 2.0.x and
older, and later upgraded to Zulip 2.1.x, could not be restored on
Zulip servers created with a fresh install of Zulip 2.1.x.
We can't do this with a normal database migration, because DROP
EXTENSION has to be done as the postgres user, so we add some custom
migration code in the upgrade-zulip-stage-2 tool (sketched after the
list below).
It's safe to run this whenever tsearch_extras.control is installed because:
* Zulip is AFAIK the only software that ever used tsearch_extras.
* The package was only installed via puppet on production servers configured to
run a local Zulip database.
* We'll only run this code once per system, because it removes the
package and thus the control files.
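A hypothetical sketch of that upgrade-time cleanup (the control-file path and psql invocation vary by system):
```python
# Drop the extension as the postgres user, but only on systems where
# the tsearch_extras control file shows it was ever installed.
import os
import subprocess

control = "/usr/share/postgresql/10/extension/tsearch_extras.control"
if os.path.exists(control):
    subprocess.check_call(
        ["su", "postgres", "-c", "psql zulip -c 'DROP EXTENSION IF EXISTS tsearch_extras'"]
    )
```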
Fixes #13612.
Generated by `pyupgrade --py3-plus --keep-percent-format` on all our
Python code except `zthumbor` and `zulip-ec2-configure-interfaces`,
followed by manual indentation fixes.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
After some testing, I've confirmed that this seems to behave
significantly better in terms of the number of failed requests due to
Tornado being in the process of restarting, compared with the previous
version, as each individual process is only down for a short time,
rather than all of them being down at once.
Missing commas in the definition of all the queues to check meant that it would be looking for queues with concatenated names, rather than the correct ones. Added the commas.
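The failure mode, schematically (queue names illustrative):
```python
# Adjacent string literals concatenate implicitly in Python, so one
# missing comma silently fuses two queue names into a single bogus name.
queues = [
    "missedmessage_emails"
    "missedmessage_mobile_notifications",  # one string, not two!
]
assert queues == ["missedmessage_emailsmissedmessage_mobile_notifications"]
```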
Used the get_venv_dependencies function to return the correct
dependencies for RHEL, CentOS, and Fedora rather than importing them
as a separate COMMON_YUM_DEPENDENCIES in provision and
create-production-venv.
In virtualenv ≥ 20, the site_packages variable was removed from
activate_this.py. To avoid a KeyError, replace
activate_locals['site_packages'] with os.path.join(venv, 'lib',
python_version), where python_version is the 'pythonX.Y' name of the
directory where site-packages resides in the virtualenv.
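A minimal sketch of the replacement (venv path illustrative):
```python
# Compute the site-packages path from the venv layout instead of reading
# the removed `site_packages` variable out of activate_this.py.
import os
import sys

venv = "/srv/zulip-py3-venv"  # hypothetical venv location
python_version = "python{}.{}".format(*sys.version_info[:2])
site_packages = os.path.join(venv, "lib", python_version, "site-packages")
```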
Fixes #14025.
Added a get_venv_dependencies() function in setup_venv.py which
returns VENV_DEPENDENCIES according to the vendor and os_version.
The reason for adding this function was that python-dev will be
deprecated in Focal but can be used as python2-dev, so when adding
support for Focal, VENV_DEPENDENCIES needs to be os_version-dependent.
isort 5 knows not to reorder imports across function calls, so this
will stop isort from breaking our code.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This adds Ubuntu 19.10 as a valid provisioning target.
The release test in setup-apt-repo was changed from a list of values to
a regex check for brevity.
The “Smileys & People” category has been split into “Smileys & Emotion”
and “People & Body”.
Also, fix generate_sha1sum_emoji to read the emoji-datasource-google
version from yarn.lock, since package.json only gives a version range.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
"Zulip Voyager" was a name invented during the Hack Week to open
source Zulip for what a single-system Zulip server might be called, as
a Star Trek pun on the code it was based on, "Zulip Enterprise".
At the time, we just needed a name quickly, but it was never a good
name, just a placeholder. This removes that placeholder name from
much of the codebase. A bit more work will be required to transition
the `zulip::voyager` Puppet class, as that has some migration work
involved.
These docstrings hadn't been properly updated in years, and had an
awkward mix of a bad version of the user-facing documentation and
details that are no longer true (e.g. references to "Voyager").
(One important detail is that we have real documentation for this
system now).
This legacy cross-realm bot hasn't been used in several years, as far
as I know. If we wanted to re-introduce it, I'd want to implement it
as an embedded bot using those common APIs, rather than the totally
custom hacky code used for it that involves unnecessary queue workers
and similar details.
Fixes #13533.
At some point the PostgreSQL Docker image started creating the zulip
database for us, which caused our CREATE DATABASE to fail.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Zulip has had a small use of WebSockets (specifically, for the code
path of sending messages, via the webapp only) since ~2013. We
originally added this use of WebSockets in the hope that the latency
benefits of doing so would allow us to avoid implementing a markdown
local echo; they were not sufficient. Further, HTTP/2 may have
eliminated the
latency difference we hoped to exploit by using WebSockets in any
case.
While we’d originally imagined using WebSockets for other endpoints,
there was never a good justification for moving more components to the
WebSockets system.
This WebSockets code path had a lot of downsides/complexity,
including:
* The messy hack involving constructing an emulated request object to
hook into doing Django requests.
* The `message_senders` queue processor system, which increases RAM
needs and must be provisioned independently from the rest of the
server.
* A duplicate check_send_receive_time Nagios test specific to
WebSockets.
* The requirement for users to have their firewalls/NATs allow
WebSocket connections, and a setting to disable them for networks
where WebSockets don’t work.
* Dependencies on the SockJS family of libraries, which has at times
been poorly maintained, and periodically throws random JavaScript
exceptions in our production environments without a deep enough
traceback to effectively investigate.
* A total of about 1600 lines of our code related to the feature.
* Increased load on the Tornado system, especially around a Zulip
server restart, and especially for large installations like
zulipchat.com, resulting in extra delay before messages can be sent
again.
As detailed in
https://github.com/zulip/zulip/pull/12862#issuecomment-536152397, it
appears that removing WebSockets moderately increases the time it
takes for the `send_message` API query to return from the server, but
does not significantly change the time between when a message is sent
and when it is received by clients. We don’t understand the reason
for that change (suggesting the possibility of a measurement error),
and even if it is a real change, we consider that potential small
latency regression to be acceptable.
If we later want WebSockets, we’ll likely want to just use Django
Channels.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Our recent fixes to using the system's configured memcached settings
broke populate_db, because its hacky clear_database helper is called
with a hacked-up settings module.
We fix this by first moving this out-of-place code from models.py into
populate_db, and then saving the settings required to access memcached
so that we can use them in clear_database.
We also fix a mypy error in flush-memcached that matches the same
issue fixed in clear_database.
This simplifies the RDS installation process to avoid awkwardly
requiring running the installer twice, and also is significantly more
robust in handling issues around rerunning the installer.
Finally, the answer for whether dictionaries are missing is available
to Django for future use in warnings/etc. around full-text search not
being great with this configuration, should they be required.
We'll be soon documenting a production workflow that involves using
it, and that means it needs to live under scripts/ (since tools/ isn't
present in release tarballs).
`copytree` throws an error if the target already exists, and we don’t
really want to rerun the copy anyway.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Ultimately, this isn't an effective way to monitor this queue; we want
time-based monitoring, not count-based monitoring. Doing that
properly will likely involve modifying the queue processor to write
something about its status.
But until we add the monitoring we want, it makes sense to leave this
active with low limits.
This is needed on at least Debian 10, otherwise xmlsec fails to
install: `Could not find xmlsec1 config. Are libxmlsec1-dev and
pkg-config installed?`
Also remove libxmlsec1-openssl, on which libxmlsec1-dev already depends.
(No changes are needed on RHEL, where libxml2-devel and xmlsec1-devel
already declare a requirement on /usr/bin/pkg-config.)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The output log from running clean_unused_caches was too verbose as
part of the `upgrade-zulip` overall output. While this output is
potentially helpful when running it directly for debugging, it's
certainly redundant for the main production use case.
So a new flag, --no-print-headers, is introduced. It suppresses the
header output for the subtools.
Fixes #13214.
This allows the system to get updates to the Groonga repository
signing key, so `apt update` doesn’t start failing when the key
changes (like it recently did).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
debian-archive-keyring is a dependency of the essential package apt,
so it is present in every Debian system.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
virtualenv on Ubuntu 16.04, when creating a new environment, downloads
the current version of setuptools, then replaces its pkg_resources
with an old copy from
/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl.
This causes problems, a simple example of which is reproducible from
the ubuntu:16.04 Docker base image as follows:
apt-get update
apt-get -y install python3-virtualenv
python3 -m virtualenv -p python3 /ve
/ve/bin/pip install sockjs-tornado
/ve/bin/pip download sockjs-tornado
→ `AttributeError: '_NamespacePath' object has no attribute 'sort'`
More relevantly, it breaks pip-compile in the same way. To fix this,
we need to force setuptools to be reinstalled, even if we’re asking
for the same version.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This should dramatically improve the queue processor's performance in
cases where there's a very high volume of requests on a given endpoint
by a given user, as described in the new docstring.
Until we test this more broadly in production, we won't know if this
is a full solution to the problem, but I think it's likely. We've
never seen the UserActivityInterval worker end up backlogged without a
total queue processor outage, and it should have a similar workload.
Fixes #13180.
To replace DISTRIB_FAMILY, there’s now an os_families function using
the standard ID and ID_LIKE information in /etc/os-release.
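A sketch of such a function (quoting/parsing details are illustrative):
```python
# Derive the distro family set from ID and ID_LIKE, so a check like
# "debian" in os_families() covers Ubuntu as well.
def os_families() -> set:
    distro_info = {}
    with open("/etc/os-release") as f:
        for line in f:
            line = line.strip()
            if "=" in line:
                key, value = line.split("=", 1)
                distro_info[key] = value.strip('"')
    return {distro_info["ID"], *distro_info.get("ID_LIKE", "").split()}
```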
Fixes #13070; fixes #13071.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We no longer use tsearch_extras, and the camo patch is irrelevant on
systemd systems (Xenial and newer). So we no longer need to
provide/install a PPA at all.
Closes #13027.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Now that we've implemented tsearch_extras in pure postgres, we no
longer need a custom extension. This should help us considerably, as
it means we no longer need to ship custom apt packages at all.
Fixes #467.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
As predicted in https://www.kb.cert.org/vuls/id/319816/, a malicious
worm is beginning to spread across the npm ecosystem through package
postinstall scripts. Only instead of direct self-replicating code,
the replication vector is the temptation to monetize postinstall
scripts by polluting the console logs with paid advertisements. The
effect will be the same unless we all put a stop to this while we
still can.
Apply the recommended VU#319816 workaround, which is to disable
lifecycle scripts when installing npm packages. The only fallout is:
* node-sass can’t run because it uses compiled native code; we replace
it with Dart Sass.
* phantomjs-prebuilt doesn’t download the binary at install time; we
tell it to download it in run-casper.
* ttf2woff2 transparently falls back from native code to an Emscripten
build.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This commit finishes adding end-to-end support for the install script
on Debian Buster (making it production ready). Some support for this
was already added in prior commits such as
99414e2d96.
We plan to revert the postgres hunks of this once we've built
tsearch_extras for our packagecloud archive.
Fixes #9828.