The ids that will be used for each particular run of the test suite are
written to a unique file. Each file is then used as a time
reference for when the suite was run.
This change sets up the ability for a complete clean up of potentially
leaked database templates.
Tweaked by tabbott to remove these files after successful database
cleanup.
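A minimal sketch of the idea (paths and names below are hypothetical, not
the actual implementation): each run writes its database ids to a uniquely
named file whose mtime doubles as a timestamp, so a later pass can find
stale runs and drop their leaked templates.
```
import os
import time

RUN_ID_DIR = "var/test_db_run_ids"  # hypothetical location

def record_run_ids(run_id, database_ids):
    # One file per test-suite run; its mtime records when the run started.
    os.makedirs(RUN_ID_DIR, exist_ok=True)
    path = os.path.join(RUN_ID_DIR, "run_%s" % run_id)
    with open(path, "w") as f:
        f.write("\n".join(database_ids))
    return path

def stale_run_files(max_age_seconds=6 * 60 * 60):
    # Runs older than this likely leaked their template databases; each
    # file lists exactly which ids are safe to drop.
    now = time.time()
    return [
        os.path.join(RUN_ID_DIR, name)
        for name in os.listdir(RUN_ID_DIR)
        if now - os.path.getmtime(os.path.join(RUN_ID_DIR, name)) > max_age_seconds
    ]
```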
We use `git describe --tags` to get information about the number of commits since
the last major version, and the SHA of the current HEAD. This is added to the
ZULIP_VERSION when a deploy is done from `git`.
Modified heavily by punchagan to:
* use `git describe` instead of `git log` and `wc`
* use a separate script to run the git describe command
* write the file with version info to var/ and remove it from the repo
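A rough sketch of the approach (the helper name is illustrative; the real
logic lives in the separate script mentioned above and writes its output
under var/):
```
import subprocess

def get_git_version_details():
    # `git describe --tags` yields e.g. "2.0.0-123-gabcdef1": the last
    # tag, the number of commits since it, and the abbreviated SHA of
    # HEAD, which is what gets appended to ZULIP_VERSION.
    try:
        return subprocess.check_output(
            ["git", "describe", "--tags"], universal_newlines=True
        ).strip()
    except (subprocess.CalledProcessError, OSError):
        return ""  # not a git deploy; leave ZULIP_VERSION as-is
```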
Fixes #4685.
Previously, if you restored onto a different production system from
the one where you took the backup, backup restoration would fail
because the generated rabbitmq passwords for the two systems would be
different, and we didn't update the restored system to use the
password from the original system.
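A hedged sketch of the shape of the fix (the secret name, section, and
"zulip" rabbitmq user here are assumptions based on the description above):
after restoring /etc/zulip from the backup, reset rabbitmq's password to
match the restored secret rather than the one generated at install time.
```
import configparser
import subprocess

def sync_rabbitmq_password(secrets_path="/etc/zulip/zulip-secrets.conf"):
    config = configparser.RawConfigParser()
    config.read(secrets_path)
    password = config.get("secrets", "rabbitmq_password")
    # Make the local rabbitmq install accept the original system's password.
    subprocess.check_call(["rabbitmqctl", "change_password", "zulip", password])
```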
Fixes #12114.
This should ensure that we apply any special configuration for the
database system (e.g. installing `pgroonga`) before we try to restore
the database contents from the archive.
For pgroonga in particular, this is important so that we can preserve
the configuration of the extension in the `pg_restore` process.
Fixes #12345.
With the S3 file upload backend, we don't store uploads locally, so
the `uploads` directory in the backup will be empty, and more
importantly, LOCAL_UPLOADS_DIR will be None, which the previous code
crashed on.
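A sketch of the guard implied above (function and variable names are
illustrative, not the actual restore code): with the S3 backend,
LOCAL_UPLOADS_DIR is None and the backup's uploads directory is empty, so
the local copy step should simply be skipped.
```
import os
import shutil

def restore_uploads(backup_uploads_dir, local_uploads_dir):
    if local_uploads_dir is None:
        # S3 backend: uploads live in the bucket; nothing to copy locally.
        return
    if os.path.isdir(backup_uploads_dir):
        shutil.copytree(backup_uploads_dir, local_uploads_dir)
```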
API changes:
* The behaviour of Date.toLocaleTimeString() reverts to its pre-8.0.0
behaviour; this only affects automated tests. There are lots of other
API changes, but we didn't use any of those.
* The internal sorting algorithm changed, which causes one of our own
compare functions to miss coverage.
Simulate isn’t enough in some cases. The error message when this
fails looks sufficiently non-alarming.
LXC:
default: + apt-get -dy install lsb-release apt-transport-https gnupg
default: Reading package lists...
default: Building dependency tree...
default:
default: Reading state information...
default: lsb-release is already the newest version.
default: gnupg is already the newest version.
default: The following NEW packages will be installed:
default: apt-transport-https
default: 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
default: Need to get 25.1 kB of archives.
default: After this operation, 238 kB of additional disk space will be used.
default: Err http://archive.ubuntu.com/ubuntu/ trusty-updates/main apt-transport-https amd64 1.0.1ubuntu2.3
default: 404 Not Found [IP: 91.189.88.161 80]
default: Err http://security.ubuntu.com/ubuntu/ trusty-security/main apt-transport-https amd64 1.0.1ubuntu2.3
default: 404 Not Found [IP: 91.189.88.161 80]
default: E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apt/apt-transport-https_1.0.1ubuntu2.3_amd64.deb 404 Not Found [IP: 91.189.88.161 80]
default:
default: E: Some files failed to download
default: + apt-get update
[…]
default: Fetched 4,504 kB in 7s (611 kB/s)
default: Reading package lists...
default: + apt-get -y install lsb-release apt-transport-https gnupg
default: Reading package lists...
Docker:
default: + apt-get -dy install lsb-release apt-transport-https gnupg
default: Reading package lists...
default: Building dependency tree...
default:
default: Reading state information...
default: Package gnupg is not available, but is referred to by another package.
default: This may mean that the package is missing, has been obsoleted, or
default: is only available from another source
default: E: Package 'gnupg' has no installation candidate
default: + apt-get update
[…]
default: Fetched 16.2 MB in 5s (3,326 kB/s)
default: Reading package lists...
default: + apt-get -y install lsb-release apt-transport-https gnupg
default: Reading package lists...
(All in green.)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We have been semi-accidentally relying on the fact that terminate-psql-sessions
fails silently when there are PIDs we don't have permission to terminate.
This actually happens somewhat often, generally when we're doing a series of
operations in quick succession by different users, because postgres processes
live a little longer than the `psql` shell that started them.
As part of adding ON_ERROR_STOP to all of our postgres commands, it makes
sense to ensure we don't fail here, but that means we need to actually filter
the target PIDs down to only the ones we can actually kill.
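A rough sketch of the filtering idea (the real script is shell plus psql;
the query below is an assumption, not the exact one used): only ask
postgres to terminate backends owned by the current role, so
pg_terminate_backend() never touches sessions we lack permission for.
```
import subprocess

TERMINATE_OWN_SESSIONS = """
SELECT pg_terminate_backend(pid)
  FROM pg_stat_activity
 WHERE usename = current_user
   AND pid <> pg_backend_pid();
"""

def terminate_own_sessions(dbname):
    # With ON_ERROR_STOP, psql exits nonzero if the query itself fails,
    # so callers can rely on the exit status.
    subprocess.check_call(
        ["psql", "-v", "ON_ERROR_STOP=1", "-e", "-d", dbname,
         "-c", TERMINATE_OWN_SESSIONS]
    )
```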
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Also use psql -e (--echo-queries) in scripts that use ‘set -x’, so
errors can be traced to a specific query from the output.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The comment explains this in more detail, but basically one previously
needed the `--from-git` option to `upgrade-zulip-stage-2` if one had
last installed/upgraded from Git, and not that option otherwise, which
would have forced us to make the OS upgrade documentation much more
complicated than it needed to be.
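A hypothetical sketch of the kind of inference described above (the actual
check in the script may differ): decide whether the running deployment came
from Git by inspecting the deployment itself, instead of making the
operator pass --from-git correctly.
```
import os

def deployed_from_git(deploy_root="/home/zulip/deployments/current"):
    # If the current deployment has a Git working tree, assume the next
    # upgrade should also go through the Git code path.
    return os.path.exists(os.path.join(deploy_root, ".git"))
```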
Fixes permission errors when running restore-backup on a tarball
inaccessible to the zulip user.
Fixes #12125.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
activate_this.py has always documented that it should be exec()ed with
locals = globals, and in virtualenv 16.0.0 it raises a NameError
otherwise.
As a simplified demonstration of the weird things that can go wrong
when locals ≠ globals:
>>> exec('a = 1; print([a])', {}, {})
[1]
>>> exec('a = 1; print([a for b in [1]])', {}, {})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 1, in <module>
File "<string>", line 1, in <listcomp>
NameError: name 'a' is not defined
>>> exec('a = 1; print([a for b in [1]])', {})
[1]
Top-level assignments go into locals, but from inside a new scope like
a list comprehension, they’re read out of globals, which doesn’t work.
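The documented usage pattern avoids this by passing a single dict, so
globals and locals are the same mapping (the venv path below is
illustrative):
```
activate_this = "/srv/zulip-py3-venv/bin/activate_this.py"
with open(activate_this) as f:
    # One dict serves as both globals and locals, as activate_this.py expects.
    exec(f.read(), {"__file__": activate_this})
```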
Fixes #12030.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
In addition to upgrading dependencies being generally useful, this may
fix situations where yarn fails but returns a success status code in the
presence of an HTTP proxy.
The commit 87d1809657 changed the time when
digests are sent by 3 hours to account for moving from the US East Coast to the
West Coast, but didn't change the time period exception in the
`check-rabbitmq-queue` script.
Closes #5415.
Now that we have the run_as_root helper function, we don't need to
install sudo to run Zulip in production.
This reverts commit a7d7d181ea.
Fixes #10036.
Few folks will be upgrading from versions of Zulip old enough to not
have virtualenv-clone, and those who are won't be able to use it due
to older dependencies having been removed.
Apparently, while upgrade-zulip-from-git always ensures that zulip
deployment directories are owned by the Zulip user, unpack-zulip (aka
the tarball code path) has them owned by root.
The user ID detection logic in su_to_zulip's helper get_zulip_uid was
intended to support both development environments (where the user ID
might vary) and production environments. For development
environments, the existing code is fine, but given this unpack-zulip
permissions issue, we need to have code to fall back to 'zulip' if the
detection logic detects the "zulip" user as having UID 0.
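An approximate sketch of that fallback (not the exact helper; the path and
names are illustrative): detect the uid owning the deployment for
development setups, but if detection yields root (UID 0), as with a
root-owned tarball unpack, fall back to the real "zulip" account.
```
import os
import pwd

def get_zulip_uid(deploy_path="/home/zulip/deployments/current"):
    uid = os.stat(deploy_path).st_uid
    if uid == 0:
        # unpack-zulip left the tree owned by root; use the zulip account.
        uid = pwd.getpwnam("zulip").pw_uid
    return uid
```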
There’s no reason to do this unless you’re, like, trying to trip the
Let’s Encrypt rate limits (or perhaps trying to manually test this code).
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Apparently, virtualenv-clone ends up copying the success-stamp file
that we use to track whether a virtualenv was successfully
provisioned, which results in problems if we get a network error in
the pip install stage afterwards.
The comment explains our fix, but basically we just delete
success-stamp after the clone.
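A minimal sketch of that cleanup (only the file name comes from this
change; the path handling is illustrative):
```
import os

def remove_success_stamp(new_venv_path):
    # virtualenv-clone copies success-stamp along with everything else;
    # delete it so a later pip failure can't leave a venv that still
    # claims to be fully provisioned.
    stamp = os.path.join(new_venv_path, "success-stamp")
    if os.path.exists(stamp):
        os.remove(stamp)
```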
Fixes #11301.
On usage errors (except --help), write usage message to stderr and
exit with nonzero status.
Forbid setting the hostname and email to the example values. Those
are specifically checked for and would fail later.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Instead, manually activate it in the one place where this
functionality was used (tools/lib/provision.py). This way we avoid
trying to activate the Python 2 thumbor virtualenv from Python 3.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Now, unless you specify `--fill-cache`, memcached caches will not be
pre-filled after a server restart. This will be helpful when someone
is in a hurry (e.g. if the server is down right now, or if they're
testing a configuration change on a newly set up server) and it's best
to just restart without pre-filling the cache.
Fixes: #10900.
The site_packages variable points to (e.g.)
zulip-py3-venv/lib/python3.4/site-packages. If that doesn’t exist,
we’re probably running the wrong Python version.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
We still create a Python 2 virtualenv for thumbor but that’s
separate (/srv/zulip-thumbor-venv from
scripts/lib/create-thumbor-venv).
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Otherwise this causes an error
```
AttributeError: type object 'Callable' has no attribute '_abc_registry'
```
on 3.7. While the error is specific to 3.7, it is safer to uninstall
typing for all the versions that don't require a pip-provided typing
library.
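A sketch of that cleanup (the invocation details are an assumption):
uninstall the pip-provided `typing` backport from the venv wherever the
interpreter already ships typing in the stdlib.
```
import subprocess

def uninstall_pip_typing(venv_python):
    # Use call() rather than check_call(): if the backport isn't
    # installed there is nothing to do, and we don't want to fail.
    subprocess.call([venv_python, "-m", "pip", "uninstall", "-y", "typing"])
```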
/bin/sh and /usr/bin/env are the only two binaries that NixOS provides
at a fixed path (outside a buildFHSUserEnv sandbox).
This discussion was split from #11004.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
This is a common bug that users might be tempted to introduce.
And also fix two instances of this bug that were present in our
codebase, including an important one in our upgrade code path.
This makes it possible to add --skip-purge-old-deployments in the
deploy_options section of /etc/zulip/zulip.conf, and control whether
old deployments are purged automatically on a system.
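For reference, the kind of stanza this enables looks roughly like the
following (the exact section/key layout is an assumption based on the
description above):
```
[deployment]
deploy_options = --skip-purge-old-deployments
```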
We still need to do https://github.com/zulip/zulip/issues/10534 and
probably also to add these arguments to be directly passed into
upgrade-zulip, but that can wait for future work.
Fixes #10946.
This commit works by vendoring the couple of functions we still use from
puppetlabs stdlib (join and range), but removing the rest of the
puppetlabs codebase, and of course cleaning up our linter rules in the
process.
Fixes #7423.
Since yarn has a package.json conveniently available, we can parse
that with jq, saving the expensive operation of starting up yarn.
This saves ~300ms in a no-op provision.
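A rough Python equivalent of the jq-based check (the installed-yarn path
below is an assumption): read the version straight from yarn's own
package.json instead of paying the startup cost of `yarn --version`.
```
import json

def installed_yarn_version(package_json="/usr/lib/node_modules/yarn/package.json"):
    with open(package_json) as f:
        return json.load(f)["version"]
```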
This makes it possible for the Puppet codebase to access the path to
the relevant /home/zulip/deployments type directory that puppet was
run from, which in turn makes it possible to safely call scripts from
here.
Based on work by Rein Zustand.
Apparently, we were incorrectly expressing the paths in the
caches_in_use data structures for these two cache-cleaning algorithms,
resulting in the default threshold_days algorithm controlling which
caches could be garbage-collected. While the emoji one was just a
performance optimization for upgrade-zulip-from-git, it was possible
for the main `node_modules` cache in use in production to be GCed,
resulting in LaTeX rendering being broken.
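A rough approximation of the rule involved (names follow this message; the
real implementation differs in detail): a cache directory is kept if its
path appears in caches_in_use, and otherwise only if it is newer than
threshold_days, so expressing the caches_in_use paths incorrectly silently
demotes in-use caches to the age-based rule.
```
import os
import time

def caches_to_purge(cache_dirs, caches_in_use, threshold_days):
    cutoff = time.time() - threshold_days * 24 * 60 * 60
    in_use = {os.path.realpath(p) for p in caches_in_use}
    return [
        d for d in cache_dirs
        if os.path.realpath(d) not in in_use and os.path.getmtime(d) < cutoff
    ]
```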
This fixes an actual user-facing issue in our mobile push
notifications documentation (where we were incorrectly failing to
quote the argument to `./manage.py register_server`, making it not
work), as well as preventing future similar issues from occurring
again via a linter rule.
Apparently, on Debian stretch, the gnupg package isn't installed by
default, which means that our `apt-key add` commands were failing with
these errors on an ultra-minimal Debian installation:
+ apt-key add ./scripts/setup/packagecloud.asc
E: gnupg, gnupg2 and gnupg1 do not seem to be installed, but one of them is required for this operation
+ apt-key add ./scripts/setup/pgroonga-debian.asc
E: gnupg, gnupg2 and gnupg1 do not seem to be installed, but one of them is required for this operation
Fixes #10480.