The “Smileys & People” category has been split into “Smileys &
Emotion” and “People & Body”.
Also, fix generate_sha1sum_emoji to read the emoji-datasource-google
version from yarn.lock, since package.json only gives a version range.
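As a rough illustration of the yarn.lock approach, something like the
following could extract the pinned version; the helper name and regex
are mine, not necessarily what generate_sha1sum_emoji does:

    import re

    def get_emoji_datasource_version(yarn_lock_path: str = "yarn.lock") -> str:
        # yarn.lock v1 entries look like:
        #   emoji-datasource-google@^4.0.4:
        #     version "4.0.4"
        with open(yarn_lock_path) as f:
            lock = f.read()
        match = re.search(
            r'^emoji-datasource-google@.*:\n  version "([^"]+)"',
            lock,
            re.MULTILINE,
        )
        if match is None:
            raise RuntimeError("emoji-datasource-google not found in yarn.lock")
        return match.group(1)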
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
"Zulip Voyager" was a name invented during the Hack Week to open
source Zulip for what a single-system Zulip server might be called, as
a Star Trek pun on the code it was based on, "Zulip Enterprise".
At the time, we just needed a name quickly, but it was never a good
name, just a placeholder. This removes that placeholder name from
much of the codebase. A bit more work will be required to transition
the `zulip::voyager` Puppet class, as that has some migration work
involved.
These docstrings hadn't been properly updated in years; they had an
awkward mix of a bad version of the user-facing documentation and
details that are no longer true (e.g. references to "Voyager").
(One important detail is that we now have real documentation for this
system.)
This legacy cross-realm bot hasn't been used in several years, as far
as I know. If we wanted to re-introduce it, I'd want to implement it
as an embedded bot using those common APIs, rather than the totally
custom hacky code used for it that involves unnecessary queue workers
and similar details.
Fixes #13533.
At some point the PostgreSQL Docker image started creating the zulip
database for us, which caused our CREATE DATABASE to fail.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Zulip has had a small use of WebSockets (specifically, for the code
path of sending messages, via the webapp only) since ~2013. We
originally added this use of WebSockets in the hope that its latency
benefits would let us avoid implementing markdown local echo; they did
not. Further, HTTP/2 may have eliminated the latency difference we
hoped to exploit by using WebSockets in any case.
While we’d originally imagined using WebSockets for other endpoints,
there was never a good justification for moving more components to the
WebSockets system.
This WebSockets code path had a lot of downsides/complexity,
including:
* The messy hack involving constructing an emulated request object to
hook into Django's request processing.
* The `message_senders` queue processor system, which increases RAM
needs and must be provisioned independently from the rest of the
server.
* A duplicate check_send_receive_time Nagios test specific to
WebSockets.
* The requirement for users to have their firewalls/NATs allow
WebSocket connections, and a setting to disable them for networks
where WebSockets don’t work.
* Dependencies on the SockJS family of libraries, which has at times
been poorly maintained, and periodically throws random JavaScript
exceptions in our production environments without a deep enough
traceback to effectively investigate.
* A total of about 1600 lines of our code related to the feature.
* Increased load on the Tornado system, especially around a Zulip
server restart, and especially for large installations like
zulipchat.com, resulting in extra delay before messages can be sent
again.
As detailed in
https://github.com/zulip/zulip/pull/12862#issuecomment-536152397, it
appears that removing WebSockets moderately increases the time it
takes for the `send_message` API query to return from the server, but
does not significantly change the time between when a message is sent
and when it is received by clients. We don’t understand the reason
for that change (suggesting the possibility of a measurement error),
and even if it is a real change, we consider that potential small
latency regression to be acceptable.
If we later want WebSockets, we’ll likely want to just use Django
Channels.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Our recent fixes to using the system's configured memcached settings
broke populate_db, because its hacky clear_database helper is called
with a hacked-up settings module.
We fix this by first moving this out-of-place code from models.py into
populate_db, and then saving the settings required to access memcached
so that we can use them in clear_database.
We also fix a mypy error in flush-memcached that matches the same
issue fixed in clear_database.
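A minimal sketch of the idea, with heavy assumptions: the
"default"/"LOCATION" cache keys and the pylibmc client are stand-ins
for whatever populate_db actually uses:

    import pylibmc
    from django.conf import settings

    # Captured early, while the real settings module is still active.
    memcached_location = settings.CACHES["default"]["LOCATION"]

    def clear_database() -> None:
        # Use the saved location rather than the hacked-up settings
        # module that populate_db runs under.
        pylibmc.Client([memcached_location]).flush_all()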
This simplifies the RDS installation process to avoid awkwardly
requiring running the installer twice, and also is significantly more
robust in handling issues around rerunning the installer.
Finally, the answer to whether the dictionaries are missing is now
available to Django, for future use in warnings/etc. around full-text
search not being great with this configuration, should those be
required.
We'll be soon documenting a production workflow that involves using
it, and that means it needs to live under scripts/ (since tools/ isn't
present in release tarballs).
`copytree` throws an error if the target already exists, and we don’t
really want to rerun the copy anyway.
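A minimal sketch of the guard, assuming a copy step like ours:

    import os
    import shutil

    def copy_tree_once(source: str, target: str) -> None:
        # shutil.copytree refuses to overwrite an existing target, and
        # we don't want to redo the copy anyway, so just skip it.
        if not os.path.exists(target):
            shutil.copytree(source, target)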
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Ultimately, this isn't an effective way to monitor this queue; we want
time-based monitoring, not count-based monitoring. Doing that
properly will likely involve modifying the queue processor to write
something about its status.
But until we add the monitoring we want, it makes sense to leave this
active with low limits.
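To illustrate the time-based approach we'd want, a rough sketch; the
status-file path and threshold are hypothetical, not what the queue
processor currently does:

    import os
    import time

    STATUS_PATH = "/var/lib/zulip/queue_processor.last_progress"

    def record_progress() -> None:
        # Called by the queue processor whenever it makes progress.
        with open(STATUS_PATH, "w") as f:
            f.write(str(time.time()))

    def queue_is_fresh(max_age_seconds: int = 600) -> bool:
        # A monitoring check would alert when this returns False.
        return time.time() - os.path.getmtime(STATUS_PATH) < max_age_seconds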
This is needed on at least Debian 10, otherwise xmlsec fails to
install: `Could not find xmlsec1 config. Are libxmlsec1-dev and
pkg-config installed?`
Also remove libxmlsec1-openssl, which libxmlsec1-dev already depends.
(No changes are needed on RHEL, where libxml2-devel and xmlsec1-devel
already declare a requirement on /usr/bin/pkg-config.)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The log output from running clean_unused_caches was too verbose as
part of the overall `upgrade-zulip` output. While this output is
potentially helpful when running the tool directly for debugging, it's
certainly redundant for the main production use case.
So a new flag, --no-print-headers, is introduced; it suppresses the
header output for the subtools.
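A sketch of how the flag might be wired up (the header helper is
illustrative):

    import argparse

    parser = argparse.ArgumentParser(description="Clean unused caches.")
    parser.add_argument(
        "--no-print-headers",
        dest="no_headers",
        action="store_true",
        help="Suppress the per-subtool header output.",
    )
    args = parser.parse_args()

    def maybe_print_header(text: str) -> None:
        if not args.no_headers:
            print(text)

    maybe_print_header("Cleaning unused venv caches...")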
Fixes #13214.
This allows the system to get updates to the Groonga repository
signing key, so `apt update` doesn’t start failing when the key
changes (like it recently did).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
debian-archive-keyring is a dependency of the essential package apt,
so it is present in every Debian system.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
virtualenv on Ubuntu 16.04, when creating a new environment, downloads
the current version of setuptools, then replaces its pkg_resources
with an old copy from
/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl.
This causes problems, a simple example of which is reproducible from
the ubuntu:16.04 Docker base image as follows:
apt-get update
apt-get -y install python3-virtualenv
python3 -m virtualenv -p python3 /ve
/ve/bin/pip install sockjs-tornado
/ve/bin/pip download sockjs-tornado
→ `AttributeError: '_NamespacePath' object has no attribute 'sort'`
More relevantly, it breaks pip-compile in the same way. To fix this,
we need to force setuptools to be reinstalled, even if we’re asking
for the same version.
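The fix is roughly equivalent to doing something like this inside the
new virtualenv (the pip path is illustrative):

    import subprocess

    venv_pip = "/path/to/venv/bin/pip"
    # --force-reinstall reinstalls setuptools even when the requested
    # version matches what the virtualenv already has.
    subprocess.check_call([venv_pip, "install", "--force-reinstall", "setuptools"])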
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This should dramatically improve the queue processor's performance in
cases where there's a very high volume of requests on a given endpoint
by a given user, as described in the new docstring.
Until we test this more broadly in production, we won't know if this
is a full solution to the problem, but I think it's likely. We've
never seen the UserActivityInterval worker end up backlogged without a
total queue processor outage, and it should have a similar workload.
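For intuition, a sketch of the batching/deduplication idea, not
necessarily the exact implementation (the event fields and the update
helper are assumptions):

    from collections import defaultdict
    from typing import Any, Dict, List, Tuple

    Key = Tuple[int, int, str]  # (user_profile_id, client_id, query)

    def do_update_user_activity(
        user_profile_id: int, client_id: int, query: str,
        count: int, log_time: float,
    ) -> None:
        """Placeholder for the single per-key database update."""

    def consume_batch(events: List[Dict[str, Any]]) -> None:
        # Collapse many events with the same key into one write,
        # carrying the total count and the latest timestamp.
        counts: Dict[Key, int] = defaultdict(int)
        latest: Dict[Key, float] = {}
        for event in events:
            key = (event["user_profile_id"], event["client_id"], event["query"])
            counts[key] += 1
            latest[key] = max(latest.get(key, 0.0), event["time"])
        for key, count in counts.items():
            do_update_user_activity(*key, count=count, log_time=latest[key])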
Fixes #13180.
To replace DISTRIB_FAMILY, there’s now an os_families function using
the standard ID and ID_LIKE information in /etc/os-release.
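A sketch of what such a function can look like (the parsing here is
simplified relative to whatever scripts/lib/zulip_tools.py does):

    from typing import Set

    def os_families(path: str = "/etc/os-release") -> Set[str]:
        info = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if "=" in line:
                    key, value = line.split("=", 1)
                    info[key] = value.strip('"')
        # e.g. on Ubuntu: ID=ubuntu, ID_LIKE=debian -> {"ubuntu", "debian"}
        return {info["ID"], *info.get("ID_LIKE", "").split()}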
Fixes #13070; fixes #13071.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We no longer use tsearch_extras, and the camo patch is irrelevant on
systemd systems (Xenial and newer). So we no longer need to
provide/install a PPA at all.
Closes #13027.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Now that we've implemented tsearch_extras in pure postgres, we no
longer need a custom extension. This should help us considerably, as
it means we no longer need to ship custom apt packages at all.
Fixes #467.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
As predicted in https://www.kb.cert.org/vuls/id/319816/, a malicious
worm is beginning to spread across the npm ecosystem through package
postinstall scripts. Only instead of direct self-replicating code,
the replication vector is the temptation to monetize postinstall
scripts by polluting the console logs with paid advertisements. The
effect will be the same unless we all put a stop to this while we
still can.
Apply the recommended VU#319816 workaround, which is to disable
lifecycle scripts when installing npm packages (see the sketch after
this list). The only fallout is:
* node-sass can’t run because it uses compiled native code; we replace
it with Dart Sass.
* phantomjs-prebuilt doesn’t download the binary at install time; we
tell it to download it in run-casper.
* ttf2woff2 transparently falls back from native code to an Emscripten
build.
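The workaround boils down to passing --ignore-scripts to the package
manager; a sketch of how an install wrapper might invoke it (the exact
invocation in our tooling may differ):

    import subprocess

    # --ignore-scripts disables pre/post-install lifecycle scripts;
    # --frozen-lockfile keeps the install reproducible.
    subprocess.check_call(
        ["yarn", "install", "--ignore-scripts", "--frozen-lockfile"]
    )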
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This commit finishes adding end-to-end support for the install script
on Debian Buster (making it production ready). Some support for this
was already added in prior commits such as
99414e2d96.
We plan to revert the postgres hunks of this once we've built
tsearch_extras for our packagecloud archive.
Fixes #9828.
Delete trailing newlines from all files, except
tools/ci/success-http-headers.txt and tools/setup/dev-motd, where they
are significant, and static/third, where we want to stay close to
upstream.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Previous cleanups (mostly the removals of Python __future__ imports)
were done in a way that introduced leading newlines. Delete leading
newlines from all files, except static/assets/zulip-emoji/NOTICE,
which is a verbatim copy of the Apache 2.0 license.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
As a result of dropping support for trusty, we can remove our old
pattern of putting `if False` before importing the typing module,
which was essential for Python 3.4 support, but not required and maybe
harmful on newer versions.
cron_file_helper
check_rabbitmq_consumers
hash_reqs
check_zephyr_mirror
check_personal_zephyr_mirrors
check_cron_file
zulip_tools
check_postgres_replication_lag
api_test_helpers
purge-old-deployments
setup_venv
node_cache
clean_venv_cache
clean_node_cache
clean_emoji_cache
pg_backup_and_purge
restore-backup
generate_secrets
zulip-ec2-configure-interfaces
diagnose
check_user_zephyr_mirror_liveness
The comment that tabbott edited into my commit while wimpifying this
function is wrong on multiple levels.
Firstly, the way in which users might be “running our scripts” was
never relevant. `__file__` is not the script that the user ran; it’s
zulip_tools.py itself. What matters is not how the user ran the
script, but rather how zulip_tools was imported. If zulip_tools was
imported as scripts.lib.zulip_tools, then `__file__` must end with
`scripts/lib/zulip_tools.py`, so running dirname three times on it is
fine. In fact, in Python ≥ 3.4 (we don’t support anything older),
`__file__` in an imported module is always an absolute path, so it
must end with `scripts/lib/zulip_tools.py` in any case.
(At present, there’s one script that imports lib.zulip_tools, and the
installer runs scripts/lib/zulip_tools.py as a script, but those uses
don’t hit this function.)
Secondly, even if we do care about `__file__` being a funny relative
path, there’s still no reason to have two calls to `realpath`.
`realpath(dirname(dirname(dirname(realpath(…)))))` is equivalent to
`dirname(dirname(dirname(realpath(…))))`, as the inner `realpath` has
already canonicalized symlinks at every level.
This version also deals with `__file__` being a funny relative
path (assuming none of scripts, lib, and zulip_tools.py are themselves
symlinks), while making fewer `lstat` calls than either of the above
constructions.
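As a concrete rendering of the equivalence discussed above (the
function name is hypothetical):

    import os

    def get_deploy_root() -> str:
        # realpath first, then three dirname calls to walk from
        # scripts/lib/zulip_tools.py up to the deployment root.
        return os.path.dirname(
            os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
        )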
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
This tool can be used to update the API key field of local zuliprc
files for the development server's dummy users (iago, prospero, etc.)
with the correct API key from the database.
It can be run after provisioning (or similar operations) that change
the API keys in the database.
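The core of such a tool is just rewriting the `key` field of the
`[api]` section; a sketch (how the fresh key is fetched from the
database is elided here):

    import configparser

    def update_zuliprc(path: str, new_api_key: str) -> None:
        config = configparser.ConfigParser()
        config.read(path)
        # zuliprc files keep the credential under [api] key=...
        config["api"]["key"] = new_api_key
        with open(path, "w") as f:
            config.write(f)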
As of commit cff40c557b (#9300), these
files are no longer served directly to the browser. Disentangle them
from the static asset pipeline so we can refactor it without worrying
about them.
This has the side effect of eliminating the accidental duplication of
translation data via hash-naming in our release tarballs.
This reverts commit b546391f0b (#1148).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Apparently, the `chown -R` would never run if the original clone
attempt had networking errors, making it impossible to use
upgrade-zulip-from-git without manual intervention.
Previously, it didn't properly update the stamp files that determine
our caching behavior, so if one ran test-backend afterwards, nothing
would happen.
A secondary issue that this commit does not fix is that provision will
end up rerunning the whole thing.
The ids that will be used for each particular run of the test suite
are written to a unique file. Each file is then used as a time
reference for when the suite was run.
This change sets up the ability for a complete clean up of potentially
leaked database templates.
Tweaked by tabbott to remove these files after successful database
cleanup.
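A sketch of the bookkeeping described above (directory layout and
naming are illustrative):

    import os
    import time
    from typing import List

    RUN_DIR = "var/test_db_templates"

    def record_template_ids(ids: List[str]) -> str:
        # One uniquely named file per run; its mtime later tells a
        # cleanup pass how stale these database templates are.
        os.makedirs(RUN_DIR, exist_ok=True)
        path = os.path.join(RUN_DIR, f"run_{os.getpid()}_{int(time.time())}")
        with open(path, "w") as f:
            f.write("\n".join(ids))
        return path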
We use `git describe --tags` to get information about the number of
commits since the last major version, and the sha of the current HEAD.
This is added to the ZULIP_VERSION when a deploy is done from `git`.
Modified heavily by punchagan to:
* use git describe instead of `git log` and `wc`
* use a separate script to run the git describe command
* write the file with version info to var/ and remove it from the repo
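A sketch of what the separate script can do (the output path is
illustrative):

    import subprocess

    def generate_git_version(output_path: str = "var/git-version") -> str:
        # Produces something like "2.0.0-123-g0123abc": the last tag,
        # commits since that tag, and the abbreviated HEAD sha.
        described = subprocess.check_output(
            ["git", "describe", "--tags"], universal_newlines=True
        ).strip()
        with open(output_path, "w") as f:
            f.write(described + "\n")
        return described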
Fixes #4685.
Previously, if you restored onto a different production system from
the one where you took the backup, backup restoration would fail
because the generated rabbitmq passwords for the two systems would be
different, and we didn't update the restored system to use the
password from the original system.
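A hedged sketch of the fix's shape; the secrets section/key and the
rabbitmq user name follow common Zulip conventions but are assumptions
here:

    import configparser
    import subprocess

    secrets = configparser.ConfigParser()
    secrets.read("/etc/zulip/zulip-secrets.conf")
    rabbitmq_password = secrets["secrets"]["rabbitmq_password"]

    # Reset the local rabbitmq user's password to the one carried over
    # from the original system's backup.
    subprocess.check_call(
        ["rabbitmqctl", "change_password", "zulip", rabbitmq_password]
    )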
Fixes #12114.
This should ensure that we apply any special configuration for the
database system (e.g. installing `pgroonga`) before we try to restore
the database contents from the archive.
For pgroonga in particular, this is important so that we can preserve
the configuration of the extension in the `pg_restore` process.
Fixes #12345.
With the S3 file upload backend, we don't store uploads locally, so
the `uploads` directory in the backup will be empty, and more
importantly, LOCAL_UPLOADS_DIR will be None, which the previous code
crashed on.
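The guard is small; a sketch:

    import os
    import shutil
    from django.conf import settings

    def backup_uploads(backup_dir: str) -> None:
        if settings.LOCAL_UPLOADS_DIR is None:
            # S3 backend: uploads aren't stored locally.
            return
        shutil.copytree(
            settings.LOCAL_UPLOADS_DIR, os.path.join(backup_dir, "uploads")
        )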
API changes:
* The behaviour of Date.toLocaleTimeString() reverts to its pre-8.0.0
form; this only affects automated tests. There are lots of other API
changes, but we didn't use any of those.
* The internal sorting algorithm changed, which causes one of our own
compare functions to miss coverage.
Simulating the install isn’t enough in some cases. The error message
when this fails looks sufficiently non-alarming:
LXC:
default: + apt-get -dy install lsb-release apt-transport-https gnupg
default: Reading package lists...
default: Building dependency tree...
default:
default: Reading state information...
default: lsb-release is already the newest version.
default: gnupg is already the newest version.
default: The following NEW packages will be installed:
default: apt-transport-https
default: 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
default: Need to get 25.1 kB of archives.
default: After this operation, 238 kB of additional disk space will be used.
default: Err http://archive.ubuntu.com/ubuntu/ trusty-updates/main apt-transport-https amd64 1.0.1ubuntu2.3
default: 404 Not Found [IP: 91.189.88.161 80]
default: Err http://security.ubuntu.com/ubuntu/ trusty-security/main apt-transport-https amd64 1.0.1ubuntu2.3
default: 404 Not Found [IP: 91.189.88.161 80]
default: E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apt/apt-transport-https_1.0.1ubuntu2.3_amd64.deb 404 Not Found [IP: 91.189.88.161 80]
default:
default: E: Some files failed to download
default: + apt-get update
[…]
default: Fetched 4,504 kB in 7s (611 kB/s)
default: Reading package lists...
default: + apt-get -y install lsb-release apt-transport-https gnupg
default: Reading package lists...
Docker:
default: + apt-get -dy install lsb-release apt-transport-https gnupg
default: Reading package lists...
default: Building dependency tree...
default:
default: Reading state information...
default: Package gnupg is not available, but is referred to by another package.
default: This may mean that the package is missing, has been obsoleted, or
default: is only available from another source
default: E: Package 'gnupg' has no installation candidate
default: + apt-get update
[…]
default: Fetched 16.2 MB in 5s (3,326 kB/s)
default: Reading package lists...
default: + apt-get -y install lsb-release apt-transport-https gnupg
default: Reading package lists...
(All in green.)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>