Apparently, at least perl expects LANG, LANGUAGE, and LC_ALL to be
consistent, and so apt spits out a bunch of warnings when they
differ. So if we're forcing LC_ALL in these installer/upgrade
script blocks, we should force the rest too.
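As a rough sketch (in Python for illustration, even though the real
change is in the shell installer blocks; the locale value is
illustrative), forcing all three consistently looks like:

    import os
    import subprocess

    # Force a consistent locale in the child environment so that perl
    # (and thus apt) doesn't warn about LANG/LANGUAGE/LC_ALL disagreeing.
    env = dict(os.environ)
    for var in ("LC_ALL", "LANG", "LANGUAGE"):
        env[var] = "C.UTF-8"
    subprocess.check_call(["apt-get", "update"], env=env)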
I believe this fixes the remaining locale part of #9946.
--agree-tos is useful for the Docker environment, where we won't have
an interactive shell present for agreeing to the ToS.
--deploy-hook is also useful for the Docker environment; it makes it
possible to customize what deploy hook (if any) we pass into the
underlying certbot command.
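For illustration only (a hedged sketch, not the actual setup-certbot
code; the email, domain, and hook command are placeholders), the flags
end up on the certbot invocation roughly like this:

    import subprocess

    # Placeholder values; a real run would take these from configuration.
    deploy_hook = "service nginx reload"
    cmd = [
        "certbot", "certonly", "--standalone",
        "--agree-tos",  # no interactive ToS prompt available in Docker
        "--email", "admin@example.com",
        "-d", "zulip.example.com",
    ]
    if deploy_hook:
        cmd += ["--deploy-hook", deploy_hook]
    subprocess.check_call(cmd)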
This migrates Zulip to use a dramatically better set of names and
aliases for our emoji set, defined in emoji_names.py (which is in turn
manually generated from our hand-curated CSV file).
This should significantly improve the experience of using Zulip's
emoji picker and emoji typeahead for finding what one is looking for.
We were already correctly including libssl-dev in Zulip's dependencies
in development environment provisioning, but (at least now) it's
needed to build certain Python packages like pycurl when building a
Zulip virtualenv in production. I haven't investigated why we didn't
need this on Ubuntu, but one possible reason would be that some other
library in our dependencies list happens to depend on it on Ubuntu.
We fix this by moving the dependency over to the shared
VENV_DEPENDENCIES list.
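The shape of the change is roughly the following (illustrative package
names only; the real lists live in the provisioning/setup code):

    # Illustrative, not the actual package lists.
    VENV_DEPENDENCIES = [
        "build-essential",
        "libffi-dev",
        "libssl-dev",  # needed to build pycurl et al. for production virtualenvs
        "python3-dev",
    ]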
Fixes part of #9946.
Apparently, at least some Debian stretch systems don't have an
/etc/lsb-release, so the optimization that we did in
5d39a0f0fc broke our installer on
Debian.
We fix this by falling back to calling the lsb_release command on
systems that don't have a faster way to do it.
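A minimal sketch of that fallback (the real logic is in the installer's
shell code; this Python version just shows the shape):

    import os
    import subprocess

    def get_distrib_codename() -> str:
        # Fast path: parse /etc/lsb-release where it exists (e.g. Ubuntu).
        if os.path.exists("/etc/lsb-release"):
            with open("/etc/lsb-release") as f:
                for line in f:
                    if line.startswith("DISTRIB_CODENAME="):
                        return line.split("=", 1)[1].strip()
        # Slow path: some Debian systems lack that file, so shell out.
        return subprocess.check_output(
            ["lsb_release", "-sc"], universal_newlines=True
        ).strip()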
Fixes part of #9946.
This commit adds the necessary puppet configuration and
installer/upgrade code for installing and managing the thumbor service
in production. This configuration is gated by the 'thumbor.pp'
manifest being enabled (which is not yet the default), and so this
commit should have no effect in a default Zulip production environment
(or in the long term, in any Zulip production server that isn't using
thumbor).
Credit for this effort is shared by @TigorC (who initiated the work on
this project), @joshland (who did a great deal of work on this and got
it working during PyCon 2017) and @adnrs96, who completed the work.
The only changes visible at the AST level, checked using
https://github.com/asottile/astpretty, are
zerver/lib/test_fixtures.py:
'\x1b\\[(1|0)m' ↦ '\\x1b\\[(1|0)m'
'\\[[X| ]\\] (\\d+_.+)\n' ↦ '\\[[X| ]\\] (\\d+_.+)\\n'
which is fine because re treats '\\x1b' and '\\n' the same way as
'\x1b' and '\n'.
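A quick way to convince yourself of that equivalence:

    import re

    # re interprets escape sequences in the pattern itself, so a pattern
    # containing the literal characters backslash-x-1-b matches the same
    # ESC byte as one containing an actual '\x1b' character (and likewise
    # for '\\n' vs. '\n').
    assert re.fullmatch('\x1b\\[(1|0)m', '\x1b[1m')
    assert re.fullmatch('\\x1b\\[(1|0)m', '\x1b[1m')
    assert re.search('\\[[X| ]\\] (\\d+_.+)\n', '[X] 0001_initial\n')
    assert re.search('\\[[X| ]\\] (\\d+_.+)\\n', '[X] 0001_initial\n')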
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
This saves about 400ms when running clean-unused-caches, basically by
calling its subroutines via import (rather than
`subprocess.check_call()`). The performance optimization seems well worth it.
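The savings are almost entirely interpreter startup; a quick way to see
the size of that cost (assuming python3 is on the PATH):

    import subprocess
    import time

    # A subprocess.check_call of another Python script pays a full
    # interpreter startup; calling the same code via import does not.
    t0 = time.time()
    subprocess.check_call(["python3", "-c", "pass"])
    print("fresh interpreter: %.0f ms" % ((time.time() - t0) * 1000))

    t0 = time.time()
    exec("pass")  # stands in for an imported, in-process call
    print("in-process: %.0f ms" % ((time.time() - t0) * 1000))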
Fixes #9766.
This file looks like it's producing some kind of compilation of the
mobile strings that the mobile app will somehow end up using --
especially as it refers to its output as a "resource file". In
reality, it compiles statistics to be included in the language-picker
UI in the web app. Give appropriate names to the identifiers so it's
less confusing.
This improves the performance of these operations by saving ~50ms of
Python process startup. While not a major performance improvement, it
seems worth it, given how often these commands get run.
Fixes #9571.
Structurally, this queue has the same property as the missed_message
one, namely that it accumulates things and processes them only every
few minutes.
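Concretely, the accumulate-then-flush pattern looks roughly like this
(a generic sketch, not Zulip's actual queue worker code; the interval is
illustrative):

    import time
    from typing import Any, List

    FLUSH_INTERVAL = 120  # seconds; illustrative value

    class BatchingWorker:
        """Accumulate events and process them only every few minutes."""

        def __init__(self) -> None:
            self.pending: List[Any] = []
            self.last_flush = time.time()

        def consume(self, event: Any) -> None:
            self.pending.append(event)
            if time.time() - self.last_flush >= FLUSH_INTERVAL:
                self.flush()

        def flush(self) -> None:
            batch, self.pending = self.pending, []
            self.last_flush = time.time()
            # Handle the whole batch at once, e.g. one digest rather than
            # one page per individual slow query.
            print("processing %d events" % len(batch))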
This should stop Zulip from paging in response to slow queries
accumulating when a server restart happens.
On newer distros like Xenial, Stretch, etc., we were incorrectly not
installing the Python 3 version of the virtualenv package. This was
accidentally working because most base images with Python already have
this package too, but this was failing to install the right
dependencies in our Docker builds, requiring unnecessary manual code.
We fixed this some time ago for provision.py, but not for production.
This is a multi-stage build that first builds tsearch-extras against the
current version of postgres and then configures postgres for Zulip. The
Zulip configuration installs the hunspell dictionaries, the stop words
file, and tsearch-extras, and creates the initial database.
**Testing Plan:**
1) `docker-compose up` the existing config.
2) Build the new image
3) Edit docker-compose.yml to use the new image id
4) `docker-compose up` and verify full text search is still working.
The docker installer configuration incorrectly had has_appserver set
to 0; this meant that (A) the docker-zulip code needed to copy the
block of code in the installer for the `has_appserver` case into the
Dockerfile (unnecessarily), and (B) one couldn't use `install` from a
Git ref (because the static asset compiler didn't end up in the right
place).
It appears that docker-zulip tried to set this flag in their `install`
command line, but the construction inside `install` meant that didn't
work.
This fixes adding the Ubuntu repositories for Debian, and also makes
sure that we install the debian-archive-keyring package on Debian,
which has only Priority: important (and thus might be missing).
It wasn't obvious reading this message that you can perfectly well
bring your own SSL/TLS certificate; unless you read quite a bit
between the lines where we say "could not find", or followed the link
to the detailed docs, the message sounded like you had to either use
--certbot or --self-signed-cert.
So, explicitly mention the BYO option. Because the "complete chain"
requirement is a bit tricky, don't try to give instructions for it
in this message; just refer the reader to the docs.
Also, drop the logic to identify which of the files is missing; it
certainly makes the code more complex, and I think even the error
message is actually clearer when it just gives the complete list of
required files -- it's much more likely that the reader doesn't know
what's required than that they do and have missed one, and even then
it's easy for them to look for themselves.
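The simplified check reduces to something like the following sketch
(hypothetical file paths, and in Python rather than the installer's
shell):

    import os
    import sys

    # Hypothetical paths, for illustration only.
    REQUIRED_CERT_FILES = [
        "/etc/ssl/private/zulip.key",
        "/etc/ssl/certs/zulip.combined-chain.crt",
    ]

    if not all(os.path.exists(path) for path in REQUIRED_CERT_FILES):
        print("Could not find SSL certificate!  The following files are required:")
        for path in REQUIRED_CERT_FILES:
            print("  " + path)
        print("You can provide your own files, or use --certbot or --self-signed-cert;")
        print("see the documentation for details.")
        sys.exit(1)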
This fixes a bug where provision was failing since our most recent
upgrade to yarn/nvm/node.
It turns out my original fix was the correct fix, but to the wrong
third-party tool: nvm, not yarn, was the offender.
Apparently, new versions of yarn use the HOME environment variable to
figure out where to access their configuration, and sudo doesn't clear
that variable, so install-node was being run with HOME set to something
under /home/vagrant (for example).
Fix this by just setting that environment variable correctly.
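A sketch of the shape of the fix (in Python for illustration; the
script path is assumed, and the real change sets HOME in the shell code
that invokes install-node):

    import os
    import subprocess

    # sudo may leave the invoking user's HOME in place, so pin it
    # explicitly before running install-node, which writes its
    # configuration under $HOME.
    env = dict(os.environ, HOME="/root")
    subprocess.check_call(["scripts/lib/install-node"], env=env)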
This replaces 250a036ff8, which
misdiagnosed the issue.
It appears that some change in yarn's versioning system means that
installing yarn itself ends up chowning its config directory
incorrectly to be owned by root, preventing `yarn install` from
working later.
node -> v8.9.4
yarn -> 1.5.1
nvm -> 0.33.8
Also updates a test in timerender.js that depends on the time provided
by node, which has changed in the newer release.
Some changes have been made in the CircleCI script: we just create the
~/.config directory and chown it to the circleci user, so that
installing the new version of yarn does not cause any CI failure during
provision.
This is the analog of 7b2c9223e7 for the
emoji cache; the only difference is that the existing code was working
correctly. It's still worth changing for improved robustness.
We saw issues with /srv/zulip_npm_cache being cleaned incorrectly by
this tool in production (more correctly, we noticed broken symlinks to
those directories, even from the current deployment). Print-debugging
showed that indeed older deployments were being ignored, because the
logic for `get_caches_in_use` was totally broken (this was sorta
masked because we also keep the last week's deployments).
The specific bug here turned out to be that we weren't passing the
`production` argument to generate_sha1sum_node_modules, but the
broader problem is that this logic isn't robust to changes in the
hashing algorithm.
Fix this by replacing the broken logic for trying to compute the
correct hash for that deployment with just checking the symlink inside
the deployment to let it self-report.
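A minimal sketch of that self-reporting approach (illustrative names;
the real code also handles edge cases):

    import os
    from typing import Set

    def get_caches_in_use(deployments_dir: str) -> Set[str]:
        # Each deployment's node_modules symlink already points inside
        # the npm cache directory it uses, so just let it self-report.
        caches_in_use = set()
        for deployment in os.listdir(deployments_dir):
            link = os.path.join(deployments_dir, deployment, "node_modules")
            if os.path.islink(link):
                caches_in_use.add(os.path.dirname(os.path.realpath(link)))
        return caches_in_use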
We can't easily do this same change for clean-venv-cache, because we
use multiple virtualenvs there. But a similar change could be useful
for the emoji cache as well.
Fixes #8116.
This is apparently installed by the perl package; I hadn't even known
it existed. We of course want to use the sha1sum command from
coreutils.
Fixes #8836.