In scripts/lib/certbot-maybe-renew line 8:
case "$(echo "$value" | tr A-Z a-z)" in
^-- SC2019: Use '[:upper:]' to support accents and foreign alphabets.
^-- SC2018: Use '[:lower:]' to support accents and foreign alphabets.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
This flag is used to track which user/message pairs correspond to an
active mobile push notification that should potentially be cleared
when the user reads the message.
This flag should never appear on a message that is also marked as
read; eventually we may want a cron job to check for that condition.
We include a partial index on UserMessage for this flag.
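For illustration, a partial index like that could be created in a Django
migration roughly as follows; the index name, flag bit value, and migration
dependency here are hypothetical, not the actual migration:
```
from django.db import migrations


class Migration(migrations.Migration):
    dependencies = [("zerver", "XXXX_previous_migration")]  # hypothetical

    operations = [
        migrations.RunSQL(
            # Index only the rows where the flag bit is set, so the index
            # stays small; 4096 stands in for the real bit value here.
            sql="""
                CREATE INDEX zerver_usermessage_active_mobile_push_id
                    ON zerver_usermessage (user_profile_id, message_id)
                    WHERE (flags & 4096) != 0;
            """,
            reverse_sql="DROP INDEX zerver_usermessage_active_mobile_push_id;",
        ),
    ]
```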
Apparently, our Python 3 conversion for the early-migrations logic
here was incorrect, and as a result we never set
need_create_large_indexes to True (because we were checking whether a
`bytes` was inside a list of `str`s).
The simplest fix would be to just add a `.decode()` in one place, but
this refactor to just decode at the beginning is a lot more readable.
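The underlying Python 3 gotcha is easy to reproduce (the values here are
illustrative):
```
applied_migrations = ["0001_initial", "0002_set_flags"]  # list of str
line = b"0001_initial"                                    # bytes from subprocess output

print(line in applied_migrations)           # False -- the silent bug
print(line.decode() in applied_migrations)  # True  -- decode first
```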
The is_private flag is intended to be set if the recipient type is
'private' (1) or 'huddle' (3); otherwise, i.e. if it is 'stream' (2),
it should be unset.
This commit adds a database index for the is_private flag (which we'll
need in order to use it). That index is used to reset the flag if it
was already set. The already-set flags were due to a previous removal
of the is_me_message flag, for which the values were never cleared out.
For now, the is_private flag is always 0 since the really hard part of
this migration is clearing the unspecified previous state; future
commits will fully implement it actually doing something.
History: Migration rewritten significantly by tabbott to ensure it
runs in only 3 minutes on chat.zulip.org. A key detail in making that
work was to ensure that we use the new index for the queries to find
rows to update (which currently requires the `order_by` and `limit`
clauses).
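As a rough sketch of that batching pattern (not the actual migration; the
table, flag bit, and batch size are assumptions), the backfill looks
something like:
```
from django.db import connection


def clear_is_private_flag(apps, schema_editor):
    # Would be wired up via migrations.RunPython in the real migration.
    BATCH_SIZE = 10000
    with connection.cursor() as cursor:
        while True:
            # The ORDER BY/LIMIT shape lets PostgreSQL find the remaining rows
            # via the new index instead of scanning the whole table.
            cursor.execute(
                """
                UPDATE zerver_usermessage
                   SET flags = flags & ~2048
                 WHERE id IN (
                     SELECT id FROM zerver_usermessage
                      WHERE (flags & 2048) != 0
                      ORDER BY id LIMIT %s
                 )
                """,
                [BATCH_SIZE],
            )
            if cursor.rowcount == 0:
                break
```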
This package is important in order to avoid scary-looking errors
whenever we upgrade the dependencies in thumbor.txt (where
virtualenv-clone isn't installed in the venv, and then gets installed
by the code we just added a TODO comment to).
Apparently, perl at least expects LANG, LANGUAGE, and LC_ALL to be
consistent, and thus apt spits out a bunch of warnings if these are
different. So if we're forcing LC_ALL in these installer/upgrade
script blocks, we should force the rest too.
I believe this fixes the remaining locale part of #9946.
--agree-tos is useful for the Docker environment, where we won't have
an interactive shell present for agreeing to the ToS.
--deploy-hook is also useful for the Docker environment; it makes it
possible to customize what deploy hook (if any) we pass into the
underlying certbot command.
This migrates Zulip to use a dramatically better set of names and
aliases for our emoji set, defined in emoji_names.py (which is in turn
manually generated from our hand-curated CSV file).
This should significantly improve the experience of using Zulip's
emoji picker and emoji typeahead for finding what one is looking for.
We were already correctly including libssl-dev in Zulip's dependencies
in development environment provisioning, but (at least now) it's
needed to build certain Python packages like pycurl when building a
Zulip virtualenv in production. I haven't investigated why we didn't
need this on Ubuntu, but one possible reason would be that some other
library in our dependencies list happens to depend on it on Ubuntu.
We fix this by moving the dependency over to the shared
VENV_DEPENDENCIES list.
Fixes part of #9946.
Apparently, at least some Debian stretch systems don't have an
/etc/lsb-release, so the optimization that we did in
5d39a0f0fc broke our installer on
Debian.
We fix this by falling back to calling the lsb_release command on
systems that don't have a faster way to do it.
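The logic, sketched in Python for illustration (the installer itself is
shell, and the function name is hypothetical):
```
import os
import subprocess


def get_distro_codename() -> str:
    # Fast path: parse /etc/lsb-release if it exists.
    if os.path.exists("/etc/lsb-release"):
        with open("/etc/lsb-release") as f:
            data = dict(line.strip().split("=", 1) for line in f if "=" in line)
        if "DISTRIB_CODENAME" in data:
            return data["DISTRIB_CODENAME"]
    # Fallback: ask the lsb_release command.
    return subprocess.check_output(["lsb_release", "-cs"], universal_newlines=True).strip()
```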
Fixes part of #9946.
This commit adds the necessary puppet configuration and
installer/upgrade code for installing and managing the thumbor service
in production. This configuration is gated by the 'thumbor.pp'
manifest being enabled (which is not yet the default), and so this
commit should have no effect in a default Zulip production environment
(or in the long term, in any Zulip production server that isn't using
thumbor).
Credit for this effort is shared by @TigorC (who initiated the work on
this project), @joshland (who did a great deal of work on this and got
it working during PyCon 2017) and @adnrs96, who completed the work.
The only changes visible at the AST level, checked using
https://github.com/asottile/astpretty, are
zerver/lib/test_fixtures.py:
'\x1b\\[(1|0)m' ↦ '\\x1b\\[(1|0)m'
'\\[[X| ]\\] (\\d+_.+)\n' ↦ '\\[[X| ]\\] (\\d+_.+)\\n'
which is fine because re treats '\\x1b' and '\\n' the same way as
'\x1b' and '\n'.
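A quick check confirms that both spellings compile to equivalent patterns:
```
import re

# re itself interprets the \x1b and \n escapes, so the AST-level change
# is behavior-preserving.
assert re.match('\\x1b\\[(1|0)m', '\x1b[1m')
assert re.match('\x1b\\[(1|0)m', '\x1b[1m')
assert re.search('\\n', 'a\nb')
assert re.search('\n', 'a\nb')
```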
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
This saves about 400ms when running clean-unused-caches, basically by
calling its sub-routines via import (rather than
`subprocess.check_call()`). The performance optimization seems well worth it.
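Schematically (module and function names are assumptions, not the real code),
the change is:
```
import subprocess


def clean_all_caches_via_subprocess():
    # Old approach: one Python interpreter startup per cache cleaner.
    for script in ("clean-venv-cache", "clean-node-cache", "clean-emoji-cache"):
        subprocess.check_call([f"./scripts/lib/{script}"])


def clean_all_caches_in_process():
    # New approach: import the cleaners and call them directly.
    from scripts.lib import clean_venv_cache, clean_node_cache, clean_emoji_cache

    for module in (clean_venv_cache, clean_node_cache, clean_emoji_cache):
        module.main()
```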
Fixes #9766.
This file looks like it's producing some kind of compilation of the
mobile strings that the mobile app will somehow end up using --
especially as it refers to its output as a "resource file". In
reality, it compiles statistics to be included in the language-picker
UI in the web app. Give appropriate names to the identifiers so it's
less confusing.
This improves the performance of these operations, by saving a ~50ms
Python process startup. While not a major performance improvement, it
seems worth it, given how often these commands get run.
Fixes #9571.
On newer distros like Xenial, Stretch, etc., we were incorrectly not
installing the Python 3 version of the virtualenv package. This was
accidentally working because most base images with Python already have
this package too, but this was failing to install the right
dependencies in our Docker builds, requiring unnecessary manual code.
We fixed this some time ago for provision.py, but not for production.
The docker installer configuration incorrectly had has_appserver set
to 0; this meant that (A) the docker-zulip code needed to copy the
block of code in the installer for the `has_appserver` case into the
Dockerfile (unnecessarily), and (B) one couldn't use `install` from a
Git ref (because the static asset compiler didn't end up in the right
place).
It appears that docker-zulip tried to set this flag in their `install`
command line, but the construction inside `install` meant that didn't
work.
This fixes adding the Ubuntu repositories on Debian, as well as making
sure that we install the debian-archive-keyring package on Debian,
which has only priority `important` (and thus might be missing).
It wasn't obvious reading this message that you can perfectly well
bring your own SSL/TLS certificate; unless you read quite a bit
between the lines where we say "could not find", or followed the link
to the detailed docs, the message sounded like you had to either use
--certbot or --self-signed-cert.
So, explicitly mention the BYO option. Because the "complete chain"
requirement is a bit tricky, don't try to give instructions for it
in this message; just refer the reader to the docs.
Also, drop the logic to identify which of the files is missing; it
certainly makes the code more complex, and I think even the error
message is actually clearer when it just gives the complete list of
required files -- it's much more likely that the reader doesn't know
what's required than that they do and have missed one, and even then
it's easy for them to look for themselves.
This fixes a bug where provision was failing since our most recent
upgrade to yarn/nvm/node.
It turns out my original fix was the correct fix, but to the wrong
third-party tool: nvm, not yarn, was the offender.
Apparently, new versions of yarn use the HOME environment variable to
figure out where to access their configuration, and sudo apparently
doesn't clear that variable, so install-node was being run with HOME
set to something under /home/vagrant (e.g.).
Fix this by just setting that environment variable correctly.
This replaces 250a036ff8, which
misdiagnosed the issue.
It appears that some change in yarn's versioning system means that
installing yarn itself ends up chowning its config directory
incorrectly to be owned by root, preventing `yarn install` from
working later.
node -> v8.9.4
yarn -> 1.5.1
nvm -> 0.33.8
Also updates a test in timerender.js that depends on the time
provided by node, which has changed in the newer release.
Some changes have been made in the CircleCI script: we just create the
~/.config directory and chown it to the circleci user, so that
installing the new version of yarn does not cause any CI failure on
CircleCI during provision.
This is the analog of 7b2c9223e7 for the
emoji cache; the only difference is that the existing code was working
correctly. It's still worth changing for improved robustness.
We saw issues with /srv/zulip_npm_cache being cleaned incorrectly by
this tool in production (more correctly, we noticed broken symlinks to
those directories, even from the current deployment). Print-debugging
showed that indeed older deployments were being ignored, because the
logic for `get_caches_in_use` was totally broken (this was sorta
masked because we also keep the last week's deployments).
The specific bug here turned out to be that we weren't passing the
`production` argument to generate_sha1sum_node_modules, but the
broader problem is that this logic isn't robust to changes in the
hashing algorithm.
Fix this by replacing the broken logic for trying to compute the
correct hash for that deployment with just checking the symlink inside
the deployment to let it self-report.
We can't easily do this same change for clean-venv-cache, because we
use multiple virtualenvs there. But a similar change could be useful
for the emoji cache as well.
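A sketch of the symlink-based approach (paths and names illustrative):
```
import os


def get_caches_in_use(deployments_dir: str) -> set:
    caches_in_use = set()
    for deployment in os.listdir(deployments_dir):
        link = os.path.join(deployments_dir, deployment, "node_modules")
        if os.path.islink(link):
            # The deployment self-reports which cache it uses; no need to
            # recompute the hash with whatever algorithm was current then.
            caches_in_use.add(os.path.realpath(link))
    return caches_in_use
```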
Fixes #8116.
This is apparently installed by the perl package; I hadn't even known
it existed. We of course want to use the sha1sum command from
coreutils.
Fixes #8836.
This commit switches our emoji infrastructure to use 256 color indexed
64px spritesheets. Earlier we were using non-indexed 32px spritesheets
which were blurry on high-DPI displays. These indexed spritesheets not
only provide a crisper display but are also smaller in size.
This commit also removes the `emoji-datasource` package as a dependency
as all the data is now sourced from individual datasource packages.
Fixes: #7862.
The installation isn't really complete here, and wasn't even when this
was the only success case; the instructions we're giving are for the
next step in the installation.
These instructions don't say what to do in an actual use case for this
option, but decent instructions there will require having a concrete
use case in front of us and designing the flow for it. At this stage,
just say where we are in the normal flow, and an admin who's chosen to
go off that flow can figure out how they want to vary it from there.
This flips the experimental `--express` option to be the default.
We retain the old behavior, where the script exits before
`initialize-database`, as an option `--no-init-db`; it might be useful
in e.g. a migration scenario (from a Zulip install elsewhere, or
another chat system) where the admin wants to set up the database
separately.
The install instructions are adjusted to match, getting shorter by two
steps and a bunch of words. I think this opens up opportunities to
refactor the text to simplify things further, too, but leaving that
for another commit.
Also tweak the "production" test suite to match.
Kind of unfortunate because the `sudo` interface for running a command
is objectively better -- a list of arguments, rather than a string to
be re-parsed by the shell. But some bare-bones machine images lack
`sudo`, so this makes things a bit more portable.
We do the following here:
* Remove libjasper-dev from THUMBOR_VENV_DEPENDENCIES.
  Reason: This dependency wasn't really needed by us for using
  thumbor. It was a dependency for using OpenCV as the imaging engine
  in thumbor, but we use PIL (now Pillow) as the imaging engine.
* Add zlib1g-dev and libfreetype6-dev to THUMBOR_VENV_DEPENDENCIES.
  Reason: These are dependencies of Pillow which are required for
  Pillow to function. Since we use Pillow as thumbor's imaging engine,
  we need these. Nothing broke before this because we also use Pillow
  in the development environment and have these dependencies installed
  from VENV_DEPENDENCIES as well.
We'll make this the normal behavior soon, once we're satisfied with
our arrangements for sending the admin straight to realm creation and
using the app without configuring email. The instructions in the docs
will also have to change accordingly, of course.
This causes us to give an error if you pass the installer any
positional arguments, e.g. with `--`. There's no reason you'd want
to do this, but I accidentally did it by passing an extra `--` to
the `test-install/install` wrapper and spent a few minutes on
confused debugging.
This gives us just one way of adopting a self-signed cert, rather than
one script which would generate a new one and an option to another
which would symlink to the system's snakeoil cert. Now those two
codepaths converge, and do the same thing.
The small advantage of generating our own over the alternative is that
it lets us set the name in the cert to EXTERNAL_HOST, rather than the
system's hostname as embedded in the system snakeoil certs. Not a big
deal, but might make things go slightly smoother if some browsers are
lenient (in a way that they probably shouldn't be).
Before this fix, the installer has an extremely annoying bug where
when run inside a container with `lxc-attach`, when the installer
finishes, the `lxc-attach` just hangs and doesn't respond even to
C-c or C-z. The only way to get the terminal back is to root around
from some other terminal to find the PID and kill it; then run
something like `stty sane` to fix the messed-up terminal settings
left behind.
After bisecting pieces of the install script to locate which step
was causing the issue, it comes down to the `service camo restart`.
The comment here indicates that we knew about an annoying bug here
years ago, and just swept it under the rug by skipping this step
when in Travis. >_<
The issue can be reproduced by running simply `service camo restart`
under `lxc-attach` instead of the installer; or `service camo start`,
following a `service camo stop`. If `lxc-attach` is used to get an
interactive shell, these commands appear to work fine; but then when
that shell exits, the same hang appears. So, when we start camo
we're evidently leaving some kind of mess that entangles the daemon
with our shell.
Looking at the camo initscript where it starts the daemon, there's
not much code, and one flag jumps out as suspicious:
start-stop-daemon --start --quiet --pidfile $PIDFILE -bm \
--exec $DAEMON --no-close -c nobody --test > /dev/null 2>&1 \
|| return 1
start-stop-daemon --start --quiet --pidfile $PIDFILE -bm \
--no-close -c nobody --exec $DAEMON -- \
$DAEMON_ARGS >> /var/log/camo/camo.log 2>&1 \
|| return 2
What does `--no-close` do?
-C, --no-close
Do not close any file descriptor when forcing the daemon
into the background (since version 1.16.5). Used for
debugging purposes to see the process output, or to
redirect file descriptors to log the process output.
And in fact, looking in /proc/PID/fd while a hang is happening finds
that fd 0 on the camo daemon process, aka stdin, is connected to our
terminal.
So, stop that by denying the initscript our stdin in the first place.
This fixes the problem.
The Debian maintainer turns out to be "Zulip Debian Packaging Team",
at debian@zulip.com; so this package and its bugs are basically ours.
This provides a major simplification for non-production installs,
including our own testing (it's already in both the test-install
harness script and the "production" test suite) as well as potential
admins evaluating Zulip.
Ultimately this should probably be the default behavior, with perhaps
something shown to admins on the web as a reminder and link to help on
installing a better certificate. For now, pending working through
that, just get the behavior in and leave it opt-in.
The third-party `install-yarn.sh` script uses `curl`, and we invoke it
in `install-node`. So we need to install it as a dependency.
We've mostly gotten away with this because it's common for `curl` to
already be installed; but it isn't always.
Apparently, this was checking the wrong path in Travis CI, and thus
never actually running (meaning we'd accumulate every `node_modules`
directory ever in the Travis caches, which in turn resulted in very
slow builds).
This updates commit 11ab545f3 "install: Set the locale ..."
to be somewhat cleaner, and to explain more in the commit message.
In some environments, either pip itself fails or some packages fail to
install, and setting the locale to en_US.UTF-8 resolves the issue.
We heard reports of this kind of behavior with at least two different
sets of symptoms, with 1.7.0 or its release candidates:
https://chat.zulip.org/#narrow/stream/general/subject/Trusty.201.2E7.20Upgrade/near/302214
https://chat.zulip.org/#narrow/stream/production.20help/subject/1.2E6.20to.201.2E7/near/306250
In all reported cases, commit 11ab545f3 or equivalent fixed the issue.
Setting LC_CTYPE is redundant when also setting LC_ALL, because LC_ALL
overrides all `LC_*` environment variables; so skip that. Also move
the line in `install` to a more appropriate spot, and adjust the
comments.
This commit renames various source requirements files like `dev.txt`,
`mypy.txt`, etc. to `dev.in`, `mypy.in`, etc., and various locked
requirements files like `dev_lock.txt`, `mypy_lock.txt`, etc. to
`dev.txt`, `mypy.txt`, etc. This will help emphasize to the user that
the `*.in` files are actually the input to the
`update-locked-requirements` tool, which should be run after updating
any of them.
In this commit we add new dependencies needed for running thumbor.
We also add the script for creating the virtual environment ready
for thumbor.
Note: thumbor will use Python 2 and thus have a different virtualenv
dedicated to it.
Credits to @TigorC and @joshland as well for their work on this.
This allows the installer to continue using this script for the
`standalone` method, while the no-argument form now uses the same
`webroot` method as the renewal cron job, suitable for running
by hand to adopt Certbot after initial install.
Certbot replaces the cert files under /etc/letsencrypt/live/, which
our nginx config refers to via symlinks; but it doesn't tell nginx
there's been an update, so nginx keeps serving the old cert.
This is fine as long as nginx is restarted, or just told to
reload its config, at some point before the cert actually
expires about 30 days later. Which is probably the common
case, but of course we should make it just work. So, if we
actually renew a cert, tell nginx to reload its config now.
This causes the cron job to run only when a Zulip-managed certbot
install is actually set up.
Inside `install`, zulip.conf doesn't yet exist when we run
setup-certbot, so we write the setting later. But we also give
setup-certbot the ability to write the setting itself, so that we
can recommend it in instructions for adopting certbot in an
existing Zulip installation.
Except in:
- docs/writing-bots-guide.md, because bots are supposed to be Python 2
compatible
- puppet/zulip_ops/files/zulip-ec2-configure-interfaces, because this
script is still on python2.7
- tools/lint
- tools/linter_lib
- tools/lister.py
For the latter two, because they might be yanked away to a separate repo
for general use with other FLOSS projects.
This didn't work at all when one did a `vagrant destroy` and then
`vagrant up`, because the cache state would be preserved even though
the machine is gone.
Fixes #5981.
This should make it easier to script the installation process, and
also conveniently are the options one would want for the --certbot
option.
Significantly modified by tabbott to have a sane interface, include
--help, and avoid printing all the `set -x` garbage before the usage
notices.
Based on #450, with commits
restructured by Rein Zustand.
Tweaks by Rein Zustand:
- Replace configure-cert with generate-self-signed-certs
- `mv scripts/lib/create-zulip-admin.sh scripts/lib/create-zulip-admin`
We were checking whether an item in the deployments directory is a
directory, but we were using its relative path, which returned False
for every item (regardless of whether it was a directory) whenever the
script was invoked from somewhere other than the deployments
directory.
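A minimal illustration of the bug (the path here is illustrative):
```
import os

DEPLOYMENTS_DIR = "/home/zulip/deployments"

for item in os.listdir(DEPLOYMENTS_DIR):
    # `item` is a bare name, so this is resolved against the *current*
    # working directory and is False unless we happen to be in
    # DEPLOYMENTS_DIR.
    broken = os.path.isdir(item)
    # Joining against the deployments directory checks the right path.
    fixed = os.path.isdir(os.path.join(DEPLOYMENTS_DIR, item))
```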
This commit re-arranges the arguments of the `purge_unused_caches()`
function in order to remain consistent with other similar functions
in the library, like `may_be_perform_caching()`.
This function will replace the repetitive definition of `parse_args()`
in various cache cleaning scripts. Also adds a `--verbose` argument
to the parser.
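A rough sketch of such a shared helper (the exact options besides
`--verbose` are assumptions):
```
import argparse


def parse_cache_script_args(description: str) -> argparse.Namespace:
    parser = argparse.ArgumentParser(description=description)
    parser.add_argument("--threshold", dest="threshold_days", type=int, default=14,
                        help="Remove caches unused for this many days")
    parser.add_argument("--dry-run", action="store_true",
                        help="Only print what would be removed")
    parser.add_argument("--verbose", action="store_true",
                        help="Print the caches that are kept and removed")
    return parser.parse_args()
```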
Historically, one has needed to build a release tarball in order to
use/test the Zulip installer, even though you could upgrade a Zulip
server from Git. However, the only reason for that requirement was
that we didn't run `tools/update-prod-static` as part of the install
script when it's required. A good test for that case is whether we're
in a Git repository, but a better one is to check whether the
prod-static content exists in the tarball paths.
Fixes #3704.
This enforces our use of a consistent style in how we access Python
modules; "from os.path import dirname" is a particularly popular
abbreviation inconsistent with our style, and so it deserves a lint
rule.
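The check itself can be as simple as a regex over each line (a sketch,
not our actual linter framework):
```
import re

PATTERN = re.compile(r"^\s*from\s+os\.path\s+import\s+")


def check_file(path: str) -> bool:
    ok = True
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            if PATTERN.match(line):
                print(f"{path}:{lineno}: use 'import os.path' and 'os.path.dirname' instead")
                ok = False
    return ok
```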
Commit message and error text tweaked by tabbott.
Fixes #6543.
Based on the `dry_run` flag, this function either purges the list
of directories passed to it or prints a listing of the directories
it would have purged/kept back, had the `dry_run` flag been false.
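Roughly (names and output format are assumptions):
```
import shutil


def purge_unused_caches(unused_caches, caches_in_use, dry_run, verbose):
    if dry_run:
        print("Would remove the following caches:")
    for cache in sorted(unused_caches):
        if dry_run or verbose:
            print("    " + cache)
        if not dry_run:
            shutil.rmtree(cache)
    if dry_run or verbose:
        print("Keeping back the following caches:")
        for cache in sorted(caches_in_use):
            print("    " + cache)
```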
Apparently, the refactoring to make this script only run when changes
are present was buggy, in that if `apt-get update` failed, running
provision again wouldn't rerun `apt-get update`, resulting in a
broken state that requires expertise to fix. This closes that gap by
using a stamp file to ensure we always successfully update apt before
proceeding.
It doesn't fix existing installations.
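The stamp-file idea, sketched (the paths and hashing detail are
assumptions, not the actual provision code):
```
import hashlib
import os
import subprocess

STAMP = "/var/lib/zulip/apt-update.stamp"
SOURCES = "/etc/apt/sources.list"


def maybe_apt_update():
    digest = hashlib.sha1(open(SOURCES, "rb").read()).hexdigest()
    if os.path.exists(STAMP) and open(STAMP).read().strip() == digest:
        return  # Sources unchanged and the last update succeeded.
    subprocess.check_call(["apt-get", "update"])  # Raises if it fails...
    with open(STAMP, "w") as f:
        f.write(digest)  # ...so we only record success afterwards.
```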
Modify `generate_sha1sum_node_modules()` such that it can calculate
the hash for a particular installation.
Tweaked by tabbott to use os.path.realpath in the setup_dir
calculation, to ensure it's consistent.
This should make it much more likely that users see this before
waiting a long time for other things to happen, since the `apt-get
dist-upgrade` step is really slow. We can't move it further toward the
top, since this requires `lsb_release` to be installed.
Given the path of the directory containing all the caches, a list of
caches in use, and a threshold in days, this function returns a list
of caches which can be removed safely.
This function returns a list of all the deployment directories which
are newer than some threshold number of days, including the
`/root/zulip` directory if it exists.
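Sketched (the directory layout is assumed):
```
import os
import time


def get_recent_deployments(deployments_dir: str, threshold_days: int):
    threshold_secs = threshold_days * 24 * 60 * 60
    recent = []
    for name in os.listdir(deployments_dir):
        path = os.path.join(deployments_dir, name)
        if os.path.isdir(path) and time.time() - os.path.getmtime(path) < threshold_secs:
            recent.append(path)
    if os.path.exists("/root/zulip"):
        recent.append("/root/zulip")
    return recent
```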
This saves us from spending 200-250ms of CPU time importing Django
again just to log that we're running a management command. On
`scripts/restart-server`, this saves us from one thundering herd of
Django startups when all the queue workers are restarted; but there's
still the Django startup for the `manage.py` process itself for each
worker, so on a machine with e.g. 2 (virtual) cores the restart is
still painful.
This causes `upgrade-zulip-from-git`, as well as a no-option run of
`tools/build-release-tarball`, to produce a Zulip install running
Python 3, rather than Python 2. In particular this means that the
virtualenv we create, in which all application code runs, is Python 3.
One shebang line, on `zulip-ec2-configure-interfaces`, explicitly
keeps Python 2, and at least one external ops script, `wal-e`, also
still runs on Python 2. See discussion on the respective previous
commits that made those explicit. There may also be some other
third-party scripts we use, outside of this source tree and running
outside our virtualenv, that still run on Python 2.
We now call the create_large_migrations management command as part of
upgrade-zulip-stage-2 if needed, so that we can create large indexes
while the app is still up.
We can't fully support it until we fix the tsearch_extras availability
issue, but for now, this is an improvement.
Tweaked by tabbott to cover the outstanding tsearch_extras issue.
Also make our dependency on `six` (for e.g. `replace-tarball-shebang`)
explicit -- we've been getting it via `python-pip`, but `python3-pip`
(on trusty) doesn't have that dependency for some reason.
Since we can use both prefer_offline=True and False in a single build,
prefer_offline shouldn't be used as a cache key, or it will confuse the
cleanup script. Since `yarn install` (if successful) should be
idempotent, this will probably be OK.
If we do wind up with a symlink lying around at `local_settings.py`,
it won't do us any harm and shouldn't be materially more confusing
than the regular file we've long had there for almost all installs.
It'll also only last as long as the current deploy. So just
let it be, and simplify the code a bit.
Also add a line to help the reader understand the remaining half of
this logic (which is essential so long as people might have pre-1.4.0
deploys lying around that they eventually get around to trying to
upgrade). The fact that it's addressed to a situation which exists
only in the past of this tree, not in its present, makes a brief
comment potentially very helpful.
This replaces nvm in npm-wrapper by hardcoding the path, the way we do
with node. The main benefit is that this saves a few hundred
milliseconds every time we invoke npm.
For performance reasons, we spawn each linter in a separate OS thread.
The downside of this is that all lints would end up on stdout without
much visual separation, resulting in a confusing error log. This commit
introduces the `print_err` function, which shows which linter each line
of lint output came from.
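A sketch of the idea (not the exact implementation): prefix every line of a
linter's output with that linter's name.
```
import subprocess


def print_err(name: str, color: str, line: bytes) -> None:
    print(f"{color}{name:>10}\033[0m| {line.decode().rstrip()}")


def run_linter(name: str, command: list, color: str) -> int:
    proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    for line in iter(proc.stdout.readline, b""):
        print_err(name, color, line)
    return proc.wait()
```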
Basically we just separate out the sha1sum generation for the node
modules so that it can be reused later for the cache clearance logic.
This is achieved by adding a function which returns the sha1sum-based
hex digest.
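Roughly (the exact inputs to the hash are assumptions):
```
import hashlib
import subprocess


def generate_sha1sum_node_modules(production: bool = False) -> str:
    sha1 = hashlib.sha1()
    # Hash everything that determines the node_modules contents.
    for path in ("package.json", "yarn.lock"):
        with open(path, "rb") as f:
            sha1.update(f.read())
    sha1.update(subprocess.check_output(["node", "--version"]))
    if production:
        sha1.update(b"production")
    return sha1.hexdigest()
```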
The Zulip email mirror script called by postfix had performance/load
issues, because it spent so much time on startup/import due to use of
the Zulip virtualenv.
The script was rewritten using pure python (no Django) to improve
performance.
The install script was failing on 2nd+ attempts if the first attempt
was interrupted.
This failure happened because zulip-venv already existed at
`current_venv_path`. Changing the `ln` command's flags from `-s` to
`-nsf` should make this part of the script idempotent.
This fixes a significant performance issue with LaTeX rendering (and
other things that invoked node) where starting up node took a few
hundred milliseconds due to nvm initialization.
Tweaked by tabbott to avoid copying the node binary itself, instead
using a tiny wrapper script.
This is important primarily because it's possible a future version of
node will expect to find libraries/dependencies/etc. installed via NVM
at some path related to the path of the node binary itself, and that's
more guaranteed with this new model.
Fixes #4618.
This fixes a performance problem where we were previously starting up
a full Django process (~0.7s even on a fast machine) every time a new
email came in, potentially allowing users to accidentally DoS a Zulip
server. Now, we just post over HTTPS, allowing the existing thread
pool support to do its job.
- Add script wrapper to communicate postfix pipe with django web server
over HTTP(S). It uses shared_secret authentication mode.
- Add django view to process messages from email mirror server.
- Clean up the management command `email-mirror`, leaving just the
functionality needed for cron email processing.
- Add routes for new tornado view.
- Change the pipe script in the postfix master process config template
to use the updated script.
- Add tests.
Tweaked by tabbott to adjust the directory and set better defaults.
Fixes #2421.
Follow-on from #2373 / PR https://github.com/zulip/zulip/pull/4316, to also
set an appropriate umask when upgrading, so files have appropriate permissions.
I've tested this by starting from a clean install, deleting /srv/* so new
files are downloaded, and then doing an upgrade. It worked starting with both
a current version from master and an older release installed with a less
restrictive umask and then the umask changed.
Fixes #2373.
* Now queue_workers.py sorts queue names and prints them on their own
line. Previously its output was nondeterministic.
* Simplified grep strategy for removing the "test" worker.
This list was likely to end up out of date quickly, since it wasn't
documented that you need to update it when adding a queue. The best
solution is to just not require it to be updated.
Now that we no longer use node_modules at all in production (it's only
used to generate static assets), we don't include `node_modules` in
the production tarballs, and thus we shouldn't attempt to copy
`node_modules` out of the production tarballs when installing.
Fixes a regression introduced in
d71f2e7b9b.
This saves about a minute of downtime when using
upgrade-zulip-from-git in the default configuration.
It should also save several seconds of downtime when upgrading to a
production release tarball as well.
This indirectly causes the RabbitMQ node name for new Zulip
installations to default to zulip@localhost, which would eliminate the
persistent problems we have had with the node name changing whenever
the hostname changes.
Fixes #194, #465, #1375, #1751.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
This adds a dependency on the realpath package on trusty; we could try
to remove it if needed, but given that realpath is included in
coreutils on Xenial (and presumably anything else modern), I think
it's reasonable to add it.
Fixes #1797.
Previously, success_stamp was touched whenever we used a particular
node_modules version; it makes more sense to only touch it when the
node_modules directory has actually changed.
get_package_names did not correctly strip the GitHub URLs from package
names, resulting in the "package names" for our dependencies installed
from Git being tracked with the complete sha1sum included in the name.
This meant that upgrading our virtualenvs incorrectly ended up
resorting to creating an entirely new virtualenv whenever we changed a
dependency that had previously been installed from GitHub URLs.
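The parsing problem, illustrated with hypothetical requirements lines: for a
Git-pinned requirement, the stable package name is the `#egg=` fragment, not
the URL (which embeds a commit sha and changes on every bump).
```
def get_package_name(requirement: str) -> str:
    requirement = requirement.strip()
    if "#egg=" in requirement:
        return requirement.split("#egg=", 1)[1]
    return requirement.split("==", 1)[0]


assert get_package_name("Django==1.11.2") == "Django"
assert get_package_name(
    "git+https://github.com/example/somepkg.git@1d0a6c#egg=somepkg"
) == "somepkg"
```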
generate-secrets.py now requires --development for development environment
setup or --production for production environment setup (and one of these
options is mandatory).
This solves the problem that it was somewhat easy to accidentally run
generate-secrets.py without the `-d` option while doing manual development
environment setup.
Fixes: #1911.
This is a first pass at building a framework for collecting various
stats about realms, users, streams, etc. Includes:
* New analytics tables for storing counts data
* Raw SQL queries for pulling data from zerver/models.py tables
* Aggregation functions for aggregating hourly stats into daily stats, and
aggregating user/stream level stats into realm level stats
* A management command for pulling the data
Note that counts.py was added to the linter exclude list due to errors
around %%s.
This adds a new system for copying packages from old virtualenvs that
are sufficiently similar to the new virtualenv required.
In practice, this results in a huge performance improvement for
re-provisioning Zulip development environments when the requirements
files have changed (which is the dominant performance problem with
provision today).
Fixes: #1507.
Between releases 1.3.13 and 1.4.0, local_settings.py was renamed to
prod_settings.py. The upgrade scripts were adjusted to reflect this name
change. But because the first part of the upgrade script is run with the
currently installed version's code, the symlink to /etc/zulip/settings.py is
created with the old name. This was causing upgrade-zulip-stage-2 to fail.
Now upgrade-zulip-stage-2 creates the symlink at zproject/prod_settings.py
if it doesn't already exist.
Fixes #1731.
Because rabbitmq doesn't support changing the nodename of a running
rabbitmq node, Zulip installations suffered a plague of issues where
e.g. a Zulip server would reboot, the hostname would change, and
suddenly the local rabbitmq instance being used by Zulip would stop
working.
We address this problem by using, by default, a fixed rabbitmq
nodename, but providing server administrators the option to set the
rabbitmq nodename used by Zulip however they choose.
To upgrade an existing server to use this new configuration, one will
need to add something like the following to /etc/zulip/zulip.conf:
[rabbitmq]
nodename = zulip@localhost
However, I don't believe we have the puppet code in place to make this
work correctly at initial installation unless rabbitmq-server is
already installed (but off), which we can easily set up in Travis CI
but which I haven't been willing to do for the installer. So for now,
this just fixes our Travis CI problems.
Fixes: #1579.
This reverts commit 3f95e567c1.
Apparently `apt-add-repository` fails periodically in CI. I suspect
this is some sort of silly networking problem, but given that all
we're saving is a few lines of code, the old version was better if
this fails basically ever.
Previously, the install script would fail if you passed various
non-default puppet rules, since the code to configure and restart
services that runs later on in the install script largely ran
unconditionally, regardless of whether the relevant service was
actually installed on the target system.
This should make the main install script reusable for installing
e.g. a dedicated Postgres server for use with Zulip.
This reverts commit f1f48f305e.
The use of sklearn unfortunately caused a substantial slowdown to the
Zulip provisioning process, which didn't seem worth it for a
relatively minor feature.
In python 3, subprocess uses bytes for input and output if
universal_newlines=False (the default). It uses str for input and
output if universal_newlines=True.
Since we're dealing with strings here, add universal_newlines=True
to subprocess.check_output calls.
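For example:
```
import subprocess

out_bytes = subprocess.check_output(["echo", "hello"])
# b'hello\n'
out_text = subprocess.check_output(["echo", "hello"], universal_newlines=True)
# 'hello\n'
```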
This is important for both ensuring the Nagios checks work correctly
in production, as well as making sure the `zulip` user can access the
virtualenv (owned by the `travis` user) in Travis CI.
The manage.py change effectively switches the Zulip production server
to use the virtualenv, since all of our supervisord commands for the
various Python services go through manage.py.
Additionally, this migrates the production scripts and Nagios plugins
to use the virtualenv as well.
Apparently, c74a74dc74 introduced a bug
where we are no longer correctly depending on build-essential as part
of the Zulip development environment installation process.
Fixes #1111.
This is needed because hash_reqs.py is used to create a virtualenv.
Currently we only use virtualenv in development, but we will soon
start using it in production. Scripts used in production should be
put in scripts/.
Camo is a caching image proxy, used in Zulip to avoid mixed-content
warnings by proxying HTTP image content over HTTPS. We've been using
it in zulip.com production for years; this change makes it available
in standalone Zulip deployments.
The main function of prompting inside `manage.py migrate` is to ask
the user if they want to delete stale content-types, which is
unimportant and likely scary, so we disable doing so.
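For example, when our scripts shell out to run migrations, Django's
`--noinput` flag is one way to skip such prompts (shown here as an
illustration, not necessarily the exact mechanism used):
```
import subprocess

subprocess.check_call(["./manage.py", "migrate", "--noinput"])
```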
This automatically loads settings, zerver.models.* and
zerver.lib.actions.* when you start `manage.py shell`, which should
save a bit of time basically every time someone uses it.
Fixes #275.
A common issue when doing a Zulip upgrade is trying to pass
upgrade-zulip a tarball path under /root, which doesn't work because
the Zulip user doesn't have permission to read the tarball. We
could fix this by just unpacking the tarballs as root, but it seemed
like a nicer approach would be to archive the release tarballs
somewhere readable by the Zulip user (/home/zulip/archives) and unpack
them from there.
Fixes #208.
The point of the lock is to prevent two deployments happening at the
same time and racing with each other, not to prevent doing any future
deployments after an error happens (which is what the current
implementation does in practice).
Addresses part of #208.
The #! line processing interpreted the argument to pass to `env` as
"python2.7 -u", which obviously isn't a real program.
We fix this by setting the PYTHONUNBUFFERED environment variable
inside the program, which has the same effect.
Thanks to Dan Fedele for the bug report and suggested solution!
With this change, we are now testing the production static asset
pipeline and installation process in a new testing job (and also run
the frontend/backend tests separately).
This means that changes that break the Zulip static asset pipeline or
production installation process are more likely to fail tests. The
testing is imperfect in that it does not have proper isolation -- we
build a complete Zulip development environment and then install a
Zulip production environment on top of it, so e.g. any apt
dependencies installed for Zulip development will still be available
for the Zulip production environment. But, it's better than nothing!
A good v2 of this would be to have the production setup process just
install the minimum stuff needed to run `build-release-tarball` and
then uninstall it / clean it up so that we can do a more clear
production installation, but that's more work.
While the docs on https://www.zulip.org/server.html say:
```
cd /root/zulip
./scripts/setup/install
```
This script downloads the `python-django-guardian_1.3-1~zulip4_all.deb` file to the current working dir (`/root/zulip` if you follow the docs), but tries to install it from /root/.
This obviously fails. So I changed the download location to /tmp/.
We don't use apache in the main app -- only for the SSO situation --
this code was just copied from our own install script. And it caused
problems at CUSTOMER13 because they installed Apache in preparation for
the SSO integration, but restarting it failed.
(imported from commit 3f2961574134847c836e8b69736f60d9f8790201)