Previously, client descriptors were referenced directly from the
handler object. Once we split the Tornado process into separate queue
and connection servers, these will no longer be in the same process,
so we need to reference them by ID instead.
This commit is somewhat ugly, but it serves as early preparation for
splitting Tornado into a queue server and a frontend server; this code
belongs, by and large, in the queue server component.
89a2765553 didn't include the database
migration corresponding to the change, which means it didn't take full
effect when it was merged.
I noticed this because `manage.py makemigrations` would generate these
migrations; that suggests a good idea for a test to add.
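A rough sketch of such a test, relying on Django's `makemigrations
--dry-run` output (the test class name is made up):

```python
from io import StringIO

from django.core.management import call_command
from django.test import TestCase

class MigrationStatusTest(TestCase):
    def test_migrations_are_not_missing(self):
        # `makemigrations --dry-run` prints the migrations it *would*
        # create; when the models and the checked-in migrations agree,
        # it reports "No changes detected" instead.
        output = StringIO()
        call_command("makemigrations", "--dry-run", stdout=output)
        self.assertIn("No changes detected", output.getvalue())
```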
This solves the problem reported in #331 of needing to specify
--provider=lxc to use the LXC provider in an Ubuntu Linux environment.
It also adds the LXC option needed to run LXC on Ubuntu 15.10, but not
on 14.04, where that option is unavailable and would completely break
LXC.
It's possible we should just eliminate this mechanism entirely, but this
fixes the immediate problem of the multi-line get_subscribers endpoint
description being handled incorrectly.
Previously these were hardcoded in zproject/settings.py to be accessed
on localhost.
[Modified by Tim Abbott to adjust comments and fix configure-rabbitmq]
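A minimal sketch of the shape of the change, assuming the services in
question include RabbitMQ (the setting name and the environment-variable
mechanism here are illustrative, not the actual configuration path):

```python
# zproject/settings.py (illustrative excerpt)
import os

# Instead of hardcoding "localhost", allow the host to be configured,
# falling back to the old behavior when nothing is set.
RABBITMQ_HOST = os.environ.get("RABBITMQ_HOST", "localhost")
```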
Travis CI's model of installing every version of postgres on the test
VM and then shutting down all the versions other than the one requested
doesn't seem to work very well with doing apt upgrades. The best way to
resolve this seems to be to simply uninstall the versions we don't
need.
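A rough sketch of the idea, assuming the cleanup lives in a Python
provisioning script (the version list and desired version are
placeholders):

```python
import subprocess

DESIRED_VERSION = "9.3"
INSTALLED_VERSIONS = ["9.1", "9.2", "9.3", "9.4", "9.5"]

for version in INSTALLED_VERSIONS:
    if version != DESIRED_VERSION:
        # Remove the package entirely rather than merely stopping the
        # service, so apt upgrades don't trip over it later.
        subprocess.check_call(
            ["sudo", "apt-get", "remove", "-y", "postgresql-" + version]
        )
```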
A common issue when doing a Zulip upgrade is trying to pass
upgrade-zulip a tarball path under /root, which doesn't work because
the Zulip user doesn't have permission to read the tarball. We could
fix this by just unpacking the tarballs as root, but a nicer approach
is to archive the release tarballs somewhere readable by the Zulip user
(/home/zulip/archives) and unpack them from there.
Fixes #208.
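A rough sketch of the approach, not the actual upgrade-zulip code (the
paths and helper name are illustrative):

```python
import os
import shutil
import subprocess

ARCHIVE_DIR = "/home/zulip/archives"

def archive_and_unpack(tarball_path):
    """Copy a release tarball somewhere the zulip user can read it,
    then unpack it from the archive rather than the original path."""
    # In practice the copy happens as root, since the original tarball
    # (e.g. under /root) may not be readable by the zulip user.
    if not os.path.exists(ARCHIVE_DIR):
        os.makedirs(ARCHIVE_DIR)
    archived_path = os.path.join(ARCHIVE_DIR, os.path.basename(tarball_path))
    shutil.copyfile(tarball_path, archived_path)
    subprocess.check_call(["tar", "-xzf", archived_path, "-C", ARCHIVE_DIR])
    return archived_path
```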
The point of the lock is to prevent two deployments from happening at
the same time and racing with each other, not to prevent any future
deployments after an error occurs (which is what the current
implementation does in practice).
Addresses part of #208.
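A minimal sketch of the intended behavior, assuming a directory-based
lock (the path and function names are made up):

```python
import os

LOCK_DIR = "/var/lock/zulip-deploy"

def run_locked_deployment(deploy):
    # mkdir is atomic: it fails if another deployment already holds the lock.
    try:
        os.mkdir(LOCK_DIR)
    except OSError:
        raise RuntimeError("Another deployment appears to be in progress.")
    try:
        deploy()
    finally:
        # Release the lock on success *and* on failure, so one failed
        # deployment doesn't block every future one.
        os.rmdir(LOCK_DIR)
```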
This link was broken when we hardened the access model for user file
uploads to not work cross-realm. The right solution is just to
include the image in the codebase so it's guaranteed to exist.
Fixes #205.