In this commit we add a wrapper around the provisioning step inside
setup-backend that retries provisioning if it fails, in the hope that
a run interrupted by a transient network issue the first time will
succeed on the second attempt.
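
A minimal sketch of the retry idea, written in Python for
illustration (the actual wrapper lives in the setup-backend script;
the function name and command path here are assumptions):

    import subprocess

    def provision_with_retry(cmd, attempts=2):
        # Retry so that a transient network failure does not
        # immediately fail the build.
        for attempt in range(1, attempts + 1):
            try:
                subprocess.check_call(cmd)
                return
            except subprocess.CalledProcessError:
                if attempt == attempts:
                    raise
                print("Provisioning failed (attempt %d); retrying..." % attempt)

    # e.g.: provision_with_retry(["tools/provision"])
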
Fixes: #2034.
Before this commit, provisioning was done by executing provision.py
directly, which printed its log only to stdout, making debugging
harder. This commit adds a wrapper bash script 'provision' in tools,
which calls 'zulip/scripts/tools/provision_vm.py' (the new location of
provision.py) and uses 'tee' to both print all the output and save a
copy to 'zulip/var/log/zulip/zulip_provision.log'.
Travis tests and docs have been updated accordingly.
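
A rough Python equivalent of what the bash wrapper does (the real
script is a short bash wrapper around 'tee'; the exact argument and
exit-code handling here are assumptions):

    import subprocess, sys

    LOG = "zulip/var/log/zulip/zulip_provision.log"

    # Run provision_vm.py, copying each line of its output to both
    # stdout and the log file, like `provision_vm.py 2>&1 | tee LOG`.
    proc = subprocess.Popen(
        ["zulip/scripts/tools/provision_vm.py"] + sys.argv[1:],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
    )
    with open(LOG, "wb") as log:
        for line in proc.stdout:
            sys.stdout.buffer.write(line)
            sys.stdout.flush()
            log.write(line)
    sys.exit(proc.wait())
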
The `with sh.sudo` pattern that we were using in python-sh was
deprecated, and empirically hangs on Ubuntu xenial. Since the use of
python-sh/python-pbs caused trouble in general (extra dependencies,
confusing syntax), this commit just removes it.
We replace it with a new zulip_tools.py library function that echoes
the command line and streams the command's output.
We make the same change to install-phantomjs so we can remove that
dependency as well.
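
A minimal sketch of such a helper (the actual name and signature in
zulip_tools.py may differ):

    import subprocess

    def run(args, **kwargs):
        # Echo the command line being run, then execute it with its
        # output streaming directly to the caller's stdout/stderr
        # (nothing is captured or buffered), so progress is visible.
        print("+ " + " ".join(args))
        subprocess.check_call(args, **kwargs)

    # e.g.: run(["sudo", "apt-get", "update"])
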
With this change, we now test the production static asset pipeline
and installation process in a new testing job (and also run the
frontend and backend tests separately).
This means that changes that break the Zulip static asset pipeline or
production installation process are more likely to fail tests. The
testing is imperfect in that it does not have proper isolation -- we
build a complete Zulip development environment and then install a
Zulip production environment on top of it, so e.g. any apt
dependencies installed for Zulip development will still be available
for the Zulip production environment. But, it's better than nothing!
A good v2 of this would be to have the production setup process
install just the minimum needed to run `build-release-tarball`, and
then uninstall/clean that up so that we can do a cleaner production
installation; but that's more work.