#!/usr/bin/env bash
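
# Run the Zulip installer, from a release tarball or an unpacked release tree,
# inside a fresh ephemeral LXC container, so every test run starts from a
# clean base image.
#
# Illustrative invocation (the release name, tarball path, and installer
# options here are examples, not fixed values):
#
#   sudo ./install -r bionic ./zulip-server.tar.gz \
#       --hostname=zulip.example.com --email=admin@example.com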
set -e
usage() {
    echo "usage: install -r RELEASE {TARBALL|DIR} [...installer opts..]" >&2
    exit 1
}
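
# Parse our own options with getopt(1); everything after the TARBALL|DIR
# argument is collected below and passed through to the Zulip installer
# unchanged.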
args="$(getopt -o +r: --long help,release: -- "$@")"
eval "set -- $args"
while true; do
    case "$1" in
        --help) usage;;
        -r|--release) RELEASE="$2"; shift; shift;;
        --) shift; break;;
        *) usage;;
    esac
done
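
# The first positional argument is the release tarball or unpacked directory;
# anything that follows goes to the installer verbatim.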
INSTALLER="$1"; shift || usage
INSTALLER_ARGS=("$@"); set --
if [ -z "$RELEASE" ] || [ -z "$INSTALLER" ]; then
    usage
fi
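
# Creating containers and mounting overlays requires root.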
if [ "$EUID" -ne 0 ]; then
    echo "error: this script must be run as root" >&2
    exit 1
fi
set -x
THIS_DIR="$(dirname "$(readlink -f "$0")")"
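
# Make sure the per-release base container exists; prepare-base builds it on
# first use, and later runs clone from it rather than installing from scratch.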
BASE_CONTAINER_NAME=zulip-install-"$RELEASE"-base
if ! lxc-info -n "$BASE_CONTAINER_NAME" >/dev/null 2>&1; then
    "$THIS_DIR"/prepare-base "$RELEASE"
fi
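
# Pick a fresh name: keep drawing random temp directories until the derived
# container name isn't already taken.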
while [ -z "$CONTAINER_NAME" ] || lxc-info -n "$CONTAINER_NAME" >/dev/null 2>&1; do
    shared_dir="$(mktemp -d --tmpdir "$RELEASE"-XXXXX)"
    CONTAINER_NAME=zulip-install-"$(basename "$shared_dir")"
done
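
# Print, on exit, how to reach the container and the shared source tree, so
# the user doesn't have to dig the names out of the set -x output.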
message="$(cat <<EOF

Container:
  sudo lxc-attach -n $CONTAINER_NAME

Unpacked tree:
  sudo ls $shared_dir/mnt/zulip-server

EOF
)"
trap 'set +x; echo "$message"' EXIT
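
# Accept either an already-unpacked release directory or a tarball; a tarball
# gets unpacked into a temp directory as zulip-server/.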
if [ -d "$INSTALLER" ]; then
    installer_dir="$(readlink -f "$INSTALLER")"
else
    installer_dir="$(mktemp -d --tmpdir zulip-server-XXXXX)"
    tar -xf "$INSTALLER" -C "$installer_dir" --transform='s,^[^/]*,zulip-server,'
fi
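
# Cache pip downloads on the host, shared across test containers.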
mkdir -p /srv/zulip/test-install/pip-cache
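
# Mount an overlay over the release tree: the installer (via the container's
# /mnt/src bind mount) can modify the upper layer while the original tree
# stays untouched, and the host can inspect the result at $shared_dir/mnt.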
mkdir "$shared_dir"/upper "$shared_dir"/work "$shared_dir"/mnt
mount -t overlay overlay \
    -o lowerdir="$installer_dir",upperdir="$shared_dir"/upper,workdir="$shared_dir"/work \
    "$shared_dir"/mnt
lxc-copy --ephemeral --keepdata -n "$BASE_CONTAINER_NAME" -N "$CONTAINER_NAME" \
    -m bind="$shared_dir"/mnt:/mnt/src/,bind=/srv/zulip/test-install/pip-cache:/root/.cache/pip
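
# Wait for the fresh container to come up (see the sibling lxc-wait script)
# before attaching to it.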
"$THIS_DIR"/lxc-wait -n "$CONTAINER_NAME"
run() {
    lxc-attach -n "$CONTAINER_NAME" -- "$@"
}
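
# Run the installer from the bind-mounted overlay tree. eatmydata turns
# fsync() into a no-op to speed up package installation, and
# --self-signed-cert lets the install finish without a real TLS certificate.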
run eatmydata -- /mnt/src/zulip-server/scripts/setup/install --self-signed-cert "${INSTALLER_ARGS[@]}"
# TODO settings.py, initialize-database, create realm
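# A rough sketch of those follow-up steps (standard Zulip production commands;
# the exact invocations may differ from what this script will eventually do):
#   run su zulip -c '/home/zulip/deployments/current/scripts/setup/initialize-database'
#   run su zulip -c '/home/zulip/deployments/current/manage.py generate_realm_creation_link'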