This works around a frequent problem we've been having with Travis
CI's apt repository being busted on startup. This, in turn, caused
`apt-get install moreutils` to fail.
We attempt to fix this by adding our own retry logic.
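As a sketch, the retry logic amounts to something like the following
(the retry count and delay here are illustrative, not necessarily what
our scripts use):

import subprocess
import sys
import time

def apt_install_with_retries(packages, attempts=3, delay=15):
    # Travis's apt mirror is sometimes briefly busted right after the
    # VM boots, so retry a few times before giving up for real.
    for attempt in range(1, attempts + 1):
        if subprocess.call(["sudo", "apt-get", "install", "-y"] + packages) == 0:
            return
        print("apt-get install failed (attempt %d of %d); retrying..."
              % (attempt, attempts))
        time.sleep(delay)
    sys.exit("apt-get install failed after %d attempts" % (attempts,))

apt_install_with_retries(["moreutils"])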
Also add a comment explaining why the order in this file matters.
It's not obvious that it should, so it would otherwise be tempting
to reorder the entries purely for readability.
This commit fixes the order in which we run the test suites in Travis
CI. Most PRs include changes only to the backend or frontend, and it is
fairly rare for them to break the production test suite. Since Travis
runs at most four jobs at a time, developers usually had to wait for
the production suite to complete before the backend test suite, which
has a fairly large chance of being broken, could start. Now, we run
the production test suite last.
Apparently, when we implemented the emoji cache for development, we
failed to properly register its directory for Travis CI.
Commit message rewritten by tabbott.
Apparently, when we migrated local development to use the
zulip-npm-cache structure, we failed to update the .travis.yml file
with the appropriate directory (and instead just uploaded a
node_modules symlink).
(Most of this work was done by acrefoot in an earlier branch.
I took over the branch to fix casper tests that were broken during
the upgrade (which were fixed in a different commit). I also
made most of the changes to run-casper.)
This also upgrades phantomjs to 2.1.7.
The huge structural change here is that we no longer vendor casperjs
or download phantomjs with our own script. Instead, we just use
casperjs and phantomjs from npm, via package.json.
Another thing that we do now is run casperjs tests individually, so
that we don't get strange test flakes from test interactions. (Tests
can still influence each other in terms of changing data, since we
don't yet have code to clear the test database in between tests.)
A lot of this diff is just removing files and obsolete configurations.
The main new piece is in package.json, which causes npm to install the
new version.
Also, run-casper now runs files individually, as mentioned above.
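In sketch form, the new run-casper behavior is a loop like the
following (the paths and the casperjs invocation are assumptions for
illustration):

import glob
import subprocess
import sys

CASPER_BIN = "node_modules/.bin/casperjs"  # installed via package.json
TEST_FILES = sorted(glob.glob("frontend_tests/casper_tests/*.js"))

for test_file in TEST_FILES:
    # One casperjs process per file, so leaked state in one test can't
    # cause strange flakes in the next.
    print("Running %s" % (test_file,))
    if subprocess.call([CASPER_BIN, "test", test_file]) != 0:
        sys.exit("Casper test failed: %s" % (test_file,))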
We had vendored casperjs in the past. I didn't bring over any of our
changes. Some of the changes were performance-related (primarily
5fd58cf249), so the upgraded version may
be slower in some instances. (I didn't do much measurement of that,
since most of our slowness when running tests is about the setup
environment, not casper itself.) Any bugs we may have fixed in the
past were either magically fixed by changes to casper itself or
addressed by improvements we have made in the tests themselves over
the years.
Tim tested the Casper suite on his machine, and running the full
Casper test suite is faster than it was before this change (1m30
vs. 1m50), so we're at least not regressing overall performance.
You need to define the following settings in Travis:
- ARTIFACTS_KEY
- ARTIFACTS_SECRET
- ARTIFACTS_BUCKET
- ARTIFACTS_REGION (if it's other than the standard region)
Since we are uploading everything under `var/casper` to S3, this
commit also uploads the server.log to S3.
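For illustration, the upload amounts to roughly the following (in
practice the Travis artifacts addon handles this based on the settings
above; the boto3 usage and bucket layout here are assumptions):

import os
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id=os.environ["ARTIFACTS_KEY"],
    aws_secret_access_key=os.environ["ARTIFACTS_SECRET"],
    region_name=os.environ.get("ARTIFACTS_REGION", "us-east-1"),
)
bucket = os.environ["ARTIFACTS_BUCKET"]

# Everything under var/casper (including server.log) gets uploaded,
# keyed by its relative path.
for root, _, files in os.walk("var/casper"):
    for name in files:
        path = os.path.join(root, name)
        s3.upload_file(path, bucket, path)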
Fixes: #1263, Fixes: #1477
NVM takes a specific node version and installs the node package and
a corresponding compatible npm package.
We use it in a somewhat hackish way to install node/npm globally with
a pinned version, since that's how we actually want to consume node in
our development environment.
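Sketched in Python, the hack looks something like this (assuming nvm
is installed under ~/.nvm; the pinned version and symlink targets are
illustrative):

import os
import subprocess

NODE_VERSION = "0.10"  # illustrative pin; provision holds the real one

# nvm is a shell function, so it has to run inside bash with nvm.sh
# sourced.
subprocess.check_call([
    "bash", "-c",
    ". ~/.nvm/nvm.sh && nvm install %s && nvm alias default %s"
    % (NODE_VERSION, NODE_VERSION),
])

# The "global" part: symlink the nvm-managed binaries into
# /usr/local/bin so node and npm behave as if installed system-wide.
node_path = subprocess.check_output([
    "bash", "-c", ". ~/.nvm/nvm.sh && nvm which %s" % (NODE_VERSION,)
]).decode().strip()
for tool in ("node", "npm"):
    link = "/usr/local/bin/%s" % (tool,)
    if os.path.lexists(link):
        os.unlink(link)
    os.symlink(os.path.join(os.path.dirname(node_path), tool), link)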
Other details:
- Travis CI is now configured to use the version of node installed by
provision; the easiest way to do this was to sabotage the existing node
installation.
- jsdom is upgraded to a current version, which both requires recent
node and also is required for the tests to pass with recent node.
This fixes running the node tests on Xenial.
Fixes #1498.
[tweaked by tabbott]
We set the COVERAGE_FILE environment variable which controls the
output file path for the .coverage file produced by python-coverage,
and also move the mypy coverage file to that location.
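A small sketch of the mechanics (the paths and the mypy coverage
filename are assumptions for illustration):

import os
import shutil
import subprocess

# Point python-coverage's data file at var/ instead of the repo root.
os.environ["COVERAGE_FILE"] = os.path.join("var", ".coverage")
subprocess.check_call(["coverage", "run", "./tools/test-backend"])

# Move mypy's coverage output to the same place.
if os.path.exists(".mypy_coverage"):
    shutil.move(".mypy_coverage", os.path.join("var", ".mypy_coverage"))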
The node packages 'jQuery' and 'jquery' are different: 'jQuery' is the
legacy support package that Zulip needs, so the require statements in
the tests were updated accordingly.
Travis uses node 4.0 by default and we are using 0.10, so the command to
install the correct version had to be added to the .travis.yml file.
This tests whether a new patch introduces any regressions related to
any of the Python 3 compatibility fixers we've run in the past, so
that we can make continuous forward progress on our path towards
Python 3 compatibility.
This produces error output that looks like this:
"""
Testing for additions of Python 2 patterns we've removed as part of moving towards Python 3 compatibility.
Running Python 3 compatibility test lib2to3.fixes.fix_apply
Running Python 3 compatibility test lib2to3.fixes.fix_except
diff --git a/zerver/views/__init__.py b/zerver/views/__init__.py
index b5c0102..2defd46 100644
--- a/zerver/views/__init__.py
+++ b/zerver/views/__init__.py
@@ -296,7 +296,7 @@ def accounts_register(request):
do_activate_user(user_profile)
do_change_password(user_profile, password)
do_change_full_name(user_profile, full_name)
- except UserProfile.DoesNotExist, e:
+ except UserProfile.DoesNotExist as e:
user_profile = do_create_user(email, password, realm, full_name, short_name,
prereg_user=prereg_user,
newsletter_data={"IP": request.META['REMOTE_ADDR']})
Python 3 compatibility error(s) detected! See diff above for what you need to change.
"""
With this change, we are now testing the production static asset
pipeline and installation process in a new testing job (and also run
the frontend/backend tests separately).
This means that changes that break the Zulip static asset pipeline or
production installation process are more likely to fail tests. The
testing is imperfect in that it does not have proper isolation -- we
build a complete Zulip development environment and then install a
Zulip production environment on top of it, so e.g. any apt
dependencies installed for Zulip development will still be available
for the Zulip production environment. But it's better than nothing!
A good v2 of this would be to have the production setup process just
install the minimum stuff needed to run `build-release-tarball` and
then uninstall it / clean it up so that we can do a cleaner
production installation, but that's more work.
Apparently it isn't supposed to work reliably with the container-based
infrastructure that we're using, and empirically it's causing build
failures.
Thanks to @mijime for tracking this down.