Sometimes it's useful to run two copies of test-backend at the same
time. The problem with doing so is that we need to make sure no two
threads are using the same test database ID.
Previously, this worked only if at most one of those copies was
running in the parallel mode, because we used a random database ID
for the single-threaded code path, but the same IDs counting from 0
for the parallel code path.
Fix this, mostly, by generating a random start for the range of IDs
used by the process, and then counting off database IDs from there
(in both the parallel and non-parallel modes).
There's still a very low probability race, see the TODO.
Additionally, there appear to be some other races, not related to the
database, when running two copies of test-backend at the same time.
See https://github.com/zulip/zulip/issues/12426 for a follow-up issue
that this more or less creates.
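A minimal sketch of the id-allocation idea (the identifiers here are
illustrative, not the actual ones in tools/test-backend):

    import random

    # Pick a random starting point for this process's block of test
    # database ids; both the parallel and non-parallel code paths then
    # count off from that same start.
    random_id_range_start = random.randint(1, 10000000)

    def get_database_id(worker_id=0):
        # Worker N uses start + N; the single-threaded path uses
        # worker_id=0.  Two concurrent test-backend processes pick
        # different random starts, so collisions are very unlikely
        # (though still possible; see the TODO).
        return random_id_range_start + worker_id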
This optimizes test-backend by skipping webhook
tests when run in default mode.
Tweaked by tabbott to extend the documentation and update the CI
commands.
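Roughly, the default-mode filtering could look like this sketch (the
helper and the flag it stands in for are assumptions, not quotes of
tools/test-backend):

    def filter_default_suites(suites, include_webhooks=False):
        # Hypothetical helper: drop the webhook suites from a default
        # run unless the caller explicitly opts in.
        if include_webhooks:
            return suites
        return [suite for suite in suites
                if not suite.startswith("zerver.webhooks")]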
This code takes care of environments running Python 3.4 when a test
label is passed directly to the test-backend command:
./tools/test-backend test_alert_words
In Django 1.11, the args argument of the run_subsuite function is a
tuple of length 4. The new argument is the test runner class, which we
do not need because we run tests using our own version of the test
runner.
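A sketch of how our run_subsuite unpacks that tuple, assuming the
runner class is the first element (the helper name below is
hypothetical):

    def run_subsuite(args):
        # Django 1.11 passes a 4-tuple whose first element is the test
        # runner class; we discard it because we run the tests with
        # our own runner.
        _runner_class, subsuite_index, subsuite, failfast = args
        failures = run_tests_with_our_runner(subsuite, failfast)  # hypothetical helper
        return subsuite_index, failures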
We enable the data_suffix option when creating Coverage instances,
which causes the output files to include the hostname, pid, and a
random id. Before each run, erase is called, which clears all existing
coverage data files. Then, at the end of the test run, we use the
combine method, which merges the reports.
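data_suffix, erase, and combine are standard coverage.py APIs; a
minimal sketch of the flow:

    from coverage import Coverage

    # data_suffix=True makes each process write its own data file,
    # suffixed with the hostname, pid, and a random id, so concurrent
    # processes do not clobber each other's data.
    cov = Coverage(data_suffix=True)
    cov.erase()    # clear coverage data files left over from earlier runs
    cov.start()
    # ... run the test suite ...
    cov.stop()
    cov.save()
    cov.combine()  # merge the per-process data files into a single report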
We collect coverage in the main process, which captures data from
imports and also handles the single-process mode. In the workers, we
collect coverage in run_subsuite. This creates more stats files than
strictly required, but I don't see a better place to save the stats
when stopping workers.
Note that this has the side effect of enabling parallel testing in
Travis CI.
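In rough terms, the worker-side collection could wrap run_subsuite
like this sketch (the real code may structure it differently):

    from coverage import Coverage

    def run_subsuite_with_coverage(args):
        # Worker-side wrapper (a sketch): each subsuite records its own
        # coverage data file.  That yields more stats files than
        # strictly needed, but there is no convenient hook for saving
        # data when a worker process is eventually stopped.
        cov = Coverage(data_suffix=True)
        cov.start()
        try:
            return run_subsuite(args)
        finally:
            cov.stop()
            cov.save()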
We need to reset the INSTRUMENTED_CALLS variable before running every
subsuite. If we do not do this, then a subsuite running on a particular
process A will send back the instrumented calls accumulated by all the
previous subsuites that also ran on process A.
This also fixes the extra delay that we used to experience after the
tests had finished running. The extra delay was due to the duplicate
instrumented calls in the INSTRUMENTED_CALLS list. The size of this
list used to be ~100k in parallel mode, as opposed to ~1800 in serial
mode.
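A sketch of the reset, assuming INSTRUMENTED_CALLS lives in
zerver.lib.test_helpers (the module path and helper name are
assumptions):

    from zerver.lib import test_helpers

    def reset_instrumented_calls():
        # Clear the list in place before each subsuite runs, so the
        # worker only sends back the calls made by the current subsuite
        # rather than everything accumulated by earlier subsuites on
        # the same process.
        del test_helpers.INSTRUMENTED_CALLS[:]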
This commit creates a dedicated file upload directory for every process
when we are running tests in parallel mode. This makes sure that we do
not run into any race conditions due to multiple processes accessing
the same upload directory.
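A sketch of the per-process setup, assuming the upload path comes from
a setting like LOCAL_UPLOADS_DIR (using the pid is a simplification):

    import os
    from django.conf import settings

    def init_worker(counter):
        # Give each worker process its own upload directory, so
        # parallel workers never write to the same files.  Any
        # per-process unique id would do in place of the pid.
        settings.LOCAL_UPLOADS_DIR = "{}_{}".format(
            settings.LOCAL_UPLOADS_DIR, os.getpid())
        os.makedirs(settings.LOCAL_UPLOADS_DIR, exist_ok=True)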
Apparently, Django's _destroy_test_db has a mostly unnecessary
sleep(1) before dropping the database, which obviously wastes a bunch
of time in the single-test runtime of its database teardown logic.
We work around this by monkey-patching that function to skip the sleep.
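The monkey-patch is roughly the following sketch; the body mirrors
Django's _destroy_test_db minus the sleep, assuming a Django version
where the connection still exposes _nodb_connection:

    from django.db.backends.base.creation import BaseDatabaseCreation

    def _replacement_destroy_test_db(self, test_database_name, verbosity):
        # Identical in spirit to Django's version, minus the
        # time.sleep(1) pause before dropping the database.
        with self.connection._nodb_connection.cursor() as cursor:
            cursor.execute("DROP DATABASE %s"
                           % self.connection.ops.quote_name(test_database_name))

    BaseDatabaseCreation._destroy_test_db = _replacement_destroy_test_db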
Instead of zulip_test, use zulip_test_template for the backend DB.
This makes sure that the DB used by the backend tests is different
from the one used by the Casper tests, which will remain zulip_test.
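An illustrative Django settings fragment for this split (the real
Zulip settings layout likely differs):

    # The backend test suite builds its databases from
    # zulip_test_template, leaving zulip_test for the Casper suite.
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "zulip",
            "TEST": {"NAME": "zulip_test_template"},
        },
    }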
This adds the option '--rerun' to the `test-backend` infrastructure.
It runs the tests that failed during the last `test-backend` run. It
works by storing the failed test info at var/last_test_failure.json.
Cleaned up by Umair Khan and Tim Abbott.
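A minimal sketch of the mechanism (the helper names are illustrative;
the path comes from this commit message):

    import json
    import os

    TEST_FAILURE_PATH = "var/last_test_failure.json"

    def write_failed_tests(failed_tests):
        # Record the ids of the tests that failed, so a later
        # `test-backend --rerun` can replay just those tests.
        os.makedirs(os.path.dirname(TEST_FAILURE_PATH), exist_ok=True)
        with open(TEST_FAILURE_PATH, "w") as f:
            json.dump(failed_tests, f)

    def read_failed_tests():
        with open(TEST_FAILURE_PATH) as f:
            return json.load(f)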