Sometimes it's useful to run two copies of test-backend at the same
time. The problem with doing so is that we need to make sure no two
threads are using the same test database ID.
Previously, this worked only if at most one of those copies was
running in the single-threaded mode, because we used a random database
ID for the single-threaded code path, but the same IDs counting from 0
for the parallel code path.
Fix this, mostly, by generating a random start for the range of IDs
used by the process, and then counting off database IDs starting from
there (in both the parallel and non-parallel modes).
There's still a very low-probability race; see the TODO.
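A minimal sketch of the idea (the names random_id_range_start and
get_database_id are illustrative, not necessarily the actual
implementation):

    import random

    # Pick a random starting point for this process's block of database IDs;
    # both the parallel and non-parallel code paths then count up from here,
    # so two concurrent test-backend runs are very unlikely to collide.
    random_id_range_start = random.randint(1, 10000000) * 100

    def get_database_id(worker_id=0):
        # worker_id is 0 in non-parallel mode and the worker's index otherwise.
        return random_id_range_start + worker_id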
Additionally, there appear to be some other races with running two
copies of test-backend at the same time not related to the database.
See https://github.com/zulip/zulip/issues/12426 for a follow-up issue
that's sorta created by this.
This optimizes test-backend by skipping webhook
tests when run in default mode.
Tweaked by tabbott to extend the documentation and update the CI
commands.
This code takes care of the environment running Python 3.4 when a
test label is passed directly to the test-backend command:
./tools/test-backend test_alert_words
In Django 1.11, the args argument of the run_subsuite function is a
tuple of length 4. The new argument is the test runner class, which we
do not need because we run tests using our own version of the test
runner.
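A rough sketch of how the 4-tuple can be unpacked while ignoring the
runner class (here Django's RemoteTestRunner stands in for our own
runner subclass):

    from django.test.runner import RemoteTestRunner

    def run_subsuite(args):
        # Django 1.11 passes (runner_class, subsuite_index, subsuite, failfast);
        # we discard runner_class and use our own runner instead.
        _runner_class, subsuite_index, subsuite, failfast = args
        runner = RemoteTestRunner(failfast=failfast)
        result = runner.run(subsuite)
        return subsuite_index, result.events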
We enable the data_suffix option when creating Coverage instances,
which causes the output files to include the hostname, pid, and a
random id. Before each run, erase is called, which clears all existing
coverage data files. Then, at the end of the test run, we use the
combine method, which merges the reports.
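The coverage.py calls involved look roughly like this (the exact call
sites are illustrative):

    from coverage import Coverage

    cov = Coverage(data_suffix=True)  # output file gets hostname.pid.randomid suffix
    cov.erase()                       # before the run: drop stale .coverage.* files
    cov.start()
    # ... run a subsuite or the whole suite ...
    cov.stop()
    cov.save()                        # writes .coverage.<hostname>.<pid>.<random>
    cov.combine()                     # after the run: merge all data files into one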
We collect coverage in the main process, which captures data from
imports and also handles the single-process mode. In the workers, we
collect coverage in run_subsuite. This creates more stats files than
strictly required, but I don't see a better place to save the stats
when stopping workers.
Note that this has the side effect of enabling parallel testing in
Travis CI.
We need to reset the INSTRUMENTED_CALLS variable before running every
subsuite. If we do not do this, then a subsuite running on a particular
process A will send the accumulated instrumented calls gathered by the
previous subsuites that also ran on process A.
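A minimal sketch, assuming INSTRUMENTED_CALLS is a module-level list in
zerver.lib.test_helpers (the exact location and helper name are
assumptions here):

    from zerver.lib import test_helpers

    def reset_instrumented_calls():
        # Called at the start of each subsuite: drop the calls accumulated by
        # earlier subsuites that ran on this same worker process, so we only
        # report what the upcoming subsuite actually instrumented.
        del test_helpers.INSTRUMENTED_CALLS[:]

Clearing the list in place (rather than rebinding it) keeps any existing
references to the list valid.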
This also fixes the extra delay that we used to experience after the
tests had finished running. The extra delay was due to the duplicate
instrumented calls in the INSTRUMENTED_CALLS list; the size of this
list used to be ~100k in parallel mode, as opposed to ~1800 in serial
mode.
This commit creates a dedicated file upload directory for every process
when we are running tests in parallel mode. This makes sure that we do
not run into any race conditions due to multiple processes accessing
the same upload directory.
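A sketch of the per-process setup, assuming a worker initialization
hook and that the upload path lives in settings.LOCAL_UPLOADS_DIR:

    import os
    from django.conf import settings

    def init_worker(worker_id):
        # Give each test process its own upload directory so parallel workers
        # never read or write the same files.
        settings.LOCAL_UPLOADS_DIR = "{}_{}".format(
            settings.LOCAL_UPLOADS_DIR, worker_id
        )
        if not os.path.exists(settings.LOCAL_UPLOADS_DIR):
            os.makedirs(settings.LOCAL_UPLOADS_DIR)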
Apparently, Django's _destroy_test_db has a mostly unnecessary
sleep(1) before dropping the database, which obviously wastes a bunch
of time in the single-test runtime of their database teardown logic.
We work around this by monkey-patching that function to not do the sleep.
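The work-around is roughly a copy of Django's method with the sleep
removed (a sketch; the exact internals vary by Django version):

    from django.db.backends.base import creation

    def _replacement_destroy_test_db(self, test_database_name, verbosity):
        # Same as Django's _destroy_test_db, minus the time.sleep(1) it does
        # before dropping the database.
        with self.connection._nodb_connection.cursor() as cursor:
            cursor.execute("DROP DATABASE %s"
                           % self.connection.ops.quote_name(test_database_name))

    creation.BaseDatabaseCreation._destroy_test_db = _replacement_destroy_test_db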
Instead of zulip_test, use zulip_test_template for the backend DB.
This makes sure that the DB used by backend tests is different from
the DB used by Casper tests, which will be zulip_test.
This adds the option '--rerun' to the `test-backend` infrastructure.
It runs the tests that failed during the last `test-backend` run. It
works by storing failed test info at var/last_test_failure.json.
Cleaned up by Umair Khan and Tim Abbott.
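A minimal sketch of the bookkeeping (the function names are
illustrative):

    import json
    import os

    FAILED_TESTS_PATH = "var/last_test_failure.json"

    def write_failed_tests(failed_tests):
        # After a run, record the labels of the tests that failed.
        with open(FAILED_TESTS_PATH, "w") as f:
            json.dump(failed_tests, f)

    def get_failed_tests():
        # With --rerun, read the labels back and run only those tests.
        if not os.path.exists(FAILED_TESTS_PATH):
            return []
        with open(FAILED_TESTS_PATH) as f:
            return json.load(f)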
Something in c14e981e00 broke test
failures being reported properly; this isn't the right fix but works
and will let us avoid reverting the original change until it can be
fixed properly.
In Django, a TestSuite can contain objects of type TestSuite as well
as TestCases. This is why the run method of TestSuite is responsible
for iterating over the items of the TestSuite.
This also gives us access to the result object, which is used by
unittest to gather information about test cases.
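A sketch of a run() in that style (simplified relative to unittest's
own bookkeeping):

    import unittest

    class Suite(unittest.TestSuite):
        def run(self, result, debug=False):
            # A suite's items can be TestCases or nested TestSuites; both are
            # callable with a result object, so we invoke each one and let the
            # result accumulate successes, failures, and errors.
            for item in self:
                if result.shouldStop:
                    break
                item(result)
            return result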
We now instrument URL coverage whenever you run the backend tests,
and if you run the full suite and fail to test all endpoints, we
exit with a non-zero exit code and report the failures to you.
If you are running just a subset of the test suite, you'll still
be able to see var/url_coverage.txt, which has some useful info.
With some tweaks to the output from tabbott.
Fixes #1441.
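A hedged sketch of the end-of-run check (the set arguments and exact
reporting are illustrative, not the real implementation):

    import sys

    def check_url_coverage(all_endpoints, tested_endpoints, full_suite):
        # Fail a full run if any registered endpoint was never exercised.
        untested = sorted(all_endpoints - tested_endpoints)
        if full_suite and untested:
            print("ERROR: the following endpoints were never tested:")
            print("\n".join("  " + url for url in untested))
            sys.exit(1)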
Previously, we suggested running
`python -c import zerver.tests.test_mytest`
when importing test_mytest failed, which doesn't work.
This commit adds the missing quotes, making it
`python -c 'import zerver.tests.test_mytest'`
This will lead to minor differences in the warnings that
people see when they run tests that are slow. We call out
the slowness a little more clearly from a visual standpoint,
and we simplify the calculation of the slowness threshold.
We still allow more time for tests with the `@slow` decorator
to run, but we don't use their expected_run_time.
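A sketch of the simplified check (the constants and helper names are
illustrative):

    MAX_TEST_RUNTIME = 0.4        # seconds; illustrative value
    SLOW_TEST_MULTIPLIER = 4      # @slow tests get a proportionally larger budget

    def report_slow_test(test_name, delay, is_known_slow):
        max_delay = MAX_TEST_RUNTIME
        if is_known_slow:
            max_delay *= SLOW_TEST_MULTIPLIER
        if delay > max_delay:
            print("\n** SLOW TEST: %s took %.3fs (limit %.1fs)"
                  % (test_name, delay, max_delay))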
Add two options to the `test-backend` script:
1. verbose
   If given, the `test-backend` script will give detailed output.
2. no-shallow
   Default value is False. If given, the `test-backend` script will
   fail if it finds a template which is only shallowly tested.
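A sketch of the new flags, assuming an argparse-style parser in
tools/test-backend:

    import argparse

    parser = argparse.ArgumentParser(description="Run the backend test suite.")
    parser.add_argument("--verbose", action="store_true", default=False,
                        help="Show detailed output while running the tests.")
    parser.add_argument("--no-shallow", action="store_true", default=False,
                        help="Fail if any template is only shallowly tested.")
    options = parser.parse_args()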
The error message for a test file that doesn't import properly was
previously pretty difficult to understand and it wasn't clear how to
debug the issue.
We now show the module name (e.g. "tests" or "test_hooks") in the
test output. This change also eliminates the intermediate use
of slashes in the test_name var, which was passed to
bounce_key_prefix_for_testing().
(imported from commit 58e73301037a0b07d7e437514c247f7cb559420e)
The file test_runner.py has our subclass of DjangoTestSuiteRunner
and various methods that help it work.
(imported from commit 8eca39a7ed3f8312c986224a810d4951559e7a8b)