We enable the data_suffix option when creating Coverage instances, which
causes the output data files to include the hostname, pid, and a random id.
Before each run, erase is called, which clears all existing coverage data
files. Then at the end of the test run, we use the combine method, which
merges the reports.
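A minimal sketch of the coverage.py calls involved (the actual wiring in
test-backend differs, and the placement of combine() relative to the workers
is simplified here):

    from coverage import Coverage

    cov = Coverage(data_suffix=True)  # data files get a .hostname.pid.randomid suffix
    cov.erase()    # clear any stale .coverage.* data files before the run
    cov.start()
    # ... import the Django apps and run the test suite ...
    cov.stop()
    cov.save()
    cov.combine()  # merge the per-process data files into one report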
We collect coverage in the main process, which captures data from
imports and also covers running in single-process mode. In the workers,
we collect coverage in run_subsuite. This creates more stats files than
strictly required, but I don't see a better place to save the stats when
stopping workers.
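Roughly, the worker side does something like the following; this is only a
sketch, and the real run_subsuite in Zulip's test runner has a different
signature and helpers:

    from coverage import Coverage

    def run_subsuite(subsuite, result):
        cov = Coverage(data_suffix=True)
        cov.start()
        subsuite.run(result)  # placeholder for however the subsuite is actually executed
        cov.stop()
        cov.save()            # each worker process writes its own data file
        return result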
Note that this has the side effect of enabling parallel testing in
Travis CI.
We need to reset the INSTRUMENTED_CALLS variable before running every
subsuite. If we do not do this, then the subsuite running on a
particular process (call it A) will send the accumulated instrumented
calls gathered by the previous subsuites that also ran on process A.
This also fixes the extra delay that we used to experience after the
tests had finished running. The extra delay was due to the duplicate
instrumented calls in the INSTRUMENTED_CALLS list. The size of this
list used to be ~100k in parallel mode, as opposed to ~1800 in serial
mode.
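The reset itself is a one-liner; a sketch, assuming INSTRUMENTED_CALLS is a
module-level list in zerver.lib.test_helpers (the exact location and name of
the helper may differ):

    from zerver.lib import test_helpers

    def reset_instrumented_calls():
        # Clear the list in place so existing references to it remain valid;
        # run this in each worker before starting a new subsuite.
        del test_helpers.INSTRUMENTED_CALLS[:]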
This commit creates a dedicated file upload directory for every process
when we are running tests in parallel mode. This makes sure that we do
not run into any race conditions due to multiple processes accessing
the same upload directory.
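One way to express that idea (a sketch, not the actual test runner code; the
setting name and directory layout here are assumptions):

    import os
    from django.conf import settings

    def use_per_process_upload_dir():
        # Give each test worker its own upload directory, keyed by pid, so
        # parallel processes never write into the same directory.
        settings.LOCAL_UPLOADS_DIR = os.path.join(
            settings.LOCAL_UPLOADS_DIR, "test_uploads_%s" % (os.getpid(),))
        os.makedirs(settings.LOCAL_UPLOADS_DIR, exist_ok=True)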
Apparently, Django's _destroy_test_db has a mostly unnecessary
sleep(1) before dropping the database, which obviously wastes a bunch
of time in the single-test runtime of their database teardown logic.
We work around this by monkey-patching that function to not do the sleep.
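The workaround amounts to something like the following sketch, which mirrors
the body of Django's method minus the sleep; the exact import path and
signature depend on the Django version in use:

    from django.db.backends.base.creation import BaseDatabaseCreation

    def destroy_test_db_without_sleep(self, test_database_name, verbosity):
        # Same as Django's _destroy_test_db, but without the time.sleep(1).
        with self.connection._nodb_connection.cursor() as cursor:
            cursor.execute("DROP DATABASE %s"
                           % self.connection.ops.quote_name(test_database_name))

    BaseDatabaseCreation._destroy_test_db = destroy_test_db_without_sleep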
Instead of zulip_test, use zulip_test_template for the backend DB. This
makes sure that the DB used by the backend tests is different from the
DB used by the Casper tests, which will be zulip_test.
This adds the `--rerun` option to the `test-backend` infrastructure. It
reruns the tests that failed during the last `test-backend` run. It
works by storing failed test info at var/last_test_failure.json.
Cleaned up by Umair Khan and Tim Abbott.
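Conceptually, the rerun flow just stores and reloads a small JSON file; in
this sketch only the var/last_test_failure.json path comes from the commit,
and the helper names are hypothetical:

    import json

    FAILURES_PATH = "var/last_test_failure.json"

    def write_failed_tests(failed_test_ids):
        with open(FAILURES_PATH, "w") as f:
            json.dump(failed_test_ids, f)

    def read_failed_tests():
        with open(FAILURES_PATH) as f:
            # e.g. ["zerver.tests.test_mytest.MyTest.test_foo"]
            return json.load(f)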
Something in c14e981e00 broke test
failures being reported properly; this isn't the right fix, but it works
and will let us avoid reverting the original change until it can be
fixed properly.
In Django, a TestSuite can contain objects of type TestSuite as well
as TestCases. This is why the run method of TestSuite is responsible
for iterating over the items of the TestSuite.
This also gives us access to the result object, which unittest uses
to gather information about test cases.
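For illustration, the iteration works roughly like unittest's own
TestSuite.run; this is a sketch, not Zulip's actual runner:

    def run_items(suite, result):
        # suite is a unittest.TestSuite; each item is either a TestCase or a
        # nested TestSuite, and both are callable with a result object, which
        # records passes and failures.
        for item in suite:
            item(result)
        return result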
We now instrument URL coverage whenever you run the backend tests,
and if you run the full suite and fail to test all endpoints, we
exit with a non-zero exit code and report the failures to you.
If you are running just a subset of the test suite, you'll still
be able to see var/url_coverage.txt, which has some useful info.
With some tweaks to the output from tabbott.
Fixes #1441.
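As a rough illustration of the reporting idea only (the real instrumentation
and the format of var/url_coverage.txt differ; these helpers are
hypothetical):

    import sys

    def report_url_coverage(registered_urls, called_urls, full_suite):
        # Dump what was exercised for later inspection, whatever subset ran.
        with open("var/url_coverage.txt", "w") as f:
            for url in sorted(called_urls):
                f.write("%s\n" % (url,))
        untested = sorted(set(registered_urls) - set(called_urls))
        if full_suite and untested:
            for url in untested:
                print("URL has no tests: %s" % (url,))
            sys.exit(1)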
Previously, we suggested running
`python -c import zerver.tests.test_mytest`
when importing test_mytest failed, which doesn't work.
This commit adds the missing quotes, making it
`python -c 'import zerver.tests.test_mytest'`
This will lead to minor differences in the warnings that
people see when they run tests that are slow. We call out
the slowness a little more clearly from a visual standpoint,
and we simplify the calculation of the slowness threshold.
We still allow more time for tests with the `@slow` decorator
to run, but we don't use their expected_run_time.
Add two options to the `test-backend` script:
1. verbose
If given, the `test-backend` script will give detailed output.
2. no-shallow
Default value is False. If given, the `test-backend` script will
fail if it finds a template that is shallow-tested.
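A rough sketch of how these two flags might be wired up with argparse; the
actual `test-backend` option parsing and flag spellings may differ:

    import argparse

    parser = argparse.ArgumentParser(description="Run the backend test suite.")
    parser.add_argument("--verbose", action="store_true",
                        help="Give detailed output while running the tests.")
    parser.add_argument("--no-shallow", action="store_true", default=False,
                        help="Fail if any template is only shallow-tested.")
    options = parser.parse_args()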
The error message for a test file that doesn't import properly was
previously pretty difficult to understand and it wasn't clear how to
debug the issue.