Test coverage was improved by removing an unused function, along
with some code (written by me) that was in fact handling
Test Hook event types incorrectly.
It was a painful amount of work to generate the actual payload.
Since the only difference was a small build URL, I manually
edited the payload and used that for testing.
This commit gets our Freshdesk webhook up to 100% test coverage.
Note that Freshdesk allows custom templating for outgoing payloads
in their webhook UI. Therefore, the payloads added in this commit
did not have to be official payloads from Freshdesk.
The lack of coverage was due to:
* A function that was never used anywhere.
* get_commit_status_changed_body was using a regex where it didn't
  really need one, and it had an if statement that assumed the
  payload might NOT contain the URL to the commit. However, I
  checked the payload, and there shouldn't be any case where a
  commit event is generated without a URL to the commit (see the
  sketch after this list).
* get_push_tag_body had an `else` condition that really can't happen
in any payload. I verified this by checking the BitBucket webhook
docs.
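For reference, a minimal sketch of the simpler approach, assuming a
BitBucket commit_status payload roughly shaped like the real one;
the key names here are illustrative, not the exact fields the
handler uses:
```python
def get_commit_url(payload: dict) -> str:
    # The URL is always present for commit status events, so we can index
    # into the payload directly instead of regex-scanning a larger string
    # and keeping a "missing URL" branch that can never trigger.
    return payload["commit_status"]["links"]["commit"]["href"]
```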
We shouldn't just ignore exceptions when encoding the incoming
auth credentials. Even if the incoming credentials are always
properly encoded in practice, it is better to know explicitly when
encoding fails than to silently swallow whatever else might go
wrong.
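As a rough illustration of the idea (not the actual decorator
code), the encoding step is simply left free of a blanket
try/except:
```python
import base64

def encode_credentials(identifier: str, api_key: str) -> str:
    # Deliberately no blanket try/except: if encoding fails, the real
    # exception should surface instead of being silently ignored.
    raw = ("%s:%s" % (identifier, api_key)).encode("utf-8")
    return base64.b64encode(raw).decode("ascii")
```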
Since this class was built, folks have always chosen
to subclass JsonableError for situations where
the default of ErrorCode.BAD_REQUEST is insufficient.
So now we simplify the use cases, which also gets
us 100% coverage on this core module.
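For illustration, a hedged sketch of the subclassing pattern; the
error class and the ErrorCode member below are hypothetical, and
the real JsonableError API in zerver/lib/exceptions.py may differ
in detail:
```python
from zerver.lib.exceptions import ErrorCode, JsonableError

class ExampleRateLimitedError(JsonableError):
    # Override the ErrorCode.BAD_REQUEST default only when it is insufficient.
    code = ErrorCode.RATE_LIMIT_HIT  # hypothetical member name

    @staticmethod
    def msg_format() -> str:
        return "API usage exceeded rate limit"
```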
Right now it only has one function, but the function
we removed never really belonged in actions.py, and
now we have better test coverage on actions.py, which
is an important module to get to 100%.
In this commit we add run_generate_fixtures_if_required,
a new function that de-duplicates the code in test-server and
test-backend responsible for rebuilding the test database when
that is required.
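Roughly, the shared helper looks like the sketch below; the import
path and the generate-fixtures flag are assumptions, not the exact
code:
```python
import subprocess

from zerver.lib.test_fixtures import is_template_database_current

def run_generate_fixtures_if_required(use_force: bool = False) -> None:
    # Rebuild the test database only when explicitly forced or when the
    # template database is out of date.
    generate_fixtures_command = ["tools/setup/generate-fixtures"]
    if use_force or not is_template_database_current():
        generate_fixtures_command.append("--force")
    subprocess.check_call(generate_fixtures_command)
```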
In this commit we rename the function is_template_database_current
to template_database_status and adjust the return values
accordingly.
This is a preparatory commit for upcoming commits that will let us
avoid throwing away the entire DB and rebuilding it from scratch
when running migrations alone would do the job.
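A hedged sketch of the new shape; the concrete status values and
helper names below are illustrative only:
```python
def template_is_current() -> bool:
    # Stub standing in for the real freshness check (schema/migrations hashes).
    return False

def only_migrations_needed() -> bool:
    # Stub standing in for "the schema exists but some migrations are pending".
    return True

def template_database_status() -> str:
    # Unlike the old boolean is_template_database_current, a status value
    # lets callers distinguish "rebuild everything" from "just run migrations".
    if template_is_current():
        return "current"
    if only_migrations_needed():
        return "run_migrations"
    return "needs_rebuild"
```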
This improves test coverage for a lot of our webhooks that relied
on ad-hoc methods to handle unexpected event types.
Note that I have deliberately skipped github_legacy; it isn't
advertised and is officially deprecated.
Also, I have refrained from making further changes to Trello; I
believe additional improvements to its test coverage should come
in separate per-webhook commits/PRs.
Fixes #9233.
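For context, the general pattern looks roughly like this sketch;
the event names and the exception used are illustrative, not the
exact code added here:
```python
EVENT_HANDLERS = {
    "issue_created": lambda payload: "Issue created: %s" % (payload["title"],),
    "issue_closed": lambda payload: "Issue closed: %s" % (payload["title"],),
}

def get_body(event: str, payload: dict) -> str:
    if event not in EVENT_HANDLERS:
        # A dedicated, uniform error makes "unexpected event type" a code
        # path tests can exercise, instead of ad-hoc per-webhook fallbacks.
        raise ValueError("Unexpected webhook event type: %s" % (event,))
    return EVENT_HANDLERS[event](payload)
```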
Uses nargs='*' instead of nargs=argparse.REMAINDER.
nargs=argparse.REMAINDER gathers the remaining terms as arguments
even when one of them is an option, e.g. --coverage, while '*'
gathers command-line arguments only until the next option is
encountered.
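A small self-contained example of the difference; the argument
names are made up:
```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("targets", nargs="*")              # stops at the next option
parser.add_argument("--coverage", action="store_true")

args = parser.parse_args(["zerver.tests", "--coverage"])
print(args.targets, args.coverage)  # ['zerver.tests'] True

# With nargs=argparse.REMAINDER on "targets", "--coverage" would have been
# swallowed into targets instead of being parsed as an option.
```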
The only slash command implemented in this initial version is an
extremely crippled "/stats" command that reports that you are
running 1 server.
This has a cool structure, but it's written against the long-dead
South API, and we can always pull it out of the Git history if we want
to use this approach in the future.
Cleaned up add_user_list_args(). The "help" and
"all_users_help" parameters now all have default values. As noted
in an earlier commit, "all_users_help" is always passed in,
so we can get rid of "all_users_arg". We keep the default
for "all_users_help" so we don't have to change the parameter
order in the function definition.
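A hypothetical sketch of the cleaned-up signature; the real
helper's option flags and wording may differ:
```python
def add_user_list_args(parser,
                       help='A comma-separated list of email addresses.',
                       all_users_help='Act on all users in the realm.'):
    # Both keyword arguments now have defaults, so callers override them only
    # when they need different wording; the old all_users_arg flag is gone.
    parser.add_argument('-u', '--users', dest='users', default='', help=help)
    parser.add_argument('-a', '--all-users', dest='all_users',
                        action='store_true', help=all_users_help)
```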
We've already got a bunch of other comments on work we need to do for
this decorator and an open issue that will ensure we at some point
rework this and add tests for it. In the meantime, I'd like to lock
down the rest of decorator.py at 100% coverage.
Fixes #1000.
These two classes are tricky to test, and nocoverage-ing them
allows us to mark queue_processors.py as fully covered. We
still want to cover these two workers at some point, but for
now, it's nice to enforce full coverage for any future changes
to queue_processors.py.
Fixes (sort of) #6542.
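For reference, the pattern looks roughly like this; the worker
below is purely illustrative, and it assumes the coverage
configuration excludes lines marked with a nocoverage comment:
```python
class ExampleHardToTestWorker:
    def consume(self, event: dict) -> None:  # nocoverage
        # Talks to an external service, so it is excluded from coverage
        # for now; the rest of the module stays enforced at 100%.
        pass
```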
The Markdown extension defined in
zerver/lib/bugdown/api_generate_examples depended on code in the
tools/lib/* directory, which caused the production tests to fail,
since the tools/ directory wouldn't exist in a production
environment.
This reverts commit 66261f1cc. See parent commit for reason; here,
provision worked but `tools/run-dev.py` would give errors.
We need to figure out a test that reproduces these issues, then make a
version of these changes that keeps that test working, before we
re-merge them.
Tweaked by tabbott to not remove it from lister.py, linter_lib, and
friends, since those are intended to support both Python 2 and 3
(we're planning to extract them from the repository).
This adds snakeviz to the dev tools and also updates the message
displayed upon running `test-backend` with the `--profile` option,
so that it says how to run snakeviz correctly when using the
Vagrant development environment.
This causes `upgrade-zulip-from-git`, as well as a no-option run of
`tools/build-release-tarball`, to produce a Zulip install running
Python 3, rather than Python 2. In particular this means that the
virtualenv we create, in which all application code runs, is Python 3.
One shebang line, on `zulip-ec2-configure-interfaces`, explicitly
keeps Python 2, and at least one external ops script, `wal-e`, also
still runs on Python 2. See discussion on the respective previous
commits that made those explicit. There may also be some other
third-party scripts we use, outside of this source tree and running
outside our virtualenv, that still run on Python 2.
In this commit we override the request method of httplib2.Http()
to raise an exception whenever it is called, i.e., whenever a test
attempts to access the network.
Fixes: #1472.
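A minimal sketch of the idea, assuming httplib2 is installed; the
actual hook in the test runner may differ:
```python
import httplib2

class NetworkAccessDeniedError(Exception):
    pass

def _blocked_request(self, uri, *args, **kwargs):
    raise NetworkAccessDeniedError(
        "Tests are not allowed to access the network: %s" % (uri,))

# Any code path that tries to hit the network via httplib2 now fails loudly.
httplib2.Http.request = _blocked_request
```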
The problem this commit solves is related to how we search for
test cases in the test files. We were using a simple substring
check, 'search_key' in file_data, which returns a false positive
if file_data contains a 'longer_search_key'.
While searching for TestCases we should use a longer key to remove
the possibility of collision; using 'class TestCase(' should be
precise enough.
Fixes #4983.
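A small illustration of the false positive; the class and key
names below are made up:
```python
file_data = "class UserProfileTest(ZulipTestCase): ..."

# Plain substring search: "ProfileTest" matches even though the actual
# class is UserProfileTest.
assert "ProfileTest" in file_data

# Anchoring the key with "class " and "(" removes the false positive.
assert "class ProfileTest(" not in file_data
assert "class UserProfileTest(" in file_data
```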
We enable the data_suffix option when creating Coverage instances,
which causes the output files to include the hostname, pid, and a
random id. Before each run, erase is called, which clears all
existing coverage data files. Then, at the end of the test run,
the combine method merges the reports.
We collect coverage in the main process, which captures data from
imports and also when running in single-process mode. In the
workers we collect coverage in run_subsuite. This creates more
stats files than strictly required, but I don't see a better place
to save the stats when stopping workers.
Note that this has the side effect of enabling parallel testing in
Travis CI.
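For reference, a hedged sketch of the coverage.py calls involved;
the real wiring in the test runner is more involved:
```python
from coverage import Coverage

cov = Coverage(data_suffix=True)  # data file name gets hostname.pid.randomid
cov.erase()                       # clear stale data files before the run
cov.start()
# ... run the test (sub)suite here ...
cov.stop()
cov.save()                        # one data file per process/worker

cov.combine()                     # at the end, merge all data files into one
```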
This commit changes the backend testing framework to run
in parallel mode, which is the same as --processes=4. If
--coverage is supplied, we enforce serial mode (--processes=1),
because coverage is not compatible with parallel mode at the
moment.
This fixes the problem that our test suites would have trouble
connecting to the other parts of the Zulip service when run with a
proxy configuration (e.g. trying to send requests to localhost
through the proxy!).
Thanks to Vishnu Ks for his work on this.
Fixes #971.
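A hypothetical sketch of the idea; the actual fix may configure
this differently:
```python
import os

# Strip any proxy settings inherited from the environment so requests to
# localhost services go direct instead of through the proxy.
for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    os.environ.pop(var, None)
os.environ["no_proxy"] = "localhost,127.0.0.1"
```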
In backend tests, only call generate-fixtures when --generate-fixtures
is explicitly passed or is_template_database_current() returns False.
We don't need to flush the cache for backend tests, because we
bounce the key prefix used to create cache keys before running
every test.
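Roughly, the key-prefix bounce works like this sketch; the helper
name and prefix format here are approximations of the real cache
library code:
```python
import os

KEY_PREFIX = ""

def bounce_key_prefix_for_testing(test_name: str) -> None:
    global KEY_PREFIX
    # A fresh prefix per test means stale entries from earlier tests can
    # never be read back, so there is nothing to flush.
    KEY_PREFIX = "%s:%s:" % (test_name, os.getpid())
```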
This adds the '--rerun' option to the `test-backend`
infrastructure. It reruns the tests that failed during the last
'test-backend' run. It works by storing failed test info at
var/last_test_failure.json.
Cleaned up by Umair Khan and Tim Abbott.
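A hedged sketch of the mechanism; the JSON layout shown is
illustrative:
```python
import json
import os

REMEMBERED_FAILURES_PATH = "var/last_test_failure.json"

def write_remembered_failures(failed_tests: list) -> None:
    # Persist the ids of failing tests after a run.
    os.makedirs(os.path.dirname(REMEMBERED_FAILURES_PATH), exist_ok=True)
    with open(REMEMBERED_FAILURES_PATH, "w") as f:
        json.dump({"failed_tests": failed_tests}, f)

def read_remembered_failures() -> list:
    # Read them back when --rerun is passed.
    with open(REMEMBERED_FAILURES_PATH) as f:
        return json.load(f)["failed_tests"]
```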