# Backend Django tests

## Overview

Zulip uses the Django framework for its Python backend. We
use the testing framework from
[django.test](https://docs.djangoproject.com/en/3.2/topics/testing/)
to test our code. We have thousands of automated tests that verify that
our backend works as expected.

All changes to the Zulip backend code should be supported by tests. We
enforce our testing culture during code review, and we also use
coverage tools to measure how well we test our code. We mostly use
tests to prevent regressions in our code, but the tests can have
ancillary benefits such as documenting interfaces and influencing
the design of our software.

If you have worked on other Django projects that use unit testing, you
will probably find familiar patterns in Zulip's code. This document
describes how to write tests for the Zulip backend, with a particular
emphasis on areas where we have either wrapped Django's test framework
or just done things that are kind of unique in Zulip.

## Running tests

Our tests live in `zerver/tests/`. You can run them with
`./tools/test-backend`. The tests run in parallel using multiple
threads in your development environment, and can finish in under 30s
on a fast machine. When you are in iterative mode, you can run
individual tests or individual modules, following the dotted.test.name
convention below:

```bash
cd /srv/zulip
./tools/test-backend zerver.tests.test_queue_worker.WorkerTest
```

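You can generally go one level deeper and run a single test method by
appending its name to the dotted path (the method name below is just
illustrative):

```bash
./tools/test-backend zerver.tests.test_queue_worker.WorkerTest.test_timeouts
```
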
There are many command line options for running Zulip tests, such
as a `--verbose` option. The
best way to learn the options is to use the online help:

```bash
./tools/test-backend --help
```

We also have ways to instrument our tests for finding code coverage,
URL coverage, and slow tests. Use the `-h` option to discover these
features. We also have a `--profile` option to facilitate profiling
tests.

By default, `test-backend` will run all requested tests, and report
all failures at the end. You can configure it to stop after the first
error with the `--stop` option (or `-x`).

Another useful option is `--rerun`, which will rerun just the tests
that failed in the last test run.

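For example, a common debugging loop combines these options (reusing the
module from the example above):

```bash
# Stop at the first failure in the module you're working on.
./tools/test-backend -x zerver.tests.test_queue_worker

# After fixing the code, rerun only the tests that failed last time.
./tools/test-backend --rerun
```
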
**Webhook integrations**. For performance, `test-backend` with no
arguments will not run webhook integration tests (`zerver/webhooks/`),
which would otherwise account for about 25% of the total runtime.
When working on webhooks, we recommend instead running
`test-backend zerver/webhooks` manually (or better, the directory for
the specific webhooks you're working on). And of course our CI is
configured to always use `test-backend --include-webhooks` and run all
of the tests.

## Writing tests

Before you write your first tests of Zulip, it is worthwhile to read
the rest of this document.

To get the hang of commonly used testing techniques, read
[zerver/tests/test_example.py](https://github.com/zulip/zulip/blob/main/zerver/tests/test_example.py).
You can also read some of the existing tests in `zerver/tests`
to get a feel for other patterns we use.

A good practice is to get a "failing test" before you start to implement
your feature. First, it is a useful exercise to understand what needs to happen
in your tests before you write the code, as it can help drive out simple
design or help you make incremental progress on a large feature. Second,
you want to avoid introducing tests that give false positives. Ensuring
that a test fails before you implement the feature ensures that if somebody
accidentally regresses the feature in the future, the test will catch
the regression.

Other important files to skim are
[zerver/lib/test_helpers.py](https://github.com/zulip/zulip/blob/main/zerver/lib/test_helpers.py),
which contains test helpers, and
[zerver/lib/test_classes.py](https://github.com/zulip/zulip/blob/main/zerver/lib/test_classes.py),
which contains our `ZulipTestCase` and `WebhookTestCase` classes.

### Setting up data for tests

All tests start with the same fixture data. (The tests themselves
update the database, but they do so inside a transaction that gets
rolled back after each of the tests completes. For more details on how the
fixture data gets set up, refer to `tools/setup/generate-fixtures`.)

The fixture data includes a few users that are named after
Shakespeare characters, and they are part of the "zulip.com" realm.

Generally, you will also do some explicit data setup of your own. Here
are a couple of useful methods in `ZulipTestCase`:

- common_subscribe_to_streams
- send_message
- make_stream
- subscribe_to_stream

More typically, you will use methods directly from the backend code.
(This ensures more end-to-end testing, and avoids false positives from
tests that might not consider ancillary parts of data setup that could
influence test results.)

Here are some example action methods that tests may use for data setup:

- check_send_message
- do_change_user_role
- do_create_user
- do_make_stream_private

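Putting a few of these together, the setup phase of a test might look
roughly like this sketch (the exact signatures of these helpers have
changed over time, so check `zerver/lib/test_classes.py` before copying
it):

```python
from zerver.lib.test_classes import ZulipTestCase


class StreamSetupExample(ZulipTestCase):
    def test_new_stream_subscription(self) -> None:
        # "hamlet" is one of the Shakespeare users from the fixture data.
        user = self.example_user("hamlet")

        # Create the extra data this test needs; the database changes are
        # rolled back automatically when the test finishes.
        stream = self.make_stream("test-stream")
        self.subscribe_to_stream(user, stream.name)
```
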
### Testing code that accesses the filesystem

Some tests need to access the filesystem (e.g. `test_upload.py` tests
for `LocalUploadBackend` and the data import tests). Doing
this correctly requires care to avoid problems like:

- Leaking files after every test (which are clutter and can eventually
  run the development environment out of disk) or
- Interacting with other parallel processes of this `test-backend` run
  (or another `test-backend` run), or with later tests run by this
  process.

To avoid these problems, you can do the following (see the sketch after
this list):

- Use a subdirectory of `settings.TEST_WORKER_DIR`; this is a
  subdirectory of `/var/<uuid>/test-backend` that is unique to the
  test worker thread and will be automatically deleted when the
  relevant `test-backend` process finishes.
- Delete any files created by the test in the test class's `tearDown`
  method (which runs even if the test fails); this is valuable to
  avoid conflicts with other tests run later by the same test process.

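A minimal sketch of that pattern, assuming the test only needs a scratch
directory (`tempfile` and `shutil` come from the standard library):

```python
import os
import shutil
import tempfile

from django.conf import settings

from zerver.lib.test_classes import ZulipTestCase


class FileWritingExample(ZulipTestCase):
    def setUp(self) -> None:
        super().setUp()
        # A scratch directory under TEST_WORKER_DIR is unique to this test
        # worker and is removed when the test-backend process finishes.
        self.scratch_dir = tempfile.mkdtemp(dir=settings.TEST_WORKER_DIR)

    def tearDown(self) -> None:
        # Clean up eagerly anyway, so later tests run by this worker can't
        # be affected by leftover files.
        shutil.rmtree(self.scratch_dir, ignore_errors=True)
        super().tearDown()

    def test_writes_a_file(self) -> None:
        path = os.path.join(self.scratch_dir, "example.txt")
        with open(path, "w") as f:
            f.write("hello")
        with open(path) as f:
            self.assertEqual(f.read(), "hello")
```
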
Our common testing infrastructure handles some of this for you,
e.g. it replaces `settings.LOCAL_UPLOADS_DIR` for each test process
with a unique path under `/var/<uuid>/test-backend`. And
`UploadSerializeMixin` manages some of the cleanup work for
`test_upload.py`.

### Testing with mocks

This section is a beginner's guide to mocking with Python's
`unittest.mock` library. It will give you answers to the most common
questions around mocking, and a selection of commonly used mocking
techniques.

#### What is mocking?

When writing tests, _mocks allow you to replace methods or objects with fake entities
suiting your testing requirements_. Once an object is mocked, **its original code does not
get executed anymore**.

Rather, you can think of a mocked object as an initially empty shell:
Calling it won't do anything, but you can fill your shell with custom code, return values, etc.
Additionally, you can observe any calls made to your mocked object.

#### Why is mocking useful?

When writing tests, it often occurs that you make calls to functions
taking complex arguments. Creating a real instance of such an argument
would require the use of various different libraries, a lot of
boilerplate code, etc. Another scenario is that the tested code
accesses files or objects that don't exist at testing time. Finally,
it is good practice to keep tests independent from each other. Mocks help
you to isolate test cases by simulating objects and methods irrelevant
to a test's goal.

In all of these cases, you can "mock out" the function calls / objects
and replace them with fake instances that only implement a limited
interface. On top of that, these fake instances can be easily
analyzed.

Say you have a module `greetings` defining the following functions:

```python
def fetch_database(key: str) -> str:
    # ...
    # Do some look-ups in a database
    return data

def greet(name_key: str) -> str:
    name = fetch_database(name_key)
    return "Hello " + name
```

- You want to test `greet()`.

- In your test, you want to call `greet("Mario")` and verify that it returns the correct greeting:

  ```python
  from greetings import greet

  def test_greet() -> None:
      greeting = greet("Mario")
      assert greeting == "Hello Mr. Mario Mario"
  ```

-> **You have a problem**: `greet()` calls `fetch_database()`. `fetch_database()` does some look-ups in
a database. _You haven't created that database for your tests, so your test would fail, even though
the code is correct._

- Luckily, you know that `fetch_database("Mario")` should return "Mr. Mario Mario".

  - _Hint_: Sometimes, you might not know the exact return value, but one that is equally valid and works
    with the rest of the code. In that case, just use this one.

-> **Solution**: You mock `fetch_database()`. This is also referred to as "mocking out" `fetch_database()`.

```python
import greetings
from unittest.mock import patch

def test_greet() -> None:
    # Mock `fetch_database()` with an object that acts like a shell: It still accepts calls like `fetch_database()`,
    # but doesn't do any database lookup. We "fill" the shell with a return value; this value will be returned on every
    # call to `fetch_database()`.
    with patch("greetings.fetch_database", return_value="Mr. Mario Mario"):
        greeting = greetings.greet("Mario")
        assert greeting == "Hello Mr. Mario Mario"
```

That's all. Note that **this mock is suitable for testing `greet()`, but not for testing `fetch_database()`**.
More generally, you should only mock those functions you explicitly don't want to test.

#### How does mocking work under the hood?

Since Python 3.3, the standard mocking library is `unittest.mock`. `unittest.mock` implements the basic mocking class `Mock`.
It also implements `MagicMock`, which is the same as `Mock`, but contains many default magic methods (in Python,
those are the ones starting with a dunder `__`). From the docs:

> In most of these examples the Mock and MagicMock classes are interchangeable. As the MagicMock is the more capable class
> it makes a sensible one to use by default.

`Mock` itself is a class that principally accepts and records any and all calls. A piece of code like

```python
from unittest import mock

foo = mock.Mock()
foo.bar('quux')
foo.baz
foo.qux = 42
```

is _not_ going to throw any errors. Our mock silently accepts all these calls and records them.
`Mock` also implements methods for us to access and assert its records, e.g.

```python
foo.bar.assert_called_with('quux')
```

Finally, `unittest.mock` also provides a method to mock objects only within a scope: `patch()`. We can use `patch()` either
as a decorator or as a context manager. In both cases, the mock created by `patch()` will apply for the scope of the decorator /
context manager. `patch()` takes only one required argument, `target`. `target` is a string in dot notation that _refers to
the name of the object you want to mock_. It will then assign a `MagicMock()` to that object.
As an example, look at the following code:

```python
from unittest import mock
from os import urandom

with mock.patch('__main__.urandom', return_value=42):
    print(urandom(1))
    print(urandom(1))  # No matter what value we plug in for urandom, it will always return 42.
print(urandom(1))  # We exited the context manager, so the mock doesn't apply anymore. Will return a random byte.
```

_Note that calling `mock.patch('os.urandom', return_value=42)` wouldn't work here_: `os.urandom` would be the name of our patched
object. However, we imported `urandom` with `from os import urandom`; hence, we bound the `urandom` name to our current module
`__main__`.

On the other hand, if we had used `import os` and called `os.urandom(1)`, we would need to call `mock.patch('os.urandom', return_value=42)` instead.

#### Boilerplate code

- Including the Python mocking library:

  ```python
  from unittest import mock
  ```

- Mocking a class with a context manager:

  ```python
  with mock.patch('module.ClassName', foo=42, return_value='I am a mock') as my_mock:
      # In here, 'module.ClassName' is mocked with a MagicMock() object my_mock.
      # my_mock has an attribute named foo with the value 42.
      # var = module.ClassName() will assign 'I am a mock' to var.
      ...
  ```

- Mocking a class with a decorator:

  ```python
  @mock.patch('module.ClassName', foo=42, return_value='I am a mock')
  def my_function(my_mock):
      # ...
      # In here, 'module.ClassName' will behave as in the previous example.
      ...
  ```

- Mocking a class attribute:

  ```python
  with mock.patch.object(module.ClassName, 'class_method', return_value=42):
      # In here, 'module.ClassName' has the same properties as before, except for 'class_method'.
      # Calling module.ClassName.class_method() will now return 42.
      ...
  ```

  Note the missing quotes around `module.ClassName` in the `patch.object()` call.

#### Zulip mocking practices

For mocking we generally use the "mock" library and use `mock.patch` as
a context manager or decorator. We also take advantage of some context managers
from Django as well as our own custom helpers. Here is an example:

```python
with self.settings(RATE_LIMITING=True):
    with mock.patch('zerver.decorator.rate_limit_user') as rate_limit_mock:
        api_result = my_webhook(request)

self.assertTrue(rate_limit_mock.called)
```

Follow [this link](../subsystems/settings.md#testing-non-default-settings) for more
information on the "settings" context manager.

Zulip has several features, like outgoing webhooks or social
authentication, that make outgoing HTTP requests to external
servers. We test those features using the excellent
[responses](https://pypi.org/project/responses/) library, which has a
nice interface for mocking `requests` calls to return whatever HTTP
response from the external server we need for the test. You can find
examples with `git grep responses.add`. Zulip's own `HostRequestMock`
class should be used only for low-level tests for code that expects to
receive a Django HttpRequest object.

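A rough sketch of that pattern (the URL and JSON payload are made up for
illustration; the code under test would normally live in Zulip itself):

```python
import requests
import responses

from zerver.lib.test_classes import ZulipTestCase


class OutgoingRequestExample(ZulipTestCase):
    @responses.activate
    def test_calls_external_server(self) -> None:
        # Register a canned HTTP response; no real network traffic happens.
        responses.add(
            responses.GET,
            "https://example.com/api/status",
            json={"status": "ok"},
            status=200,
        )

        # Exercise code that uses `requests`; here we just call it directly.
        result = requests.get("https://example.com/api/status")
        self.assertEqual(result.json()["status"], "ok")

        # The library also records every mocked call for inspection.
        self.assertEqual(len(responses.calls), 1)
```
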
## Zulip testing philosophy

If there is one word to describe Zulip's philosophy for writing tests,
it is probably "flexible." (Hopefully "thorough" goes without saying.)

When in doubt, unless speed concerns are prohibitive,
you usually want your tests to be somewhat end-to-end, particularly
for testing endpoints.

These are some of the testing strategies that you will see in the Zulip
test suite...

### Endpoint tests

We strive to test all of our URL endpoints. The vast majority of Zulip
endpoints support a JSON interface. Regardless of the interface, an
endpoint test generally follows this pattern:

- Set up the data.
- Log in with `self.login()` or set up an API key.
- Use a Zulip test helper to hit the endpoint.
- Assert that the result was either a success or failure.
- Check the data that comes back from the endpoint.

Generally, if you are doing endpoint tests, you will want to create a
test class that is a subclass of `ZulipTestCase`, which will provide
you helper methods like the following:

- api_auth
- assert_json_error
- assert_json_success
- client_get
- client_post
- get_api_key
- get_streams
- login
- send_message

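A sketch of that pattern, using some of the helpers above (the endpoint,
payload, and helper signatures are illustrative; check
`zerver/lib/test_classes.py` for the current API):

```python
from zerver.lib.test_classes import ZulipTestCase


class ExampleEndpointTest(ZulipTestCase):
    def test_example_endpoint(self) -> None:
        # Set up data and log in as one of the fixture users.
        self.login("hamlet")

        # Hit the endpoint with a test helper.
        result = self.client_post("/json/example", {"value": "42"})

        # Assert success and check the data that comes back.
        response_dict = self.assert_json_success(result)
        self.assertEqual(response_dict.get("value"), "42")
```
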
### Library tests

For certain Zulip library functions, especially the ones that are
not intrinsically tied to Django, we use a classic unit testing
approach of calling the function and inspecting the results.

For these types of tests, you will often use methods like
`self.assertEqual()`, `self.assertTrue()`, etc., which come with
[unittest](https://docs.python.org/3/library/unittest.html#unittest.TestCase)
via Django.

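For example, a library test is usually just a direct call plus
assertions; the helper and its module below are invented purely to show
the shape:

```python
from zerver.lib.test_classes import ZulipTestCase

# Invented helper and module, standing in for a real function in zerver/lib/.
from zerver.lib.example_helpers import emoji_to_slug


class LibraryExampleTest(ZulipTestCase):
    def test_emoji_to_slug(self) -> None:
        # No endpoint or HTTP helpers involved; just call the function
        # and inspect the results.
        self.assertEqual(emoji_to_slug("Thumbs Up"), "thumbs_up")
        self.assertEqual(emoji_to_slug("+1"), "plus_one")
```
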
### Fixture-driven tests

Particularly for testing Zulip's integrations with third party systems,
we strive to have a highly data-driven approach to testing. To give a
specific example, when we test our GitHub integration, the test code
reads a bunch of sample inputs from a JSON fixture file, feeds them
to our GitHub integration code, and then verifies the output against
expected values from the same JSON fixture file.

Our fixtures live in `zerver/tests/fixtures`.

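The core loop of such a test looks roughly like this sketch (the fixture
file name, its structure, and `format_github_event` are invented for
illustration):

```python
import json
import os

from zerver.lib.test_classes import ZulipTestCase


def format_github_event(payload: dict) -> str:
    # Stand-in for the real integration code under test.
    return f"{payload['sender']} {payload['action']} a pull request"


class GitHubFixtureExample(ZulipTestCase):
    def test_github_samples(self) -> None:
        path = os.path.join("zerver", "tests", "fixtures", "github_samples.json")
        with open(path) as f:
            samples = json.load(f)

        for sample in samples:
            # Feed each recorded input to the integration code and compare
            # against the expected output stored alongside it.
            actual = format_github_event(sample["payload"])
            self.assertEqual(actual, sample["expected_output"])
```
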
### Mocks and stubs

We use mocks and stubs for all the typical reasons:

- to more precisely test the target code
- to stub out calls to third-party services
- to make it so that you can [run the Zulip tests on the airplane without wifi][no-internet]

[no-internet]: testing.md#internet-access-inside-test-suites

A detailed description of mocks, along with useful code snippets, can be found in the section
[Testing with mocks](#testing-with-mocks).

### Template tests

In [zerver/tests/test_templates.py](https://github.com/zulip/zulip/blob/main/zerver/tests/test_templates.py)
we have a test that renders all of our backend templates with
a "dummy" context, to make sure the templates don't have obvious
errors. (These tests won't catch all types of errors; they are
just a first line of defense.)

### SQL performance tests

A common class of bug with Django systems is to handle bulk data in
an inefficient way, where the backend populates objects for join tables
with a series of individual queries that give O(N) latency. (The
remedy is often just to call `select_related()`, but sometimes it
requires a more subtle restructuring of the code.)

We try to prevent these bugs in our tests by using a context manager
called `queries_captured()` that captures the SQL queries used by
the backend during a particular operation. We make assertions about
those queries, often simply by using the `assert_database_query_count`
helper, which checks the number of queries.

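A typical assertion looks something like this sketch (the endpoint and
the query count are illustrative; `assert_database_query_count` is used
as a context manager here, as in existing tests):

```python
from zerver.lib.test_classes import ZulipTestCase


class QueryCountExample(ZulipTestCase):
    def test_fetch_users_query_count(self) -> None:
        self.login("hamlet")

        # Fail the test if the request issues more database queries than
        # expected; a jump here usually signals an O(N) query pattern.
        with self.assert_database_query_count(12):
            result = self.client_get("/json/users")

        self.assert_json_success(result)
```
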
### Event-based tests

The Zulip backend has a mechanism where it will fetch initial data
for a client from the database, and then it will subsequently apply
some queued-up events to that data structure before notifying
the client. The `BaseAction.do_test()` helper helps tests
verify that the application of those events via `apply_events()` produces
the same data structure as performing an action that generates said event.

This is a bit esoteric, but if you read the tests, you will see some of
the patterns. You can also learn more about our event system in the
[new feature tutorial](../tutorials/new-feature-tutorial.md#handle-database-interactions).

### Negative tests

It is important to verify error handling paths for endpoints, particularly
situations where we need to ensure that we don't return results to clients
with improper authentication or with limited authorization. A typical test
will call the endpoint with either a non-logged-in client, an invalid API
key, or missing input fields. Then the test will call `assert_json_error()`
to verify that the endpoint is properly failing.

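A sketch of typical negative tests (the endpoint and the exact error
messages are illustrative):

```python
from zerver.lib.test_classes import ZulipTestCase


class NegativeExampleTest(ZulipTestCase):
    def test_request_without_login_fails(self) -> None:
        # No self.login() call, so the request is unauthenticated.
        result = self.client_post("/json/example", {"value": "42"})
        self.assert_json_error(
            result, "Not logged in: API authentication or user session required", status_code=401
        )

    def test_missing_parameter_fails(self) -> None:
        self.login("hamlet")
        # Omit a required field and verify the endpoint rejects the request.
        result = self.client_post("/json/example", {})
        self.assert_json_error(result, "Missing 'value' argument")
```
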
## Testing considerations

Here are some things to consider when writing new tests:

- **Duplication** We try to avoid excessive duplication in tests.
  If you have several tests repeating the same type of test setup,
  consider making a setUp() method or a test helper.

- **Network independence** Our tests should still work if you don't
  have an internet connection. For third party clients, you can simulate
  their behavior using fixture data. For third party servers, you can
  typically simulate their behavior using mocks.

- **Coverage** We have 100% line coverage on several of our backend
  modules. You can use the `--coverage` option to generate coverage
  reports, and new code should have 100% coverage, which generally
  requires testing not only the "happy path" but also error handling
  code and edge cases. It will generate a nice HTML report that you can
  view right from your browser (the tool prints the URL where the report
  is exposed in your development environment).

  The HTML report also displays which tests executed each line, which
  can be handy for finding existing tests for a code path you're
  working on.

- **Console output** A properly written test should print nothing to
  the console; use `with self.assertLogs` to capture and verify any
  logging output (see the sketch after this list). Note that we
  reconfigure various loggers in `zproject/test_extra_settings.py`
  where the output is unlikely to be interesting when running our
  test suite.

  `test-backend --ban-console-output` checks for stray print statements.

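A sketch of capturing expected log output with the standard `assertLogs`
helper (the logger name and message are illustrative):

```python
import logging

from zerver.lib.test_classes import ZulipTestCase


class LoggingExampleTest(ZulipTestCase):
    def test_warning_is_logged(self) -> None:
        # Capture log records instead of letting them reach the console;
        # the test fails if no matching record is emitted.
        with self.assertLogs("zulip.example", level="WARNING") as logs:
            logging.getLogger("zulip.example").warning("something odd happened")
        self.assertIn("something odd happened", logs.output[0])
```
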
Note that `test-backend --coverage` will assert that
various specific files in the project have 100% test coverage and
throw an error if their coverage has fallen. One of our project goals
is to expand that checking to ever-larger parts of the codebase.