Now that we are passing the source realm's id instead of its
string_id in the source realm selector, it makes sense to rename the
"source_realm" field to "source_realm_id".
This allows realm access to be controlled by more than a single
attribute. It is configured via the new
AUTH_LDAP_ADVANCED_REALM_ACCESS_CONTROL setting.
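A hypothetical illustration of the kind of mapping such a setting
might hold (the realm name, attributes, and exact schema below are
made up for illustration; consult the LDAP backend for the real
format):

    # Hypothetical example only: the shape shown is an assumption, not
    # the documented schema. The idea is that each realm maps to a list
    # of attribute requirements rather than a single attribute.
    AUTH_LDAP_ADVANCED_REALM_ACCESS_CONTROL = {
        "example-realm": [
            # A user matching all attributes in any one dict gets access.
            {"department": "engineering"},
            {"department": "support", "location": "hq"},
        ],
    }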
We refactor check_has_permission_policies to check all user roles for
each value of the policy. This will help handle cases where a guest is
allowed to do something but a moderator isn't.
We need to do user_profile.refresh_from_db() in validation_func
because the realm object from user_profile is used in has_permission,
and we need the updated realm instance after changing the policy.
This is a follow-up commit to 9a4c58cb.
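A minimal sketch of what validation_func looks like with this change
(the specific policy method is just an example):

    from zerver.models import UserProfile

    # Without the re-fetch, user_profile.realm would still hold the
    # cached policy value from before the policy was changed.
    def validation_func(user_profile: UserProfile) -> bool:
        user_profile.refresh_from_db()
        return user_profile.can_create_streams()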
The tests for can_create_streams and can_subscribe_other_users share
a lot of code, so we deduplicate it by extracting most of it into
check_has_permission_policies, which is now called by the two tests
test_can_create_streams and test_can_subscribe_other_users.
This will also help avoid duplicating code as we convert more
policies to use COMMON_POLICY_TYPES.
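Roughly, the deduplicated tests end up as thin wrappers like this
(a sketch; the policy names passed to the helper are best guesses,
not taken from the original commit):

    from zerver.models import UserProfile

    def test_can_create_streams(self) -> None:
        def validation_func(user_profile: UserProfile) -> bool:
            return user_profile.can_create_streams()

        self.check_has_permission_policies("create_stream_policy", validation_func)

    def test_can_subscribe_other_users(self) -> None:
        def validation_func(user_profile: UserProfile) -> bool:
            return user_profile.can_subscribe_other_users()

        self.check_has_permission_policies("invite_to_stream_policy", validation_func)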
Note that at this point, it's not possible to create moderator users;
this will just make it easier to write tests for logic involving them
as we develop the feature.
We used to send occupy/vacate events when
either the first person entered a stream
or the last person exited.
It appears that our two main apps have never
looked at these events. Instead, it's
generally the case that clients handle
events related to stream creation/deactivation
and subscribe/unsubscribe.
Note that we removed the apply_events code
related to these events. This doesn't affect
the webapp, because the webapp doesn't care
about the "streams" field in do_events_register.
There is a theoretical situation where a
third party client could be the victim of
a race where the "streams" data includes
a stream where the last subscriber has left.
I suspect in most of those situations it
will be harmless, or possibly even helpful
to the extent that they'll learn about
streams that are in a "quasi" state where
they're activated but not occupied.
We could try to patch apply_event to
detect when subscriptions get added
or removed. Or we could just make the
"streams" piece of do_events_register
not care about occupy/vacate semantics.
I favor the latter, since it might
actually be what users want, and it will
also simplify the code and improve
performance.
Using web_public_guest for anonymous users is confusing, since
a 'guest' is actually a logged-in user, whereas a web_public_guest
is not logged in and has only read access to messages. So, we
rename it to web_public_visitor.
I think it's important that the callers understand
that bulk_add_subscriptions assumes all streams
are being created within a single realm, so I make
it an explicit parameter.
This may be overkill--I would also be happy if we
just included the assertions from this commit.
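A sketch of the explicit-realm idea (the full signature and return
value of the real function are omitted; the assertion is the point):

    from typing import Collection

    from zerver.models import Realm, Stream, UserProfile

    def bulk_add_subscriptions(
        realm: Realm,
        streams: Collection[Stream],
        users: Collection[UserProfile],
    ) -> None:
        # Make the single-realm assumption explicit and fail loudly
        # if a caller mixes streams from different realms.
        for stream in streams:
            assert stream.realm_id == realm.id
        ...  # existing subscription logic, unchanged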
A later commit alters `authenticate` of EmailAuthBackend to
store a `needs_to_change_password` variable in the session, which is
useful for insisting that users change their weak password.
The tests start failing with that change because client.login()
runs `authenticate` without a `request` object. So, this commit
passes a request object with `request.session=self.client.session`
to self.client.login() in tests wherever needed.
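A minimal sketch of the test-side fix (the helper name here is
hypothetical; the point is handing client.login() a request whose
session is the test client's session):

    from django.http import HttpRequest

    def login_with_request(self, email: str, password: str) -> None:
        request = HttpRequest()
        request.session = self.client.session
        self.assertTrue(
            self.client.login(request=request, username=email, password=password)
        )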
This lets the backend tests pass if zilencer has been (manually)
removed from EXTRA_INSTALLED_APPS, by skipping the tests that require
it. test-backend complains that some URLs are untested in this case:
ERROR: Some URLs are untested! Here's the list of untested URLs:
api/v1/users/me/android_gcm_reg_id
api/v1/users/me/apns_device_token
team/
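A sketch of the skip pattern (the condition and decorator placement
are illustrative, not necessarily how the change is implemented):

    from unittest import skipUnless

    from django.conf import settings

    from zerver.lib.test_classes import ZulipTestCase

    @skipUnless("zilencer" in settings.INSTALLED_APPS, "requires zilencer")
    class PushBouncerTests(ZulipTestCase):
        ...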
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit adds automatic detection, in test-backend, of extra output
printed to stdout or stderr by the code under test (i.e., output other
than what the testing library or tools print), when it is run with the
--ban-console-output flag.
It also prints the test that produced the extra console output.
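A rough sketch of one way such detection can work (illustrative only,
not the actual test-backend implementation):

    import io
    import sys
    import unittest

    class BanConsoleOutputResult(unittest.TextTestResult):
        def startTest(self, test: unittest.TestCase) -> None:
            # Redirect stdout/stderr so any write made while this test
            # runs is captured and attributed to it.
            self._real_stdout, self._real_stderr = sys.stdout, sys.stderr
            self._captured = io.StringIO()
            sys.stdout = sys.stderr = self._captured
            super().startTest(test)

        def stopTest(self, test: unittest.TestCase) -> None:
            super().stopTest(test)
            sys.stdout, sys.stderr = self._real_stdout, self._real_stderr
            extra = self._captured.getvalue()
            if extra:
                self.stream.write(
                    f"{test.id()} produced extra console output:\n{extra}\n"
                )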
Fixes: #1587.
Commit c4254497b2
curiously had get_body() round tripping its data
through json load and dump.
I have seen this done for pretty-printing reasons,
but it doesn't apply here.
And if you're doing it for validation reasons,
you only need to do half the work, as my commit
here demonstrates.
We arguably don't even need the fail-fast code
here, since our fixtures are linted to be proper
json, I believe, plus downstream code probably
gives reasonably easy-to-diagnose symptoms.
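A sketch of the "half the work" version (load_fixture_text() is a
stand-in for however the test class actually reads the fixture file):

    import json

    def get_body(self, fixture_name: str) -> str:
        body: str = self.load_fixture_text(fixture_name)
        # Fail fast on malformed JSON, but skip the dumps() round trip
        # and return the original string untouched.
        json.loads(body)
        return body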
We introduce get_payload for the relatively
exceptional cases where webhooks return payloads
as dicts.
Having a simple "str" type for get_body will
allow us to extract test helpers that use
payloads from get_body() without the ugly
`Union[str, Dict[str, str]]` annotations.
I also tightened up annotations in a few places
where we now call get_payload (using Dict[str, str]
instead of Dict[str, Any]).
In the zendesk test I explicitly stringify
one of the parameters to satisfy mypy.
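A sketch of the companion helper for the exceptional dict-payload
webhooks; with get_body() returning a plain str, no Union annotation
is needed anywhere:

    import json
    from typing import Dict

    def get_payload(self, fixture_name: str) -> Dict[str, str]:
        # Only the few webhooks that need a parsed dict use this.
        return json.loads(self.get_body(fixture_name))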
We tighten up the mypy types here. And then
once we know that expected_message and expected_topic
are never None, we don't have to call the do_test_message
and do_test_topic helpers any more, so we eliminate
them, too.
Finally, we don't return a message, since no tests
use the message currently.
This forces us to be a bit more explicit about testing
the three key values in any stream message, and it
also de-clutters the code a bit. I eventually want
to phase out do_test_topic and friends, since they
have the pitfall that you can call them and have them
do nothing, because they don't actually require
values to be passed in.
I also clean up the code a bit for the tests that
have two new messages arriving.
Having an optional stream_name parameter makes
it confusing to read the code if you know your
webhook is sending private messages.
And then the other two callers are already
checking topics, so they might as well check
stream names, too.
We also have the two stream-oriented callers
make their own call to "subscribe". And we
future-proof this by making sure the exception raised when no
message is sent calls out that gotcha.
Somewhat in passing, we now assert that
self.STREAM_NAME is not None in the main
helper. This is partly to satisfy mypy, but
it's also a good sanity check.
This also sets the stage for the next commit,
where I'll add an assert_stream_message helper.
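A hypothetical sketch of what that helper could look like, checking
the three key values of a stream message (the use of
get_display_recipient to read the stream name is an assumption):

    from zerver.models import Message, get_display_recipient

    def assert_stream_message(
        self,
        message: Message,
        stream_name: str,
        topic_name: str,
        content: str,
    ) -> None:
        self.assertEqual(get_display_recipient(message.recipient), stream_name)
        self.assertEqual(message.topic_name(), topic_name)
        self.assertEqual(message.content, content)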
Not all webhook payloads are JSON, so send_json_payload was a
bit misleading.
In passing I also remove "bytes" from the Union type for the
"payload" parameter.
Almost all webhook tests use this helper, except a few
webhooks that write to private streams.
Being concise is important here, and the name
`self.send_and_test_stream_message` always confused
me, since it sounds like you're sending a stream message,
and it leaves out the webhook piece.
We should consider renaming `send_and_test_private_message`
to something like `check_webhook_private`, but I couldn't
decide on a great name, and it's very rarely used. So
for now I just made sure the docstrings of the two
sibling functions reference each other.
This function is a bad idea, as it leads to a possible situation
where you aren't actually testing anything:
    def do_test_message(self, msg: Message, expected_message: Optional[str]) -> None:
        if expected_message is not None:
            self.assertEqual(msg.content, expected_message)
Unfortunately, it's called deep in the stack in some places, but
we can safely replace it with assertEqual here.
The test helper here was taking an "expected_topic"
parameter that it just ignored, and then the
dialogflow tests were passing in expected messages
in that slot, so the actual "expected_message" var
was "None" and was ignored. So the tests weren't
testing anything.
Now we eliminate the crufty expected_topic parameter
and require an actual value for "expected_message".
I also clean up the mypy type for content_type,
and I remove the `content_type is None` check,
since all callers either pass in a str content
type or default to "application/json".
These weren’t wrong since orjson.JSONDecodeError subclasses
json.JSONDecodeError which subclasses ValueError, but the more
specific ones express the intention more clearly.
(ujson raised ValueError directly, as did json in Python 2.)
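For reference, a small self-check of the relationship described above:

    import json

    import orjson

    assert issubclass(orjson.JSONDecodeError, json.JSONDecodeError)
    assert issubclass(json.JSONDecodeError, ValueError)

    try:
        orjson.loads(b"not json")
    except orjson.JSONDecodeError:  # clearer than catching bare ValueError
        pass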
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit is the first of a few commits which aim to change all the
bugdown references to markdown. This commit renames the files and
file path mentions, and changes the imports.
Variables and other references to bugdown will be renamed in
subsequent commits.
We assert that the post was successful, to give
more immediate feedback for tests that don't
bother to check the return value and may be
implicitly assuming this method just works in
all cases.
And we also make it more convenient for tests
that are happy-path tests--they don't have to
do the assertion themselves. (And they're still
free to do deeper checks on the json.)
We opt out with allow_fail=True. We probably want
a more direct API eventually for tests that are
clearly trying to test the failure path for
subscribing to streams.
It's possible that a couple of the tests where I added
allow_fail=True just have flawed data setup--
I don't have time to investigate all cases, but
hopefully they will at least stand out more.
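A sketch of the helper's new shape (endpoint and parameter details
are illustrative; the assertion and the allow_fail opt-out are the
point):

    import orjson

    from zerver.models import UserProfile

    def subscribe(self, user: UserProfile, stream_name: str, allow_fail: bool = False):
        result = self.api_post(
            user,
            "/api/v1/users/me/subscriptions",
            {"subscriptions": orjson.dumps([{"name": stream_name}]).decode()},
        )
        if not allow_fail:
            # Happy-path tests get this check for free.
            self.assert_json_success(result)
        return result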
This adds a powerful end-to-end test for Zulip's API documentation:
For every documented API endpoint (with a few declared exceptions that
we hope to remove), we verify that every API response received by our
extensive backend test suite matches the declared schema.
This is a critical step towards being able to have complete, high
quality API documentation.
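A greatly simplified sketch of the idea (not the actual
implementation; a plain JSON Schema validator stands in for the
OpenAPI-aware validation the real code does):

    import json
    from typing import Any, Dict

    import jsonschema

    def validate_test_response(
        declared_schema: Dict[str, Any], response_body: bytes
    ) -> None:
        # Validate a response captured from the backend test suite
        # against the schema declared for its endpoint/method/status.
        jsonschema.validate(instance=json.loads(response_body), schema=declared_schema)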
Fixes #15340.