This commit adds backend code to set email_address_visibility when
registering a new user. The realm-level default and the value from the
source profile are overridden by the value the user selected during
signup.
This reverts commit 851d68e0fc.
That commit widened how long the transaction is open, which made it
much more likely that after the user was created in the transaction,
and the memcached caches were flushed, some other request would fill
the `get_realm_user_dicts` cache with data which did not include the
new user (because it had not been committed yet).
If a user creation request lost this race, the user would, upon first
request to `/`, get a blank page and a Javascript error:
Unknown user_id in get_by_user_id: 12345
...where 12345 was their own user-id. This error would persist until
the cache expired (in 7 days) or something else expunged it.
Reverting this does not prevent the race, as the post_save hook's call
to flush_user_profile is still in a transaction (and has been since
168f241ff0), and thus leaves the potential race window open.
However, it greatly shortens that window of opportunity, and is
a reasonable short-term stopgap.
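For context, a minimal sketch of the safer ordering, using a generic Django cache key rather than Zulip's actual cache helpers (names here are assumptions, not the real code):

```python
# Illustrative only, not Zulip's actual code: deleting a cache key inside
# an open transaction leaves a window where a concurrent request can
# refill the key from data that does not yet include the new row.
from django.core.cache import cache
from django.db import transaction


def flush_after_commit(cache_key: str) -> None:
    # Defer the delete until COMMIT, so any subsequent cache fill already
    # sees the newly committed user.
    transaction.on_commit(lambda: cache.delete(cache_key))
```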
Black 23 enforces some slightly more specific rules about empty line
counts and redundant parenthesis removal, but the result is still
compatible with Black 22.
(This does not actually upgrade our Python environment to Black 23
yet.)
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We change the do_create_user function to use the transaction.atomic
decorator instead of a `with` block. Due to this change, all
send_event calls are made inside transaction.on_commit.
Some other changes -
- Remove the transaction.atomic decorator from send_initial_realm_messages
since it is now called inside a transaction.
- Adjust the tests that check message events and notifications
to make sure on_commit callbacks are executed.
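A rough sketch of the resulting pattern (function bodies and the event payload below are placeholders, not the real implementation):

```python
from django.db import transaction


def send_event(realm: object, event: dict, user_ids: list) -> None:
    ...  # placeholder: enqueue the event for delivery to clients


@transaction.atomic
def do_create_user(realm: object, email: str, full_name: str) -> None:
    ...  # placeholder: create the UserProfile row and related objects
    # Because the whole function now runs in one atomic block, the event is
    # deferred with on_commit so that clients never learn about a user
    # whose row is not yet visible outside the transaction.
    transaction.on_commit(
        lambda: send_event(realm, {"type": "realm_user", "op": "add"}, [])
    )
```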
Since we want to use `accounts/new/send_confirm` to know how many
users actually register after visiting the register page, we
added it to Google Tag Manager, but GTM tracks every user
registration separately because the <email> in the URL makes each
registration URL unique, making it harder to track.
To solve this, we want to pass <email> as a GET parameter, which can
easily be filtered inside GTM using a regex so that all the
registrations can be tracked as one.
A missed message email notification, where the message is the welcome
message sent by the welcome bot on account creation, gets sent if
the user does not focus the browser tab during account creation.
No missed message email or push notifications should be sent for the
messages generated by the welcome bot.
`internal_send_private_message` accepts a parameter
`disable_external_notifications`, which is set to `True` when the sender
is the welcome bot.
A check is introduced in `trivially_should_not_notify` so that no
notification is sent if `disable_external_notifications` is true.
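A hedged sketch of the shape of that check (the dataclass fields and the final short-circuit below are assumptions; only the disable_external_notifications early return mirrors the description above):

```python
from dataclasses import dataclass


@dataclass
class UserMessageNotificationsData:
    user_id: int
    disable_external_notifications: bool = False

    def trivially_should_not_notify(self, acting_user_id: int) -> bool:
        # Senders such as the welcome bot opt out of external notifications
        # entirely, so neither email nor push notifications are enqueued.
        if self.disable_external_notifications:
            return True
        # ... other trivial short-circuits elided ...
        return self.user_id == acting_user_id
```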
TestCases are updated to include the `disable_external_notifications`
check in the early (False) return patterns of `is_push_notifiable` and
`is_email_notifiable`.
The query count is reduced by one for both `test_create_user_with_multiple_streams`
and `test_register`.
Reason: when the welcome bot sends its message after user creation,
`do_send_messages` calls `get_active_presence_idle_user_ids`, and the
`user_ids` list in `get_active_presence_idle_user_ids` remains empty if
`disable_external_notifications` is true, because `is_notifiable` returns
false.
`get_active_presence_idle_user_ids` calls `filter_presence_idle_user_ids`,
and since `user_ids` is empty, the query inside that function is not
executed.
MissedMessageHookTest updated.
Fixes: #22884
Additionally, migrate existing EditMessageTest to use this helper
method, with the side effect of migrating the tested flow from a
/json/messages URL to a /api/v1/messages URL.
The output message should cover both cases:
actual_count > expected_count and actual_count < expected_count.
The message now includes information for the case where
actual_query_count < expected_query_count.
Fixes: #23325
This adds a helper based on the testing pattern of using the "queries_captured"
context manager with "assert_length" to check the number of queries
executed, in order to prevent performance regressions.
It explains the rationale for checking the query count through an
"AssertionError" and prints the captured queries as assert_length does,
but in a format optimized for displaying the queries more readably.
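A minimal sketch of such a helper, assuming the existing queries_captured context manager from zerver.lib.test_helpers (the helper name and message wording below are illustrative):

```python
from contextlib import contextmanager
from typing import Iterator

from zerver.lib.test_helpers import queries_captured


@contextmanager
def assert_database_query_count(expected_count: int) -> Iterator[None]:
    with queries_captured() as queries:
        yield
    actual_count = len(queries)
    if actual_count != expected_count:
        # Explain why the count matters and list every captured query,
        # one per line, so regressions are easy to diagnose.
        listing = "\n".join(f"  {query}" for query in queries)
        raise AssertionError(
            f"Expected {expected_count} queries, but {actual_count} were run; "
            "an unexpected increase usually indicates a performance "
            f"regression (e.g. a missing bulk fetch):\n{listing}"
        )
```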
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This commit sets can_remove_subscribers_group to the admins system
group when creating streams, as that will be the default value
of this setting. In the future, we will provide an option to set the
value of this setting to any user group when creating streams
via the API or UI.
This allows us to use them with HttpResponse objects returned by
calling a view function directly.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We wrap methods of the Django test client for the test suite, and
type the keyword variadic arguments as `ClientArg`, since they might be
a mix of `bool` and `str`.
This is problematic when we call the original methods on the test
client and unpack the dictionary of keyword arguments: there is no
type guarantee that keys the test client requires to be bool will
actually be bool.
For example, you can call
`self.client_post(url, info, follow="invalid")` without getting a
mypy error, even though the Django test client requires `follow: bool`.
The unsafely typed keyword variadic arguments lead to errors within
the bodies of the wrapped test client functions when django-stubs
gets added, as we call `django_client.post` with `**kwargs`,
making it necessary to refactor these wrappers for type safety.
The approach here minimizes the need to refactor callers: we keep
`kwargs` variadic while changing its type from `ClientArg` to `str`,
after explicitly defining all the possible `bool` arguments that might
previously have appeared in `kwargs`. We also copy the defaults from
the Django test client, as they are unlikely to change.
The tornado test cases are also refactored due to the change in the
signature of `set_http_headers`, with `skip_user_agent` being added as
a keyword argument. We want to unconditionally set this flag to `True`
there because `HTTP_USER_AGENT` is not supported. This also removes an
unnecessary duplication of an argument.
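A sketch of the resulting wrapper shape (abbreviated; the real wrappers cover more methods and flags, and the header value below is a placeholder):

```python
from typing import Any

from django.test import Client


class ExampleTestCaseMixin:
    client: Client

    def client_post(
        self,
        url: str,
        info: Any = None,
        *,
        follow: bool = False,  # defaults copied from the Django test client
        secure: bool = False,
        skip_user_agent: bool = False,
        **extra: str,  # everything else must now be a plain string
    ) -> Any:
        if not skip_user_agent:
            # Placeholder value; the real helper sets this header centrally.
            extra.setdefault("HTTP_USER_AGENT", "ZulipTestCase")
        return self.client.post(url, info, follow=follow, secure=secure, **extra)
```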
This is a part of the django-stubs refactorings.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We previously parsed any request with method other than {GET, POST} and
Content-Type other than multipart/form-data as if it were
application/x-www-form-urlencoded.
Check that Content-Type is application/x-www-form-urlencoded before
parsing the body that way. Restrict this logic to {DELETE, PATCH,
PUT} (having a body at all doesn’t make sense for {CONNECT, HEAD,
OPTIONS, TRACE}).
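A hedged sketch of the described check (not the exact Zulip middleware code):

```python
from django.http import HttpRequest, QueryDict


def parse_body_as_form_data(request: HttpRequest) -> QueryDict:
    # Only DELETE/PATCH/PUT bodies are considered, and only when the client
    # actually declared the body as urlencoded form data.
    if request.method in ("DELETE", "PATCH", "PUT") and (
        request.content_type == "application/x-www-form-urlencoded"
    ):
        return QueryDict(request.body, encoding=request.encoding)
    return QueryDict()  # empty: nothing to parse
```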
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Since `HttpResponse` is an inaccurate representation of the
monkey-patched response object returned by the Django test client, we
replace it with `_MonkeyPatchedWSGIResponse` as `TestHttpResponse`.
This replaces `HttpResponse` in zerver/tests, analytics/tests, corporate/tests,
zerver/lib/test_classes.py, and zerver/lib/test_helpers.py with
`TestHttpResponse`. Several files in zerver/tests are excluded
from this substitution.
This commit is auto-generated by a script, with manual adjustments on certain
files squashed into it.
This is a part of the django-stubs refactorings.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This commit adds users to the appropriate system user group
based on their role. We also update the user's groups when
changing the user's role.
We also add a migration to add existing users to the appropriate
user groups.
This commit adds update_users_in_full_members_system_group, which
is currently used to update the full members group when changing
a user's role. This function will be modified in the next commit so
that it can also be used to update the full members group when changing
the realm's waiting_period_threshold setting.
This also fixes a warning from
RealmExportTest.test_endpoint_local_uploads: “ResourceWarning:
unclosed file <_io.BufferedReader
name='/srv/zulip/var/…/test-export.tar.gz'>”.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We now complain if a test author sends a stream message
that does not result in the sender getting a
UserMessage row for the message.
This is basically 100% equivalent to complaining that
the author failed to subscribe the sender to the stream
as part of the test setup, as far as I can tell, so the
AssertionError instructs the author to subscribe the
sender to the stream.
We exempt bots from this check, although it is
plausible we should only exempt the system bots like
the notification bot.
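The shape of the check, as an illustrative sketch (the helper name and message text are paraphrased, not the exact test-suite code):

```python
from zerver.models import UserMessage, UserProfile


def check_sender_got_user_message(
    sender: UserProfile, message_id: int, allow_unsubscribed_sender: bool = False
) -> None:
    if sender.is_bot or allow_unsubscribed_sender:
        return
    if not UserMessage.objects.filter(
        user_profile=sender, message_id=message_id
    ).exists():
        raise AssertionError(
            "The sender did not get a UserMessage row; subscribe the sender "
            "to the stream in the test setup (or pass "
            "allow_unsubscribed_sender=True)."
        )
```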
I considered auto-subscribing the sender to the stream,
but that can be a little more expensive than the
current check, and we generally want test setup to be
explicit.
If there is some legitimate way that a subscribed human
sender can't get a UserMessage, then we probably want
an explicit test for that, or we may want to change the
backend to just write a UserMessage row in that
hypothetical situation.
For most tests, including almost all the ones fixed
here, the author just wants their test setup to
realistically reflect normal operation, and often devs
may not realize that Cordelia is not subscribed to
Denmark or not realize that Hamlet is not subscribed to
Scotland.
Some of us don't remember our Shakespeare from high
school, and our stream subscriptions don't even
necessarily reflect which countries the Bard placed his
characters in.
There may also be some legitimate use case where an
author wants to simulate sending a message to an
unsubscribed stream, but for those edge cases, they can
always set allow_unsubscribed_sender to True.
It is confusing to have the plan type constants not be namespaced
by the thing they represent. We already have a namespacing
convention in place for constants, so we should use it for
Realm.plan_type as well.
For users who are not logged in and for those who don't have
'prefers_web_public_view' set in session, we redirect them
to the default login page, where they can choose to log in
as a spectator or as an authenticated user.
This commit adds a can_create_web_public_streams helper
in models.py, which will be used to validate whether a
user is allowed to create a web-public stream.
This commit also adds checks for Realm.POLICY_OWNERS_ONLY
in check_has_permission_policies.
This commit adds tests for POLICY_EVERYONE and POLICY_NOBODY
in the check_has_permission_policies test. The original code
used these values, but they were not covered by tests.
This fixes a problem where we could not import zerver.lib.streams from
zerver.lib.message, which would otherwise be reasonable, because the
former implicitly imported many modules due to this issue.
Our convention is to always have authenticate() called with a request
object. We need to be consistent with that in tests too, to avoid test
failures resulting from breaking that assumption.
We modify assert_login_failure to call client.login() in the same way as
the other similar helpers - with a properly initialized HttpRequest
instance.
This fixes a bug where email notifications were sent for wildcard
mentions even if the `enable_offline_email_notifications` setting was
turned off.
This was because the `notification_data` class incorrectly considered
`wildcard_mentions_notify` as an independent setting, instead of a wrapper
around `enable_offline_email_notifications` and `enable_offline_push_notifications`.
Also add a test for this case.
Previously, we checked for the `enable_offline_email_notifications` and
`enable_offline_push_notifications` settings (which determine whether the
user will receive notifications for PMs and mentions) just before sending
notifications. This had a few problems:
1. We do not have access to all the user settings in the notification
handlers (`handle_missedmessage_emails` and `handle_push_notifications`),
and therefore, we cannot correctly determine whether the notification should
be sent. Checks like the following, which existed previously, would, for
example, incorrectly skip sending notifications even when stream email
notifications are enabled:
```
if not receives_offline_email_notifications(user_profile):
    return
```
With this commit, we simply do not enqueue notifications if the "offline"
settings are disabled, which fixes that bug.
Additionally, this also fixes a bug with the "online push notifications"
feature, which was that if someone were to:
* turn off notifications for PMs and mentions (`enable_offline_push_notifications`)
* turn on stream push notifications (`enable_stream_push_notifications`)
* turn on "online push" (`enable_online_push_notifications`)
then, they would still receive notifications for PMs when online.
This isn't how the "online push enabled" feature is supposed to work;
it should only act as a wrapper around the other notification settings.
The buggy code was this in `handle_push_notifications`:
```
if not (
receives_offline_push_notifications(user_profile)
or receives_online_push_notifications(user_profile)
):
    return
# send notifications
```
This commit removes that code, and extends our `notification_data.py` logic
to cover this case, along with tests.
2. The name for these settings is slightly misleading. They essentially
talk about "what to send notifications for" (PMs and mentions), and not
"when to send notifications" (offline). This commit improves this condition
by restricting the use of this term only to the database field, and using
clearer names everywhere else. This distinction will be important to have
non-confusing code when we implement multiple options for notifications
in the future as dropdown (never/when offline/when offline or online, etc).
3. We should ideally re-check all notification settings just before the
notifications are sent. This is especially important for email notifications,
which may be sent a long time after the message was sent. We will
in the future add code to thoroughly re-check settings before sending
notifications in a clean manner, but temporarily not re-checking isn't
a terrible scenario either.
This fixes a batch of mypy errors of the following format:
'Item "None" of "Optional[Something]" has no attribute "abc"
Since we have already been recklessly using these attritbutes
in the tests, adding assertions beforehand is justified presuming
that they oughtn't to be None.
* `stream_name`: This field is actually redundant. The email/push
notification handlers don't use this field from the dict, and they
query for the message anyway, so we're safe in deleting this field,
even if we end up needing the stream name in the future.
* `timestamp`: This is totally unused by the email/push notification
handlers, and isn't sent to push clients either.
* `type`: This is used only by the push notifications handler, since only
push notifications can be revoked, so we move that logic to run only there.
This change allows check_webhook to raise an error when a message is
sent but none was expected, and vice versa. This is useful when a
payload is not expected to produce any output message.
In addition to event filtering, we add support for registering the supported
events of a webhook integration using the webhook_view decorator.
The event types are stored directly on the view function as a function
attribute, and can later be accessed given the module path and the view
function name (which are already specified in integrations.py).
Note that the WebhookTestCase doesn't know the name of the view function
or the module of the webhook. WEBHOOK_DIR_NAME needs to be overridden
if we want exceptions to be raised when one of our test functions triggers
an unspecified event, but this practice is not enforced.
all_event_type does not need to be given even if event filters are used
in the webhook. But if a list of event types is given, it will be possible
for us to include it in the documentation while ensuring that all the
tested events are included in it (but not vice versa at the current stage,
as we do not yet require all the events in the list to be tested).
This guarantees that we can always access the list of all the tested
events of a webhook. This feature will later be plumbed into macros to
display all event types dynamically in doc.md.
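A rough sketch of how the event types can be attached to the view function (the attribute name used here is an assumption):

```python
from typing import Callable, List, Optional, TypeVar

ViewFuncT = TypeVar("ViewFuncT", bound=Callable[..., object])


def webhook_view(
    webhook_client_name: str, all_event_types: Optional[List[str]] = None
) -> Callable[[ViewFuncT], ViewFuncT]:
    def decorator(view_func: ViewFuncT) -> ViewFuncT:
        # The supported event types are stored directly on the function
        # object, so they can be looked up later given the module path and
        # view function name recorded in integrations.py.
        view_func._all_event_types = all_event_types  # type: ignore[attr-defined]
        return view_func

    return decorator
```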
We will later use this data to include text like:
`<sender> mentioned @<user_group>` instead of the current
`<sender> mentioned you` when someone mentions a user group
the current user is a part of, in email/push notifications.
Part of #13080.
Since FIXTURE_DIR_NAME is the name of the folder that contains the view
and test modules of the webhook and another folder called "fixtures" that
stores the fixtures, it is more appropriate to call it WEBHOOK_DIR_NAME,
especially when we want to refer to the view module using this variable.
* Modify `maybe_enqueue_notifications` to take in an instance of the
dataclass introduced in 951b49c048.
* The `check_notify` tests tested the "when to notify" logic in a way
which involved `maybe_enqueue_notifications`. To simplify things, we've
earlier extracted this logic in 8182632d7e.
So, we just kill off the `check_notify` test, and keep only those parts
which verify the queueing and return value behavior of that function.
* We retain the missedmessage_hook and message_edit_notifications
tests, since they are more integration-style.
* There's a slightly subtle change with the missedmessage_hook tests.
Before this commit, we short-circuited the hook if the sender was muted
(5a642cea11).
With this commit, we delegate the check to our dataclass methods.
So, `maybe_enqueue_notifications` will be called even if the sender was
muted, and the test needs to be updated.
* In our test helper `get_maybe_enqueue_notifications_parameters`, which
generates default values for testing `maybe_enqueue_notifications` calls,
we keep `message_id`, `sender_id`, and `user_id` as required arguments,
so that the tests are super-clear and avoid accidental false positives
(a sketch of this helper follows the list below).
* Because `do_update_embedded_data` also sends `update_message` events,
we deal with that case with some hacky code for now. See the comment
there.
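A hedged sketch of that helper; apart from the three required IDs, the field names and defaults below are illustrative, not the real dataclass definition.

```python
from dataclasses import dataclass
from typing import Any, Dict


@dataclass
class UserMessageNotificationsData:
    user_id: int
    # Per-message notification flags; all default to False so a test only
    # enables exactly what it intends to assert about.
    pm_email_notify: bool = False
    pm_push_notify: bool = False
    mention_email_notify: bool = False
    mention_push_notify: bool = False


def get_maybe_enqueue_notifications_parameters(
    *, message_id: int, sender_id: int, user_id: int, **flags: bool
) -> Dict[str, Any]:
    # The three IDs are deliberately required, so each test spells out who
    # is involved and accidental false positives are avoided.
    return dict(
        user_notifications_data=UserMessageNotificationsData(user_id=user_id, **flags),
        message_id=message_id,
        sender_id=sender_id,
    )
```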
This mostly completes the extraction of the "when to notify" logic into
our new `notification_data` module.
We will later consistently use these functions to check for notifiable
messages in the message send and event_queue code.
We have these functions accept the `sender_id` so that we can avoid the
`private_message = message["type"] == "private" and user_id != sender_id`
wizardry.
Further commits will hook `send_event` calls to `on_commit`
in some cases. This change will make it easier to test such
situations.
We don't need to actually capture the callbacks, because the
events sent are already tested via the list in which they are
captured by `tornado_redirected_to_list`.
Checked that the email looked OK in `/emails` for both creating a realm and
registering within an existing one.
Not sure the zerver/tests/test_i18n.py test has been suppressed correctly.
Fixes #17786.
This will make it easier to systematically use Django's
`captureOnCommitCallbacks` in tests outside of the main
`test_events` file that involve assertions on events.
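For reference, the basic usage pattern of Django's captureOnCommitCallbacks in a test (the body of the with block is a placeholder):

```python
from django.test import TestCase


class ExampleOnCommitTest(TestCase):
    def test_events_sent_after_commit(self) -> None:
        with self.captureOnCommitCallbacks(execute=True) as callbacks:
            ...  # placeholder: call code that defers send_event via transaction.on_commit
        # With execute=True, the deferred callbacks have already run here,
        # so any events they enqueued can be asserted on as usual.
        self.assertIsInstance(callbacks, list)
```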
Now that we are passing source realm's id instead of string_id in
source realm selector, it makes sense to rename the "source_realm" field
to "source_realm_id".
This allows access to be more configurable than just setting one
attribute. This can be configured via the
AUTH_LDAP_ADVANCED_REALM_ACCESS_CONTROL setting.
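A hypothetical configuration sketch; the schema shown here is an assumption and should be checked against the setting's documentation.

```python
# Restrict access to the realm with subdomain "zulip" to LDAP users whose
# "department" attribute equals "engineering" (illustrative values only).
AUTH_LDAP_ADVANCED_REALM_ACCESS_CONTROL = {
    "zulip": [{"department": "engineering"}],
}
```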
We refactor check_has_permission_policies to check all user roles for
each value of the policy. This will help in handling a case where a guest
is allowed to do something but a moderator isn't.
We need to do user_profile.refresh_from_db() in validation_func because
the realm object from user_profile is used in has_permission, and we need
an updated realm instance after changing the policy.
This is a follow-up commit to 9a4c58cb.
The tests for can_create_streams and can_subscribe_other_users share a
lot of code, and we deduplicate it by extracting most of the code into
check_has_permission_policies, which will now be called by the two
tests test_can_create_streams and test_can_subscribe_other_users.
This will also help avoid code duplication when we convert more
policies to use COMMON_POLICY_TYPES.
Note that at this point, it's not possible to create moderator users;
this just will make it easier to write tests for logic involving them
as we develop the feature.