TestMaybeSendToRegistration needs tweaking here, because it wasn't
setting the subdomain for the dummy request, so
maybe_send_to_registration was actually running with realm=None, which
is not right for these tests.
Also, test_sso_only_when_preregistration_user_exists was creating
PreregistrationUser without setting the realm, which was also incorrect.
create_preregistration_user is a footgun, because it takes the realm
from the request. The calling code is supposed to validate that
registration for the realm is allowed first, but it can sometimes do
that validation on a "realm" obtained from something other than the
request - and later call create_preregistration_user anyway, thus
creating the prereg user on an unvalidated request.realm.
It's safer, and makes more sense, for this function to take the intended
realm as an argument instead of the entire request. It follows that the
same should be done for prepare_activation_url.
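A minimal sketch of the safer shape (the signature here is hypothetical
and pared down; the real function carries more parameters):
```
from typing import Optional

from zerver.models import PreregistrationUser, Realm

# Hypothetical sketch: receive the already-validated realm explicitly,
# instead of silently reading request.realm.
def create_preregistration_user(
    email: str, realm: Optional[Realm], password_required: bool = True
) -> PreregistrationUser:
    # Callers must have validated that registration in `realm` is
    # allowed before calling this.
    return PreregistrationUser.objects.create(
        email=email, realm=realm, password_required=password_required
    )
```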
In these tests, the code ends up with a logged-in session where one is
undesired - later on, these tests make requests to a different
subdomain, where a logged-in session is not supposed to exist. This
leads to an unintended, strange situation where request.user is a user
from the old subdomain while the request itself is to a *different*
subdomain. This throws off get_realm_from_request, which will return
request.user.realm - which is not what these tests want, and can make
them fail when some of the production code being tested switches to
using get_realm_from_request instead of get_realm(get_subdomain).
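To illustrate the difference, a sketch of the two resolution strategies
(the bodies are illustrative, not the real implementations):
```
from django.http import HttpRequest

from zerver.lib.subdomains import get_subdomain
from zerver.models import Realm, get_realm

# Illustrative: resolve the realm purely from the request's subdomain.
def realm_by_subdomain(request: HttpRequest) -> Realm:
    return get_realm(get_subdomain(request))

# Illustrative get_realm_from_request-style helper: prefers the
# logged-in user's realm. With a stale session from another subdomain,
# this returns the *old* realm, not the one the request is addressed to.
def realm_by_request(request: HttpRequest) -> Realm:
    if request.user.is_authenticated:
        return request.user.realm
    return get_realm(get_subdomain(request))
```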
The codepaths for joining an organization via a multi-use invitation
(accounts_home_from_multiuse_invite and maybe_send_to_registration)
weren't validating whether
the organization the invite was generated for matches the organization
the user attempts to join - potentially allowing an attacker with access
to organization A to generate a multi-use invite and use it to join
organization B within the same deployment, which they shouldn't have
access to.
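The fix amounts to a check like the following sketch (helper name and
error message are illustrative):
```
from zerver.lib.exceptions import JsonableError
from zerver.models import MultiuseInvite, Realm

# Illustrative: refuse a multi-use invite minted for another organization.
def check_multiuse_invite_realm(invite: MultiuseInvite, realm: Realm) -> None:
    if invite.realm_id != realm.id:
        raise JsonableError("This invitation link belongs to a different organization.")
```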
The database value for expiry_date is None for an invite that never
expires; clients send -1 as the value in the API, similar to the message
retention setting.
Also, when invite_expire_in_days is passed as an argument through
various functions, -1 is used for the "Never expires" option, since
invite_expire_in_days is an optional argument in some functions and thus
None cannot double as the "Never expires" value.
A migration should not import zerver.lib.streams at all, but this
solves the immediate problem with check-database-compatibility.py.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Removes `client` parameter from backend tests using the
`POST /messages` endpoint when the test can use the default
`User-Agent` as the client, which is set to `ZulipMobile` for API
requests and a browser user agent string for web app requests.
Various tests use the `PATCH /stream/{stream_id}` endpoint in
`test_subs.py`. Because the stream ID is in the URL path, it does not
also need to be passed as a query parameter.
Removes instances of `stream_name` being passed as a query
parameter to tests.
Removes the `topic_name` parameter in `test_message_flags.py` where it
is being passed to a test for marking a stream as read, because it is
an ignored parameter for that endpoint.
This was only used for upgrading from Zulip < 1.9.0, which is no
longer possible because Zulip < 2.1.0 had no common supported
platforms with current main.
If we ever want this optimization for a future migration, it would be
better implemented using Django merge migrations.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
After failing to notice a place where we wanted to hide timezone
information, we decided to add timezones to some of the test
users, so that we can better consider the effects of timezones
when manually testing.
Testing:
* ran populate_db and confirmed users had timezones in the UI
* updated test_populate_db.py
Adds `realm_web_public_access_enabled` as a realm-specific server
setting potentially returned by the `/get-server-settings` endpoint
so that clients that support browsing of web-public stream content
without an account can generate a login page that supports that
type of access.
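A sketch of a client consulting the setting via GET
/api/v1/server_settings (the key name is the one added here; the
surrounding code is illustrative):
```
import requests

# Illustrative: decide whether to offer a "browse without an account"
# entry point based on the server's advertised settings.
def supports_web_public_access(server_url: str) -> bool:
    settings = requests.get(f"{server_url}/api/v1/server_settings").json()
    # The key is realm-specific and absent on older servers, so default
    # to False when it is missing.
    return settings.get("realm_web_public_access_enabled", False)
```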
Various backend tests use the `PATCH /messages/{msg_id}` endpoint.
For that endpoint, the message ID is encoded in the URL path and
ignored if provided as a parameter in the query. Verified that the
tests were providing the same message ID to both the path and the
query, and then removed the ignored query parameter.
This commit refactors the get_user_by_email function to use
access_user_by_email, which is similar to the already existing
access_user_by_id, and thus uses the recently added get_user_data
function.
We also remove the unnecessary check for email, as email will always be
passed to this endpoint.
Preparatory commit for #10970.
This commit adds get_user_data, which is called by get_members_backend
to compute the client_gravatar value and then return the data of a
single user or of all accessible users.
This function will also be used by get_user_by_email in further
commits.
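A sketch of the helper's shape (the get_raw_user_data call and its
exact keyword arguments are assumptions):
```
from typing import Any, Dict, Optional

from zerver.lib.users import get_raw_user_data
from zerver.models import UserProfile

# Sketch: compute the data once, then return either one user's record or
# all accessible users, so both callers share the same code path.
def get_user_data(
    user_profile: UserProfile,
    include_custom_profile_fields: bool,
    client_gravatar: bool,
    target_user: Optional[UserProfile] = None,
) -> Dict[str, Any]:
    members = get_raw_user_data(
        user_profile.realm,
        user_profile,
        target_user=target_user,
        client_gravatar=client_gravatar,
        user_avatar_url_field_optional=False,
        include_custom_profile_fields=include_custom_profile_fields,
    )
    if target_user is not None:
        # Single-user form, as used by get_user_by_email.
        return {"user": members[target_user.id]}
    return {"members": [members[k] for k in members]}
```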
When pulling batches out of the ScheduledEmail list in a single
transaction, an unexpected failure to send an email will result in the
whole batch getting retried. This will result in infinite email
sending loops.
Pull a single row off at a time and send it. We continue without
retries to the next email on EmailNotDeliveredException, but will
retry infinitely on other exceptions.
Fixes: #20943.
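A sketch of the resulting one-at-a-time loop (the delivery helper here
is hypothetical; the exception is the one named above):
```
from django.db import transaction
from django.utils.timezone import now as timezone_now

from zerver.lib.send_email import EmailNotDeliveredException
from zerver.models import ScheduledEmail

# Sketch: claim and send one due row per transaction, so one bad email
# cannot cause a whole batch to be re-queued.
def send_due_scheduled_emails() -> None:
    while True:
        with transaction.atomic():
            email = (
                ScheduledEmail.objects.filter(scheduled_timestamp__lte=timezone_now())
                .select_for_update()
                .first()
            )
            if email is None:
                return
            try:
                deliver_scheduled_email(email)  # hypothetical send helper
            except EmailNotDeliveredException:
                pass  # no retry: continue to the next email
            # Any other exception propagates; the row survives the
            # rollback and is retried on the next pass.
            email.delete()
```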
Putting all of the logic in a `finally` block is equivalent to a bare
`except` block, which silently consumes all exceptions.
Move only the most-necessary parts into the `except`; this lets
`BadImageError` exceptions from `zerver/lib/upload.py` escape, allowing
the generic "Image file upload failed" message to be replaced with a
more specific one.
It also allows unexpected exceptions, as the previous commit resolved,
to escape and 500. This lets them be detected and resolved, rather than
giving users a silently bad experience.
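A sketch of the narrowed handling, using the names from the snippet
quoted below (assuming upload_emoji_image from zerver/lib/upload.py):
```
from zerver.lib.upload import upload_emoji_image

# Sketch of the fix: only the cleanup lives in `except`, and the
# exception is re-raised, so BadImageError yields a specific error
# message and unexpected bugs surface as 500s instead of being
# silently swallowed.
def add_emoji(image_file, emoji_file_name, author, realm_emoji):
    try:
        is_animated = upload_emoji_image(image_file, emoji_file_name, author)
    except Exception:
        realm_emoji.delete()
        raise
    return is_animated
```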
5dab6e9d31 began honoring the list of disposals for every frame.
Unfortunately, passing a list of disposals for a non-animated image
raises an exception:
```
File "zerver/lib/upload.py", line 212, in resize_emoji
image_data = resize_gif(im, size)
File "zerver/lib/upload.py", line 165, in resize_gif
frames[0].save(
File "[...]/PIL/Image.py", line 2212, in save
save_handler(self, fp, filename)
File "[...]/PIL/GifImagePlugin.py", line 605, in _save
_write_single_frame(im, fp, palette)
File "[...]/PIL/GifImagePlugin.py", line 506, in _write_single_frame
_write_local_header(fp, im, (0, 0), flags)
File "[...]/PIL/GifImagePlugin.py", line 647, in _write_local_header
disposal = int(im.encoderinfo.get("disposal", 0))
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
```
`check_add_realm_emoji` calls this as:
```
try:
    is_animated = upload_emoji_image(image_file, emoji_file_name, author)
emoji_uploaded_successfully = True
finally:
if not emoji_uploaded_successfully:
realm_emoji.delete()
return None
# ...
```
This is equivalent to dropping _all_ exceptions silently. As such,
Zulip has silently rejected all non-animated images larger than 64x64
since 5dab6e9d31.
Adjust to only pass a single disposal if there are no additional
frames. Add a test for non-animated images, which requires also fixing
the incidental bug that all GIF images were being recorded as animated,
regardless of whether they had more than one frame.
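A sketch of the adjusted save call (following Pillow's GIF writer,
which calls int() on the disposal for single-frame saves, per the
traceback above):
```
from typing import List

from PIL import Image

# Sketch: pass the disposal list only when the image is actually
# animated; a single-frame GIF gets a scalar disposal value.
def save_gif(frames: List[Image.Image], disposals: List[int], out_path: str) -> None:
    frames[0].save(
        out_path,
        save_all=True,
        append_images=frames[1:],
        disposal=disposals if len(frames) > 1 else disposals[0],
        loop=0,
    )
```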
The deferred_work events can't be processed in the test suite
environment, since it tries to make requests to the host <realm.uri>
which is "zulip.testserver" and obviously not going to work. Since the
test suite base data set has no animated emoji, there's nothing to do,
and we can skip this step.
This code only runs when applying the migration to an already
provisioned test db - because during an initial db setup there are no
realms yet, so no events get pushed by the migration.