Floating upwards caused a weird flickering effect if the mouse floated
onto the tooltip's body. Floating to the left is still reasonable UI,
and there is also guaranteed to be space on that side.
Fixes #16438.
This commit adds a migration which removes the default status of
existing default private streams, i.e. the private streams still
exist, but they are no longer default.
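For illustration, a minimal sketch of such a data migration, assuming
Zulip's DefaultStream/Stream models and an invite_only flag for private
streams (the dependency name is a placeholder):

    from django.db import migrations

    def remove_default_status_of_private_streams(apps, schema_editor):
        # Default streams are tracked via DefaultStream rows pointing at a
        # Stream; private streams have invite_only=True. Deleting these rows
        # makes the streams non-default while leaving the streams themselves
        # untouched.
        DefaultStream = apps.get_model("zerver", "DefaultStream")
        DefaultStream.objects.filter(stream__invite_only=True).delete()

    class Migration(migrations.Migration):
        dependencies = [
            ("zerver", "XXXX_previous_migration"),  # placeholder
        ]
        operations = [
            migrations.RunPython(remove_default_status_of_private_streams),
        ]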
This commit replaces mock.patch with assertLogs().
* Adds return value to do_rest_call() in outgoing_webhook.py, to
support asserting log output in test_outgoing_webhook_system.py.
* Logs are not asserted in test_realm.py because that would require
users to be queried using users=User.objects.filter(realm=realm), and
the order of the resulting queryset varies for each run.
* In test_decorators.py, mock.patch is not replaced because I'm not
sure it's worth the effort, as what's being patched is a function's
return value.
Tweaked by tabbott to set proper mypy types.
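As a rough sketch of the pattern being adopted (the logger call and
message are illustrative, not the actual ones from these tests):

    import logging
    from unittest import TestCase, mock

    class ExampleTest(TestCase):
        def test_logging(self) -> None:
            # Before: silence the logger, losing the chance to check output.
            with mock.patch("logging.warning"):
                logging.warning("something went wrong")

            # After: capture and assert on the log output instead.
            with self.assertLogs(level="WARNING") as logs:
                logging.warning("something went wrong")
            self.assertEqual(logs.output, ["WARNING:root:something went wrong"])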
This commit moves the wildcard mentions documentation to a top-level page.
Edited by tabbott to deduplicate with the existing docs, and add cross-links.
Then because the ID is now part of the draft dict, we can
(and do) change the structure of the "drafts" parameter
returned from `GET /drafts` from an object (mapping ID to
data) to an array.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
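To illustrate the shape change (field values are made up):

    # Old response shape: an object mapping draft ID to draft data.
    old_response = {
        "drafts": {
            "11": {"type": "stream", "topic": "sync drafts", "content": "hi"},
        },
    }

    # New response shape: an array of draft dicts, each carrying its own id.
    new_response = {
        "drafts": [
            {"id": 11, "type": "stream", "topic": "sync drafts", "content": "hi"},
        ],
    }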
Sometimes we don't need to specify the expected_drafts field.
So by removing it, we can reduce the clutter a bit.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
Now the timestamp returned in a draft dict will always be an int.
The endpoints will still accept either an int or a float.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
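A tiny sketch of the intended normalization (the helper name is made
up):

    def normalize_draft_timestamp(timestamp: float) -> int:
        # Clients may send an int or a float; the draft dict we return
        # always carries a whole-second int.
        return int(timestamp)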
This fixes a bug where the autocomplete for topics deleted all the
text content if the topic jump was used without entering any text.
The topic typeahead is automatically set up on entering the ">" key
for stream completions. Therefore, there is a case where the user can
select a typeahead item without entering any text.
Thus the token length will be 0 and `beginning.slice(0, -0)` returns
"" instead of the `beginning` string. The case is only relevant for
"topic_list" completion as we don't set up the typeahead for empty
strings.
Fix this by reverting a hunk of 48f5e5179a and adding a test.
Fixes #16599.
Co-authored-by: Rohitt Vashishtha <aero31aero@gmail.com>
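The same slicing pitfall is easy to reproduce in Python terms (purely
illustrative; the actual fix is in the JavaScript typeahead code):

    beginning = "#**design** >"
    token = ""  # a topic suggestion was selected without typing anything

    # -len(token) is -0, which equals 0, so the slice keeps nothing at all
    # instead of keeping the whole string.
    assert beginning[: -len(token)] == ""

    # Guarding the zero-length case preserves the existing text.
    kept = beginning if len(token) == 0 else beginning[: -len(token)]
    assert kept == beginning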
Our test-backend validation confirms that we don't log anything to
stdout in the tests, so the fact that CI passes with this removal
shows there was nothing being logged.
Refactor test_video_link_compose_clicked into separate tests for:
* No video provider.
* Jitsi as the provider.
* Zoom as the provider.
* BigBlueButton as the provider.
Rename zoom_xhrs to video_call_xhrs.
Rename abort_zoom to abort_video_callbacks.
Delete callbacks from video_call_xhrs when they have been aborted.
Move generation of video_call_id in the .videolink handler into
the Jitsi video call handling block as it is the only place it is
referenced.
Boto3 does not allow setting the endpoint URL from
the config file. Thus we create a Django setting
(`S3_ENDPOINT_URL`) which is passed to the service
clients and resources of `boto3.Session`.
We also update the uploads-backend documentation
and remove the config environment variable, since
AWS now supports the SIGv4 signature format by default.
The region name is also passed as a parameter instead
of creating a config file for just this value.
Fixes #16246.
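A hedged sketch of how the new setting might be threaded through to
boto3 (the other setting names and call sites here are assumptions):

    import boto3
    from django.conf import settings

    session = boto3.Session(settings.S3_KEY, settings.S3_SECRET_KEY)

    # Pass the endpoint URL and region as parameters rather than via a boto
    # config file; S3_ENDPOINT_URL may be None to use the AWS default.
    client = session.client(
        "s3",
        region_name=settings.S3_REGION,
        endpoint_url=settings.S3_ENDPOINT_URL,
    )
    resource = session.resource(
        "s3",
        region_name=settings.S3_REGION,
        endpoint_url=settings.S3_ENDPOINT_URL,
    )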
The class names need to be renamed even if we are not about to run
puppet ourselves; otherwise, deployments which rely on running puppet
themselves will still have the wrong class names.
These are respected by `urllib`, and thus also `requests`. We set
`HTTP_proxy`, not `HTTP_PROXY`, because the latter is ignored in
situations which might be running under CGI -- in such cases it may be
coming from the `Proxy:` header in the request.
The `no_proxy` parameter does not work to remove proxying[1]; in this
case, since all requests with this adapter are to the internal Tornado
process, explicitly pass in an empty set of proxies to disable
proxying.
[1] https://github.com/psf/requests/issues/4600
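A minimal sketch of the adapter-level override described above (the
class name and mount prefix are hypothetical):

    import requests
    from requests.adapters import HTTPAdapter

    class NoProxyAdapter(HTTPAdapter):
        def send(self, request, **kwargs):
            # Drop any proxies inherited from HTTP_proxy/https_proxy; every
            # request through this adapter goes to the local Tornado process.
            kwargs["proxies"] = {}
            return super().send(request, **kwargs)

    session = requests.Session()
    session.mount("http://localhost:9993/", NoProxyAdapter())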
Not all of the workers are known to be safe to interrupt; they might
leave inconsistent state. As such, terminating them with timeouts
should currently only be a last resort against stalled queues, not a
regular occurrence.
Since the exception can be triggered at arbitrary places in the stack,
depending on when the alarm happens to fire, these exceptions do not
often group together.
Explicitly group them together, based only on which queue the work
is in.
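Assuming Sentry is the aggregator doing the grouping, forcing a
per-queue fingerprint could look roughly like this (the exception and
function names are made up):

    import logging

    import sentry_sdk

    class WorkerTimeoutException(Exception):
        # Hypothetical: raised when the SIGALRM-based timeout fires.
        pass

    def consume_with_timeout(queue_name: str) -> None:
        try:
            ...  # consume a batch of events, guarded by the alarm
        except WorkerTimeoutException:
            with sentry_sdk.push_scope() as scope:
                # Group all timeout reports by queue name only, regardless
                # of where in the stack the alarm happened to fire.
                scope.fingerprint = ["worker-timeout", queue_name]
                logging.exception("%s: timed out consuming events", queue_name)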