Stop using `access_user_group_by_id` in notification codepaths, as it
is meant to be used to check for _write_ access, not read
access (which is not limited). In the notification codepaths, there
are no ACLs to apply, and the ID is known-good; just load it
directly. The `for_mention` flag is removed, as it was not used in the
mention codepaths at all, only the notification ones.
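A minimal sketch of the replacement, assuming Zulip's Django models
(the helper name here is illustrative, not the actual code):

from zerver.models import Realm, UserGroup

def load_user_group_for_notifications(user_group_id: int, realm: Realm) -> UserGroup:
    # No ACL applies on this path and the ID is known-good, so a plain
    # lookup replaces the write-access check in access_user_group_by_id.
    return UserGroup.objects.get(id=user_group_id, realm=realm)
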
get_remote_server_by_uuid (called in validate_api_key) raises
ValidationError when given an invalid UUID due to how Django handles
UUIDField. We don't want that exception and prefer the ordinary
DoesNotExist exception to be raised.
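A sketch of the preferred behavior, assuming Zulip's push-bouncer
model; the exact fix may differ:

from django.core.exceptions import ValidationError

from zilencer.models import RemoteZulipServer

def get_remote_server_by_uuid(uuid: str) -> RemoteZulipServer:
    try:
        return RemoteZulipServer.objects.get(uuid=uuid)
    except ValidationError:
        # UUIDField raises ValidationError on malformed input; callers
        # expect the usual DoesNotExist instead.
        raise RemoteZulipServer.DoesNotExist()
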
Fix another tidy error caused by 1e4e6a09af23; as also noted in
f9a39b6703, these resources are necessary so that tidy does not clean
up smokescreen and then force a recompilation of it.
APNs payloads nest the Zulip-specific data more deeply than the top
level, where Android notifications keep it. This led to APNs data
silently never being truncated; this case was not caught in tests
because the mocks provided data in the Android structure, not the
APNs one.
Adjust to look in the appropriate place within the APNs data, and
truncate that.
This replaces the temporary (and testless) fix in
24b1439e93 with a more permanent
fix.
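A rough sketch of the difference and the adjustment; the field names
here are illustrative, not the exact wire format:

def truncate_content(payload: dict, apns: bool, limit: int = 200) -> None:
    # Android (GCM/FCM) payloads keep the Zulip data at the top level;
    # APNs nests it further down, so look there before truncating.
    data = payload["custom"]["zulip"] if apns else payload
    content = data.get("content", "")
    if len(content) > limit:
        data["content"] = content[: limit - 1] + "…"
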
Instead of checking whether the user is a bot just before
sending the notifications, we now simply don't enqueue
notifications for bots. This is done by sending a list
of bot IDs to the event_queue code, just like other
lists which are used for creating NotificationData objects.
Credit @andersk for the test code in `test_notification_data.py`.
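In outline (with illustrative names, not the actual code), the
filtering now happens at enqueue time:

def filter_notification_recipients(user_ids: list[int], bot_user_ids: set[int]) -> list[int]:
    # Bots are dropped here, so the sending code never has to check
    # "is this user a bot?" just before delivery.
    return [user_id for user_id in user_ids if user_id not in bot_user_ids]
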
Previously, we suffered a bug where we would not properly condense
messages on first load of CZO (i.e. after login).
This bug was an unintended consequence of setting recent topics as the
default view: because the page loads to recent_topics, the
message_list is hidden but still gets rendered into the DOM, and when
condense_and_collapse runs, get_message_height caches a message height
of 0, which results in the message not being collapsed.
There may be other ways to trigger the same broken mechanism.
This commit changes the function so that we still return 0 but don't
cache the result.
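The actual change is in the web frontend; the same guard, sketched in
Python with illustrative names:

height_cache: dict[int, int] = {}

def get_message_height(message_id: int, measured_height: int) -> int:
    if message_id in height_cache:
        return height_cache[message_id]
    if measured_height == 0:
        # A zero height means the element is hidden (e.g. recent_topics
        # is the active view); return it without caching, so we
        # re-measure once the message list is actually visible.
        return 0
    height_cache[message_id] = measured_height
    return measured_height
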
Fixes: #20666.
1e4e6a09af removed the resources for the unpacked directory, on the
argument that they were unnecessary. However, the directory (or file,
see below) that is unpacked must be managed, or it will be tidied on
the next puppet apply.
Add back the resource for `$dir`, but mark it `ensure => present`, to
support tarballs which only unpack to a single file (e.g. wal-g).
As explained in the comments in the code, just doing UUID(string) and
catching ValueError is not enough, because when given a version
argument, the uuid library will quietly modify the string to force it
into a valid UUID of that version:
>>> a = '18cedb98-5222-5f34-50a9-fc418e1ba972'
>>> uuid.UUID(a, version=4)
UUID('18cedb98-5222-4f34-90a9-fc418e1ba972')
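One strict approach (a sketch, not necessarily the code's exact
check) is to require that the string survive a round trip:

import uuid

def is_valid_uuid4(value: str) -> bool:
    # uuid.UUID(value, version=4) coerces the version and variant bits
    # (as in the example above) instead of failing, so compare the
    # round-tripped string to catch any silent rewriting.
    try:
        return str(uuid.UUID(value, version=4)) == value.lower()
    except ValueError:
        return False
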
The existing callsites of this either `source` it or inline it into
the startup of a new host; in both of these cases, the surrounding
script is already `set -eu`. However, if run as a standalone tool, it
should also configure itself to catch checksum failures and other
problems.
The old name was confusing, since the contents
of the div aren't just a table, and we have
smaller elements that actually do list a bunch
of subscriptions in tabular format.
Even though we intend to shortly share lots of code
for editing stream subscribers with the create-stream
UI, we don't want to confuse click handlers and
containers too much.
It's kind of silly to cache ListWidgets for subscriber
lists when we only ever update the most recent one.
This will save memory if you are managing a whole bunch
of streams, although I suspect the savings here are
mostly negligible unless you were doing something
crazy.
The main motivation here is just that it simplifies the
code.
Now our click handlers get stream_id directly from
e.target, and then downstream code is no longer
coupled to the event semantics.
Note that we'll probably just know the stream_id
more directly after future commits.
We also remove a little bit of redundant error
handling.
This is a fairly straightforward extraction.
It's good to test this with Iago, and then go into
Manage Streams and add/remove subscribers for a stream
like devel.
I copy/pasted two small functions that will soon
diverge from stream_edit. The get_stream_id function
will either use a module variable (since we're
generally only editing subscribers for one stream, and
we already have the singleton assumption with
`input_pill`) or a more strict CSS selector. And then
get_sub_for_target depends on get_stream_id. We may not
always need full subs, anyway, and when we adapt some
of this code for creating streams, things are likely to
change.
I stopped exporting a couple functions that have no
callers outside of this module.
The main entry point for the module is
enable_subscriber_management.
We continue to export invite_user_to_stream and
remove_user_from_stream, which should possibly be just
pulled into their own module to lessen some
dependencies, but they don't have too much baggage,
since they just wrap channel calls.
This diff looks slightly noisy, but the main chunk of
code that we moved here has the same logic as before,
and it just gets realm_id from MentionBackend now, instead
of having our markdown processor have to supply it.
We basically want MentionData to be the gatekeeper of
mention data, and then we delegate backend tasks to
MentionBackend.
Soon we will add a cache to MentionBackend, which will
justify this change a bit more.
We now make it mandatory to pass in the Realm object.
If this function was ever called with None, I am scared
to know what the expected results were at the time of
writing.
It's slightly annoying to plumb Optional[MentionBackend]
down the stack, but it's a one-time change.
I tried to make the cache code relatively unobtrusive
for the single-message use case.
We should be able to eliminate redundant stream queries
using similar techniques.
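A condensed sketch of the caching idea, with Zulip-like but
illustrative names (the real class does a bulk database query for the
misses):

from dataclasses import dataclass

@dataclass
class FullNameInfo:
    id: int
    full_name: str

class MentionBackend:
    def __init__(self, realm_id: int) -> None:
        self.realm_id = realm_id
        # Keyed per backend instance, so rendering several messages in
        # one request reuses earlier lookups instead of re-querying.
        self.user_cache: dict[tuple[int, str], FullNameInfo] = {}

    def get_full_name_info_list(
        self, wanted: list[tuple[int, str]]
    ) -> list[FullNameInfo]:
        found = [self.user_cache[key] for key in wanted if key in self.user_cache]
        misses = [key for key in wanted if key not in self.user_cache]
        for user_id, full_name in misses:  # stand-in for one bulk query
            info = FullNameInfo(id=user_id, full_name=full_name)
            self.user_cache[(user_id, full_name)] = info
            found.append(info)
        return found
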
I considered caching at the level of rendering the message
itself, but this involves nearly as much plumbing, and
you have to account for the fact that several users on
your realm may have distinct default languages (French,
Spanish, Russian, etc.), so you would not eliminate as
many query hops. Also, if multiple streams were involved,
users would get slightly different messages based on
their prior subscriptions.