This commit adds a new helper, can_move_messages_between_streams,
which will be used to check whether a user is allowed to move
messages from one stream to another according to the value of
'move_messages_between_streams_policy'.
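A rough sketch of the kind of check such a helper performs (the policy
values and role fields here are purely illustrative, not the actual
implementation):

// Illustrative sketch only: the real policy is stored on the realm
// and the exact values and role checks may differ.
function can_move_messages_between_streams(user, realm) {
    switch (realm.move_messages_between_streams_policy) {
        case "admins_only":
            return user.is_admin;
        case "moderators_only":
            return user.is_admin || user.is_moderator;
        case "members_only":
            return !user.is_guest;
        default:
            return false;
    }
}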
I have updated `tools/run-dev.py` to output the correct subdomain, such
as `http://zulip.username.zulipdev.org`, so that the user knows which
subdomain to use to access the Zulip Dev realm.
I have updated the remote development documentation to be more accurate
about developing on a Zulip Development Droplet, ensuring the user
knows to access the realm at `zulip.username.zulipdev.org`.
Since all the message reactions are inserted before the
add-reaction button, if that button is the first child, we can safely
remove it.
We changed this from `only-child` to `first-child` because
we append tooltips as siblings of `reaction_button`, and since they
are appended, they always come after the `reaction_button`.
Thus, if tooltips were present, the `reaction_button` would not hide.
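Roughly, the new check looks like this (selectors and structure are
illustrative, not the exact implementation):

// If the add-reaction button is the first child of the reactions
// holder, there are no reactions left, so it is safe to hide it.
const $reaction_button = $message_row.find(".reaction_button");
if ($reaction_button.is(":first-child")) {
    $reaction_button.hide();
}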
Current production code uses client_id in the event dict, so this test
should be updated to reflect that. Old-format events can still be
consumed by the worker, but that is already tested by
WorkerTest.test_UserActivityWorker.
Since c3a8a15bae removed the last
instance of code using the dictionary code path, and we thus only
stopped generating the old event format today, we need to wait until
one can no longer upgrade directly from 4.x to master before removing
this compatibility code, in order to avoid breakage.
The bulk deletion codepath was using dicts instead of user ids in the
event, as opposed to the other codepath, which was previously adjusted
to pass just user ids. We make the bulk codepath consistent with the
other one. Since the dict-format events were still generated in 3.*,
we move the goal for deleting the compat code in process_notification
to 5.0.
This was dropped in commit 840cf4b885
(#15091), but commit 1432067959
(#17047) mistakenly reintroduced it.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We use the snakeoil TLS certificate for PostgreSQL and Postfix; some
VMs install the `ssl-cert` package but (reasonably) don't build the
snakeoil certs into the image.
Build them as needed.
Fixes #14955.
django.conf.urls.url is actually a deprecated alias of
django.urls.re_path, but we want path instead of re_path.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
django.utils.encoding.smart_text is a deprecated alias of
django.utils.encoding.smart_str as of Django 3.0, and will be removed
in Django 4.0.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
django.utils.http.is_safe_url is a deprecated alias of
django.utils.http.url_has_allowed_host_and_scheme as of Django 3.0,
and will be removed in Django 4.0.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
django.utils.translation.ugettext is a deprecated alias of
django.utils.translation.gettext as of Django 3.0, and will be removed
in Django 4.0.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This reduces the complexity of our dependency graph.
It also makes sub_store.get parallel to message_store.get.
For both you pass in the relevant id to get the
full validated object.
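For example (a usage sketch; exact call sites vary):

// Both stores are keyed by the relevant id and return the full
// validated object.
const sub = sub_store.get(stream_id); // subscription object
const message = message_store.get(message_id); // message object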
This breaks an indirect dependency of stream_data
on the channel module.
It's a verbatim code move, apart from the one-line
helper `has_history_for`. It's not totally clear
to me why the original code doesn't call into
`is_complete_for_stream_id` to early-exit, but
figuring that out is outside the scope of my
change.
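For context, the new helper is essentially a thin existence check
(a sketch; `stream_dict` is an assumed name for the module's internal
per-stream map):

export function has_history_for(stream_id) {
    return stream_dict.has(stream_id);
}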
It's possible that we will eventually just subsume
this tiny module into topic_list once we finish
breaking all dependencies, but we may want to
reuse this for something like Recent Topics
or other similar UIs.
It's also possible that we'll want to rename
stream_topic_history -> stream_topic_history_data
sometime soon, possibly after we clean up its
dependency on message_util.
I have added a documentation page for the GitHub Actions integration to
`/integrations/doc/github-actions` with a link to the Zulip GitHub
Actions repository.
Tweaked by tabbott to add cross-links with the main GitHub integration.
I have added any missing bot avatars after running the tool while
creating integrations. These needed adding; otherwise, the tool
creates them whenever it is run.
This widget only filters the user's subscriptions -- it doesn't
suggest public streams that the user is not subscribed to. "Filter" is
the correct label for a widget with this use case.
We only update the `.private_messages_header` here, since the
unread counts of `.expanded_private_message` are updated
via `pm_list.update_private_messages`.
This fixes the bug of PMs in `.expanded_private_message` having
the same unread count as `private_messages_header`.
Since we rerender the DOM of `.expanded_private_message` every
time we update the unread counts of PMs, we don't need to manually
update them here. Also, we always keep them on display, since
there is no real need to toggle them; they are hidden via
`.zero_count` when they have 0 unread counts.
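The header update amounts to something like this (a sketch; the
selector and variable names are assumed):

// Update only the header's count; the rows inside
// .expanded_private_message are rerendered elsewhere.
const $count = $(".private_messages_header .unread_count");
$count.text(unread_count);
$count.toggleClass("zero_count", unread_count === 0);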
While the rest of the app has been ported to the new system of
updating unread counts, `activity` was not ported. This resulted in
the unread count in the buddy list not being updated when new
PMs arrive.
The series of commits to consolidate CSS classes
for the various unread-count spans across our app
created a bug where the stream_list.js code's selector
started capturing the unread spans in topic list items.
Suppose you had a stream with these topics:

    Foo 10
        a 3
        b 3
        c 4

If another unread came in, you would briefly see:

    Foo 11
        a 11
        b 11
        c 11
Now we just use subscription_block to find the
element that we want to tweak.
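In other words, the update is now scoped to the stream row's own
subscription_block, roughly like this (an illustrative sketch, not the
exact code):

// Scope to the stream's subscription_block so we never touch the
// unread spans inside the topic list items below it.
const $subscription_block = $stream_li.find(".subscription_block");
$subscription_block.find(".unread_count").text(count);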
I remove a convoluted node test here. Part of the
reason the node test was convoluted was that the
original implementation was overly complex. I will
try to re-introduce a simpler test soon, but this
is a bit of an emergency fix.
This reduces our dependency on message_list code (via
message_util), and it makes moving streams/topics and
deleting messages more performant.
For every single message that was being updated or
deleted, the previous code was basically re-computing
lots of things, including having to iterate through
every message in memory to find the messages matching
your topic.
Now everything basically happens in O(1) time.
The only O(N) computation is that we now lazily
re-compute the max message id every time you need it
for typeahead logic, and then we cache it for
subsequent use. The N here is the number of messages
that the particular sender has sent to the particular
stream/topic combination, so it should always be quite
small, except for certain spammy bots.
Once the max has been calculated, the common operation
of adding a message doesn't invalidate our cached
value. We only invalidate the cache on deletes.
The main change that we make here from a data
standpoint is that we just keep track of all
message_ids for all senders. The storage overhead here
should be negligible. By keeping track of our own
messages, we don't have to punt to other code for
update/delete situations.
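A simplified sketch of that bookkeeping (names and structure are
assumed; the real module tracks one such bucket per sender, stream,
and topic):

// One bucket per sender/stream/topic: a set of message ids plus a
// lazily computed, cached max id.
const bucket = {
    message_ids: new Set(),
    max_message_id: undefined, // cached; computed lazily on demand
};

function add_message(id) {
    bucket.message_ids.add(id);
    // Adding never invalidates the cache; just bump it if needed.
    if (bucket.max_message_id !== undefined && id > bucket.max_message_id) {
        bucket.max_message_id = id;
    }
}

function delete_message(id) {
    bucket.message_ids.delete(id);
    bucket.max_message_id = undefined; // only deletes invalidate the cache
}

function max_message_id() {
    if (bucket.max_message_id === undefined) {
        // O(N) in the sender's messages for this stream/topic, which
        // should almost always be small.
        bucket.max_message_id = Math.max(0, ...bucket.message_ids);
    }
    return bucket.max_message_id;
}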
There is similar code in recent_topics that I think can
be improved in similar ways, and it would allow us to
eliminate functions like this one:
export function get_messages_in_topic(stream_id, topic) {
    return message_list.all
        .all_messages()
        .filter(
            (x) =>
                x.type === "stream" &&
                x.stream_id === stream_id &&
                x.topic.toLowerCase() === topic.toLowerCase(),
        );
}