Two things were broken here:
* we were using name(s) instead of id(s)
* we were always sending lists that only
had one element
Now we just send "stream_id" instead of "subscriptions".
If anything, we should start sending a list of users
instead of a list of streams. For example, see
the code below:
    if peer_user_ids:
        for new_user_id in new_user_ids:
            event = dict(type="subscription", op="peer_add",
                         stream_id=stream.id,
                         user_id=new_user_id)
            send_event(realm, event, peer_user_ids)
Note that this only affects the webapp, as mobile/ZT
don't use this.
This commit adds message retention policy details to the subscription_type
text below the stream description.
We do not show any text when the realm-level setting is set to forever and
the stream-level setting is set to either forever or realm_default.
This commit adds frontend support for setting and updating message
retention days of a stream from stream settings.
Message retention days can be changed from the stream privacy modal of the
stream and can be set from the stream_creation_form while creating streams.
Only admins can create streams with a message_retention_days value other
than realm_default.
This commit also contains relevant changes to docs.
Previously, we had implemented:
<span class="timestamp" data-timestamp="unix time">Original text</span>
The new syntax is:
<time timestamp="ISO 8601 string">Original text</time>
<span class="timestamp-error">Invalid time format: Original text</span>
Since Python and JS interpretations of the ISO format are very
slightly different, we force both of them to drop milliseconds
and use 'Z' instead of '+00:00' to represent that the string is
in UTC. The resultant strings look like: 2011-04-11T10:20:30Z.
Fixes #15431.
Since we are no longer using the "pointer" value sent in
page_params.pointer for anything, there's no value in continuing to
send it from the server to the client.
The remaining code in pointer.js is logic managing state for the
currently selected message.
The stream_events tests were kinda messy, but
I mostly just consolidated a few sections of
code so that we didn't have to keep
re-stubbing the same functions.
For the actual code, I extracted add_sidebar_row
and then removed the unnecessarily complicated
jQuery trigger mechanisms.
This merges the `exports.get_search_result_legacy` and
`exports.get_search_result` functions.
The key differences between the two code paths are as follows:
* We only want to generate suggestions for the queries which
the user is typing or can edit.
For the legacy version, suggestions are displayed for the
entire search string in the searchbox. (`all_operators`)
For the pills enabled version, suggestions are displayed
only for the input which hasn't been converted to pills.
(`query_operators`)
`all_operators` = `base_query_operators` + " " + `query_operators`.
A trim is applied at the end just to handle the legacy case
where we pass the `base_query` as '' (see the sketch after this list).
* It is not possible to detect whether the user wants to
continue typing in the legacy version. However, if the
searchbox is still focused even after pill creation,
we can assume the user still wants to continue typing.
To handle this we push an empty term as the `last` operator.
This is possible since the previous queries have been
completely entered, as is evident from their generated pills.
* When using the legacy version, `search_operators` are
the same as `all_operators`, as mentioned in point 1.
In the pills enabled version we perform most of the
computations from the `query_operators`, but we do
require `all_operators`, but only for filtering the last
query's suggestion.
* There is just one block unique to the legacy search
system; more details are mentioned in the comments of that
block.
We also refactor both of the search suggestions node tests,
mainly to make them similar, so it is easier to spot differences
when we switch over to the new version.
This fixes one of our oldest important user experience issues, namely
that if you never visit the home view, the Zulip webapp would often
load "deep in the past" because the pointer had not advanced.
Fixes #1529.
When fetching older/newer messages, we used to fall back to the pointer
to act as the anchor when the message list was empty.
This appears to be an impossible case, as
`fetch_status.can_load_newer_messages`
should be false in this case and the user cannot be scrolling an
empty message list in the first place.
Hence, we raise a fatal error to inform the user of the same.
Now we can use the common HTML image upload widget template
`image_upload_widget.hbs` for the realm day/night logos, and
we should access those day/night logo elements using,
e.g., "#realm-day-logo-upload-widget .realm-logo-elements".
Since we use image_upload_widget.hbs for the realm day/night logo upload
widgets, we need to extract the CSS for the realm day/night logos and
place it separately under the `#realm-day-logo-upload-widget`
and `#realm-night-logo-upload-widget` css ids.
Now we can use the common HTML image upload widget template
`image_upload_widget.hbs` for the realm icon. We can access the icon
element using "#realm-icon-upload-widget .realm-icon-elements".
We also need to extract the CSS for the realm icon and place it
separately under the `#realm-icon-upload-widget` css id.
Google has removed the Google Hangouts brand, thus we are removing
it as a video chat provider option.
This commit removes the Google Hangouts integration and adds a migration
that sets all realms that were using Hangouts as their video chat
provider to the default, Jitsi.
With changes by tabbott to improve the overall video call documentation.
Fixes: #15298.
This adds support for a "spoiler" syntax in Zulip's markdown, which
can be used to hide content that one doesn't want to be immediately
visible without a click.
We use our own spoiler block syntax inspired by Zulip's existing quote
and math block markdown extensions, rather than requiring a token on
every line, as is present in some other markdown spoiler
implementations.
Fixes #5802.
Co-authored-by: Dylan Nugent <dylnuge@gmail.com>
The upload text element was wrongly named id=user_avatar_upload_button.
Now we can remove that id and access the upload text element from
`#user-avatar-upload-widget .settings-page-upload-text`, so that we
can have only one id at the top level and `image_upload_widget.hbs` can
be more dynamic, letting us use it for other similar widgets as well.
We can remove the `user_avatar_delete_button` id and access the delete button
from `#user-avatar-upload-widget .settings-page-delete-button`, so that
we can have only one id at the top level and `image_upload_widget.hbs`
can be more dynamic, letting us use it for other similar widgets as well.
The previous commit introduced a bug where it was not intuitive
for the user to scroll again.
For the current narrow, new messages were fetched again only when
scrolled to the bottom as usually there are many messages displayed.
However, when the edge case mentioned in the previous commit
occurred, it was not obvious that a scroll was needed, or we
could already be at the bottom and unable to scroll again
to trigger a fetch.
`message_viewport.at_bottom` has a relevant comment explaining
this behaviour.
The previous commit handled the rare race condition. However,
there is a possibility that the rare race condition might occur
again while we are handling the previous condition.
This commit resolves these 2 problems by performing a re-fetch
while also resetting the `expected_max_message_id`; this
approach has two benefits:
1. The reset prevents an infinite loop, if somehow the expected
max message's id gets corrupted resulting in a situation
where the server can never send an id greater than that even
after fetching.
2. Even though we stop after just one re-fetch, the race condition
might occur again while we handle the previous one. Even though
the reset prevents multiple re-fetches, we don't have the missing
message problem, because we treat the next race condition as a new
race condition instead of as a continuation of the previous one.
The `expected_max_message_id` gets updated again, on receiving
a new message. Thus it can again enter the `fetch_status` block
as the reset value is updated again.
If a user sends a message while the latest batch of
messages is being fetched, the new message received
from `server_events` gets displayed temporarily out of
order (just after the current batch of messages)
for the current narrow.
We could just discard the new message events if we haven't
received the last message, i.e. when `found_newest` = False,
since we would receive them on further fetching of that
narrow.
But this would create another bug where the new messages
sent while fetching the last batch of messages would not
get rendered. Because `found_newest` = True, we would
no longer fetch messages for that narrow; thus the new
messages would not get fetched and are also discarded from
the events codepath.
Thus to resolve both these bugs we use the following approach:
* We do not add the new batch of messages for the current narrow
while `has_found_newest` = False.
* We store the latest message id which should be displayed at the
bottom of the narrow in `fetch_status`.
* Ideally `expected_max_message_id`'s value should be equal to the
last item's id in `MessageListData`.
* So the messages received while `has_found_newest` = False,
will be fetched later and also the `expected_max_message_id`
value gets updated.
* And after fetching the last batch where `has_found_newest` = True,
we would again fetch messages if the `expected_max_message_id` is
greater than the last message's id found on fetching by refusing to
update the server provided `has_found_newest` = True in `fetch_status`.
Another benefit of not discarding the events is that the
message still gets processed, just not rendered immediately, i.e. we
still get desktop notifications and unread count updates.
Fixes #14017
This commit removes is_old_stream property from the stream objects
returned by the API. This property was unnecessary and is essentially
equivalent to 'stream_weekly_traffic != null'.
We compute sub.is_old_stream in stream_data.update_calculated_fields
in frontend code and it is used to check whether we have a non-null
stream_weekly_traffic or not.
Fixes #15181.
We refactor these 2 notices to match the loading indicators;
thus they have been moved to `message_scroll.js`.
After a successful message fetch, we have logic to decide whether
we want to display the notices and also whether we want to hide
the loading indicators (which are already displayed).
We also conservatively hide the notices similar to the indicators
every time we narrow.
The only exception is that we show the history limit notice on
deactivating the narrow (visiting `home_msg_list`).
This commit makes the `loading_older_messages_indicator` similar
to the `loading_newer_messages_indicator`.
Now all the decisions about whether to show a loading indicator
will be made from the `fetch_status` API. We still hide the
indicators every time the view is changed, as explained in the
previous commit.
As explained in 67053ff479,
multiple message fetches may be taking place at the same time.
So another narrow's or the home message list's indicator might
get shown for the current narrow.
This commit moves the indicators' display update logic
to the `fetch_status` API.
Now the `loading_newer_messages_indicator` gets displayed along
with the `loading_newer` = true update for that narrow's message
list, i.e. just before we send the API request, but only if the
message list we are fetching matches our current message list.
The same indicator is hidden similarly, along with the
`loading_newer` = false update for that narrow's message list,
i.e. just after the success response is received, but only if
the message list whose data we received matches our current
message list.
Also, the indicators are hidden every time we activate a narrow
or deactivate a narrow (`home_msg_list`). And on entering
`narrow.activate` we fetch its messages so they get
displayed again, if need be.
This is the reason `message_scroll.hide_indicators();` was
moved to a location above `fetch_messages`.
Fixes #15374.
This is the exact same bug as observed in
02ab48a61e.
The bug is in the way we invoke `Filter.parse`.
`Filter.parse` returns a list of operators which
can contain only one 'search' term at max.
All strings with the 'search' operator present
in the query are combined to form this 'search'
term.
However on concatenating two filters we may get
two terms containing the 'search' operator. This
will lead to the search suggestions getting
generated based on only the last 'search' operator
term instead of all the terms having the 'search'
operator.
This is evident from the test change as suggestions
should be based on "s stream:of" but instead they
were based on just the latest query.
In commit 35c8dcb599 we introduced the
`_stream_params` object within filter.js, but we didn't correctly
handle cases where `_stream_params` is undefined within `get_title()`,
`generate_url()` and `get_icon()`, which caused the navbar to break if,
e.g., a guest user tries to access a stream they weren't subscribed to.
This commit fixes this by:
* Adding the relevant checks
* Adding node tests that include non-existent streams.
* Adding the 'question-circle-o' icon for non-existent stream narrows.
A side note here is that "non-existent streams" fall under
"common narrows" as per our current definitions, which doesn't really
make sense but shouldn't bother us.
Fixes: #15387.
We simply pass the visible message ids to remove_and_rerender
which supports bulk delete operation.
This helps us avoid deleting messages in a loop which freezes the
UI for the duration of the loop.
Fixes #15285
This event will now be used more for guest users when moving a
topic between streams (see #15277). So, instead of deleting
messages in the topic as part of separate events, which is
very slow and bad UX, we now handle the messages to delete in
bulk, which is a much better UX.
This commit adds the option of the owner role in the user role dropdown
and also takes care of the restrictions while adding/removing
the owner status of a user.
This commit also handles the places where we display the role of
the user in the UI.
The get_rendered_messages function now returns a Map, formatted as:
{
    'Verona > topic 1': ['message 1', 'message 2'],
    'You and Cordelia Lear': ['private message 1']
}
We also add check_messages_sent (which was expected_messages in
casper). It now takes in an object, formatted similarly to
get_rendered_messages above, instead of an array with topic and
messages.
We include all the methods needed from casper except
get_rendered_messages and expected_messages. We will want to tweak
their API when we migrate them.
We also rename a bunch of methods here while migrating them.
The changes made here are as follows:
* We rename `show_history_limit_message` and `hide_history_limit_message`
to `show_history_limit_notice` and `hide_history_limit_notice`
respectively.
* We rename `hide_or_show_history_limit_message` to
`update_top_of_narrow_notices`, as now this function is responsible
for updating the history limit notice as well as the end of results
notice.
* We extract 2 functions responsible for hiding and showing the end
of results notice, similar to that of the history limit notice.
All instances of `$(".all-messages-search-caution").hide();` are
replaced with a call to the `hide_end_of_results_notice` function.
The streams:all advertisement notice in search should only appear
after all results have been fetched to indicate we've gotten to the
beginning of the target feed.
The notice gets hidden at the start of `narrow.activate` and is
shown just after we've fetched an older batch of messages if the
"oldest" message has been found.
Previously it would get displayed after the first fetch which
takes place from `narrow.activate`. Thus we move this logic to
`notifications.hide_or_show_history_limit_message` which gets
called after a successful message fetch.
Since the home message view contains all messages, we are not
required to display this notice. However, if it is already shown,
we hide it as part of `handle_post_narrow_deactivate_processes`.
To accomplish this we need to add `has_found_oldest` key to the
`fetch_status` API.
We also removed the `pre_scroll_cont` parameter, as this was its
only use case and it is now redundant.
Apparently iamcal/emoji-data has a dedicated category for flag emojis,
but get_all_emoji_categories() in emoji_picker.js doesn't return the
Flags category, because we haven't declared that category in our emoji
data logic.
Note that the category looks quite sparse because it lacks country
flags, since we don't yet support emojis combined with a Zero Width
Joiner (ZWJ) (see #992 & #11767).
Fixes #15303.
This deduplication helps with readability.
Pass get_topic_key into recent_topic_row instead of
computing it in the DOM.
Fix the broken test_update_unread_count
after this change; this was a regression
that went unnoticed.
We now use the real implementation of
`stream_data` in our tests (as well as `people`),
while still just checking stubs for "heavier"
functions that we dispatch to.
Also, we use our regular blueslip helpers.
This is mostly a code move. There is a bit of
boilerplate at the top, and I just use
`assert.deepEqual` instead of `assert_same`.
I also use a little wrapper to provide
output like this:
Starting node tests...
running tests for dispatch_subs
test: add
test: peer add/remove
test: remove
test: update
test: add error handling
test: peer event error handling
One little piece of code that was obsolete
simply got deleted, not moved.
One of the goals here is to un-stub the
stream_data layer.
We extract stream_edit.rerender to make
the live-update code easier to follow.
The function should eventually be inlined,
but I want to clean up some other stuff first.
These are basically shims for some deeper refactorings.
I just try to make the code express the
problems more clearly:
- use stream_name instead of sub
- make early-exit more explicit
- make it clear that add_subscriber needlessly
requires a name
- make it clear we have an unnecessary loop
I also fixed some phony data in the test.
We are trying to phase out the trigger-event way
of telling modules to do something.
In this case we not only remove the indirection
of the event handler, but we also get to remove
`compose_fade` from the `ui_init` startup sequence.
This also has us update `compose_fade` outside
the loop, although that's only a theoretical
improvement, since I don't think `peer_add` events
ever actually include multiple streams.
To make the dispatch tests a little flatter, I
added a one-line change to zjsunit to add
`make_stub` to `global`.
To manually test:
* have Aaron reply to Denmark (keep compose box open)
* have Iago add Hamlet to Denmark
* have Hamlet unsubscribe
The discard button stub in createSaveButtons is set the same
as that of the save button. This isn't desired because it hides
the save button on running `change_save_button_state` instead
of hiding the discard button. It was previously working because
we weren't testing the visibility of the save button anywhere.
Fixes #2665.
Regenerated by tabbott with `lint --fix` after a rebase and change in
parameters.
Note from tabbott: In a few cases, this converts technical debt in the
form of unsorted imports into different technical debt in the form of
our largest files having very long, ugly import sequences at the
start. I expect this change will increase pressure for us to split
those files, which isn't a bad thing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>