Now we can remove the `user-avatar-block` id and add the common class
`image_block`. We can access this element via
`#user-avatar-upload-widget .image_block`, so that we have only one id
at the top level and `image_upload_widget.hbs` can be more dynamic,
letting us reuse it for other similar widgets.
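As a minimal sketch of the scoped lookup this enables (assuming
jQuery, which the settings code uses; the realm-icon id below is
hypothetical, for illustration only):

```js
// Each sub-element is now found by scoping a class selector under
// the widget's single top-level id:
const $image_block = $("#user-avatar-upload-widget .image_block");

// A future widget could reuse the same class under its own id
// (hypothetical id):
const $icon_block = $("#realm-icon-upload-widget .image_block");
```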
Now we can remove the id `avatar-spinner-background` and access the
spinner element via `#user-avatar-upload-widget .image_upload_spinner`,
again so that we have only one id at the top level and
`image_upload_widget.hbs` can be reused for other similar widgets.
The upload text element is wrongly named with id=user_avatar_upload_button.
Now we can remove that id and access the upload text element via
`#user-avatar-upload-widget .settings-page-upload-text`, so that we
have only one id at the top level and `image_upload_widget.hbs` can
be reused for other similar widgets.
We can remove id="user_avatar_delete" and access the delete text via
`#user-avatar-upload-widget .settings-page-delete-text`, for the same
reason: a single top-level id keeps `image_upload_widget.hbs`
reusable for other similar widgets.
We can remove the `user_avatar_delete_button` id and access the delete
button via `#user-avatar-upload-widget .settings-page-delete-button`,
again keeping only one id at the top level so that
`image_upload_widget.hbs` stays reusable for other similar widgets.
Renaming "user-settings-avatar" to "image_upload_button" since the
`user-settings-avatar` name is irrelevant/confusing for the upload
button, and converting the id into a class so that we could just have
only one outer id.
We can check the `is_editable_by_current_user` condition once at the
top level instead of checking the same condition twice in the middle.
This matches the implementation of the realm icon and realm logo
widgets; keeping the avatar, logo, and icon implementations similar
will help us use `image_upload_widget.hbs` for the logo and icon
widgets as well.
This likely fixes a bug with the delete text being shown incorrectly
for non-administrator users.
We extract image_upload_widget.hbs from the user avatar upload widget.
The plan is to use the same HTML template for all 4 widgets (user
avatar, icon, day logo, night logo) across the two settings UIs, and
for other image upload widgets where possible in the future.
This breaks i18n; we'll fix it in follow-up work.
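As a rough sketch of the direction (the `render_image_upload_widget`
helper and its parameter names here are hypothetical, not the
template's actual interface):

```js
// Hypothetical sketch: each caller renders the shared template with
// its own id and strings, so the HTML structure lives in one place.
const avatar_widget_html = render_image_upload_widget({
    widget_id: "user-avatar-upload-widget", // hypothetical parameter
    upload_text: i18n.t("Upload new avatar"),
    delete_text: i18n.t("Delete avatar"),
});
```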
This changes the user avatar image display implementation to more
closely match how the realm icon and realm logo image features are
structured. This is early preparatory work towards sharing this code
between the various widgets.
This adds a new function `get_apns_badge_count()` to
fetch the badge count for a user's push notifications and
then send that value with the APNs payload.
Once a message is read from the web app, the count is
decremented accordingly and a push notification with
`event: remove` is sent to the iOS clients.
Fixes #10271.
Mocking `get_base_payload()` means we verify the wrong output
even when the code is actually correct. So, it's better that
we call the real function here, especially when we are
adding the Apple case.
The previous commit introduced a bug where it was not intuitive
for the user to scroll again.
For the current narrow, new messages were fetched again only when
the user scrolled to the bottom, as usually there are many messages
displayed. However, when the edge case mentioned in the previous
commit occurred, it was not obvious that a scroll was needed; we
could also already be at the bottom and unable to scroll again
to trigger a fetch.
`message_viewport.at_bottom` has a relevant comment explaining
this behaviour.
The previous commit handled the rare race condition. However,
there is a possibility that the race condition might occur
again while we are handling the previous occurrence.
This commit resolves both problems by performing a re-fetch
while also resetting `expected_max_message_id`; this
approach has two benefits:
1. The reset prevents an infinite loop if the expected
max message id somehow gets corrupted, resulting in a situation
where the server can never send an id greater than it, even
after fetching.
2. Even though we stop after just one re-fetch, the race condition
might recur while we handle the previous one. And even though
the reset prevents multiple re-fetches, we don't have the
missing-message problem.
This is because we treat the next race condition as a new race
condition instead of as a continuation of the previous one:
`expected_max_message_id` gets updated again on receiving
a new message, so we can re-enter the `fetch_status` block
with the freshly updated value.
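A minimal sketch of this reset, over much-simplified state
(`trigger_refetch` is a hypothetical stand-in for the actual
re-fetch call):

```js
let expected_max_message_id = 0;

function on_newer_batch_fetched(last_fetched_id) {
    if (expected_max_message_id > last_fetched_id) {
        // Reset before re-fetching, so a corrupted value cannot keep
        // us looping: at most one re-fetch per race condition.
        expected_max_message_id = 0;
        trigger_refetch(); // hypothetical stand-in
    }
    // If the race recurs, the newly arrived message bumps
    // expected_max_message_id again, so the next fetch re-enters
    // this block as a fresh race condition.
}
```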
If a user sends a message while the latest batch of
messages is being fetched, the new message received
from `server_events` gets displayed temporarily out of
order (just after the current batch of messages)
for the current narrow.
We could just discard the new message events if we haven't
received the last message, i.e. when `found_newest` = False,
since we would receive them on further fetching of that
narrow.
But this would create another bug, where new messages
sent while fetching the last batch of messages would never
get rendered: `found_newest` = True means we would
no longer fetch messages for that narrow, so the new
messages would not get fetched and would also have been
discarded from the events codepath.
Thus, to resolve both these bugs, we use the following approach:
* We do not add the new batch of messages for the current narrow
while `has_found_newest` = False.
* We store the latest message id, which should be displayed at the
bottom of the narrow, in `fetch_status`.
* Ideally `expected_max_message_id`'s value should be equal to the
last item's id in `MessageListData`.
* So the messages received while `has_found_newest` = False
will be fetched later, and the `expected_max_message_id`
value gets updated.
* And after fetching the last batch, where `has_found_newest` = True,
we fetch messages again if `expected_max_message_id` is
greater than the last message id found on fetching, by refusing to
record the server-provided `has_found_newest` = True in `fetch_status`.
Another benefit of not discarding the events is that the
message still gets processed, just not rendered; i.e. we still get
desktop notifications and unread count updates.
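An illustrative sketch of this bookkeeping, much simplified from the
real `fetch_status` code:

```js
let has_found_newest = false;
let expected_max_message_id = 0;

function update_expected_max_message_id(messages) {
    // Called for messages arriving via server_events while we are
    // still fetching; they will be rendered by a later fetch.
    expected_max_message_id = Math.max(
        expected_max_message_id,
        ...messages.map((message) => message.id),
    );
}

function finish_newer_batch(last_fetched_id, server_found_newest) {
    if (server_found_newest && expected_max_message_id > last_fetched_id) {
        // A message arrived while this batch was in flight; refuse to
        // record found_newest so that another fetch is triggered.
        return;
    }
    has_found_newest = server_found_newest;
}
```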
Fixes #14017.
This line was effectively hardcoding a specific stream_post_policy,
overriding the value already present in the event, to no purpose.
(I believe it got here via cargo-culting induced by #13787.)
This commit removes the is_old_stream property from the stream objects
returned by the API. This property was unnecessary, being essentially
equivalent to `stream_weekly_traffic != null`.
We now compute sub.is_old_stream in stream_data.update_calculated_fields
in the frontend code; it is used to check whether we have a non-null
stream_weekly_traffic.
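A minimal sketch of that frontend computation (the surrounding code in
stream_data is elided):

```js
function update_calculated_fields(sub) {
    // A stream is "old" exactly when the server reported non-null
    // weekly traffic for it, so derive the flag locally.
    sub.is_old_stream = sub.stream_weekly_traffic !== null;
}
```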
Fixes #15181.
The automated tests running in CircleCI don't actually use the `zulip`
db, so we can skip running migrations on it in some CircleCI shards to
save time.
NOTE: This only affects build jobs that run provision, except the
`production-build` job, where we skip building the dbs altogether.
Migrations still run on the `focal-backend` build job, to ensure
we are testing all our development setup code.
We refactor these two notices to match the loading indicators;
thus they have been moved to `message_scroll.js`.
After a successful message fetch, we have logic to decide whether
to display the notices and whether to hide the loading
indicators (which are already displayed).
We also conservatively hide the notices, like the indicators,
every time we narrow.
The only exception is that we show the history limit notice on
deactivating the narrow (visiting `home_msg_list`).
Since on narrowing we call `load_messages_for_narrow`,
which fetches both older and newer messages, two loading
indicators were temporarily displayed.
This was also the case for the `home_msg_list` when we
call `message_fetch.initialize` on startup.
To resolve this, we do not display the bottom loading
indicator (for new messages) if the older messages
are being fetched too. This applies only to the initial
narrow change; the bottom loading indicator will
be displayed correctly when the user is at the bottom.
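A hedged sketch of that guard (the function and flag names here are
assumed, not the actual ones):

```js
function show_fetch_indicators({fetching_older, fetching_newer, is_initial_fetch}) {
    if (fetching_older) {
        show_loading_older_indicator(); // assumed helper
    }
    // On the initial fetch for a view, suppress the bottom indicator
    // while older messages are also being fetched; later, when the
    // user reaches the bottom, it is shown as usual.
    if (fetching_newer && !(is_initial_fetch && fetching_older)) {
        show_loading_newer_indicator(); // assumed helper
    }
}
```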
This fixes a regression introduced when we first added bottom loading
indicators, which was temporarily reverted in
67053ff479 before being restored in the
last couple of commits.
This commit makes the `loading_older_messages_indicator` similar
to the `loading_newer_messages_indicator`.
Now all the decisions about whether to show a loading indicator
are made from the `fetch_status` API. We still hide the
indicators every time the view is changed, as explained in the
previous commit.
As explained in 67053ff479,
multiple message fetches may be taking place at the same time,
so another narrow's indicator, or the home message list's, might
get shown for the current narrow.
This commit moves the logic for updating the indicators' display
into the `fetch_status` API.
Now the `loading_newer_messages_indicator` gets displayed along
with the `loading_newer` = true update for that narrow's message
list, i.e. just before we send the API request, but only if the
message list we are fetching matches our current message list.
The same indicator is hidden similarly, along with the
`loading_newer` = false update for that narrow's message list,
i.e. just after the success response is received, but only if
the message list whose data we received matches our current
message list.
Also, the indicators are hidden every time we activate a narrow
or deactivate the narrow (`home_msg_list`). On entering
`narrow.activate` we fetch its messages, so the indicators get
displayed again if need be.
This is the reason `message_scroll.hide_indicators();` was
moved above `fetch_messages`.
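A simplified sketch of these `fetch_status`-driven updates (the
helper and variable names here are assumed):

```js
let loading_newer = false;

function start_newer_batch(msg_list) {
    loading_newer = true;
    if (msg_list === current_msg_list) {
        message_scroll.show_loading_newer(); // assumed helper
    }
}

function finish_newer_batch(msg_list) {
    loading_newer = false;
    if (msg_list === current_msg_list) {
        message_scroll.hide_loading_newer(); // assumed helper
    }
}
```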
Fixes #15374.
This is a prep commit. Running the terminate-psql-sessions command on
docker-zulip results in the script exiting with non-zero exit status
2. This is because the current session also gets terminated while
running terminate-psql-sessions. To prevent that from happening,
we don't terminate the session created by terminate-psql-sessions.
This commit actually just deletes the `get_default_suggestion`
function; the `get_default_suggestion_legacy` function's
logic remains exactly the same, it is just renamed.
Since the operator can never be undefined, as mentioned in the
previous commit, we do not require the check against undefined,
and as a result the `if (suggestion)` condition can be removed.
The 3 changes made here are as follows:
* The if block in both functions is simplified, as
`Filter.parse` will always return an array and
`[].slice(0, -1)` still yields an empty array.
* The declaration of `base_operators` is moved
to just before where it is actually used.
* The `base` variable declaration is changed to match
the pattern present in the non-legacy function.
Its value remains the same.
This is a prep commit for when we want to merge the
`get_search_result_legacy` and `get_search_result` functions.
This is the exact same bug as observed in
02ab48a61e.
The bug is in the way we invoke `Filter.parse`.
`Filter.parse` returns a list of operators which
can contain at most one 'search' term: all strings
with the 'search' operator present in the query are
combined to form that single term.
However, on concatenating two filters we may get
two terms containing the 'search' operator. This
led to the search suggestions being generated based
on only the last 'search' term instead of all the
terms having the 'search' operator.
This is evident from the test change: suggestions
should be based on "s stream:of", but instead they
were based on just the latest query.
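An illustration of the invariant involved (the operator objects are
shown schematically):

```js
// Filter.parse folds all plain-text words into at most one
// "search" term:
Filter.parse("s stream:of");
// => [{operator: "stream", operand: "of"},
//     {operator: "search", operand: "s"}]

// But naively concatenating two separately parsed operator lists can
// yield two "search" entries, and the suggestion code then consumed
// only the last one instead of all of them.
```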
Since 9e8f1aacb3, zulip_ops machines
might have two Package declarations for `certbot`, which doesn't work
in puppet.
The fix is, as usual, to use our `zulip::safepackage` wrapper instead.
This likely fixes a bug that can leak thousands of messages into the
invalid state where:
* user_message.flags.active_mobile_push_notification is True
* user_message.flags.read is True
which is intended to be impossible except during the transient process
between marking messages as read and sending the "remove push
notifications" event.
The bug is that if a user who is declaring bankruptcy with 10,000s of
unreads ends up having the database query to mark all of those as read
take 60s, the Django/uwsgi request will time out and kill the process.
If the postgres transaction still completes, we'll end up with the
second half of this function never being run.
A safer ordering is to do the smaller queries first.
We do this in a loop for correctness in the unlikely event there are
more than 10,000 of these.
In commit 35c8dcb599 we introduced the
`_stream_params` object within filter.js, but we didn't correctly
handle cases where `_stream_params` is undefined within `get_title()`,
`generate_url()` and `get_icon()`, which caused the navbar to break
if e.g. a guest user tried to access a stream they weren't subscribed to.
This commit fixes this by:
* Adding the relevant checks (sketched below).
* Adding node tests that include non-existent streams.
* Adding the 'question-circle-o' icon for non-existent stream narrows.
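A simplified sketch of the kind of guard added (the fallback logic
here is illustrative, not the exact code):

```js
function get_icon(filter) {
    if (filter._stream_params === undefined) {
        // E.g. a guest user narrowing to a stream they aren't
        // subscribed to: fall back to the unknown-stream icon.
        return "question-circle-o";
    }
    return filter._stream_params.icon; // illustrative field access
}
```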
A side note here is that "non-existent streams" fall under
"common narrows" as per our current definitions, which doesn't really
make sense but shouldn't bother us.
Fixes: #15387.
This commit adds translation tags to a few user-facing strings which
weren't translated before:
- "Unknown streams" text and description.
- "All messages" heading.
- Tooltip text for precise count of subscribed users.
The numeric count itself is not translated, because we do not
translate numbers anywhere else in the UI.
The style guide for Zulip is to always use "primary" and "replica"
when describing database replication. Adjust a few comments under
`puppet/` that do not adhere to this.
Unfortunately, some references still remain to the insensitive and
inaccurate "master" / "slave" terminology. However, these are only in
files which we are attempting to preserve as close to the upstream
versions they are derived from (e.g. postgresql.conf,
postfix/master.cf).
65774e1c4f switched from using the bundled check_postgres.pl to using
the version from packages; the file itself remained, however.
Remove it, and clean up references to it.
Fixes #15389.
Instead of SSH'ing around to them, run directly on the database hosts.
This means that the replicas do not know how many bytes behind they
are in _receiving_ the WAL logs; thus, the monitoring also extends to
the primary database, which knows that information for each replica.
This also allows for detecting when there are too few active replicas.
We simply pass the visible message ids to remove_and_rerender,
which supports bulk delete operations.
This helps us avoid deleting messages in a loop, which froze the
UI for the duration of the loop.
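A hedged sketch of the change (the surrounding names are assumed):

```js
// One bulk call instead of a per-id loop, so the list re-renders
// only once.
const visible_msg_ids = event.message_ids.filter(
    (id) => message_store.get(id) !== undefined, // assumed lookup
);
current_msg_list.remove_and_rerender(visible_msg_ids);
```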