This commit adds a 'has_topic_wildcards' instance variable
to the 'MentionData' class for the detection of
possible topic wildcard mentions.
Fixes part of #22829.
Co-authored-by: Prakhar Pratyush <prakhar841301@gmail.com>
Co-authored-by: orientor <aditya.verma@students.iiit.ac.in>
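A minimal sketch of the idea, assuming topic wildcard mentions use the
"@**topic**"-style syntax; apart from the 'MentionData' and
'has_topic_wildcards' names, everything below is illustrative rather
than the actual Zulip implementation:

    import re

    # Hypothetical set of topic wildcard keywords; the real set may differ.
    TOPIC_WILDCARDS = frozenset(["topic"])

    class MentionData:
        def __init__(self, content: str) -> None:
            # Record whether the raw content *might* contain a topic
            # wildcard mention, so later notification code can branch on
            # a boolean instead of re-scanning the message.
            self.has_topic_wildcards = any(
                re.search(rf"@\*\*{re.escape(word)}\*\*", content)
                for word in TOPIC_WILDCARDS
            )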
This commit updates the existing tests in 'test_email_notifications'
and 'test_push_notifications' to properly configure user settings
and visibility policies before running the actual tests.
Earlier, the tests were passing, but the corner case they were
expected to cover wasn't actually exercised.
This should have been included in
d80779435a.
These comments should not have been included in
a8fd9eb701.
We covered the case "Private message should soft reactivate
the user" earlier in the test. So the comment was rightly added
there.
During a stream wildcard or group mention, no such personal mention
is involved; hence, the comments are not needed.
This is a prep commit to replace 'wildcard' with 'stream_wildcard'.
This wasn't included in 179d5cb because we hadn't yet decided to
use a different rendered_content for topic wildcard mentions,
i.e., '<span class="user-mention topic-mention">{wildcard}</span>'.
Our intention was not to create separate tests for both stream
and topic wildcard mentions, as they were expected to have the
same rendered content format.
Previously, this limit was 1 week, which was fine for busy
organizations, but for organizations that send a few messages a week,
or that have occasional bursts of activity where the last one was a
few weeks ago, this change should give a significantly better new
user experience.
There are still caps like 1000 messages total and 20
unread, but we're a bit more flexible about time.
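A rough sketch of the resulting policy; only the 1000-total and
20-unread caps come from above, and the function and constant names
are hypothetical:

    MAX_ONBOARDING_MESSAGES = 1000  # total recent messages made visible
    MAX_ONBOARDING_UNREAD = 20      # of those, how many stay unread

    def select_onboarding_messages(
        message_ids_newest_first: list[int],
    ) -> tuple[list[int], list[int]]:
        """Pick which recent messages a new user sees and which stay unread.

        The time window is now more relaxed than the old 1-week cutoff,
        but the hard caps still apply.
        """
        visible = message_ids_newest_first[:MAX_ONBOARDING_MESSAGES]
        unread = visible[:MAX_ONBOARDING_UNREAD]
        return visible, unread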
This commit adds a 3px column between the `controls` and `time`
areas, which keeps the controls from crowding the time for
languages with longer time markers.
To make the layout easier to reason about, this includes the
minimum width for the time column as part of the message-box
grid definition.
This includes changing the URL to #settings/preferences, with a
transparent redirect so that existing links, like the one from Welcome
Bot, continue to work.
Previously, the flatpickr was always set to show at the top, but this
led to it being cut off when the message was at the top of the
screen, which should only happen when someone is editing a message.
Pass the HttpRequest explicitly through the two webhooks that log to
the webhook loggers.
get_current_request is now unused, so remove it (in the same commit
for test coverage reasons).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
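A hedged sketch of the resulting calling convention; the helper name
and logger setup are illustrative, and only the idea of passing the
HttpRequest explicitly rather than reaching for get_current_request
comes from this commit:

    import logging

    from django.http import HttpRequest

    webhook_logger = logging.getLogger("zulip.zerver.webhooks")

    def log_unsupported_webhook_event(request: HttpRequest, summary: str) -> None:
        # The caller (the webhook view) passes its HttpRequest in
        # explicitly, instead of this helper reaching into a
        # thread-local via a get_current_request() lookup.
        webhook_logger.error(summary, extra={"request": request})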
We use `get_by_user_id` instead of directly accessing the global dict
because, when person objects are accessed directly from one of the
global dicts, callers need to check for undefined values. Using the
`get_by_user_id` method avoids this, since that function asserts that
we indeed have a valid user id and thus never returns undefined
values.
Co-authored-by: Zixuan James Li <p359101898@gmail.com>
This commit improves the edited/moved tippy tooltip to include a
second, italic line: "Click to view history". This line is visible
only when 'realm_allow_edit_history' is true in the organization
settings. Additionally, the first line is changed to display
"Last edited today at 00:00 AM". The date is in lowercase if it
doesn't contain a number (for example, 'today'), unless its first
letter is uppercase.
'tippy-zulip-delayed-tooltip' was used as a common class to
implement tippy tooltips on the elements in the 'edited_notice.hbs'
file as well as on other elements. However, we now need to make some
changes in tippyjs, inside the onShow function, to decide whether
to show the second line of a tooltip or not. Therefore, we need
to use a unique class for the edited_notice tooltips. Hence, this
removes the 'tippy-zulip-delayed-tooltip' class from the
edited_notice.hbs file and uses the 'message_edit_notice' class
instead.
Fixes: #23075
Users can, quite understandably, assume that upgrading Zulip upgraded
the underlying PostgreSQL version. Though it is mentioned at the top
of the page, mentioning it here clarifies that it is an additional
step.
When a message list view rerenders a locally echoed message, the
message recipient header is also rerendered, which then removes
the "sticky_header" class if it was present. If rerendering the
message triggers a non-user-initiated scroll event, then the
"sticky_header" class is updated.
But it is possible that the rerendering of the message will not
trigger a scroll event, which means the recipient header is no
longer updated. The next calculation of the message list view's
_scroll_limit for the top of the feed will then not include the
sticky header, and the currently selected message may be scrolled
partially or completely under the message recipient header bar.
In message_list_view.rerender_messages, after calling
_rerender_header in a loop, this adds a check for the current message
list and calls update_sticky_recipient_headers if the message feed is
visible.
Adds a comment to _rerender_header that we expect it to only be
called in rerender_messages so that the "sticky_header" class is
updated if needed.
The initial followup_day1 email confirms that the new user account
has been successfully created and should be sent to the user
independently of an organization's setting for send_welcome_emails.
Here we separate out the followup_day1 email into a separate function
from enqueue_welcome_emails and create a helper function for setting
the shared welcome email sender information.
The followup_day1 email is still a scheduled email so that the initial
account creation and log-in process for the user remains unchanged.
Fixes #25268.
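A minimal sketch of the resulting split, using simplified stand-ins
for the user model and the email scheduler (none of these helper
names are Zulip's actual ones):

    from dataclasses import dataclass

    @dataclass
    class User:
        email: str
        send_welcome_emails: bool  # stand-in for the realm-level setting

    def welcome_sender_information() -> dict[str, str]:
        # Shared sender details used by every welcome email.
        return {"from_name": "Welcome Bot", "from_address": "welcome-bot@example.com"}

    def schedule_email(user: User, template: str, delay_hours: int, **sender: str) -> None:
        print(f"scheduling {template} to {user.email} in {delay_hours}h from {sender}")

    def send_followup_day1_email(user: User) -> None:
        # Sent unconditionally: it confirms that the account was created.
        # Still goes through the scheduler, so account creation/log-in is unchanged.
        schedule_email(user, "followup_day1", 0, **welcome_sender_information())

    def enqueue_welcome_emails(user: User) -> None:
        # Only the later onboarding emails respect send_welcome_emails.
        if not user.send_welcome_emails:
            return
        schedule_email(user, "followup_day2", 48, **welcome_sender_information())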
The followup_day2 email is scheduled with a delay as a welcome email
and is therefore more likely to exist as a scheduled email in these
deactivation cases.
Updates comment to not include the number of emails generated so
that it doesn't need to be updated every time a new email is added.
The current count in the comment is already out-of-date.
Because the third party might not be expecting a 400 from our
webhooks, we now instead use a 200 status code for unknown events,
while reporting the error to Sentry. Because it is no longer an error
response, the response type should now be "success".
Fixes #24721.
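A hedged sketch of the behavior; the view shape, event set, and
reporting helper are illustrative, and the actual webhook framework
is structured differently:

    from django.http import HttpRequest, JsonResponse

    SUPPORTED_EVENTS = {"push", "issue_opened"}  # illustrative event names

    def report_unsupported_event(event_type: str) -> None:
        # Stand-in for reporting the unexpected payload to Sentry.
        print(f"unsupported webhook event: {event_type}")

    def example_webhook(request: HttpRequest, event_type: str) -> JsonResponse:
        if event_type not in SUPPORTED_EVENTS:
            report_unsupported_event(event_type)
            # Answer the third party with 200 rather than 400, so it
            # doesn't treat our lack of support for this event as a
            # delivery failure; since this is no longer an error
            # response, the result is "success".
            return JsonResponse({"result": "success", "msg": ""}, status=200)
        # ... process the supported event ...
        return JsonResponse({"result": "success", "msg": ""})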
This commit removes "@" from the names of role-based system groups,
since the previous commit added a restriction on user group names
starting with "@", as they look odd in mention syntax.
We also add a migration in this commit to update the names of
role-based system groups in existing realms to remove "@"
from their names. This migration also updates the names of
non-system user groups by removing the invalid prefixes
from their names; if there is already a group with the resulting
name, we instead name the group "group:{group_id}".
Fixes #26148.
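A rough sketch of what such a data migration could look like,
assuming the UserGroup model exposes an is_system_group flag and a
realm foreign key; the code below is an illustrative simplification,
not the actual migration:

    from django.db import migrations

    INVALID_PREFIXES = ["@", "role:", "user:", "stream:", "channel:"]

    def fix_user_group_names(apps, schema_editor):
        UserGroup = apps.get_model("zerver", "UserGroup")
        for group in UserGroup.objects.all():
            if group.is_system_group:
                # System groups only need the leading "@" dropped.
                new_name = group.name.removeprefix("@")
            else:
                new_name = group.name
                for prefix in INVALID_PREFIXES:
                    while new_name.startswith(prefix):
                        new_name = new_name[len(prefix):]
            if new_name == group.name:
                continue
            # If the cleaned name collides with an existing group in the
            # same realm, fall back to a guaranteed-unique name instead.
            if UserGroup.objects.filter(realm=group.realm, name=new_name).exists():
                new_name = f"group:{group.id}"
            group.name = new_name
            group.save(update_fields=["name"])

    class Migration(migrations.Migration):
        dependencies = [("zerver", "XXXX_previous_migration")]  # placeholder
        operations = [migrations.RunPython(fix_user_group_names, elidable=True)]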
We do not allow user group names to start with "@", "role:",
"user:", "stream:", or "channel:".
Group names starting with "@" look odd in mentions, and the
"role:", "user:", and "stream:" prefixes are reserved for
system groups, which will be used in the new groups-based
permission model. We do not allow the "channel:" prefix for
now, just to be safe in case we later use it instead of the
"stream:" prefix for stream-based groups.
Fixes part of #26148.
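A minimal sketch of the check, assuming a validation helper along
these lines (the function name and exception type are illustrative):

    INVALID_GROUP_NAME_PREFIXES = ("@", "role:", "user:", "stream:", "channel:")

    def check_user_group_name(name: str) -> str:
        name = name.strip()
        for prefix in INVALID_GROUP_NAME_PREFIXES:
            if name.startswith(prefix):
                raise ValueError(f"User group name cannot start with '{prefix}'.")
        return name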
Previously, we had a database-level restriction on the length of
user group names. Now we add the same restriction at the API
level as well, so that we can return a better error response.
We remove the cache functionality for the
get_realm_stream function, and we also change it to
return a thin Stream object (instead of calling
select_related with no arguments).
The main goal here is to remove code complexity, as we
have been prone to at least one caching validation bug
related to how Realm and UserGroup interact. That
particular bug was more theoretical than practical in
terms of its impact, to be clear.
Even if we were to be perfectly disciplined about only
caching thin stream objects and always making sure to
delete cache entries when stream data changed, we would
still be prone to ugly situations like having
transactions get rolled back before we delete the cache
entry. The do_deactivate_stream function is a perfect example of
where we have to consider the best time to unset the
cache. If you unset it too early, then you are prone to
races where somebody else churns the cache right before
you update the database. If you set it too late, then
you can have an invalid entry after a rollback or
deadlock situation. If you just eliminate the cache as
a moving part, that whole debate is moot.
As the lack of test changes here indicates, we rarely
fetch streams by name any more in critical sections of
our code.
The one place where we fetch by name is in loading the
home page, but that is **only** when you specify a
stream name. And, of course, that only causes about an
extra millisecond of time.
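Conceptually, the uncached lookup is just a plain ORM query; a hedged
sketch that mirrors the idea, not necessarily the exact code:

    from zerver.models import Stream

    def get_realm_stream(stream_name: str, realm_id: int) -> Stream:
        # No cache layer, and no select_related(): we get back a thin Stream row.
        return Stream.objects.get(realm_id=realm_id, name__iexact=stream_name.strip())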
This changes bulk_get_streams so that it just uses the
database all the time. Also, we avoid calling
select_related(), so that we just get back thin and
tidy Stream objects with simple queries.
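Sketching the shape of the change (illustrative only; among other
details, the real query also needs case-insensitive name matching):

    from zerver.models import Realm, Stream

    def bulk_get_streams(realm: Realm, stream_names: set[str]) -> dict[str, Stream]:
        # Always hit the database; no memcached layer, and no
        # select_related(), so the returned Stream objects stay thin.
        rows = Stream.objects.filter(realm=realm, name__in=stream_names)
        return {stream.name: stream for stream in rows}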
About not caching any more:
It's actually pretty rare that we fetch streams by name
in the main application. It's usually API requests that
send in stream names to find more info about streams.
It also turns out that for large queries (>= ~30 rows
for my measurements) it's more efficient to hit the
database than memcached. The database is super fast at
scale; it's just the startup cost of having Django
construct the query, and then having the database do
query planning or whatever, that slows us down. I don't
know the exact bottleneck, but you can clearly measure
that one-row queries are slow (on the order of a full
millisecond or so) but the marginal cost of additional
rows is minimal assuming you have a decent index (20
microseconds per row on my droplet).
All the query-count changes in the tests revolve around
unsubscribing somebody from a stream, and that's a
particularly odd use case for bulk_get_streams, since
you generally unsubscribe from a single stream at a
time. If there are some use cases where you do want to
unsubscribe from multiple streams, we should move
toward passing in stream ids, at least from the
application. And even if we don't do that, our cost for
most queries is a couple milliseconds.
We want to avoid Django going back to the database to
get a realm object that the caller already has.
It's actually currently the case that we often
pre-fetch realm objects when we get stream objects
using get_stream (using a call to select_related() with
no arguments), but that is an expensive operation that
we want to avoid going forward.
This commit prepares us to just fetch slim objects.