Previously the sender was not included in display_recipient when
a private message was locally echoed. This broke the copy-conversation-link
functionality if the user tried to copy the link immediately after
sending the message. This issue was present only during local echo.
Fix this by including the sender in the recipients during
local echo.
Fixes #13547.
Edited the warning to clearly state that most members/most stream members
will be notified when using wildcard mentions, along with the specific
mention used (e.g. @all, @everyone, or @stream).
Did a separate check for all wildcard mentions in util.js and stored the
corresponding mention in wildcard_mention inside compose.js.
Fixes: #13636
This change is part of a series de-duplicating code in the "Other
permission" section for various dropdowns.
Here, rather than using "by_anyone" and "disabled" for the `value` attribute
of options, we use the actual numeric values. As a result, we don't need to
manually extract the data to be sent to the backend on saving.
This change is part of the same series de-duplicating code in the "Other
permission" section for various dropdowns.
Here, rather than using values like "by_admins_only" for the `value`
attribute of options, we use the actual numeric values. This helps
de-duplicate a lot of code that is vulnerable to bugs.
For a few settings like `waiting_period_threshold`, it makes sense for the
"value" attribute of an option to hold something other than the actual
setting value, because multiple settings depend on the same dropdown, so
handling them in JS code makes more sense. But for many settings (those
with integer values), we have over time followed the wrong pattern of
representing every new dropdown with human-readable values and manually
handling them in JS code, where it would make more sense to use the actual
setting value. The result is code that is less concise, less sensible, and
more likely to contain mistakes.
This is a preliminary commit for an upcoming change where we will use a
"bot_creation_policy_values"-like approach for many other settings whose
dropdown represents a single setting of integer type.
We now use vdom-ish techniques to track the
list items for the pm list. When we go to update
the list, we only re-render nodes whose data
has changed, with two exceptions:
- Obviously, the first time we do a full render.
- If the keys for the items have changed (i.e.
a new node has come in or the order has changed),
we just re-render the whole list.
If the keys are the same since the last re-render, we
only re-render individual items if their data has
changed.
Most of the new code is in these two modules:
- pm_list_dom.js
- vdom.js
We remove all of the code in pm_list.js that is
related to updating DOM with unread counts.
For presence updates, we are now *never*
re-rendering the whole list, since presence
updates only change individual line items and
don't affect the keys. Instead, we just update
any changed elements in place.
The main thing that makes this all work is the
`update` method in `vdom`, which is totally generic
and essentially does a few simple jobs:
- detect if keys are different
- just render the whole ul as needed
- for items that change, do the appropriate
jQuery to update the item in place
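A minimal sketch of how that kind of update helper works (names and
signatures here are illustrative assumptions, not the actual vdom.js API):
```
// Illustrative sketch only; names and signatures are assumptions,
// not the actual vdom.js API.
function keys_changed(new_dom, old_dom) {
    const new_keys = new_dom.children.map((node) => node.key);
    const old_keys = old_dom.children.map((node) => node.key);
    return new_keys.length !== old_keys.length ||
        new_keys.some((key, i) => key !== old_keys[i]);
}

function update(container, new_dom, old_dom) {
    // First render, or keys changed (a new node or a reorder):
    // just re-render the whole <ul>.
    if (old_dom === undefined || keys_changed(new_dom, old_dom)) {
        container.html(new_dom.children.map((node) => node.render()).join(""));
        return;
    }
    // Otherwise, only re-render items whose data changed.
    new_dom.children.forEach((node, i) => {
        if (!node.eq(old_dom.children[i])) {
            container.find(`li[data-key="${node.key}"]`).replaceWith(node.render());
        }
    });
}
```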
Note that this code seems to play nice with simplebar.
Also, this code continues to use templates to render
the individual list items.
FWIW this code isn't radically different from list_render,
but it's got some key differences:
- There are fewer bells and whistles in this code.
Some of the stuff that list_render does is overkill
for the PM list.
- This code detects data changes.
Note that the vdom scheme is agnostic about templates;
it simply requires the child nodes to provide a render
method. (This is similar to list_render, which is also
technically agnostic about rendering, but which also
does use templates in most cases.)
These fixes are somewhat related to #13605, but we
haven't gotten a solid repro on that issue, and
the scrolling issues there may be orthogonal to the
redraws. But having fewer moving parts here should
help, and we won't get the rug pulled out from under
us on every presence update.
There are two possible extensions to this that are
somewhat overlapping in nature, but can be done
one at a time.
* We can do a deeper vdom approach here that
gets us away from templates, and just have
nodes write to an AST. I have this on another
branch, but it might be overkill.
* We can avoid some redraws by detecting where
keys are moving up and down. I'm not completely
sure we need it for the PM list.
If this gets merged, we may want to try similar
things for the stream list, which also does a fairly
complicated mixture of big-hammer re-renders and
surgical updates-in-place (with custom code).
BTW we have 100% line coverage for vdom.js.
We mostly needed this for Casper tests, and that
usage was eliminated in the prior commit.
There was also some strange defensive code from
ecc42bc9f8 that
is really ancient and which I am eliminating:
const email = row.attr("data-email");
if ($("#deactivation_user_modal .email").html() !== email) {
blueslip.error("User deactivation canceled due to non-matching fields.");
ui_report.message(i18n.t("Deactivation encountered an error. Please reload and try again."),
$("#home-error"), 'alert-error');
}
If the code was there to protect against live
updates for email changes, then we no longer
have to worry about that, since we use user_ids
now as keys.
Or it might have to do with some ancient bug
where you could pop open two modals at once
or something. You can actually change users while
the modal is open (which is kinda strange, but ok),
and it works fine.
When testing this, I ran into the glitch that we
don't redraw the Deactivated Users panel after
going into the Users panel and deactivating a user.
Now that the anchor can be passed as a string, this is a much more
natural way to implement use_first_unread_anchor.
We still support the old interface to avoid breaking compatibility
with legacy versions of the mobile apps.
This makes the code more readable, by just passing the anchor through
without changing its field name back and forth.
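As a hedged illustration of what that looks like (field names here are
assumptions based on the description above, not the actual request-building
code):
```
// Sketch only; field names are assumptions, not the actual code.
function build_fetch_params(anchor, use_first_unread) {
    return {
        // Old interface: a numeric anchor plus a separate
        // use_first_unread_anchor flag.  New interface: just pass the
        // string "first_unread" through as the anchor itself.
        anchor: use_first_unread ? "first_unread" : anchor,
        num_before: 50,
        num_after: 50,
    };
}
```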
There's no reason for this parameter to involve parsing an integer --
it should be a number in all incoming code paths.
The feature is used for editing stream descriptions as well, and in
any case, what's important is that it's a content-editable widget (aka
a form of input box).
This fix recently went on master, although it
hasn't actually been deployed yet (not even to czo),
so user impact should be zero:
0fa67c84d8
The fix mostly improved things, but it broke the
logic for pill containers. The symptom was that
if you tried to autocomplete "Cordelia" in the
pill box we'd instead invoke the "c" hotkey and
try to compose to a stream.
In templates we determine whether checkboxes are disabled using the
following `if` clause,
```
{{#if (or (and is_muted notification_setting) realm_setting_disabled)}}
disabled="disabled"
{{/if}}
```
and it is more intuitive to do such calculations in JavaScript code, so we
added an `is_disabled` attribute to the `settings` context which replaces
the logical operations from the `if` statement.
So for non-notification settings, it is
```
is_disabled: check_realm_setting[setting]
```
where check_realm_setting[setting] is the same as realm_setting_disabled,
and for notification settings it is
```
ret.is_disabled = check_realm_setting[setting] || sub.is_muted;
```
Profiles of typing in the Zulip webapp's compose box after opening the
stream creation widget showed that hotkey.processing_text was a
significant expense. There's no good reason for this -- the function
just needs to inspect the focused element; it just was written with a
sloppy selector.
While there's a secondary issue here, there's no good reason for this
extremely latency-sensitive code path (typing an additional character)
to be doing something so inefficient.
We now give "slight smile" precedence over
"small airplane" if you type "sm".
More generally, we favor popularity over prefix
matches for emoji matches, as long as the popular
emoji matches on any of its pieces.
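As an illustrative sketch of that ordering (the list of popular emoji and
the bucketing details are assumptions, not the real sort_emojis code):
```
// Sketch only; popular_emojis and the exact bucketing are
// illustrative assumptions, not the real sort_emojis code.
const popular_emojis = ["+1", "tada", "slight_smile", "heart"];

function triage_emojis(query, emojis) {
    const is_popular_match = (emoji) =>
        popular_emojis.includes(emoji.emoji_name) &&
        emoji.emoji_name.split("_").some((piece) => piece.startsWith(query));

    const popular = emojis.filter(is_popular_match);
    const prefix = emojis.filter(
        (emoji) => !is_popular_match(emoji) && emoji.emoji_name.startsWith(query));
    const rest = emojis.filter(
        (emoji) => !is_popular_match(emoji) && !emoji.emoji_name.startsWith(query));

    // Popular emoji win over plain prefix matches, as long as they
    // match the query on any of their name pieces.
    return [...popular, ...prefix, ...rest];
}
```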
I removed a slightly confusing code comment, which I
will address in a follow-up commit. Basically,
"slight smile" still doesn't win over "small airplane"
when you search for "sm", which kind of defeats the
purpose of having popular_emojis for the typeahead
use case. This is a problem with sort_emojis, though,
so when the comment was next to the list of popular
emojis, it wasn't really actionable.
We are sweeping the codebase to use startsWith
when possible. I kept this on a separate
commit due to us vendoring the library, just
to reduce some noise.
Using startsWith is faster than indexOf, especially for long strings
and short prefixes. It's also a lot more readable. The only reason
we weren't using it was that, when a lot of this code was originally
written, it wasn't available.
We only convert the query to lowercase outside the
loop for an Nx speedup, where N = number of items.
And then we use startsWith instead of indexOf, which
means we don't senselessly search entire strings
for matches.
(We've had startsWith polyfills for a while now.)
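A minimal sketch of the pattern (identifiers are illustrative):
```
// Minimal sketch of the pattern; identifiers are illustrative.
function filter_by_prefix(items, query) {
    const lower_query = query.toLowerCase();  // computed once, not N times
    return items.filter((item) => item.toLowerCase().startsWith(lower_query));
}
```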
Unfortunately, unless a string starts with the
exact casing of the query, we still create an
entire lowercase copy of the string for the
case-insensitive match. For the English use case
(and many other languages), we could further
optimize this by slicing the string before
converting it to lowercase.
Unfortunately, you have languages like German
with the straße/STRASSE problem. It's not clear
to me how we handle them with the current code,
but I don't want to break that yet.
We use the nice es6 syntax to create the get_item
helpers (in the callers and for the default
value in the function).
Also we use better es6 style for the looping.
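A small sketch of those idioms (names are illustrative, not the actual
code):
```
// Sketch of the ES6 idioms described above; names are illustrative.
function get_full_names(items, get_item = (item) => item) {
    const names = [];
    for (const item of items) {        // ES6-style looping
        names.push(get_item(item).full_name);
    }
    return names;
}

// A caller supplying its own get_item helper:
// get_full_names(user_ids, (user_id) => people.get_person_from_user_id(user_id));
```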
This extracts get_emoji_matcher and all the
functions it depended on, most of which were
in composebox_typeahead.js.
We also move remove_diacritics out of the people
module.
This is the first major step for #13728.
Adding invited users to the notifications stream unconditionally isn't
correct behavior for guest users, for whom including the notifications
stream doesn't make sense. Therefore, while inviting a new user, the
notifications stream is now listed along with the other streams, with the
note "receives notifications for new streams" to distinguish it from the
other streams.
Fixes #13645.
We used to put the user's email in a value, which was
redundant (we could find the value from
our parent's label) and brittle (would break
on email changes).
Now the DOM's a bit slimmer and more robust.
Also note that we now deal with user_ids, not emails,
in the call stack until we hit the "edge" and convert
to emails for the server.
This fixes some harmless type errors from the
following commit:
6ec5a1f306
The IntDict code automatically converts strings to
integers, so this was not a user-facing problem, but
we want to have our callers do the conversions
explicitly.
This legacy cross-realm bot hasn't been used in several years, as far
as I know. If we wanted to re-introduce it, I'd want to implement it
as an embedded bot using those common APIs, rather than the totally
custom hacky code used for it that involves unnecessary queue workers
and similar details.
Fixes #13533.
When a user clicked the current emoji format in "display settings",
we'd show an infinite loading spinner (basically as a side effect of
trying to tell the server to change the emoji format to what it
already was).
Fix this by aborting early if the emoji format is already the option
that the user clicked.
Fixes #13684.
We now only go the server if both of these
conditions are true:
- our message data seems incomplete for
the stream
- we haven't already fetched history
This function will make more sense when we start
tracking api calls that retrieve topic history.
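A rough sketch of the resulting guard (function and variable names are
assumptions, not the actual code):
```
// Sketch only; function and variable names are assumptions.
const fetched_stream_ids = new Set();

function has_complete_data(stream_id) {
    // Placeholder for "does our local message data seem complete
    // for this stream?"
    return false;
}

function should_fetch_topic_history(stream_id) {
    // Only go to the server if our data looks incomplete AND we
    // haven't already fetched this stream's topic history.
    return !has_complete_data(stream_id) && !fetched_stream_ids.has(stream_id);
}
```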
The unit tests here are kinda duplicating what we
have in the stream_data tests. If we move the
function out of stream_data, we can kill off the
tests there, but for now I think a bit of duplicate
testing is fine here.
All the callers seem to have integer stream_ids
already, either from the message object or
some sub object.
We also use clear() inside the test-only reset()
method.
Previously, links to deleted streams would be incorrectly rendered
(showing just the stream's name).
Fixes an issue that was reported where, after deleting the "general"
stream, the welcome turtle messages might appear with broken links to
that deleted stream.
This is mostly for tactical reasons. It's hard to
get 100% test coverage on topic_list.js, but it
should be easy to get 100% test coverage on this
very important function.
I considered just moving this code into topic_data.js,
but it just didn't feel quite right. I feel like
this is a pretty core piece of code that's nice
to be by itself and not be near other complicated
code that does stuff like build widgets or talk
to servers. (And, again, it's not just the actual
code here, which is pretty small, it's the unit
tests, which are inherently verbose to exercise
all the edge cases.)
There was an edge case with the old
code when you had between 6 and 8
topics, all in cache, with a couple of
the topics being unread.
We would show "more topics" when you were
actually seeing all your possible topics.
To test this:
- create 7 topics on Venice
- as Iago, narrow to any of the Venice
topics
- as Aaron, send unreads to 3 or 4
of the other topics
Eventually Iago will have all possible
topics in the sidebar. On master we'll
show "more topics", whereas after this commit
we correctly avoid that.
It's a pretty harmless bug, since it just
leads to a useless zoom-in.
I have always felt we should zoom in
regardless of how many topics you have,
just for consistency's sake, but I also
understand the rationale behind our
current intentions.
This is basically trying to confine the
rendering logic to a smaller function,
since I want to work toward a better
approach for redrawing the topic list.
Also, since the new function is now
purely data-oriented, it will be a
bit easier to test various edge cases.
If you clicked for no more topics and then the server didn't find any,
we once had code that would say "No more topics" in light gray at the
bottom of the topic list.
The feature appears to have been broken by some detail in the
`self.dom` refactoring. More importantly, it's not clear it's useful
as opposed to clutter.
Since we added the `stream.first_message_id` feature, it's now very
rare for the `more topics` option to appear when there aren't in fact
older topics that could be fetched. In cases where there are not, the
UI is still clear about what's happening -- it shows a loading
indicator and then displays a list of topics that doesn't have
anything new.
So we're removing this feature; we can re-add it without too much
difficulty if user feedback in the future suggests it would be useful
after all.
The only place we ever set active-sub-filter is
right after we build the template, so there is
no reason to have it be a separate step.
(I made a similar fix to pm_list recently, and
this helps set the stage for doing vdom-like
stuff.)
The previous logic was a bit byzantine, making a lot of inferences
based on which conditionals had already been processed that made it
hard to read. This simple function approach promises to be more
readable.
This is for consistency with how we show unreads in muted topics at
the stream level, avoiding distracting users with the appearance of
unread messages in muted topics that they've made clear they are not
interested in.
Arguably, we should show a faded count if there are unreads on muted
topics (but none on unmuted topics), but that seems somewhat complex
to maintain, and we'd benefit from user feedback to make an effective
decision on whether it'd be an improvement.
Fixes #13676.
I think this probably matches users' expected behavior that muted
streams shouldn't get in their way unless the user is actively looking
for them. If a user has a lot of muted topics with active traffic
(e.g. because of topics corresponding to channels in a mirrored Slack
instance), they would previously find their 5 slots cluttered with
those muted topics even if there were unmuted topics with unread
messages.
Fixes #13677.
We may revisit this in the future, but similar to is:private, the
current Zulip user experience makes users expect that in the
is:mentioned view, they should really be able to mark messages as
read.
Further, the practical use case for not marking them as read is very
low, since it's rare for someone to have so many mentions that
revisiting the mentions view isn't sufficient to see everything that
needs their attention.
Previously, is_exactly() had already been replaced with can_bucket_by().
This commit removes is_exactly() and replaces its usage in our tests
with can_bucket_by().
For Manage Streams, when we render the subscriptions
template, a significant amount of time is taken
by the "t" helper.
Obviously for the first call, we expect "t" to be
somewhat expensive, but subsequent calls should be
fast; i18next, however, seems to have some overhead.
Also, we can save a tiny bit of overhead (marking it
as a safe string) that comes from our helper.
As an aside, are we sure it's ok to mark translations
as safe strings?
To test before and after, use blueslip.timings before
and after this commit. When I tested with about 300
streams, the difference is pretty striking:
without cache: 100ms
with cache: 20ms
This is particularly interesting, since the subscriptions
templates have long strings for things like the SVG-based
checkmarks, but they're not really the bottleneck.
Unfortunately, this doesn't seem to be a huge win
elsewhere. In some places we don't call "t", but of
course those might change in the future and benefit from
the cache. And in other places we have smart widgets
that avoid rendering all N objects at once (e.g. buddy
list and list_render).
So this might be too big a hammer to speed up one
screen (albeit a really slow one). It's possible
that we should simply move the i18n.t step **outside**
of certain templates to avoid doing them in a loop.
We now incorporate people.get_message_people() in our
logic for compose/PM typeaheads. This not only gives
users better results in some cases, but it will also
improve performance for large realms in some cases.
We'll use this in two places coming up, so it's
worth extracting, plus I wanted to add the
fairly lengthy comment here. (Tim, feel free
to edit down the comment as you see fit).
This is relatively unobtrusive, and we don't send
anything to the server.
But any user can now enter blueslip.timings in the
console to see a map of how long things take in
milliseconds. We only record one timing per
event label (i.e. the most recent).
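Conceptually, the recording can be as simple as this sketch (the real
blueslip internals may differ):
```
// Sketch only; the real blueslip implementation may differ.
exports.timings = new Map();

exports.measure_time = function (label, f) {
    const start = performance.now();
    const result = f();
    const elapsed_ms = performance.now() - start;
    // Only one timing is kept per event label (the most recent).
    exports.timings.set(label, elapsed_ms);
    return result;
};
```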
It's pretty easy to test this by just clicking
around. For 300 users/streams most things are
fast except for:
- initialize_everything
- manage streams (render_subscriptions)
Both do lots of nontrivial work, although
"manage streams" is a bit surprising, since
we're only measuring how long to build the
HTML from the templates (whereas the real
time is probably browser rendering costs).
This change sets us up to optimize how we
filter users in the admin user settings.
See #13554 for more context on the user
facing issues.
This fix is basically three related things:
- Add filterer options to list_render.
- Add helper method to people.js.
- Use filterer in settings_users.js.
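A hedged sketch of the filterer idea (names are assumptions, not the actual
list_render API):
```
// Sketch only; names are assumptions, not the actual list_render API.
function apply_filter(items, query, opts) {
    if (opts.filterer) {
        // New option: the caller filters the whole list itself.
        return opts.filterer(items, query);
    }
    // Existing option: a per-item predicate.
    return items.filter((item) => opts.filter(item, query));
}

// Example usage with a filterer supplied by the caller:
const users = [{full_name: "Iago"}, {full_name: "Hamlet"}];
const visible = apply_filter(users, "ia", {
    filterer: (items, query) =>
        items.filter((user) => user.full_name.toLowerCase().includes(query)),
});
```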
The filter "callback" was only a "callback" in the
most general sense of the word.
It's just a filter predicate that returns a bool.
This is to prepare for another filtering option,
where the caller can filter the whole list
themselves. I haven't figured out what I will name
the new option yet, but I know I want to make the
two options have specific names.
We are already providing callbacks everywhere, so
it would be nice to eliminate some dead code.
This also speeds things up ever so slightly (no
longer type-checking the option every time through
the loop).
We also split out exports.filter to make unit testing
easier. The function seems kinda silly now, being so
small, but I hope to add another filtering option soon.
It's a bit confusing when you read this code to know
where the original list was created. I'm not a huge
fan of the cache scheme here, but it does seem to
work for live updates.
Zulip has had a small use of WebSockets (specifically, for the code
path of sending messages, via the webapp only) since ~2013. We
originally added this use of WebSockets in the hope that the latency
benefits of doing so would allow us to avoid implementing a markdown
local echo; they were not. Further, HTTP/2 may have eliminated the
latency difference we hoped to exploit by using WebSockets in any
case.
While we’d originally imagined using WebSockets for other endpoints,
there was never a good justification for moving more components to the
WebSockets system.
This WebSockets code path had a lot of downsides/complexity,
including:
* The messy hack involving constructing an emulated request object to
hook into doing Django requests.
* The `message_senders` queue processor system, which increases RAM
needs and must be provisioned independently from the rest of the
server.
* A duplicate check_send_receive_time Nagios test specific to
WebSockets.
* The requirement for users to have their firewalls/NATs allow
WebSocket connections, and a setting to disable them for networks
where WebSockets don’t work.
* Dependencies on the SockJS family of libraries, which has at times
been poorly maintained, and periodically throws random JavaScript
exceptions in our production environments without a deep enough
traceback to effectively investigate.
* A total of about 1600 lines of our code related to the feature.
* Increased load on the Tornado system, especially around a Zulip
server restart, and especially for large installations like
zulipchat.com, resulting in extra delay before messages can be sent
again.
As detailed in
https://github.com/zulip/zulip/pull/12862#issuecomment-536152397, it
appears that removing WebSockets moderately increases the time it
takes for the `send_message` API query to return from the server, but
does not significantly change the time between when a message is sent
and when it is received by clients. We don’t understand the reason
for that change (suggesting the possibility of a measurement error),
and even if it is a real change, we consider that potential small
latency regression to be acceptable.
If we later want WebSockets, we’ll likely want to just use Django
Channels.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Currently, if we change the stream, it is saved immediately, but
it is more convenient to have "Save" and "Discard" buttons, as we do
everywhere else in the organization settings subsystem.
This is a preliminary commit for further commits where we will be using the
newly created function `save_discard_widget_status_handler` in click
handler for changing the notification stream.
This refactors the `discard_property_element_changes` and
`check_property_changed` functions to move out the conditional statements
for properties that need to be handled separately. It's a preliminary
commit in the series of using the save/discard widget for the notification
stream setting.
As part of making the notification stream settings change using the
"save/discard" widget instead of saving immediately, we need to access the
stream id that is currently selected.
(This is another preliminary commit in the direction of having
"save/discard" widget show up rather than saving immediately.)
The code for selecting and processing the stream for both types of
notifications is almost the same, so it has been de-duplicated.
For "New stream notifications" and "New user notifications" it is more
intuitive to just use the new system for showing success/saving status
feedback.
This fixes the error where we pass a `user_id` of 'string' type as the
argument, instead of an 'integer', to `exports.get_person_from_user_id`,
which further passes `user_id` to IntDict.has(), which accepts only an
integer argument.
Extracting the function makes it a bit easier to
test and use in a generic way.
Also, I wanted this to live in stream_data, so that
it's easier to find if we change how we model
subscriber data.
Finally, I use _.every to do the subset check
instead of `_.difference`, since _.difference
is actually N-squared:
_.difference = restArguments(function(array, rest) {
rest = flatten(rest, true, true);
return _.filter(array, function(value){
return !_.contains(rest, value);
});
});
And we don't actually want to build a list only
to check that it's zero vs. nonzero length.
We now do this, which short circuits as soon
as it finds any key that is only in sub1:
return _.every(sub1.subscribers.keys(), (key) => {
return sub2_set.has(key);
});
First, there are no more convoluted signals.
We also simplify the parameter to just the "mentioned"
object corresponding to either a user or a broadcast
mention.
For the user group scenario, this has always been dead
code, which you only realized when you got to the comment
at the bottom. Now we actually do nothing.
And I moved the relevant comment to the
typeahead code (with new wording).
I also moved the is_silent check to the caller. I don't
feel too strongly about that either way. It's kind of silly
to call a function only to give that function an additional
responsibility to worry about. On the other hand, I see
the logic of that function enforcing everything. I went
with the former for now.
Arguably we should have a warning for silent mentions,
since doing a silent mention of somebody not on a stream
is a good indication of a typo. I do understand the use
case, but the user can always ignore the warning. Anyway,
we have decent test coverage on this.
This isn't really an extraction; it's more giving
a name to an anonymous function and moving it to
higher module scope.
We convert this to an ordinary function call, which
allows us to move it out of initialize().
Since there's just one simple parameter now (linked_stream),
we can avoid some error checking.
We also avoid the comment that describes the function,
since it now has a name.
And then one minor tweak is to do the inexpensive
`invite_only` higher in the function. This will be
a nice speedup when you link to really large public
streams.
The unit tests are also a bit easier to read now--less
setup and more explicit names.
This is easier to read and faster, because it
avoids some unnecessary encoding on the pm-with
part, plus just a lot of extra logic that amounts
to just appending the slug.
Performance for this function is relevant because it is used
for every user every time we rerender the right sidebar.
Previously, if you tried to invite a user whose account had been
deactivated, we didn't provide a clear path forward for reactivating
the users, which was confusing.
We fix this by plumbing through to the frontend the information that
there is an existing user account with that email address in this
organization, but that it's deactivated. For administrators, we
provide a link for how to reactivate the user.
Fixes #8144.
This experimental setting disables sending private messages in Zulip
in a crude way (i.e. users get an error when they try to send one).
It makes no effort to adjust the UI to avoid advertising the idea of
sending private messages.
Fixes #6617.
The sort_recipients helper is used for many different
typeaheads, such as compose PMs, compose mentions,
and some settings-related code.
We now avoid unnecessary sorting steps in cases
where we have plenty of results in the top buckets
(such as users who match on prefix).
This change should not have any user-facing
implications.
This method is a bit complex, but I think it's
worthwhile to force PM autocompletes and mention
autocompletes through the same code path.
We also kill off this method:
typeahead_helper.sort_people_and_user_groups
This fixes some regressions from a recent
commit that might not have been deployed.
9f7be51ce8
Even if that change had been deployed, it
should not have been user-facing, but it
would have spammed us with blueslip errors
every time somebody used the stream/topic
popovers in the left sidebar.
There's no reason any more to have a single function
filter both persons and groups. Instead, we just
filter each cohort with a more direct function.
This is a minor performance speedup (avoiding the
conditional in the loop), but I mostly wanted to
simplify the code.
The sort_people_and_user_groups function's only
value-add over sort_recipients is to split out
groups and users, but the caller already had
them split out, so it was kinda silly to concatenate
them back together.
I doubt this was a dumb decision at the time; I think
it was probably a consequence of how bootstrap's
normal approach is kinda inflexible when you're
using typeahead to pull data from multiple sources.
I wanted to kill off sort_people_and_user_groups
completely, but the mention/silent_mention autocompletes
are still a bit awkward to refactor.
For historical reasons pm_list was handling just
one possible edge case of where is:private was
combined with other search terms, namely the
pm-with operator.
The code was correct in realizing the is:private
was redundant there, but now we handle that
upstream in Filter.fix_operators (see previous
commit).
Now we just look for any is:private term.
This change makes these two functions more alike:
- get_search_result
- get_search_result_legacy
To test the UI modify zerver/views/home.py by
replacing `settings.SEARCH_PILLS_ENABLED` with
`True`. I only did a quick sanity check, since
any bugs with the new system are more likely due
to bitrot than any changes I have made here.
The history is this:
Tim cloned the code (before the smaller
helpers were extracted):
db4f6e278f
In 8b153f6452
Shubham removed get_operator_subset_suggestions but
accidentally left a `concat` statement in that got
misapplied to the previous suggestions:
- suggestions = get_operator_subset_suggestions(operators);
result = result.concat(suggestions);
The error there was carried over in some recent changes,
but this commit fixes that strangeness.
In 73e4f3b3fa
Shubham made this change, which makes sense only for
pills, and this code remains intact.
- if (operators.length > 0) {
- last = operators.slice(-1)[0];
+ if (query_operators.length > 0) {
+ last = query_operators.slice(-1)[0];
+ } else {
+ // If query_operators = [] then last will remain
+ // {operator: '', operand: '', negated: false}; from above.
+ // `last` has not yet been added to operators/query_operators.
+ // The code below adds last to operators/query_operators
+ operators.push(last);
+ query_operators.push(last);
}
Mohit made a couple changes to both old and new.
Anders made a couple non-substantive changes related to
the ES6 migration.
Steve (me) made several structural changes to the code. For
some of them I only changed the legacy code, not the pills
code. I didn't fix Shubham's mistake until this change.
Now the two functions should look similar except in the places
where they are intentionally different. I also added a comment
explaining the get_operator_subset_suggestions difference.
Fixes #13609
There may be a deeper issue that various JavaScript logic expects
every message to have a `.message_content` element, but we definitely
should have the `.rendered_markdown` class on any markdown content.
Fixes #13634.
The reason we use functions here will be clear
in the next change.
This is a "prefactor" commit that doesn't change
any user-facing behavior nor significantly change
performance.
A few reasons to extract it:
- we can shorten lines (and not repeat query
every time)
- we can scope the big block comment explaining
why util.prefix_sort is a strange name
- the name is better (it's an O(N) operation that
mostly partitions)
- we may want to swap it out with a function
that's truly a partition (since the
case checks done by prefix_sort are possibly
either a non-feature or mostly overridden by
the other sorts)
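As a rough illustration of the "mostly partitions" behavior (a sketch, not
the actual util.prefix_sort):
```
// Sketch only; not the actual util.prefix_sort.
// O(N): split items into those whose name starts with the query and
// the rest, preserving relative order within each bucket.
function partition_by_prefix(query, items, get_name) {
    const lower_query = query.toLowerCase();
    const matches = [];
    const rest = [];
    for (const item of items) {
        if (get_name(item).toLowerCase().startsWith(lower_query)) {
            matches.push(item);
        } else {
            rest.push(item);
        }
    }
    return {matches: matches, rest: rest};
}
```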
The recip.id || recip.user_id idiom has only been
needed for some old unit tests.
It was previously required as a bad workaround for the
local echo issue fixed in dd1a6a97bd
where we would get `display_recipient` values added in an invalid format.
I want to be able to easily test this without
having to simulate all the jQuery side effects.
This simply preserves the old logic, which seems
to handle one edge case without handling every
possible edge case. The edge cases aren't super
important here, though, since the only thing it affects
is bolding "Private Messages", and when to do that
is somewhat up to personal tastes.
Having said that, we could definitely improve
this code and possibly should move some of this
logic to either narrow_state.js or filter.js.
Instead of doing various ad-hoc calculations of
which PM is "active" and plumbing it through various
functions and then updating it via jQuery instead of
just the template, we now just calculate `is_active`
in `_build_private_messages_list` with a little
helper function.
In 3cfc3ca24b I removed
the feature that limited PM conversations to five or
less (including the active conversation), but I
didn't clean up this parameter. I think lint was
confused by the fact that we did mutate it.
I am wondering if this started out as an experiment
and was never fully polished before the push? Or
maybe I was just careless. Anyway, I don't
think there were any symptoms here--it was just dead code
that we didn't need.
This fixes a rebase issue between the int_dict introduction and its use
in people.js, and the introduction of filter_values in dict.js and its use
inside people.js.
Note that we haven't fully swept this for Dict,
since some dicts are keyed by strings. For
example PM counts can have a huddle like
"101,102,103" as a key.
This should be slightly more performant, and we
often call this function N times, such as when
rendering the buddy list.
There's a minor change to pm_list to avoid
an unnecessary computation on huddles that would
otherwise trigger a blueslip warning for the
huddles case.
Once we get past the special check for fake
person objects already having `pm_recipient_count`,
we can rely on the object being a `person`
object with `user_id` set.
When we are pulling data from message.display_recipient
for private messages, the user_id field is always
called 'id', not 'user_id', so we can simplify
some defensive code.
This required lots of manual testing:
- search/navigate user presence
- send PM and mention user
- pay attention to compose fade
- send stream msg and mention user
- open Private Messages in top-left and click
- test unread counts
- invite user who already has account
- search for users in search bar
- check user settings
- User Groups
- Users
- Deactivated Users
- Bots
- create a bot
- mention user groups
- send group PM then click on lower right
- view/edit/create streams
If there are still pieces of code that don't convert
ids to ints, the code should still work but report
blueslip errors.
I try to mostly convert user_ids to ints in the callers,
since often the callers are dealing with small amounts
of data, like user ids from huddles.
We only ever show 3 or 4 people in search suggestions
(possibly w/a couple variations, like pm-with/sender/etc.),
so we can try to search a smaller subset of people
before going through the entire realm.
We use message_store.user_ids() for this, since you
typically want to search messages for people that
have sent messages recently, and we already sort
based on PM conversations.
This should avoid some memory allocations.
We also use build_person_matcher to avoid
repeating the same logic over and over
again to process the query into termlets.
We also remove people.get_all_persons() and
people.person_matches_query().
This may actually be a slowdown for the worst case
scenario, but it sets us up to be able to easily
short circuit the removal of diacritic characters
for users that have pure ascii names.
For example, czo has lots of names like this:
- Tim Abbott
- Steve Howell
Since they're pure ascii, we can do a one-time
check. A subsequent commit will show how we use
this.
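A sketch of the one-time check (names are illustrative assumptions):
```
// Sketch only; names are illustrative.
function is_ascii(s) {
    return /^[\x00-\x7F]*$/.test(s);
}

// Do the check once per person (e.g. when the person is added), so
// that later searches can skip remove_diacritics for pure-ascii names.
function cache_ascii_info(person) {
    person.name_is_ascii = is_ascii(person.full_name);
}
```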
This looks like simple code cleanup, but it's more
than that.
The code cleanup here is that we don't have three
callbacks to get a list of typeaheads for bootstrap.
Instead, we just have one function that does all the
main work.
And then the speedup comes from the fact we no longer
need to remove diacritics from the query for every
time through our loop of seeing if a person matches
the query.
It's a bit subtle to see in the diff, but these are
the relevant lines:
const matcher = exports.get_person_or_user_group_matcher(query);
const filtered_results = _.filter(people_and_groups, matcher);
Before this, bootstrap was doing $.grep, and we'd have
to reinitialize the matcher for every person.
If you profile this before and after, you'll see that
remove_diacritics gets called fewer times.
To profile this, you want to load lots of users into
your DB and try to autocomplete "Extra", as in "Extra1 User".
If you try to autocomplete something else, then my patch
won't really help, and `remove_diacritics` will still
show up as expensive. Because it is that expensive a function.
These had to be done in tandem, since they were
both kinda coupled to the function that is now
called query_matches_name_description.
(This commit slightly negatively impacts PM
lookups, but this is addressed in the subsequent
commit, which makes PMs much faster. The impact
is super minimal--it's just an extra function
dispatch.)
This may seem silly now, since we are returning a function
that still dispatches over all flavors of search for
every item, but subsequent commits will make it obvious
why I'm doing this.
We want to do our own matching of items, rather than
just giving a callback to bootstrap, which does $.grep
on all the items.
Doing our own matching gives us flexibility for future
improvements like custom data structures for searching
through big amounts of data. Even in the short term
we can speed up searches by pulling expensive operations
outside the grep/filter call.
This architecture has been in place for our search
bar since ~2014.
We have ~5 years of proof that we'll probably never
extend Dict with more options.
Breaking the classes apart makes both a little faster
(no options to check), and we remove some options
in FoldDict that are never used (from/from_array).
A possible next step is to fine-tune the Dict to use
Map internally.
Note that the TypeScript types for FoldDict are now
more specific (requiring string keys). Of course,
this isn't really enforced until we convert other
modules to TS.
We had a potentially nasty bug where we
weren't guaranteeing that all/stream/everyone
collated in consistent ways inside of
`compare_people_for_relevance`, which can
send certain types of sort algorithms into
an infinite loop. I doubt this ever happened
in practice, but it's obviously worth fixing.
Now we also have a clear tiebreaker between
any two all/everyone/stream mentions, which
is the idx field.
Finally, this should be a bit more efficient.
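A sketch of the shape of a comparator with a well-defined tiebreaker
(field names are assumptions, not the actual code):
```
// Sketch only; field names are assumptions.  The important property is
// that the comparator is total and consistent: any two broadcast
// mentions (all/everyone/stream) always compare the same way, with idx
// as the final tiebreaker, so the sort never sees contradictory answers.
function compare_people_for_relevance(a, b) {
    if (a.is_broadcast && b.is_broadcast) {
        return a.idx - b.idx;
    }
    if (a.is_broadcast !== b.is_broadcast) {
        // Broadcast mentions collate consistently relative to users.
        return a.is_broadcast ? -1 : 1;
    }
    // ...then fall through to the usual person-vs-person comparisons.
    return 0;
}
```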
This name was misleading, since this code is used
in sort_recipients, which happens when you, for
example, autocomplete persons in the "To:" box
when composing (and has nothing to do with
mentioning).
This makes it a bit easier to find common patterns,
plus it sets us up to pull the calls even further
up the stack.
The first rule of dealing with user data is sanitize
at the edges, not deep down in some function that
has many callers. Putting this code so deep down
in the stack means it's more likely to be called in
a loop.
This moves clean_query into all the callers
of query_matches_source_attrs.
This doesn't change anything performance-wise,
but it sets up future commits.
This change is easy--we only had one caller.
This change means any query going against a
target with multiple `match_attrs`, such as
user names (first name, last) only has to
clean the query once per person.
We want to mostly deprecate this function (see
the comment I added), so I gave it a more specific
name.
Ideally I'd just fix `stream_create`, but it does
use this function in a couple places, and it's helpful
to reuse the same sort here. In one place stream_create
actually unshifts the "me" user back to the top of the
list, which makes sense for its use case.
If two user_ids in a recent huddle have ids
that sort lexically differently than numerically,
such as 7 and 66, then we were creating two
different buckets in pm_conversations.
This regression was introduced in
263ac0eb45 on
November 21, 2019.
Instead of having our callers pass in a possibly
non-canonical version of a user_ids_string, just
have them pass in a list.
The next commit will canonicalize the sort.
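A tiny sketch of the difference (illustrative only):
```
// Sketch of the bug and the fix: 7 and 66 sort differently lexically
// than numerically, so lexical sorting can create two buckets for the
// same huddle.
const user_ids = [66, 7];

const lexical = [...user_ids].sort().join(",");                   // "66,7"  (buggy key)
const canonical = [...user_ids].sort((a, b) => a - b).join(",");  // "7,66" (canonical key)
```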
The only thing get_color() does is look
up a sub:
exports.get_color = function (stream_name) {
const sub = exports.get_sub(stream_name);
if (sub === undefined) {
return stream_color.default_color;
}
return sub.color;
};
So if we have a sub already, there's no point
calling the helper.
Obviously, this isn't a huge deal, but it happens
N times during page load.
This should make any operation on subscribed
streams faster (we won't need to filter out
unsubscribed streams every time).
I started writing this before I realized we
had a bug where we call `subscribed_streams`
in a nested loop.
After fixing the bugs, this is not as much of
a bottleneck, but it's still a speedup in many
important places:
* build left sidebar
* every keystroke in search bar
* first keystroke in making #stream_links
* every keystroke in compose stream box
The streams settings code is kinda complicated.
It does a non-deterministic sort of the "others"
bucket when you add elements to the left panel.
They get hidden, anyway. Our values() call now
puts subscribed streams first. It never guaranteed
order, but putting subscribed streams first is
probably a good behavior for most situations.
This defers O(N*S) operations, where
N = number of streams
S = number of subscribers per stream
In many cases we never do an O(N) operation on
a stream. Exceptions include:
- checking stream links from the compose box
- editing a stream
- adding members to a newly added stream
An operation that used to be O(N)--computing
the number of subscribers--is now O(1), and we
don't even pay O(N) on a one-time basis to
compute it (not counting the cost to build the
array from JSON, but we have to do that).
Calling `set_filter_out_inactives` is expensive, since we
count up the number of subscribed streams, which iterates
through all your streams, creates a new list of subscribed
streams, then counts them.
In my dev setup, I created 700 streams, and this shaved
about 700ms off of the initial call to `build_stream_list`.
If we aren't showing users emails, then we don't
want to use emails in the search.
And if we are showing users emails, we want to
search on the email that's displayed to them.
For admins this will be delivery_email.
For regular users we arguably shouldn't search
on emails either, since it mostly causes confusion,
but this commit just preserves the current
behavior for those users (unless `show_email` is
false).
We want to be able to unit test this value,
since it's conditional on several factors:
- am I an admin?
- can non-admins view emails?
- do we have delivery_email for the user?
I'm mocking show_email in the tests, since the
show_email code is in `settings_org` and
kind of hard to unit test. It's not impossible,
but it's too much for this commit. (Either
we need to extract it out to a nice file or
deal with mocking jQuery. That module is
mostly data-oriented, so it would be nice
to have something like `settings_config` that
is actually pure data.)
This was duplicate code. I'm moving it to people
for pragmatic reasons--it's hard to unit test stuff
in settings_users.js due to all the jQuery.
It's also nice to have all people-related search
code in one place, just for auditing purposes.
It appears c28c3015 caused a regression where we
set `email` to undefined if a user does not have
`delivery_email` set, and this causes filtering
of users to fail for admins doing user settings.
This fixes only one of the issues reported in
issue #13554.
There's probably no easy fix to scrolling taking
long, but I think fixing search will mostly
address that complaint.
The Rust folks seem to agree with me that the
search results are too noisy. If I search for
"s" I get:
* names like Steve (good)
* names like Jesse (noisy)
* anybody with s in their email (super noisy)
Here is the relevant code:
return (
item.full_name.toLowerCase().indexOf(value) >= 0 ||
email.toLowerCase().indexOf(value) >= 0
);
We now can call is_ascii only once per search termlet
when we are filtering multiple persons on the same
query. (This requires the caller to use
`build_person_matcher` outside a loop or before
a `_.filter` call.)
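A hedged sketch of the intended usage pattern (details are assumptions, not
the real build_person_matcher):
```
// Sketch only; the real build_person_matcher may differ.
function build_person_matcher(query) {
    // Per-query work (trim, lowercase, splitting into termlets) happens
    // once here, rather than once per person inside a filter loop.
    const termlets = query.trim().toLowerCase().split(/\s+/);
    return function (person) {
        const name_pieces = person.full_name.toLowerCase().split(/\s+/);
        return termlets.every((termlet) =>
            name_pieces.some((piece) => piece.startsWith(termlet)));
    };
}

// Usage: build once, then filter many persons with the same matcher.
// const matcher = build_person_matcher(query);
// const matches = persons.filter(matcher);
```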
This is not a major speedup, but we do a couple
simple things here:
- trim the query outside the function we
build (that might be called multiple times)
- don't split names before we possibly
early-exit with an email match