This moves clean_query into all the callers
of query_matches_source_attrs.
This doesn't change anything performance-wise,
but it sets up future commits.
This change is easy--we only had one caller.
This change means any query going against a
target with multiple `match_attrs`, such as
user names (first name, last name), only has
to clean the query once per person.
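Roughly, the new shape at a call site is something like this
(a sketch; the wrapper and exact signature here are illustrative):
    // Clean once per person, instead of once per attr inside
    // query_matches_source_attrs.
    function query_matches_person(query, person) {
        const clean = clean_query(query);
        return query_matches_source_attrs(
            clean, person, ['full_name', 'email']);
    }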
We want to mostly deprecate this function (see
the comment I added), so I gave it a more specific
name.
Ideally I'd just fix `stream_create`, but it does
use this function in a couple places, and it's helpful
to reuse the same sort here. In one place stream_create
actually unshifts the "me" user back to the top of the
list, which makes sense for its use case.
This change should prevent test flakes, plus
it's more deterministic behavior for clients,
who will generally comma-join the ids into
a key for their internal data structures.
I was able to verify test coverage on this
by making the sort reversed, which would
cause test_huddle_send_message_events to
fail.
If two user_ids in a recent huddle sort
differently lexically than numerically,
such as 7 and 66, then we were creating two
different buckets in pm_conversations.
This regression was introduced in
263ac0eb45 on
November 21, 2019.
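For context, JavaScript's default Array sort compares elements
as strings, so a numeric comparator is needed to get the
canonical order:
    [7, 66].sort();                 // [66, 7] (lexical: "66" < "7")
    [7, 66].sort((a, b) => a - b);  // [7, 66] (numeric, canonical)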
Instead of having our callers pass in a possibly
non-canonical version of a user_ids_string, just
have them pass in a list.
The next commit will canonicalize the sort.
The server may send us ids in the order
[11, 2] instead of [2, 11]. Regardless, we
don't want to rely on server behavior for
the sort.
Our tests now show we process that data.
The current code is still buggy and causes
us to show the same huddle two different times
in situations where the lexical sort doesn't
match the numerical sort.
This happens on czo often, where Tim is user
7, and his id sorts lexically after ids like
58, 622, 4444, etc.
This should be about 4 times faster, saving something like half a
millisecond on each stream of 10000 subscribers.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The only thing get_color() does is look
up a sub:
    exports.get_color = function (stream_name) {
        const sub = exports.get_sub(stream_name);
        if (sub === undefined) {
            return stream_color.default_color;
        }
        return sub.color;
    };
So if we have a sub already, there's no point
calling the helper.
Obviously, this isn't a huge deal, but it happens
N times during page load.
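The inlining looks roughly like this (the call site shown is
illustrative):
    // Before: we already have `sub`, but still pay for a
    // lookup by stream name:
    //     const color = stream_data.get_color(sub.name);
    // After: just read the field.
    const color = sub.color;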
This should make any operation on subscribed
streams faster (we won't need to filter out
unsubscribed streams every time).
I started writing this before I realized we
had a bug where we call `subscribed_streams`
in a nested loop.
After fixing the bugs, this is not as much of
a bottleneck, but it's still a speedup in many
important places:
* build left sidebar
* every keystroke in search bar
* first keystroke in making #stream_links
* every keystroke in compose stream box
The streams settings code is kinda complicated.
It does a non-deterministic sort of the "others"
bucket when you add elements to the left panel.
They get hidden, anyway. Our values() call now
puts subscribed streams first. It never guaranteed
order, but putting subscribed streams first is
probably a good behavior for most situations.
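A sketch of the data-structure idea (names here are illustrative,
not the actual stream_data internals):
    const stream_info = new Map();      // all streams, by name
    const subscribed_info = new Map();  // subscribed streams only

    function subscribe(sub) {
        sub.subscribed = true;
        stream_info.set(sub.name, sub);
        subscribed_info.set(sub.name, sub);
    }

    function subscribed_streams() {
        // A cheap read, not an O(N) filter over every stream.
        return [...subscribed_info.values()];
    }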
This defers O(N*S) operations, where:
- N = number of streams
- S = number of subscribers per stream
In many cases we never do an O(N) operation on
a stream. Exceptions include:
- checking stream links from the compose box
- editing a stream
- adding members to a newly added stream
An operation that used to be O(N)--computing
the number of subscribers--is now O(1), and we
don't even pay O(N) on a one-time basis to
compute it (not counting the cost to build the
array from JSON, but we have to do that).
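A sketch of the deferral (hypothetical shape; the real code is
in stream_data):
    // Keep the raw subscriber ids from JSON; only build the Set
    // (an O(S) step) the first time something actually needs it.
    function create_sub(raw) {
        let subscriber_set;
        return {
            name: raw.name,
            subscriber_count: raw.subscribers.length,  // O(1)
            subscribers() {
                if (subscriber_set === undefined) {
                    subscriber_set = new Set(raw.subscribers);
                }
                return subscriber_set;
            },
        };
    }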
Calling `set_filter_out_inactives` is expensive, since we
count up the number of subscribed streams, which iterates
through all your streams, creates a new list of subscribed
streams, then counts them.
In my dev setup, I created 700 streams, and this shaved
about 700ms off of the initial call to `build_stream_list`.
If we aren't showing users emails, then we don't
want to use emails in the search.
And if we are showing users emails, we want to
search on the email that's displayed to them.
For admins this will be delivery_email.
For regular users we arguably shouldn't search
on emails either, since it mostly causes confusion,
but this commit just preserves the current
behavior for those users (unless `show_email` is
false).
We want to be able to unit test this value,
since it's conditional on several factors:
- am I an admin?
- can non-admins view emails?
- do we have delivery_email for the user?
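Putting those factors together, the value under test looks
something like this (a sketch; the function name is illustrative):
    function email_for_user_settings(person) {
        if (!settings_org.show_email()) {
            return undefined;  // don't show or search on emails
        }
        if (page_params.is_admin && person.delivery_email) {
            return person.delivery_email;  // what admins see
        }
        return person.email;
    }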
I'm mocking show_email in the tests, since the
show_email code is in `settings_org` and
kind of hard to unit test. It's not impossible,
but it's too much for this commit. (Either
we need to extract it out to a nice file or
deal with mocking jQuery. That module is
mostly data-oriented, so it would be nice
to have something like `settings_config` that
is actually pure data.)
This was duplicate code. I'm moving it to people
for pragmatic reasons--it's hard to unit test stuff
in settings_users.js due to all the jQuery.
It's also nice to have all people-related search
code in one place, just for auditing purposes.
It appears c28c3015 caused a regression where
we set `email` to undefined if a user does not
have `delivery_email` set, which caused
filtering of users to fail for admins in the
user settings page.
This fixes only one of the issues reported in
issue #13554.
There's probably no easy fix to scrolling taking
long, but I think fixing search will mostly
address that complaint.
The Rust folks seem to agree with me that the
search results are too noisy. If I search for
"s" I get:
* names like Steve (good)
* names like Jesse (noisy)
* anybody with s in their email (super noisy)
Here is the relevant code:
    return (
        item.full_name.toLowerCase().indexOf(value) >= 0 ||
        email.toLowerCase().indexOf(value) >= 0
    );
We can now call is_ascii only once per search termlet
when we are filtering multiple persons on the same
query. (This requires the caller to use
`build_person_matcher` outside a loop or before
a `_.filter` call.)
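The intended calling pattern is something like this (the
`people.` prefix and `persons` list are illustrative):
    // Build the matcher once (is_ascii etc. run here), then
    // apply it cheaply to each person.
    const matcher = people.build_person_matcher(query);
    const matches = _.filter(persons, matcher);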
This is not a major speedup, but we do a couple
simple things here:
- trim the query outside the function we
build (that might be called multiple times)
- don't split names before we possibly
early-exit with an email match
This will allow us to change some O(N) behavior
to O(1) where we are performing the same query
on a bunch of people. (Subsequent commits will
actually take advantage of this prefactoring.)
Once we have max_items results, stop trying
to get more items.
This should really help large realms when
you do a search on streams that turns up
more than N streams (where N is about 12).
We won't even bother to find people.
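A sketch of the short circuit (names are illustrative):
    function get_search_suggestions(query, getters, max_items) {
        const results = [];
        for (const getter of getters) {
            results.push(...getter(query));
            if (results.length >= max_items) {
                break;  // enough items; skip people lookups, etc.
            }
        }
        return results.slice(0, max_items);
    }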
This class gives us more control over attaching
suggestions to our eventual result. The main
thing we do now is remove duplicates as they're
encountered.
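The shape is something like this hypothetical sketch:
    // Drop duplicate suggestions as they're encountered,
    // keyed on the suggestion string.
    class SuggestionCollector {
        constructor() {
            this.result = [];
            this.seen = new Set();
        }

        push(suggestion) {
            if (!this.seen.has(suggestion.search_string)) {
                this.seen.add(suggestion.search_string);
                this.result.push(suggestion);
            }
        }
    }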
This will make sense in the follow up commit,
where we can short circuit actions as soon as
we get enough results.
This has a few benefits:
- we remove some duplicate code
- we can see finalize_results in profiles
It turns out finalize_results is expensive
for some searches. If the search itself doesn't
do a ton of work but returns a lot of results,
we see it in finalize_results. It brings to
attention that we should be truncating items
earlier instead of doing lots of unnecessary
work.
This isn't a huge speedup, but it's an easy
code change.
We remove the two-liner highlight_with_escaping,
which was only called in one place, and when
we inline it into the caller, we can pull the
first line, which builds the regex, out of the
loop.
The code we removed in highlight_with_escaping
is exactly the same code as in
highlight_with_escaping_and_regex.
I actually copy/pasted this code five years
ago and am now removing the duplication. :)
When we're highlighting all the people that show
up in a search from the search bar, we need
to fairly expensively build a regex from the
query:
    query = query.toLowerCase();
    query = query.replace(/[\-\[\]{}()*+?.,\\\^$|#\s]/g, '\\$&');
    const regex = new RegExp('(^' + query + ')', 'ig');
Even though the final regex is presumably cached, we
still needed to do that `query.replace` for every person.
Even for relatively small numbers of persons, this would
show up in profiles as expensive.
Now we just build the query once by using a pattern
where you call a function outside the loop to build
an inner function that's used in the loop that closes
on the `query` above. The diff probably shows this
better than I explained it here.
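In sketch form (the builder name is illustrative;
highlight_with_escaping_and_regex is the existing helper):
    function build_highlight_regex(query) {
        query = query.toLowerCase();
        query = query.replace(/[\-\[\]{}()*+?.,\\\^$|#\s]/g, '\\$&');
        return new RegExp('(^' + query + ')', 'ig');
    }

    function highlight_people(query, persons) {
        // Built once; every person reuses the compiled regex.
        const regex = build_highlight_regex(query);
        return persons.map(
            (person) => highlight_with_escaping_and_regex(
                regex, person.full_name));
    }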
Extracting this calculation makes it easier to hack
it when you're trying to load lots of users.
We probably want a slightly more realistic calculation
here for stress testing. And also fewer rows. But
at least now it's a little more clear what it's doing.
QueueProcessingWorker and LoopQueueProcessingWorker are abstract classes
meant to be subclassed by a class that will define its own consume()
or consume_batch() method. ABCs are suited for that, and we can tag
consume/consume_batch with the @abstractmethod decorator, which makes
subclasses that don't properly define these methods impossible to even
instantiate (as opposed to only crashing once consume() is called).
It's also nicely detected by mypy, which will throw errors such as
these on invalid use:
    error: Only concrete class can be given where "Type[TestWorker]" is
    expected
    error: Cannot instantiate abstract class 'TestWorker' with abstract
    attribute 'consume'
Since mypy detects this, we can remove the test
test_worker_noconsume, which just tested the old behavior -
raising an exception when the unimplemented consume() gets
called. That check now happens at the linter level.
LoopQueueProcessingWorker can handle exceptions inside consume_batch in
a similar manner to how QueueProcessingWorker handles exceptions inside
consume.
Our ldap integration is quite sensitive to misconfigurations, so more
logging is better than less to help debug those issues.
Despite the following docstring on ZulipLDAPException:
    "Since this inherits from _LDAPUser.AuthenticationFailed, these will
    be caught and logged at debug level inside django-auth-ldap's
    authenticate()"
We weren't actually logging anything, because debug level messages were
ignored due to our general logging settings. It is however desirable to
log these errors, as they can prove useful in debugging configuration
problems. The django_auth_ldap logger can get fairly spammy on debug
level, so we delegate ldap logging to a separate file
/var/log/zulip/ldap.log to avoid spamming server.log too much.
Fixes this error after rebooting the host:
    $ sudo ./destroy-all -f
    zulip-install-bionic-41MM2
    lxc-stop: zulip-install-bionic-41MM2: tools/lxc_stop.c: main: 191 zulip-install-bionic-41MM2 is not running
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>