This test mostly tests logic that I'm about
to remove in subsequent commits, and it's a bit
messy.
This commit drops us below 100% line coverage, but I
will restore that a few commits later.
In 3cfc3ca24b I removed
the feature that limited PM conversations to five or
fewer (including the active conversation), but I
didn't clean up this parameter. I think lint was
confused by the fact that we did mutate it.
I am wondering if this started out as an experiment
that was never fully polished before the push, or
maybe I was just careless. Anyway, I don't
think there were any symptoms here--it was just dead
code that we didn't need.
This fixes a rebase issue between the introduction of
int_dict (and its use in people.js) and the introduction
of filter_values in dict.js (and its use in people.js).
Note that we haven't fully swept this for Dict,
since some dicts are keyed by strings. For
example, PM counts can have a huddle key like
"101,102,103".
This should be slightly more performant, and we
often call this function N times, such as when
rendering the buddy list.
There's a minor change to pm_list to avoid
an unnecessary computation on huddles that would
otherwise trigger a blueslip warning.
Once we get past the special check for fake
person objects already having `pm_recipient_count`,
we can rely on the object being a `person`
object with `user_id` set.
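A minimal sketch of that shape (names other than
pm_recipient_count and user_id are hypothetical):

    function get_recipient_count(person) {
        // Fake person objects carry a precomputed count.
        if (person.pm_recipient_count !== undefined) {
            return person.pm_recipient_count;
        }
        // Past that check, user_id is reliably set.
        const count = count_dict.get(person.user_id);
        return count || 0;
    }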
When we are pulling data from message.display_recipient
for private messages, the user_id field is always
called 'id', not 'user_id', so we can simplify
some defensive code.
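So the extraction can be this simple (a sketch, assuming
the PM shape of display_recipient):

    // For PMs, each row of display_recipient looks like
    // {id: 101, email: "...", full_name: "..."}.
    const user_ids = message.display_recipient.map((recip) => recip.id);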
This required lots of manual testing:
- search/navigate user presence
- send PM and mention user
- pay attention to compose fade
- send stream msg and mention user
- open Private Messages in top-left and click
- test unread counts
- invite user who already has account
- search for users in search bar
- check user settings
- User Groups
- Users
- Deactivated Users
- Bots
- create a bot
- mention user groups
- send group PM then click on lower right
- view/edit/create streams
If there are still pieces of code that don't convert
ids to ints, the code should still work but report
blueslip errors.
I try to mostly convert user_ids to ints in the callers,
since often the callers are dealing with small amounts
of data, like user ids from huddles.
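The defensive pattern is roughly this (a sketch; the
function and dict names are hypothetical):

    function get_by_user_id(user_id) {
        if (typeof user_id !== "number") {
            // Report the stray string id, then repair it so
            // the code still works.
            blueslip.error("user_id is a string: " + user_id);
            user_id = parseInt(user_id, 10);
        }
        return people_by_user_id_dict.get(user_id);
    }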
We only ever show 3 or 4 people in search suggestions
(possibly w/a couple variations, like pm-with/sender/etc.),
so we can try to search a smaller subset of people
before going through the entire realm.
We use message_store.user_ids() for this, since you
typically want to search messages for people that
have sent messages recently, and we already sort
based on PM conversations.
This should avoid some memory allocations.
We also use build_person_matcher to avoid
repeating the same logic over and over
again to process the query into termlets.
We also remove people.get_all_persons() and
people.person_matches_query().
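Here is a rough sketch of the matcher-building idea (the
termlet handling is simplified and hypothetical; the real
matching involves more than names):

    function build_person_matcher(query) {
        // Process the query into termlets once, not per person.
        const termlets = query.toLowerCase().trim().split(/\s+/);
        return function (person) {
            const names = person.full_name.toLowerCase().split(/\s+/);
            return termlets.every((termlet) =>
                names.some((name) => name.startsWith(termlet))
            );
        };
    }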
This may actually be a slowdown for the worst-case
scenario, but it sets us up to be able to easily
short-circuit the removal of diacritic characters
for users that have pure-ascii names.
For example, czo has lots of names like this:
- Tim Abbott
- Steve Howell
Since they're pure ascii, we can do a one-time
check. A subsequent commit will show how we use
this.
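Roughly the kind of use this sets up (all names here are
hypothetical; the real code may differ):

    function matches_person_name(query, person) {
        // One-time, cached pure-ascii check per person.
        if (person.ascii_only === undefined) {
            person.ascii_only = /^[a-z ]*$/.test(person.full_name.toLowerCase());
        }
        const name = person.ascii_only
            ? person.full_name.toLowerCase()
            : remove_diacritics(person.full_name.toLowerCase());
        return name.includes(query);
    }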
This looks like simple code cleanup, but it's more
than that.
The cleanup is that we no longer have three
callbacks to get a list of typeaheads for bootstrap.
Instead, we just have one function that does all the
main work.
And then the speedup comes from the fact we no longer
need to remove diacritics from the query for every
time through our loop of seeing if a person matches
the query.
It's a bit subtle to see in the diff, but these are
the relevant lines:
    const matcher = exports.get_person_or_user_group_matcher(query);
    const filtered_results = _.filter(people_and_groups, matcher);
Before this, bootstrap was doing $.grep, and we'd have
to reinitialize the matcher for every person.
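Roughly what the old path looked like (a hypothetical
reconstruction, not the exact old code):

    // bootstrap called us once per item, so the query was
    // reprocessed (including remove_diacritics) on every call
    const results = $.grep(people_and_groups, function (item) {
        return query_matches_person(query, item);
    });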
If you profile this before and after, you'll see that
remove_diacritics gets called fewer times.
To profile this, you want to load lots of users into
your DB and try to autocomplete "Extra", as in "Extra1 User".
If you try to autocomplete something else, then my patch
won't really help, and `remove_diacritics` will still
show up as expensive, because it really is that expensive
a function.
These had to be done in tandem, since they were
both kinda coupled to the function that is now
called query_matches_name_description.
(This commit slightly negatively impacts PM
lookups, but this is addressed in the subsequent
commit, which makes PMs much faster. The impact
is super minimal--it's just an extra function
dispatch.)
This may seem silly now, since we are returning a function
that still dispatches over all flavors of search for
every item, but subsequent commits will make it obvious
why I'm doing this.
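Something like this sketch (the user-group detection and
the exact signatures are assumptions):

    exports.get_person_or_user_group_matcher = function (query) {
        const person_matcher = exports.build_person_matcher(query);
        return function (item) {
            if (item.members !== undefined) {
                // user groups carry a members list
                return query_matches_name_description(query, item);
            }
            return person_matcher(item);
        };
    };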
We want to do our own matching of items, rather than
just giving a callback to bootstrap, which does $.grep
on all the items.
Doing our own matching gives us flexibility for future
improvements like custom data structures for searching
through large amounts of data. Even in the short term
we can speed up searches by pulling expensive operations
outside the grep/filter call.
This architecture has been in place for our search
bar since ~2014.
The benchmark is commented out. It takes only a few
milliseconds to run, so there may be no reason not
to always run it. It doesn't test correctness, so
it would arguably inflate line coverage, but set/get
are obviously covered elsewhere.
We now require the actual tests to explicitly
zrequire Dict, rather than magically adding this.
In one case, the use of Dict was clearly just for
the test (not the app), so I converted that to an
ordinary JS object (see timerender.js).
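For example, at the top of a node test (assuming the
usual zrequire pattern):

    const Dict = zrequire('dict').Dict;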
We have ~5 years of proof that we'll probably never
extend Dict with more options.
Breaking the class into two classes makes both a little
faster (no options to check), and we remove some helpers
in FoldDict that were never used (from/from_array).
A possible next step is to fine-tune the Dict to use
Map internally.
Note that the TypeScript types for FoldDict are now
more specific (requiring string keys). Of course,
this isn't really enforced until we convert other
modules to TS.
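As a rough sketch of the direction (not the exact
implementation; the Map internals here are the "possible
next step" mentioned above):

    class FoldDict {
        constructor() {
            this._map = new Map();
        }
        get(key) {
            const entry = this._map.get(key.toLowerCase());
            return entry && entry.value;
        }
        set(key, value) {
            // keep the original key around for iteration
            this._map.set(key.toLowerCase(), {key: key, value: value});
        }
    }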
Model classes fetched through apps.get_model don't get methods or class
attributes. It's not feasible to add them to all these objects in
use_db_models, but Recipient.PERSONAL etc. are worth setting, since
doing that increases the range of functions that can successfully be
imported and called in test_migrations.py.
These tests had a lot of very repetitive, identical mocking, in some
cases without even doing anything with the mocks. It's cleaner to put
the mock in the one relevant, common place for all the tests that need
it, and remove it from the tests that had no use for the mocking.
Fixes #13504.
This commit is purely an improvement in error handling.
We used to not do any validation on keys before passing them to
memcached, which meant that, for invalid keys, memcached's own key
validation would throw an exception. Unfortunately, the resulting
error messages are super hard to read; the traceback structure doesn't
even show where the call into memcached happened.
In this commit we add validation to all the basic cache_* functions, and
appropriate handling in their callers.
We also add a lot of tests for the new behavior, which has the nice
effect of giving us decent coverage of all these core caching
functions which previously had been primarily tested manually.
These are leftovers from when we had default settings in the
settings.py file. Now that the files are separate, those references to
"below" are no longer correct.
If ldap sync is run while ldap is misconfigured, it can end up causing
troublesome deactivations due to not finding users in ldap:
deactivating all users, or deactivating all administrators of a realm,
which then requires manual intervention in the django shell to
reactivate at least one admin.
This change prevents such potentially troublesome situations, which
are overwhelmingly likely to be unintentional. If intentional, the
--force option can be used to remove the protection.
We had a potentially nasty bug where we
weren't guaranteeing that all/stream/everyone
collated in consistent ways inside of
`compare_people_for_relevance`, which can
send certain types of sort algorithms into
an infinite loop. I doubt this ever happened
in practice, but it's obviously worth fixing.
Now we also have a clear tiebreaker between
any two all/everyone/stream mentions, which
is the idx field.
Finally, this should be a bit more efficient.
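The consistency requirement, sketched (the non-wildcard
branch is elided; anything beyond the idx field is an
assumption):

    function compare_people_for_relevance(person_a, person_b) {
        const a_wild = person_a.idx !== undefined; // all/everyone/stream
        const b_wild = person_b.idx !== undefined;
        if (a_wild && b_wild) {
            // explicit tiebreaker keeps the comparator
            // consistent, so sorts can't loop forever
            return person_a.idx - person_b.idx;
        }
        if (a_wild) {
            return -1;
        }
        if (b_wild) {
            return 1;
        }
        return compare_by_pms(person_a, person_b); // assumed helper
    }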
We don't have people named "all". Instead, we
create pseudo person objects with email/full_name
of "all" (along with some other fields). The tests
now reflect this.
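Roughly the shape of those pseudo person objects (the
fields beyond email/full_name are assumptions):

    const all_item = {
        email: "all",
        full_name: "all",
        idx: 1, // tiebreaker from the sort fix above
    };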