The server sends down lists of unread message ids in various
buckets, and we now use those on the client to provide more
complete counts of unread messages.
This commit makes get_recipient_info() faster by never creating
Django ORM objects. We use the ORM to create a values query
instead, and then we iterate over the rows to create various
collections of ids.
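Roughly, the new shape is something like the sketch below (the
function name and the exact set of fields are illustrative, not the
real get_recipient_info() code):

    from zerver.models import UserProfile

    def get_recipient_id_info(user_ids):
        # One values() query; we never materialize UserProfile objects.
        rows = UserProfile.objects.filter(id__in=user_ids).values(
            'id', 'is_active', 'enable_online_push_notifications',
        )
        active_user_ids = set()
        push_notify_user_ids = set()
        for row in rows:
            if row['is_active']:
                active_user_ids.add(row['id'])
            if row['enable_online_push_notifications']:
                push_notify_user_ids.add(row['id'])
        return dict(
            active_user_ids=active_user_ids,
            push_notify_user_ids=push_notify_user_ids,
        )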
In order to avoid lots of code duplication, this commit unifies
how we query UserProfile for PMs and streams. Prior to this
commit we were getting "wide" UserProfile objects out of
our memcached cache. Now we just go to the database with our
list of user_ids. The new approach at worst adds one hop to the
database for PMs, which aren't really a performance bottleneck
(compared to streams). And the new approach actually saves a
hop when both partners aren't in cache (plus we don't pay the
penalty of hitting the cache itself).
The performance improvement here is easy to measure for messages
to streams with many users, even with all the other activity
that goes on inside do_send_messages(). I took test_performance()
in test_messages.py, set num_extra_users to 3000, and consistently
measured a ~20% speedup in do_send_messages().
This commit also eliminates fetching of emails. We probably
could have done that in a prior commit, but in this commit it
is very explicit that we don't need it. While removing email
from the query is a no-brainer, it actually had a negligible
impact on performance. Almost all the savings here come from
not creating UserProfile objects.
This function returns a summary of recipient data for a message
that's being sent. It mostly just moves code out of the
old function, get_recipient_user_profiles().
This commit is necessary to prevent bringing back emails from the
DB for all N recipients of a message just to see if the feedback
bot is being invoked.
We calculate `service_bot_tuples` earlier in the function, so that
we don't need "full" UserProfile objects later in the function.
This is part of consolidating code that basically just needs to
triage user_ids.
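Schematically, and with the row-building query elided (the real code
is presumably stricter about which bot_type values count as service
bots):

    # rows is a values() query like the one sketched earlier.
    service_bot_tuples = [
        (row['id'], row['bot_type'])
        for row in rows
        if row['is_bot'] and row['bot_type'] is not None
    ]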
This starts to phase out the need for UserProfile objects in
do_send_messages(). UserProfile objects are expensive to create
for large streams with lots of users. The objects in the code
before this commit aren't even full UserProfile objects.
This change mostly sets up future performance improvements, but
we also get a minor speedup here when we run a test with 3000
stream subscribers.
There is no reason for either render_incoming_message() or
render_markdown() to require full UserProfile objects just to
triage alert words.
By only asking for user_ids, we save extra queries in two
call paths, and we make it easier to start using user_ids in
do_send_messages().
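A hedged sketch of what triaging alert words with only ids can look
like (the helper name and the realm_alert_words mapping are
illustrative, not the actual render_markdown() internals):

    def filter_by_alert_words(content, realm_alert_words, message_user_ids):
        # realm_alert_words maps user_id -> list of that user's alert
        # words; message_user_ids is a set of ints, not profiles.
        content_lower = content.lower()
        user_ids_with_alert_words = set()
        for user_id in message_user_ids:
            words = realm_alert_words.get(user_id, [])
            if any(word.lower() in content_lower for word in words):
                user_ids_with_alert_words.add(user_id)
        return user_ids_with_alert_words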
This function is essentially a copy of get_recipient_user_profiles,
which is about to go away. The new function enforces the contract of
typing indicators, namely that they don't apply to streams, which
allows us to use a relatively simple approach for getting user
profile objects.
We are diverging this code because the send-message path needs
more optimizations.
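A minimal sketch of the simpler approach this contract allows,
assuming Zulip-style Recipient/Subscription models (not the final
implementation):

    from zerver.models import Recipient, Subscription, UserProfile

    def get_typing_user_profiles(recipient, sender_id):
        if recipient.type == Recipient.STREAM:
            raise ValueError('typing indicators are not for streams')
        if recipient.type == Recipient.PERSONAL:
            user_ids = {recipient.type_id, sender_id}
        else:  # Recipient.HUDDLE
            user_ids = Subscription.objects.filter(
                recipient=recipient,
            ).values_list('user_profile_id', flat=True)
        return list(UserProfile.objects.filter(id__in=user_ids))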
This change introduces an extra hop to the database, but it is
generally faster due to nuances of the DB and the ORM. It
also sets us up to optimize get_recipient_user_profiles() by
avoiding creating ORM objects.
I measured the impact of this using a stream with 3000
subscribers, half of whom were idle, and it speeds things up
by 10%.
This is needed in order to mock the method when testing
`custom_check.py`. The diff for this commit is a bit broken;
all it really does is move the method out of `build_custom_checkers`.
Avoiding a join to UserProfile here speeds up the query from
86ms -> 28ms when you analyze it with about 2000 mobile users
in a 5000-user realm.
We also avoid some code duplication here, since we filter
UserPresence for the same group of users as we filter
PushDeviceToken.
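Schematically (model and field names follow Zulip's schema, but this
is a sketch rather than the literal query code):

    # Compute the realm's active user ids once...
    user_ids = list(UserProfile.objects.filter(
        realm=realm,
        is_active=True,
    ).values_list('id', flat=True))

    # ...then filter both tables by id, with no join back to UserProfile.
    presence_rows = UserPresence.objects.filter(
        user_profile_id__in=user_ids,
    ).values('user_profile_id', 'client__name', 'status', 'timestamp')

    mobile_user_ids = set(PushDeviceToken.objects.filter(
        user_id__in=user_ids,
    ).values_list('user_id', flat=True))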
This avoids an O(N-squared) hit during presence queries. The speedup
here is probably negligible compared to everything else going on, but
sets are more semantically correct, anyway.
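For example, reusing the names from the sketch above, something like
this keeps the per-row membership check O(1):

    # A set makes "is this a mobile user?" an O(1) lookup, so
    # annotating N presence rows stays O(N) rather than O(N^2).
    for row in presence_rows:
        row['pushable'] = row['user_profile_id'] in mobile_user_ids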
Before this commit, postgres would choose a non-optimal query
plan to find all presence rows belonging to a realm. We now
do an extra query to get the list of relevant user_ids, which allows
the next query to take advantage of UserPresence's index on
user_profile_id.
Here is the query plan for the offending query (this particular query isn't
verbatim from the code, but it's representative of the problem):
explain analyze
select client_id
from zerver_userpresence
INNER JOIN zerver_userprofile ON
zerver_userprofile.id = zerver_userpresence.user_profile_id
WHERE
zerver_userprofile.is_active and
zerver_userprofile.realm_id = 3;
Hash Join (cost=149.66..506.82 rows=5007 width=4) (actual time=48.834..121.215 rows=5007 loops=1)
Hash Cond: (zerver_userprofile.id = zerver_userpresence.user_profile_id)
-> Seq Scan on zerver_userprofile (cost=0.00..260.11 rows=5369 width=4) (actual time=0.009..24.322 rows=5021 loops=1)
Filter: (is_active AND (realm_id = 3))
Rows Removed by Filter: 3
-> Hash (cost=87.07..87.07 rows=5007 width=8) (actual time=48.789..48.789 rows=5010 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 196kB
-> Seq Scan on zerver_userpresence (cost=0.00..87.07 rows=5007 width=8) (actual time=0.007..24.355 rows=5010 loops=1)
Total runtime: 145.063 ms
You can see above that we're filtering on realm_id instead of using an index.
When you decompose the query into two queries, the total time is about 100ms, for a
savings of 33%. I imagine the savings would be even greater on an instance with lots
of realms. This was tested on dev with one really large realm and one tiny realm.
We were using `.order_by('user_profile_id', '-timestamp')` in our
UserPresence query in get_status_dicts_for_query.
We don't need a full sort to produce the dictionary of statuses.
In fact the whole operation in Python is still O(N):
- divvy rows up to be per-user in an O(N) pass
- find max row for the 'aggregated' entry in an O(n) pass
per user
The one minor annoyance of this fix is that datetime_to_timestamp
is lossy, so if you naively call to_presence_dict before finding
the "max" row, you get test flakes if rows are created during the
same second. I decided to avoid calling to_presence_dict so there
are fewer moving parts, but there's still the ugly step of having
to remove the "dt" field from the final results.
The commit() call in fix() breaks migrations and tests (unless you
mock) due to outer transactions.
We now explicitly call commit() from the management command.
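The shape of the split is roughly (the command boilerplate and the
users_to_fix() helper are illustrative):

    from django.core.management.base import BaseCommand
    from django.db import connection

    def fix(user_profile):
        # Repair this user's rows; deliberately no commit() here, so
        # the function is safe inside migrations and test transactions.
        ...

    class Command(BaseCommand):
        def handle(self, *args, **options):
            for user_profile in users_to_fix():  # hypothetical helper
                fix(user_profile)
                connection.commit()  # the command owns the commit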