Waseem is ok with removing the client-specific tabs on the
main /activity page. This reduces the number of queries from
25 to 1. We might eventually restore some of that logic, but
we will do it more efficiently. A lot of the data for
non-website clients is kind of unreliable, anyway.
The page looks kind of funny with only one tab, but that
will be fixed in the next commit.
(imported from commit 54f08f89d5242ad3e045d8ca0d97b86617c15380)
When we don't already have old messages in cache, we need to
fetch data from the database and create dictionaries for the
cache. This commit makes that process work in 50ms, instead
of 130ms, for the data set in test_bulk_message_fetching(),
which is 602 records. Before this commit we had about 132
microseconds of unnecessary churn per message, because we
were fetching DB fields we didn't need and incurring the cost
of the Django ORM. Now we use values() to get only the columns
we need, and we take advantage of previous commits that make
our code less OO and more function-driven, so we can pass the
values directly to build_message_dict() without having to create
objects.
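The shape of the change, as a rough sketch (the import, the column
list, and the simplified build_message_dict() call here are
assumptions, not the exact code):

from zerver.models import Message

def fetch_message_dicts(message_ids):
    # Fetch only the columns the dict builder needs, as plain dicts,
    # skipping Django model instantiation entirely.
    rows = Message.objects.filter(id__in=message_ids).values(
        'id', 'sender_id', 'recipient_id', 'subject', 'content',
        'pub_date')
    # The dict of column values goes straight to the dict builder;
    # no Message objects are created along the way.
    return {row['id']: build_message_dict(row) for row in rows}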
A couple of caveats on this commit:
1) I haven't been able to get good measurements on the overall
effect on get_old_messages_backend(). If you kill the cache to
force DB queries, you introduce noise related to sessions and
user profiles.
2) Look at the long comment in this commit related to
re-rendering messages in this codepath. The problem precedes
this commit.
(imported from commit dcb64aa9416f0e9583355ddd6dc3adfa746b9fc7)
The realm should always be the realm of the stream, and we should
always pass in a stream rather than sometimes passing in a stream name
and other times passing in a stream.
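A tiny hypothetical illustration of the convention (the function
name is made up):

def do_stream_action(stream):
    # The Stream object is passed in directly; its realm is derived
    # from it rather than being passed separately or looked up from
    # a stream name.
    realm = stream.realm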
(imported from commit a098d6ed3db218a37c1b6b7c956e847c316c2d13)
We have been persisting muting preferences on the back end for
a while, but we haven't been adding them to page_params for the
client to have at reload/startup time.
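A minimal sketch of the change, assuming the preferences are
persisted as a JSON blob on the user's profile and that page_params
is the dict handed to the client at startup (both assumptions):

import json

# Inside the view that builds the startup payload (sketch only).
page_params['muted_topics'] = json.loads(
    user_profile.muted_topics or '[]')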
(imported from commit d9ca68aa0e4d22bfb0e6ce67fc0bc63981175c8b)
We have a handy bulk_add_subscriptions function to make cases
like this fast, so let's use it.
On my machine this reduces the number of db queries during account_register
from 112 to 66.
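Roughly, the change swaps a per-stream loop for one bulk call; the
helper name in the loop and the bulk call's signature are
assumptions:

# Before (roughly): subscribe to each default stream one at a time,
# issuing several queries per stream.
for stream in default_streams:
    do_add_subscription(user_profile, stream)  # per-stream helper (name assumed)

# After: a single bulk call that batches the lookups and inserts.
bulk_add_subscriptions(default_streams, [user_profile])  # signature assumed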
(imported from commit 21a6b31d0f229998d095735b8c581a50ca6aab66)
It shows domains and how many active users they have. A
user is considered active if they have done something at least
as active as updating their pointer in the last day. Domains
with no meaningful activity in the last two weeks are excluded
from the report.
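A sketch of the counting logic (the model, field names, and query
allowlist below are assumptions about how activity is recorded, not
the actual report code):

from datetime import timedelta
from django.utils import timezone
from zerver.models import UserActivity

# Hypothetical allowlist: requests considered at least as "active"
# as a pointer update.
ACTIVE_QUERIES = ['update_pointer']

def realm_user_counts():
    day_ago = timezone.now() - timedelta(days=1)
    rows = (UserActivity.objects
            .filter(last_visit__gte=day_ago, query__in=ACTIVE_QUERIES)
            .values('user_profile__realm__domain', 'user_profile__id')
            .distinct())
    counts = {}
    for row in rows:
        counts.setdefault(row['user_profile__realm__domain'], set()).add(
            row['user_profile__id'])
    # Domains whose most recent activity is older than two weeks are
    # dropped before rendering (omitted here).
    return {domain: len(users) for domain, users in counts.items()}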
(imported from commit 700cecfc7f1732e9ac3ea590177da18f75b01303)
A small functional change here was to eliminate an enormous "Usage"
headline that was already implicit from the tabs. It would have
complicated the refactoring to try to preserve it, and I don't think
anyone will miss it.
Extracting this template will give us a little more flexibility
to customize future tabs in the /activity page.
(imported from commit bdb0b7030c8ec1e20d4451dc059830c3f5ea7632)
We are still showing the same data points, but the logic to drill
down on details for a particular realm is now all server side,
not client side, and we are smarter about omitting fields. In
summary mode, we don't show empty Name or Email columns. In
detailed mode, we show the realm as a headline instead of a column.
In this version you do lose the ability to see all system users in
the same view, but Waseem is ok with this.
(imported from commit edd2e646ab4cf5783ea64232d0cd621debece8d4)
When you load the activity report, it will just show summary
counts for realms, but if you click on a realm, you will see
details about the users in that realm. You can also click "Show all"
to see an interleaved view of realms and users.
(imported from commit b106557b1fae64d525071afc124b5a8aed319086)
Add rows to the activity report that roll up counts for all
users on each realm, to go along with individual users.
(imported from commit 8104f3ef7fbe406fe0fd2ba1bb00ce76a1ccbee5)
get_subscribers_backend() now calls the new get_subscriber_emails()
function, which just queries the email field:
"zerver_userprofile"."email"
...instead of querying about 40 fields that it never uses.
I was able to verify the query slimming by watching my postgres server log.
Also, you can verify that the ORM does roughly 16x less work using values():
>>> import time
>>> from zerver.models import Subscription
>>> def f(): return [sub.user_profile.email for sub in list(Subscription.objects.all().select_related())]
...
>>> def g(): return [row['user_profile__email'] for row in list(Subscription.objects.all().values('user_profile__email'))]
...
>>> def timeit(func): t = time.time(); func(); return time.time() - t
...
>>> timeit(f)
0.045198917388916016
>>> timeit(g)
0.002752065658569336
(imported from commit a69f690a96d076b323fdfc2f4821b0548bdfac7f)
These engagement data will be useful both for making pretty graphs
of how addicted our users are and for checking whether a new
deployment is actually using the product.
This measures "number of minutes during which each user had checked
the app within the previous 15 minutes". It should correctly not
count server-initiated reloads.
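A minimal sketch of the metric itself, assuming we already have the
timestamps at which a user actively checked the app (the helper
below is illustrative, not the actual management command):

from datetime import timedelta

def active_minutes(check_times, start, end):
    # Count the minutes in [start, end) during which the user had
    # checked the app at some point within the previous 15 minutes.
    window = timedelta(minutes=15)
    minutes = 0
    t = start
    while t < end:
        if any(t - window <= checked <= t for checked in check_times):
            minutes += 1
        t += timedelta(minutes=1)
    return minutes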
It's possible that we should use something less aggressive than
mousemove; I'm a little torn on that because you really can check the
app for new messages without doing anything active.
This is somewhat tested but there are a few outstanding issues:
* Mobile apps don't report these data. It should be as easy as having
them send in update_active_status queries with new_user_input=true.
* The semantics of this should be better documented (e.g. the
management script should print out the spec above).
(imported from commit ec8b2dc96b180e1951df00490707ae916887178e)
The new version is more accurate (doesn't rely on UserMessage IDs
being sorted, which they aren't necessarily) and simpler.
(imported from commit 671dd89dc8881ae2dcb8d0e804fd65458e074a29)