We have two different concepts of "idle", and this function
is based on the "presence" aspect of idleness. There is also
idleness in terms of a user having no current client
descriptors accepting messages, and we check that later in
the process for things like sending missed message emails.
This commit migrates all webhooks that send stream messages to use
check_send_stream_message instead of check_send_message. The only two
webhooks that still use check_send_message are our yo and teamcity
webhooks; they both use it for private messages.
check_send_stream_message is a simpler version of
check_send_message for sending messages where the addressee is
a stream. Instead of relying on Addressee.legacy_build,
check_send_stream_message uses Addressee.for_stream. Consequently,
it eschews many of check_send_message's kwargs that aren't needed
when the intended recipient of a message is a stream.
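A sketch of roughly what the simplified entry point looks like (the
parameter list here is an assumption, not the exact signature):

    def check_send_stream_message(sender, client, stream_name, topic, body):
        # Addressee.for_stream needs only the stream name and topic,
        # unlike Addressee.legacy_build, which must handle every
        # message type and so takes many more kwargs.
        addressee = Addressee.for_stream(stream_name, topic)
        message = check_message(sender, client, addressee, body)
        return do_send_messages([message])[0]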
This isn't something that a user can ever modify, so it doesn't belong
in DEFAULT_SETTINGS. While we're at it, we make the email gateway's
appearance in the docs conditional on whether this setting is valid.
This commit switches to using sprite sheets for rendering emojis in
all the remaining places, i.e., message bodies and compose box
typeahead. It also includes changes to notifications.py so that the
spans used for rendering emojis can be converted to corresponding
image tags; we can't use sprite sheets in missed message emails, so
this conversion keeps emoji rendering working there.
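A hedged sketch of that conversion (the CSS class layout, helper name,
and image path are assumptions):

    from lxml import html

    def fix_emojis(fragment_str, base_url):
        # Replace each sprite-sheet span with an <img> tag, since email
        # clients can't render CSS-positioned sprite sheets.
        fragment = html.fromstring(fragment_str)
        for span in fragment.xpath('//span[contains(@class, "emoji")]'):
            classes = span.get('class', '').split()
            codepoints = [c[len('emoji-'):] for c in classes
                          if c.startswith('emoji-')]
            if not codepoints:
                continue
            img = fragment.makeelement('img', {
                'src': '%s/static/generated/emoji/images/emoji/unicode/%s.png'
                       % (base_url, codepoints[0]),
                'alt': span.text or '',
                'style': 'height: 20px;',
            })
            span.getparent().replace(span, img)
        return html.tostring(fragment).decode('utf-8')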
As part of switching the bugdown system to use sprite sheets, we need
to switch the name_to_codepoint mappings to match the new sprite
sheets. This has the side effect of fixing a bunch of emoji like
numbers and flag emoji in the emoji pickers.
Fixes: #3895.
Fixes: #3972.
These are long enough to still be self-explanatory (the only one I'm
at all in doubt about there is DEBG; I avoided "DBUG" because it
contains "BUG", which suggests a high-priority message, and those are the
opposite of that), while saving a good bit of horizontal space
vs. padding everything to the 8 characters of "CRITICAL".
Also add a linter exception to allow easy-to-read alignment here,
similar to several existing exceptions for other alignment cases.
This also gives us a place to hang the originating module, if we write a bit
of logic to work that out; sadly it doesn't come out of the box, only
the filename (which is likely to have a bunch of noise that just shows the
path to the deployment or virtualenv).
This doesn't yet do much, but it gives us a suitable place to
add code to customize how log messages are displayed, beyond what
a format string passed to the default formatter can do.
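For example, a Formatter subclass along these lines (the class name,
abbreviations, and derived field are all illustrative) can do things a
plain format string can't:

    import logging

    class ZulipFormatter(logging.Formatter):
        # Sketch of the kind of per-record customization this enables.
        ABBREVIATIONS = {'DEBUG': 'DEBG', 'WARNING': 'WARN',
                         'ERROR': 'ERR', 'CRITICAL': 'CRIT'}

        def format(self, record):
            # Abbreviate the level name to save horizontal space.
            record.levelname = self.ABBREVIATIONS.get(record.levelname,
                                                      record.levelname)
            # record.name is the logger's dotted module path, which is
            # far less noisy than record.pathname's deployment paths.
            record.zulip_origin = record.name
            return super().format(record)

Such a class would be hooked up via the 'formatters' section of the
logging config dict.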
This should make it a little easier to understand our logging config
and make changes to it with confidence.
Many of these items that are now redundant used to be required when we
were setting disable_existing_loggers to True (before 500d81bf2), in
order to exempt those loggers from being cleared out. Now they're not.
One bit of test code needed a tweak to how it got its hands on the
AdminZulipHandler instance; it can do it from the list on the root
logger just as well as on the `django` logger.
Most of the paths leading through this except clause were cut in
73e8bba37 "ldap auth: Reassure django_auth_ldap". The remaining one
had no test coverage -- the case that leads to it had a narrow unit
test, but no test had the exception actually propagate here. As a
result, the clause was mistakenly cut, in commit
8d7f961a6 "LDAP: Remove now-impossible except clause.", which could
lead to an uncaught exception in production.
Restore the except clause, and add a test for it.
Having Addressee take care of setting stream_name to
sender.default_sending_stream.name lets us maintain the
invariant that stream_name is never None when the
message type is 'stream', which will help with mypy, among
other things.
One thing to be aware of is that Addressee does do a little
bit of validation work, and this adds yet another JsonableError
exception. I don't view this as a bad thing, just something to
know.
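A hedged sketch of the fallback and its validation (the error text and
import path are assumptions):

    from zerver.lib.request import JsonableError  # path is an assumption

    def resolve_stream_name(sender, stream_name):
        # Guarantees the invariant: stream_name is never None for
        # 'stream' messages once Addressee has done its work.
        if stream_name is None:
            if sender.default_sending_stream is None:
                raise JsonableError("Missing stream")
            stream_name = sender.default_sending_stream.name
        return stream_name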
TRELLO_MESSAGE_TEMPLATE and TRELLO_SUBJECT_TEMPLATE are
redundant. This commit removes them. Now, subjects don't end
in periods. And where a period is necessary in the message body,
one is appended at the end of the specific template for that
message.
This is just enough of a quick fix to work with a stock Zulip 1.6
server. We should really also make this robust to arbitrary input
from the remote Zulip server, even though it'll be a little tedious.
The dictionary result for get_user_info_for_message_updates()
now has a `mention_user_ids` field that is a set of user ids
who were mentioned in a message.
This checks what arguments it passes into the enqueuing function.
Note, however, that the arguments are wrong for various cases; we'll
update the tests as we fix those bugs.
This ensures that as we expand the logic for under what circumstances
email and push notifications should be sent, we can be confident about
this code path always doing the right thing.
This fixes a problem introduced in the recent refactoring where
`triggers` would not be set correctly when a push or email
notification was triggered by missedmessage_hook.
Fixes #6612.
Now, the two code paths do the same thing for this check.
It seems like there may be more work to do here, in that
wildcard_mentioned messages seem to not be eligible for sending
email/push notifications. We probably want to add some logic there
for the user doing the mention to control whether or not it does.
This makes GoogleSubdomainLoginTest consistently access subdomains the
standard way, replacing the original hacky approach it had that
predated the library.
There are several reasons to extract this function:
* It's easy to unit test without extensive mocking.
* It will show up when we profile code.
* It is something that you can mostly ignore for
most messages.
The main reason to extract this, though, is that we are about
to do some fairly complex splicing of data for the use case
of mentioning service bots on streams they are not subscribed to,
and we want to localize the complexity.
It's unlikely to be of any real consequence, but this code bugged me
in that it makes a whole set before throwing it away to make nearly
the same set.
Sadly Python's comprehensions lack a way to write these cleanly as one
comprehension; but with no extra code complexity we can make the
temporary a genexp, which does the job.
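A minimal illustration of the change (all names here are invented):

    rows = [{'id': 1}, {'id': 2}, {'id': 3}]
    muted = {2}

    # Before: a whole temporary set is built, then immediately discarded.
    temp = set(row['id'] for row in rows)
    result = {i for i in temp if i not in muted}

    # After: the temporary is a genexp; only the final set is materialized.
    result = {i for i in (row['id'] for row in rows) if i not in muted}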
We need a migration to clear the tutorial_status for existing users,
so that we don't show hotspots to anyone who signed up for Zulip in
the month or so since we deleted the old tutorial.
This fixes a bug where the internal_prep_message code path would
incorrectly ignore the `realm` that was passed into it. As a result,
attempts to send messages using the system bots with this code path
would crash.
As a side note, we really need to make our test system consistent with
production in terms of whether the user's realm is the same as the
system realm.
We don't access any attributes of the sender other than the realm, and
as it turns out, in some cases we want to use a different realm than
the sender's.
The plan is to have everything expect subdomains, so it makes sense to
move these tests to the subdomains-only test class and style.
Most of the remaining GoogleLoginTest tests are now either duplicates
or basic API-level tests where subdomains are irrelevant.
Previously, this accessed realm.uri via trying to use
zulip_default_context. That doesn't make any sense, because
zulip_default_context expects an HttpRequest object, and those are
nowhere in sight in the code path. We do, however, have the outgoing
webhook bot user involved in the event, and that's the object to
access realm.uri from here.
These arguments are only intended to be used for realm creation, and
they make the code more confusing.
We need to make a few changes after doing this, because some tests
were relying on these extra arguments causing the form to not submit
for their error handling.
We don't apply these changes to the LDAP tests, since fixing those
seems complicated.
This commit implements support for rendering static files
under static/generated/bots/ in the same manner as we render
our webhooks/integration documentation. Said static files are
generated by tools/setup/generate_zulip_bots_static_files.py
during provisioning.
This commit implements support for copying over static files
for all bots in the zulip_bots package to
static/generated/bots/ during provisioning. This directory
isn't tracked by Git. This allows us to have access to files
stored in an arbitrary zulip_bots package directory somewhere
on the system. For now, logo.* and doc.md files are copied over.
This commit should act as a starting point for extending our
macro-based Markdown framework to our bots/API packages'
documentation and eventually rendering these static files
alongside our webhooks' documentation.
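A hedged sketch of the copying step (the directory layout and helper
name are assumptions drawn from the tool's filename):

    import glob
    import os
    import shutil

    import zulip_bots

    def generate_zulip_bots_static_files():
        # Copy logo.* and doc.md for each bot in the zulip_bots package
        # into static/generated/bots/, which is not tracked by Git.
        package_dir = os.path.dirname(zulip_bots.__file__)
        target_root = 'static/generated/bots'
        for bot_dir in glob.glob(os.path.join(package_dir, 'bots', '*')):
            bot_name = os.path.basename(bot_dir)
            target_dir = os.path.join(target_root, bot_name)
            os.makedirs(target_dir, exist_ok=True)
            for pattern in ('logo.*', 'doc.md'):
                for path in glob.glob(os.path.join(bot_dir, pattern)):
                    shutil.copyfile(
                        path, os.path.join(target_dir, os.path.basename(path)))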
This enforces our use of a consistent style in how we access Python
modules; "from os.path import dirname" is a particularly popular
abbreviation inconsistent with our style, and so it deserves a lint
rule.
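In our pattern-based custom linter, the rule amounts to an entry like
this sketch (the description text is an assumption):

    # One entry in the linter's Python rules list:
    {'pattern': r'from os.path import',
     'description': "Use 'import os' and 'os.path.foo' rather than "
                    "importing names from os.path"},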
Commit message and error text tweaked by tabbott.
Fixes #6543.
Previously, invitation reminder emails were only being cleared after a
successful signup if newsletter_data was available, since that was the
circumstance in which we were calling the relevant queue processor
code. Now, we (1) clear them when a human user finishes signing up
and (2) correctly clear them using the 'address' field of
ScheduledEmail, not user_id.
We don't need full Realm objects to find DefaultStream
objects for a realm. So now a few functions related to
adding/removing default streams use realm_id for lookups.
Similarly, we don't need a full Stream object to find
out if a stream exists in DefaultStream, so we do id
lookups there as well.
This sets us up to use thinner objects in callers.
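Sketches of the id-based lookups (function names are assumptions):

    from zerver.models import DefaultStream  # model named in the text

    def get_default_streams_for_realm_id(realm_id):
        # No Realm object required; filter directly on the FK id.
        return [dfs.stream for dfs in DefaultStream.objects
                .select_related('stream').filter(realm_id=realm_id)]

    def stream_is_default(realm_id, stream_id):
        # Likewise, an existence check needs only the two ids.
        return DefaultStream.objects.filter(
            realm_id=realm_id, stream_id=stream_id).exists()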
We want to convert stream names to stream ids as close
to the "edges" of our system as possible, so we let our
caller do the work of finding the stream id for a stream
narrow.
We now have a dedicated cache for active_user_ids() that only
stores a list of user_ids.
Before this commit, active_user_ids() used a cache of UserProfile
dictionaries, so it incurred unnecessary deserialization costs for
all the user fields that it sliced away in a list comprehension.
Because the cache is skinnier here, we also need to invalidate it
less frequently. Basically, all we care about is new users, realm
deactivations, and user deactivations.
It's hard to measure how much this will improve performance, because
the speedup for any operation here is pretty minor, but we use this
function a lot, so hopefully it will make the overall system more
healthy.
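The skinny cache amounts to something like this sketch (the decorator
and key format are assumptions about our caching helpers):

    from zerver.lib.cache import cache_with_key  # helper is an assumption
    from zerver.models import UserProfile

    @cache_with_key(lambda realm_id: 'active_user_ids:%d' % (realm_id,),
                    timeout=3600 * 24 * 7)
    def active_user_ids(realm_id):
        # Just ids -- no UserProfile dicts to deserialize on every hit.
        query = UserProfile.objects.filter(
            realm_id=realm_id, is_active=True).values_list('id', flat=True)
        return list(query)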
This is mostly a preparatory commit for an upcoming optimization
related to stream data, but it probably does save us an
occasional DB hop to the realm table.
Previously, this was its own separate test script; now it's a normal
part of the test suite.
Tweaked by tabbott to use a proper test method.
Fixes #6327.
Previously, the bot domain was calculated correctly in most
circumstances, but if you were using the root domain, it would be
e.g. ".chat.zulip.org", not "chat.zulip.org". We fix this, with
perhaps more use of setting REALMS_HAVE_SUBDOMAINS than would be ideal
if we weren't about to set that True unconditionally.
This leads to more than a 2x speedup when tested with
20k+ total subscribers. (For large realms with lots of default
streams, this function deals with LOTS of data, so it is important
to optimize.)
This class encapsulates the mapping of stream ids to
recipient ids, and it is optimized for bulk use and
repeated use (i.e. it remembers values it already fetched).
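A hedged sketch of the class (method names are assumptions):

    from zerver.models import Recipient

    class StreamRecipientMap:
        # Optimized for bulk use; remembers every pair it has fetched,
        # so repeated lookups don't go back to the database.
        def __init__(self):
            self.recip_to_stream = {}  # recipient_id -> stream_id
            self.stream_to_recip = {}  # stream_id -> recipient_id

        def populate_for(self, stream_ids):
            stream_ids = [sid for sid in stream_ids
                          if sid not in self.stream_to_recip]
            if not stream_ids:
                return
            rows = Recipient.objects.filter(
                type=Recipient.STREAM, type_id__in=stream_ids,
            ).values('id', 'type_id')
            for row in rows:
                self.recip_to_stream[row['id']] = row['type_id']
                self.stream_to_recip[row['type_id']] = row['id']

        def recipient_id_for(self, stream_id):
            return self.stream_to_recip[stream_id]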
This particular commit barely improves the performance
of gather_subscriptions_helper, but it sets us up for
further optimizations.
Long term, we may try to denormalize stream_id on to the
Subscriber table or otherwise modify the database so we
don't have to jump through hoops to do this kind of mapping.
This commit will help enable those changes, because we
isolate the mapping to this one new class.
This commit enables users to authenticate with any attribute set in
AUTH_LDAP_USER_SEARCH, given that LDAP_EMAIL_ATTR is set to an email
attribute in the LDAP server. Thus email and username can be
completely unrelated.
With some tweaks by tabbott to squash in the documentation and make it
work on older servers.
Moves SEND_ALL to inside get_next_hotspots, since it is not something
other files should use.
Also changes the delay to 0s, and gates the code behind an
`if settings.DEVELOPMENT`.
We were mostly excluding inactive users before this fix, but
now we completely ignore them.
This potentially changes some of the data we return from
get_recipient_info(), but the extra user ids before this fix
were effectively ignored by the caller.
The prior code would queue up feedback messages even if the
feedback bot was deactivated, which was most likely just an
oversight. (People probably rarely disable the feedback bot,
but they should have that option.)
We now triage message content for possible mentions before
going to the cache/DB to get name info. This will create an
extra data hop for messages with mentions, but it will save
a fairly expensive cache lookup for most messages. (This will
be especially helpful for large realms.)
[Note that we need a subsequent commit to actually make the speedup
happen here, since avatars also cause us to look up all users in
the realm.]
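The triage itself can be as cheap as a substring check (a sketch; the
helper names here are invented):

    def content_has_possible_mentions(content):
        # Mention syntax ("@**Full Name**") always contains '@', so a
        # plain substring check safely rules out most messages.
        return '@' in content

    def mention_info(content, realm_id):
        if not content_has_possible_mentions(content):
            return {}  # skip the expensive name lookup entirely
        return fetch_realm_user_name_map(realm_id)  # hypothetical lookup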
I feel like notifications about a board's background being changed
aren't very useful and could interrupt the flow of other important
information, such as card changes or movement, so I think we should
not support this event and should simply ignore such payloads in the
future.
This is a nonfunctional refactor, designed primarily to make it
simpler to extend this code path when we later add support for
controlling whether email notifications go out on stream messages.
Previously, due to a logic bug, this feature would also send email
notifications for all messages on the stream, which is definitely not
the intent. The recent refactoring we just did makes the logic more
obvious.
This creates a lot of logging noise, and also causes confusion
for new contributors when something isn't working as they expect
and they aren't sure if this message is normal or an error.
We should have done this a long time ago, but better late than never.
Basically, this migration would crash in the event that there were any
attachments with particularly long names. The fix is the next
migration, 0042; we just inline it here to avoid that crash.
This sets us up for a subsequent commit where we need more data
from the Subscription table to build recipient info, so the
function boundary doesn't work any more for get_recipient_info,
which is part of the heavily optimized send-message
path.
We used to share code here with typing notifications, but
typing notifications need a lot less data than the
send-message path, so it's useful to decouple these two
things. The idioms that are duplicated here are pretty simple
one-liners.
This change optimizes get_status_dict_by_realm() by
introducing query_for_ids(), which quickly computes
an "IN" clause for user ids.
This change also inlines the `two_weeks_ago` check, but
that is just for clarity, not performance.
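A hedged sketch of query_for_ids (the implementation details are
assumptions):

    def query_for_ids(query, user_ids, field):
        # Compute an explicit "field IN (...)" clause; for the large id
        # lists we deal with here, this is fast to build and lets
        # postgres use the column's index directly.
        assert user_ids
        value_list = ', '.join(str(user_id) for user_id in user_ids)
        clause = '%s in (%s)' % (field, value_list)
        return query.extra(where=[clause])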
The prior version of this function was passed in a QuerySet, which
made it difficult to effectively profile the callers, and there
is really no compelling reason to pass in a query any more.
The compilemessages command now does all the heavy lifting by creating
a language_name_map.json file under the locale directory. This file is
used by get_language_list to retrieve the required information.
Fixes: #6486.
This commit makes get_recipient_info() faster by never creating
Django ORM objects. We use the ORM to create a values query
instead, and then we iterate over the rows to create various
collections of ids.
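Concretely, the rows come back as plain dicts rather than model
instances, along the lines of this sketch (the field list is an
assumption):

    from zerver.models import UserProfile

    def recipient_row_data(user_ids):
        # values() yields dicts, not UserProfile instances, so nothing
        # heavyweight gets constructed for large streams.
        rows = list(UserProfile.objects.filter(id__in=user_ids).values(
            'id', 'is_bot', 'bot_type',
            'enable_online_push_notifications'))
        return {
            'active_user_ids': {row['id'] for row in rows},
            'push_notify_user_ids': {
                row['id'] for row in rows
                if row['enable_online_push_notifications']},
        }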
In order to avoid lots of code duplication, this commit unifies
how we query UserProfile for PMs and streams. Prior to this
commit we were getting "wide" UserProfile objects out of
our memcached cache. Now we just go to the database with our
list of user ids. The new approach at worst adds one hop to the
database for PMs, which aren't really a performance bottleneck
(compared to streams). And the new approach actually saves a
hop when both partners aren't in cache (plus we don't pay the
penalty of hitting the cache itself).
The performance improvement here is easy to measure for messages
to streams with many users, even with all the other activity
that goes on inside do_send_messages(). I took test_performance()
in test_messages.py, set num_extra_users to 3000, and consistently
measured a ~20% speedup in do_send_messages().
This commit also eliminates fetching of emails. We probably
could have done that in a prior commit, but in this commit it
is very explicit that we don't need them. While removing email
from the query is a no-brainer, it actually had a negligible
impact on performance. Almost all the savings here comes from
not creating UserProfile objects.
This function returns a summary of recipient data for a message
that's being sent. It's mostly just code moved out of the
old function called get_recipient_user_profiles().
This commit is necessary to prevent bringing back emails from the
DB for all N recipients of a message just to see if the feedback
bot is being invoked.
We calculate `service_bot_tuples` earlier in the function, so that
we don't need "full" UserProfile objects later in the function.
This is part of consolidating code that basically just needs to
triage user_ids.
This starts to phase out the need for UserProfile objects in
do_send_messages(). UserProfile objects are expensive to create
for large streams with lots of users. The objects in the code
before this commit aren't even full UserProfile objects.
This change mostly sets up future performance improvements, but
we also get a minor speedup here when we run a test with 3000
stream subscribers.
There is no reason for either render_incoming_message() or
render_markdown() to require full UserProfile objects just to
triage alert words.
By only asking for user_ids, we save extra queries in two
callpaths and we make it easier to start using user_ids in
do_send_messages().
This function is essentially a copy of get_recipient_user_profiles,
which is about to go away. The new function enforces the contract of
typing indicators, which is that they don't apply to streams, which
allows us to use a relatively simple approach for getting user
profile objects.
We are diverging this code, because the send-message path needs
more optimizations.
This change introduces an extra hop to the database, but it is
generally faster due to nuances of the DB and the ORM. It
also sets us up to optimize get_recipient_user_profiles() by
avoiding creating ORM objects.
I measured the impact of this using a stream with 3000
subscribers, half of whom were idle, and it speeds things up
by 10%.
Avoiding a join to UserProfile here speeds up the query from
86ms -> 28ms when you analyze it with about 2000 mobile users
in a 5000-user realm.
We also avoid some code duplication here, since we filter
UserPresence for the same group of users as we filter
PushDeviceToken.
This avoids an O(N-squared) hit during presence queries. The speedup
here is probably negligible compared to everything else going on, but
sets are more semantically correct, anyway.
Before this commit, postgres would choose a non-optimal query
plan to find all presence rows belonging to a realm. We now
do an extra query to get the list of relevant user_ids, which allows
the next query to take advantage of UserPresence's index on
user_profile_id.
Here is the query plan for the offending query (this particular query isn't
verbatim from the code, but it's representative of the problem):
explain analyze
select client_id
from zerver_userpresence
INNER JOIN zerver_userprofile ON
zerver_userprofile.id = zerver_userpresence.user_profile_id
WHERE
zerver_userprofile.is_active and
zerver_userprofile.realm_id = 3;
Hash Join (cost=149.66..506.82 rows=5007 width=4) (actual time=48.834..121.215 rows=5007 loops=1)
Hash Cond: (zerver_userprofile.id = zerver_userpresence.user_profile_id)
-> Seq Scan on zerver_userprofile (cost=0.00..260.11 rows=5369 width=4) (actual time=0.009..24.322 rows=5021 loops=1)
Filter: (is_active AND (realm_id = 3))
Rows Removed by Filter: 3
-> Hash (cost=87.07..87.07 rows=5007 width=8) (actual time=48.789..48.789 rows=5010 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 196kB
-> Seq Scan on zerver_userpresence (cost=0.00..87.07 rows=5007 width=8) (actual time=0.007..24.355 rows=5010 loops=1)
Total runtime: 145.063 ms
You can see above that we're filtering on realm_id instead of using an index.
When you decompose the query into two queries, the total time is about 100ms, for a
savings of 33%. I imagine the savings would be even greater on an instance with lots
of realms. This was tested on dev with one really large realm and one tiny realm.
We were using `.order_by('user_profile_id', '-timestamp')` in our
UserPresence query in get_status_dicts_for_query.
We don't need a full sort to produce the dictionary of statuses.
In fact the whole operation in Python is still O(N):
- divvy rows up to be per-user in an O(N) pass
- find max row for the 'aggregated' entry in an O(n) pass
per user
The one minor annoyance of this fix is that datetime_to_timestamp
is lossy, so if you naively call to_presence_dict before finding
the "max" row, you get test flakes if rows are created during the
same second. I decided to avoid calling to_presence_dict so there
are fewer moving parts, but there's still the ugly step of having
to remove the "dt" field from the final results.
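A sketch of the two passes (the row shape is assumed from the query
described above):

    import datetime
    from collections import defaultdict

    # Stand-in for the UserPresence query results.
    rows = [
        {'user_profile_id': 1,
         'timestamp': datetime.datetime(2017, 9, 1, 12, 0, 0)},
        {'user_profile_id': 1,
         'timestamp': datetime.datetime(2017, 9, 1, 12, 0, 0, 500000)},
        {'user_profile_id': 2,
         'timestamp': datetime.datetime(2017, 9, 1, 11, 0, 0)},
    ]

    # Pass 1, O(N): divvy rows up per user.
    rows_by_user = defaultdict(list)
    for row in rows:
        rows_by_user[row['user_profile_id']].append(row)

    # Pass 2, O(n) per user: pick each user's max row for the
    # 'aggregated' entry, comparing raw datetimes so sub-second
    # differences aren't lost to datetime_to_timestamp.
    aggregated = {user_id: max(user_rows, key=lambda r: r['timestamp'])
                  for user_id, user_rows in rows_by_user.items()}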
The commit() call in fix() breaks migrations and tests (unless you
mock) due to outer transactions.
We now explicitly call commit() from the management command.
Usually a small minority of users are eligible to receive missed
message emails or mobile notifications.
We now filter users first before hitting UserPresence to find idle
users. We also simply check for the existence of recent activity
rather than borrowing the more complicated data structures that we
use for the buddy list.
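A hedged sketch of that filtering (the threshold constant and names
are assumptions):

    import datetime

    from django.utils.timezone import now as timezone_now
    from zerver.models import UserPresence

    OFFLINE_THRESHOLD_SECS = 140  # an assumption

    def filter_presence_idle_user_ids(user_ids):
        # Narrow to the (usually small) candidate set first, then just
        # check for the existence of recent activity -- no need for the
        # richer data structures the buddy list uses.
        if not user_ids:
            return []
        recent = timezone_now() - datetime.timedelta(
            seconds=OFFLINE_THRESHOLD_SECS)
        rows = UserPresence.objects.filter(
            user_profile_id__in=user_ids,
            status=UserPresence.ACTIVE,
            timestamp__gte=recent,
        ).distinct('user_profile_id').values('user_profile_id')
        active_ids = {row['user_profile_id'] for row in rows}
        return sorted(set(user_ids) - active_ids)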