This endpoint is part of the old tutorial, which we've removed, and
has some security downsides as well.
This includes a minor refactoring of the tests.
This is sort of a hacky hammer, but:
* The original design of the analytics system mistakenly attempted to play
nicely with non-UTC datetimes.
* Timezone errors are really hard to find and debug, and don't jump out that
easily when reading code.
I don't know of any outstanding errors, but putting a few "assert this
timezone is in UTC" checks around will hopefully reduce the chance of any
current or future timezone errors.
Note that none of these functions are called outside of the analytics code
(and tests). This commit also doesn't change any current behavior, assuming
a database where all datetimes have been stored in UTC.
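A minimal sketch of the kind of assertion this describes (the helper name is illustrative, not necessarily what the analytics code uses):

```python
from datetime import datetime, timedelta

def assert_timezone_is_utc(dt: datetime) -> None:
    # Reject naive datetimes, and aware datetimes with a non-zero UTC offset.
    assert dt.tzinfo is not None, "expected an aware datetime"
    assert dt.utcoffset() == timedelta(0), "expected a UTC datetime"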
Previously, entering a non-UTC end time for a daily stat would give you
incorrect results. This is because:
* All daily stats are collected at and have end_times in the database in
midnight UTC.
* For daily stats, time_range returns a list of datetimes at midnight in the
timezone of its end argument. These datetimes are the only ones we check
when looking for rows corresponding to the stat in the database.
* Previously, we passed on the end argument from the API to time_range,
without modification.
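A minimal sketch of the normalization this implies: convert the API's end argument to UTC before handing it to time_range, so the generated midnights line up with the midnight-UTC end_times in the database (the helper name is illustrative):

```python
from datetime import datetime, timezone

def normalize_end(end: datetime) -> datetime:
    # Daily stats are stored with end_times at midnight UTC, so the
    # user-supplied end time must be converted to UTC before time_range()
    # generates the list of midnights to look up.
    return end.astimezone(timezone.utc)
```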
The logic to apply events to page_params['unread_msgs'] was
complicated due to the aggregated data structures that we pass
down to the client.
Now we defer the aggregation logic until after we apply the
events. This leads to some simplifications in that codepath,
as well as some performance enhancements.
The intermediate data structure has sets and dictionaries that
generally are keyed by message_id, so most message-related
updates are O(1) in nature.
Also, by waiting to compute the counts until the end, we avoid the
messiness of trying to keep track of increments/decrements; instead,
we just update the dictionaries and sets during the event-apply phase.
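A minimal sketch of this intermediate structure and the deferred aggregation (the class and field names are illustrative, not the actual page_params code):

```python
from collections import defaultdict

class UnreadMessages:
    def __init__(self) -> None:
        # Buckets of message ids, so most event-apply updates are O(1).
        self.stream_msgs = defaultdict(set)  # (stream_id, topic) -> message ids

    def apply_add_event(self, stream_id: int, topic: str, message_id: int) -> None:
        self.stream_msgs[(stream_id, topic)].add(message_id)

    def apply_remove_event(self, stream_id: int, topic: str, message_id: int) -> None:
        self.stream_msgs[(stream_id, topic)].discard(message_id)

    def aggregate_counts(self) -> dict:
        # Compute counts once, after all events have been applied, instead
        # of tracking increments/decrements along the way.
        return {key: len(ids) for key, ids in self.stream_msgs.items()}
```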
This change also fixes some corner cases:
* We now respect mutes when updating counts.
* For message updates, instead of bluntly updating
the whole topic bucket, we update individual
message ids.
Unfortunately, this change doesn't seem to address the pesky
test that fails sporadically on Travis, related to mention
updates. It will slightly change the symptom, though.
Instead of using the `unified_reactions` mapping, start using
the `name_to_codepoint` mapping for converting emoji names to
codepoints. We were using the `unified_reactions` mapping
because, prior to the emoji web PR, the `name_to_codepoint`
mapping was generated from emoji_map.json, which contained old
codepoints, while reactions required the new codepoints to
display them using sprite sheets.
Create a new custom email backend that automatically logs
the emails sent in the dev environment, as well as printing
a friendly message in the console to visit /emails for
accessing all the emails sent in the dev environment.
Since django.core.mail.backends.console.EmailBackend is no longer
used, emails will no longer be printed to the console.
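A minimal sketch of such a backend (the log path is an assumption; whatever file the /emails page reads from would go there):

```python
from django.core.mail.backends.base import BaseEmailBackend

EMAIL_LOG_PATH = "var/log/dev_emails.log"  # assumed path backing the /emails page

class DevEmailLogBackend(BaseEmailBackend):
    def send_messages(self, email_messages):
        # Append each outgoing email to the log instead of printing it.
        with open(EMAIL_LOG_PATH, "a") as log:
            for message in email_messages:
                log.write(message.message().as_string() + "\n")
        print("Emails logged; visit /emails to see all emails sent in the dev environment.")
        return len(email_messages)
```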
We now do push notifications and missed message emails
for offline users who are subscribed to the stream for
a message that has been edited, but we short circuit
the offline-notification logic for any user who presumably
would have already received a notification on the original
message.
This effectively boils down to sending notifications to newly
mentioned users. The motivating use case here is that you
forget to mention somebody in a message, and then you edit
the message to mention the person. If they are offline, they
will now get pushed notifications and missed message emails,
with some minor caveats.
We try to mostly use the same techniques here as the
send-message code path, and we share common code with the
send-message path once we get to the Tornado layer and call
maybe_enqueue_notifications.
The major places where we differ are in a function called
maybe_enqueue_notifications_for_message_update, and the top
of that function short circuits a bunch of cases where we
can mostly assume that the original message had an offline
notification.
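A minimal sketch of that short-circuit shape (the signature and flag names here are illustrative assumptions, not Zulip's actual ones):

```python
def maybe_enqueue_notifications(user_id: int) -> None:
    """Shared Tornado-layer entry point named above (stubbed here)."""

def maybe_enqueue_notifications_for_message_update(
    user_id: int,
    newly_mentioned: bool,
    prior_mentioned: bool,
    stream_push_notify: bool,
) -> None:
    # Short-circuit cases where the original message presumably already
    # produced an offline notification for this user.
    if prior_mentioned or stream_push_notify:
        return
    # The motivating case: only newly mentioned users get notified on edit.
    if not newly_mentioned:
        return
    maybe_enqueue_notifications(user_id)
```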
We can expect a couple changes in the future:
* Requirements may change here, and it might make sense
to send offline notifications on the update side even
in circumstances where the original message had a
notification.
* We may track more notifications in a DB model, which
may simplify our short-circuit logic.
In the view/action layer, we already had two separate codepaths
for send-message and update-message, but this mostly echoes
what the send-message path does in terms of collecting data
about recipients.
Postgres doesn't like them; we don't have an obvious way to escape
them, and they tend to be sent by buggy tools where it'd be better for
the user to get an error.
This fixes a 500 we were getting occasionally.
This commit switches to using sprite sheets for rendering emojis
in all the remaining places, i.e., message bodies and the compose
box typeahead. This commit also includes some changes to the
notifications.py file, so that the spans used for rendering emojis
can be converted to corresponding image tags; that way we don't break
emoji rendering in missed-message emails, where we can't use sprite sheets.
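A minimal sketch of that span-to-image conversion (the markup conventions and image URL layout here are assumptions):

```python
from lxml import html

def fix_emojis_for_email(message_html: str, base_url: str) -> str:
    # message_html is a rendered message body, e.g.
    # '<p>hi <span class="emoji emoji-1f604" title="smile"></span></p>'
    doc = html.fromstring(message_html)
    for span in doc.xpath('//span[contains(@class, "emoji-")]'):
        classes = span.get("class", "").split()
        codepoint = next(c[len("emoji-"):] for c in classes if c.startswith("emoji-"))
        img = doc.makeelement("img", {
            "alt": span.get("title", ""),
            "src": f"{base_url}/emoji/images/{codepoint}.png",  # assumed layout
        })
        span.getparent().replace(span, img)
    return html.tostring(doc, encoding="unicode")
```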
As part of switching the bugdown system to use sprite sheets, we need
to switch the name_to_codepoint mappings to match the new sprite
sheets. This has the side effect of fixing a bunch of emoji like
numbers and flag emoji in the emoji pickers.
Fixes: #3895.
Fixes: #3972.
This should make it a little easier to understand our logging config
and make changes to it with confidence.
Many of these items that are now redundant used to be required when we
were setting disable_existing_loggers to True (before 500d81bf2), in
order to exempt those loggers from being cleared out. Now they're not.
One bit of test code needed a tweak to how it got its hands on the
AdminZulipHandler instance; it can do it from the list on the root
logger just as well as on the `django` logger.
Most of the paths leading through this except clause were cut in
73e8bba37 "ldap auth: Reassure django_auth_ldap". The remaining one
had no test coverage -- the case that leads to it had a narrow unit
test, but no test had the exception actually propagate here. As a
result, the clause was mistakenly cut, in commit
8d7f961a6 "LDAP: Remove now-impossible except clause.", which could
lead to an uncaught exception in production.
Restore the except clause, and add a test for it.
This is just enough of a quick fix to work with a stock Zulip 1.6
server. We should really also make this robust to arbitrary input
from the remote Zulip server, even though it'll be a little tedious.
The dictionary result for get_user_info_for_message_updates()
now has a `mention_user_ids` field that is a set of ids of users
who were mentioned in the message.
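Illustratively, the result now has this shape (values made up; other fields elided):

```python
# Hypothetical shape of the returned dictionary; only mention_user_ids
# is described by this commit.
user_info = {
    "mention_user_ids": {3, 17},  # ids of users mentioned in the message
    # ...other recipient-related fields elided...
}
```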
This checks what arguments it passes into the enqueuing function.
Note, however, that the arguments are wrong for various cases; we'll
update the tests as we fix those bugs.
This ensures that as we expand the logic for under what circumstances
email and push notifications should be sent, we can be confident about
this code path always doing the right thing.
This makes GoogleSubdomainLoginTest consistently access subdomains the
standard way, replacing the original hacky approach it had that
predated the library.
There are several reasons to extract this function:
* It's easy to unit test without extensive mocking.
* It will show up when we profile code.
* It is something that you can mostly ignore for
most messages.
The main reason to extract this, though, is that we are about
to do some fairly complex splicing of data for the use case
of mentioning service bots on streams they are not subscribed to,
and we want to localize the complexity.
The plan is to have everything expect subdomains, so it makes sense to
move these tests to the subdomains-only test class and style.
Most of the remaining GoogleLoginTest tests are now either duplicates
or basic API-level tests where subdomains are irrelevant.
Previously, this accessed realm.uri by trying to use
zulip_default_context. That doesn't make any sense, because
zulip_default_context expects an HttpRequest object, and those are
nowhere in sight in this code path. We do, however, have the outgoing
webhook bot user involved in the event, and that's the right object
from which to access realm.uri here.
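A sketch of the corrected approach (the event field name is an assumption):

```python
from zerver.models import get_user_profile_by_id

def get_realm_uri_for_event(event: dict) -> str:
    # The event is assumed to carry the outgoing webhook bot's user id;
    # the bot's realm gives us realm.uri without any HttpRequest.
    bot_user = get_user_profile_by_id(event["user_profile_id"])
    return bot_user.realm.uri
```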
These arguments are only intended to be used for realm creation, and
they make the code more confusing.
We need to make a few changes after doing this, because some tests
were relying on these extra arguments causing the form to not submit
for their error handling.
We don't apply these changes to the LDAP tests, since fixing those
seems complicated.
This commit implements support for rendering static files
under static/generated/bots/ in the same manner as we render
our webhooks/integration documentation. Said static files are
generated by tools/setup/generate_zulip_bots_static_files.py
during provisioning.
This commit implements support for copying over static files
for all bots in the zulip_bots package to
static/generated/bots/ during provisioning. This directory
isn't tracked by Git. This allows us to have access to files
stored in an arbitrary zulip_bots package directory somewhere
on the system. For now, logo.* and doc.md files are copied over.
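A minimal sketch of that provisioning step, assuming the bots live under zulip_bots/bots/<name>/ in the installed package (the real logic is in tools/setup/generate_zulip_bots_static_files.py):

```python
import glob
import os
import shutil

import zulip_bots  # installed zulip_bots package, wherever it lives

SRC = os.path.join(os.path.dirname(zulip_bots.__file__), "bots")
DST = "static/generated/bots"  # generated; not tracked by Git

for bot_dir in glob.glob(os.path.join(SRC, "*")):
    bot_name = os.path.basename(bot_dir)
    for pattern in ("logo.*", "doc.md"):
        for src_path in glob.glob(os.path.join(bot_dir, pattern)):
            dst_dir = os.path.join(DST, bot_name)
            os.makedirs(dst_dir, exist_ok=True)
            shutil.copy(src_path, dst_dir)
```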
This commit should act as a starting point for extending our
macro-based Markdown framework to our bots/API packages'
documentation and eventually rendering these static files
alongside our webhooks' documentation.
This enforces our use of a consistent style in how we access Python
modules; "from os.path import dirname" is a particularly popular
abbreviation inconsistent with our style, and so it deserves a lint
rule.
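An illustrative version of such a rule as a simple regex check (the actual rule lives in the linter, but the idea is the same):

```python
import re

# Flags "from os.path import dirname"-style imports; our style is to
# import the module and write os.path.dirname at the call site.
BAD_OS_PATH_IMPORT = re.compile(r"^\s*from\s+os\.path\s+import\s+")

def violates_import_style(line: str) -> bool:
    return bool(BAD_OS_PATH_IMPORT.match(line))

assert violates_import_style("from os.path import dirname")
assert not violates_import_style("import os.path")
```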
Commit message and error text tweaked by tabbott.
Fixes #6543.
Previously, invitation reminder emails were only being cleared after a
successful signup if newsletter_data was available, since that was the
circumstance in which we were calling the relevant queue processor
code. Now, we (1) clear them when a human user finishes signing up
and (2) correctly clear them using the 'address' field of
ScheduleEmail, not user_id.
We don't need full Realm objects to find DefaultStream
objects for a realm. So now a few functions related to
adding/removing default streams use realm_id for lookups.
Similarly, we don't need a full Stream object to find
out if a stream exists in DefaultStream, so we do id
lookups there as well.
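A sketch of the id-based lookups (Django ORM; assuming the standard DefaultStream model with realm and stream foreign keys):

```python
from zerver.models import DefaultStream

def get_default_stream_ids(realm_id: int) -> set:
    return set(
        DefaultStream.objects.filter(realm_id=realm_id)
        .values_list("stream_id", flat=True)
    )

def is_default_stream(realm_id: int, stream_id: int) -> bool:
    # No Realm or Stream objects needed; ids are enough.
    return DefaultStream.objects.filter(
        realm_id=realm_id, stream_id=stream_id
    ).exists()
```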
This sets us up to use thinner objects in callers.
We now have a dedicated cache for active_user_ids() that only
stores a list of user_ids.
Before this commit, active_user_ids() used a cache of UserProfile
dictionaries, so it incurred unnecessary deserialization costs for
all the user fields that it sliced away in a list comprehension.
Because the cache is skinnier here, we also need to invalidate it
less frequently. Basically, all we care about is new users, realm
deactivations, and user deactivations.
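A minimal sketch of the skinnier cache and its invalidation (the cache key format and helper names are illustrative):

```python
from django.core.cache import cache

from zerver.models import UserProfile

def active_user_ids(realm_id: int) -> list:
    key = f"active_user_ids:{realm_id}"
    user_ids = cache.get(key)
    if user_ids is None:
        # Cache only the ids, not full UserProfile rows, so we skip the
        # deserialization cost of fields we'd slice away anyway.
        user_ids = list(
            UserProfile.objects.filter(
                realm_id=realm_id, is_active=True
            ).values_list("id", flat=True)
        )
        cache.set(key, user_ids)
    return user_ids

def invalidate_active_user_ids(realm_id: int) -> None:
    # Needed only for new users, realm deactivations, and user deactivations.
    cache.delete(f"active_user_ids:{realm_id}")
```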
It's hard to measure how much this will improve performance, because
the speedup for any operation here is pretty minor, but we use this
function a lot, so hopefully it will make the overall system more
healthy.
This is mostly a preparatory commit for an upcoming optimization
related to stream data, but it probably does save us an
occasional DB hop to the realm table.
Previously, this was its own separate test script; now it's a normal
part of the test suite.
Tweaked by tabbott to use a proper test method.
Fixes #6327.
Previously, the bot domain was calculated correctly in most
circumstances, but if you were using the root domain, it would be
e.g. ".chat.zulip.org", not "chat.zulip.org". We fix this, with
perhaps more reliance on the REALMS_HAVE_SUBDOMAINS setting than
would be ideal if we weren't about to set it True unconditionally.
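A sketch of the corrected behavior (function and parameter names are illustrative):

```python
def get_bot_domain(subdomain: str, external_host: str = "chat.zulip.org") -> str:
    # Root-domain realms have an empty subdomain; previously this path
    # produced ".chat.zulip.org" instead of "chat.zulip.org".
    if subdomain:
        return f"{subdomain}.{external_host}"
    return external_host

assert get_bot_domain("") == "chat.zulip.org"
assert get_bot_domain("lear") == "lear.chat.zulip.org"
```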
This class encapsulates the mapping of stream ids to
recipient ids, and it is optimized for bulk use and
repeated use (i.e. it remembers values it already fetched).
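A minimal sketch of such a class (memoizing and bulk-friendly; the Recipient model fields follow Zulip conventions, but the class itself is illustrative):

```python
from zerver.models import Recipient

class StreamRecipientMap:
    def __init__(self) -> None:
        self._map = {}  # stream_id -> recipient_id, remembered across calls

    def populate_for(self, stream_ids: set) -> None:
        # One bulk query for whatever ids we haven't fetched yet.
        missing = stream_ids - set(self._map)
        if missing:
            rows = Recipient.objects.filter(
                type=Recipient.STREAM, type_id__in=missing
            ).values("id", "type_id")
            for row in rows:
                self._map[row["type_id"]] = row["id"]

    def recipient_id_for(self, stream_id: int) -> int:
        return self._map[stream_id]
```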
This particular commit barely improves the performance
of gather_subscriptions_helper, but it sets us up for
further optimizations.
Long term, we may try to denormalize stream_id on to the
Subscriber table or otherwise modify the database so we
don't have to jump through hoops to do this kind of mapping.
This commit will help enable those changes, because we
isolate the mapping to this one new class.