This method was new in Tornado 4.0. It saves us from having to get
the time ourselves and do the arithmetic -- which not only makes the
code a bit shorter, but also easier to get right. Tornado docs (see
http://www.tornadoweb.org/en/stable/ioloop.html) say we should have
been getting the time from `ioloop.time()` rather than hardcoding
`time.time()`, because the loop could e.g. be running on the
`time.monotonic()` clock.
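A minimal sketch of the difference, with an illustrative callback:
```
from tornado.ioloop import IOLoop

io_loop = IOLoop.current()

def send_heartbeat() -> None:
    pass  # illustrative callback

# Before: an absolute deadline computed on the wrong clock if the
# loop is running on time.monotonic():
#     io_loop.add_timeout(time.time() + 45, send_heartbeat)

# After: call_later (new in Tornado 4.0) takes a relative delay and
# does the arithmetic on the loop's own clock:
io_loop.call_later(45, send_heartbeat)
```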
Adding it afterward is inherently racy, and upstream's API is quite
reasonable for avoiding that -- just like we can pass an on-open
callback up front, we can do the same with the on-close callback.
This is a more thorough version of 4adf2d5c2 from back in 2013-04.
The default value of this parameter is already False upstream.
(It was already False in pika version 0.9.6, which we were
supposedly using when we introduced this in 4baeaaa52; not sure
what the story was there.)
Previously, we weren't doing a proper left join in
user_groups_in_realm_serialized, resulting in empty user groups being
excluded from the query. We want to leave decisions about excluding
empty user groups to the UI layer, so we include these here.
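A minimal sketch of the left-join idea, assuming hypothetical Django
models resembling Zulip's (a UserGroup with a members many-to-many):
```
def user_groups_in_realm_serialized(realm):
    # Iterating over UserGroup itself (rather than over membership rows)
    # behaves like a LEFT JOIN: groups with no members still appear,
    # just with an empty member list.
    groups = UserGroup.objects.filter(realm=realm).prefetch_related('members')
    return [
        dict(id=group.id,
             name=group.name,
             members=[member.id for member in group.members.all()])
        for group in groups
    ]
```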
Because we use access_stream_by_id here, and that checks for an active
subscription to interact with a private stream, this didn't work.
The correct fix is to add an option to access_stream_by_id to accept an
argument indicating whether we need an active subscription; for this
use case, we definitely do not.
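A hedged sketch of what such an option could look like; the signature,
helper, and error handling here are illustrative, not the actual code:
```
def access_stream_by_id(user_profile, stream_id, require_active=True):
    stream = Stream.objects.get(id=stream_id)
    sub = get_subscription_or_none(user_profile, stream)  # hypothetical helper
    if stream.invite_only:
        # For private streams a subscription is required; whether it
        # must be *active* is now up to the caller.
        if sub is None or (require_active and not sub.active):
            raise JsonableError("Invalid stream id")
    return stream, sub
```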
[Modified by greg to (1) keep `USERNAME_FIELD = 'email'`,
(2) silence the corresponding system check, and (3) ban
reusing a system bot's email address, just like we do in
realm creation.]
As we migrate to allow reuse of the same email with multiple realms,
we need to replace the old "no email reuse" validators. Because
stealing the email for a system bot would be problematic, we still ban
doing so.
This commit only affects the realm creation logic, not registering an
account in an existing realm.
Originally this used signals, namely SIGRTMIN. But in prod, the
signal handler never fired; I debugged fruitlessly for a while, and
suspect uwsgi was foiling it in a mysterious way (which is kind of
the only way uwsgi does anything.)
So, we listen on a socket. Bit more code, and a bit trickier to
invoke, but it works.
This was developed for the investigation of memory-bloating on
chat.zulip.org that led to a331b4f64 "Optimize query_all_subs_by_stream()".
For usage instructions, see docstring.
We would allow a user with a valid invitation for one realm to use it
on a different realm instead. On a server with multiple realms, an
authorized user of one realm could use this (by sending invites to
other email addresses they control) to create accounts on other
realms. (CVE-2017-0910)
With this commit, when sending an invitation, we record the inviting
user's realm on the PreregistrationUser row; and when registering a
user, we check that the PreregistrationUser realm matches the realm the
user is trying to register on. This resolves CVE-2017-0910 for
newly-sent invitations; the next commit completes the fix.
[greg: rewrote commit message]
This fixes some subtle JavaScript exceptions we've been getting in
zulipchat.com, caused by the system bot realm there not being "zulip"
interacting with get_cross_realm_users.
This should help protect us from future issues with the way that
`bulk_get_users` does caching.
It's likely that we'll want to further restructure `bulk_get_users` to
not have this base_query code path altogether (since it's kinda
buggy), but I'm going to defer that for a time when we have another
user.
The previous implementation had a subtle caching bug: because it was
sharing its cache with the `get_user_profile_by_email` cache, if a
user happened to have an email in that cache, we'd return it, even
though that user didn't match `base_query`.
This causes `get_cross_realm_users` to no longer have a problematic
caching bug.
Hides URL if the message content == image url so that sending gifs or
images feels less cluttered. Uses the url_to_a() function to generate
the expected url string for matching.
Fixes #7324.
We include ERROR_BOT in this set, even though it's not technically
cross-realm (it just lives in the admin realm).
This code path does not correctly handle emails that correspond to
multiple accounts (because `get_system_bot` does not). Since it's
intended to only be used by system bots, we add an appropriate
assertion to ensure it is only used for system bots.
This was causing problems, because internal_send_message assumes that
there is a unique user (across all realms) with the given email
address (which is sorta required to support cross-realm bot messages
the way it does).
With this change, it now, in practice, only sends cross-realm bot
messages.
Previously, this was a ValidationError, but that doesn't really make
sense, since this condition reflects an actual bug in the code.
Because this happened to be our only test coverage for the ValidationError
catch on line 84 of registration.py, we add nocoverage there for now.
This buggy logic from e1686f427c had
broken do-destroy-rebuild-test-database.
Now that we're not just trying to add the Recipient objects for every
user on the system here to profiles_by_id, we also shouldn't be
processing every Recipient object on the system. The fix is simple:
because of the patch we got merged into Django upstream,
recipients_to_create actually has the object IDs added to the
Recipient objects passed into Recipient.objects.bulk_create.
This was missed in manual testing, since it only broke `populate_db
--test-suite`.
An Integration object doesn't need access to the context dict used
to render its doc.md, since the context dict is just passed directly to
render_markdown_path.
This fixes a bug where, when a user is unsubscribed from a stream,
unread messages on that stream might leak. While it might
seem to be a minor problem, it can cause significant problems for
computing the `unread_msgs` data structures, since it means we need to
add an extra filter for whether the user is still subscribed, either
in the backend or in the UI.
Fixes #7095.
In order to provide more explicit error messages, I have merged the
`emoji_code_is_valid()` and `emoji_name_is_valid()` functions into
`check_emoji_code_consistency()` and `check_emoji_name_consistency()`
respectively.
This often can cause minor caching problems.
Obviously, it'd be better if we had access to the AST and thus could
do this rule for UserProfile objects in general.
Instead of populating the context dict with integration-specific
information in render_markdown_path, we now do that in
zerver.views.integrations.integration_doc.
Fixes #7401.
Tweaked by tabbott to use cast to handle the typing issues here.
The character ">" now only starts a blockquote if the resulting
blockquote would be non-empty. Thus, by itself, ">" is now
interpreted literally by bugdown, fixing #687. The message
with contents consisting of ">>>" is now parsed as a doubly
(not triply) nested blockquote with contents ">". Properly
formed blockquotes have identical behavior as before, but now
bugdown can no longer produce empty blockquotes as output.
Fixes #2886, #687.
Storage limitations are only set on the value of
a config entry, since this is the only user-accessible
part of the schema. Keys are statically set by each
embedded bot.
This endpoint will allow us to add/delete emoji reactions whose emoji
got renamed during various emoji infra changes. This was also a
required change for realm emoji migration.
This commit was tweaked significantly by tabbott for greater clarity
(with no changes to the actual logic).
When the RabbitMQ server disappears, we log errors like these:
```
Traceback (most recent call last):
File "./zerver/lib/queue.py", line 114, in json_publish
self.publish(queue_name, ujson.dumps(body))
File "./zerver/lib/queue.py", line 108, in publish
self.ensure_queue(queue_name, do_publish)
File "./zerver/lib/queue.py", line 88, in ensure_queue
if not self.connection.is_open:
AttributeError: 'NoneType' object has no attribute 'is_open'
During handling of the above exception, another exception occurred:
[... traceback of connection failure inside the retried self.publish()]
```
That's a type error -- a programming error, not an exceptional
condition from outside the program. Fix the programming error.
Also move the retry out of the `except:` block, so that if it also
fails we don't get the exceptions stacked on each other. This is a
new feature of Python 3 which is sometimes indispensable for
debugging, and which surfaced this nit in the logs (on Python 2 we'd
never see the AttributeError part), but in some cases it can cause a
lot of spew if care isn't taken.
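A sketch of the retry pattern described above, with simplified,
illustrative names:
```
import pika

def publish_with_retry(client, queue_name, body):
    # Retry *after* the except block, so that if the retry also fails
    # we don't get "During handling of the above exception, another
    # exception occurred" stacking in the logs.
    try:
        client.publish(queue_name, body)
        return
    except pika.exceptions.AMQPConnectionError:
        pass  # fall through to the retry below
    client.reconnect()
    client.publish(queue_name, body)
```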
This commit helps reduce clutter on the navigation sidebar.
Creates new directories and moves relevant files into them.
Modifies index.rst, symlinks, and image paths accordingly.
This commit also enables expandable/collapsible navigation items,
renames files in docs/development and docs/production, and modifies
/tools/test-documentation so that it overrides a theme setting. It
also updates links to other docs, file paths in the codebase that point
to developer documents, and files that should be excluded from lint tests.
Note that this commit does not update direct links to
zulip.readthedocs.io in the codebase; those will be resolved in an
upcoming follow-up commit (it'll be easier to verify all the links
once this is merged and ReadTheDocs is updated).
Fixes #5265.
While fixing an issue related to email gateway messages not getting
rendered properly, I unknowingly introduced a bug in the markdown
engine update code. This commit fixes it. The issue was that for
a realm with an email gateway set up, updating realm filters would
update only one of the markdown engines, not both.
In remove_members_from_group_backend, we were passing a user group
object to remove_members_from_user_group, which expects a
user_group_id.
This fixes a regression in ae5ba7f4fd,
where Zulip would 500 if the newly added system bots didn't exist on
the server.
This also fixes a moderate size performance problem where we'd fetch 5
users from memcached or the database in a loop.
The intended use of $$ is for inline expressions, not for multiline
ones; ```math is an acceptable alternative for the latter. Hence,
the $$-syntax for inline TeX no longer permits newlines within it.
This was also necessary for the next change to be sensible; namely
allowing for spaces around both $$ when crafting inline TeX instead of
forcing everything to be crammed together, e.g. $$x=7$$. In order to
avoid unintentionally creating inline expressions, the opening and
closing $$'s of an inline expression must now both exactly consist of
two dollar signs, no more and no less.
Fixes: #6488.
Previously, these push notification events were being generated, but
then ignored in handle_push_notification because there was no
user_message object.
Generally emails are not written with markdown in mind and hence
sometimes render in strange ways. This commit fixes a particular
issue where whitespace before paragraphs caused the email content
to be treated as a code block and rendered in a box with a lot of
horizontal scrolling.
Fixes: #7045.
This change affects realm_users and realm_non_active_users.
Note that we still send full avatar urls in realm_user/add
events, so apply_events has to do something mildly hacky to
turn the avatar_url to None in that case.
Fixing the event is probably not worth the trouble, as single
urls are not bandwidth hogs; we only need this optimization
for bulk data.
This change affects these values:
* page_params.avatar_url
* page_params.avatar_url_medium
It requires passing the client_gravatar flag through this
codepath:
* home_real
* do_events_register
* fetch_initial_state_data
* avatar_url
This commit allows clients to register client_gravatar=True, and
then we recognize that flag for message events. If the flag is
True, we will not calculate gravatar URLs and let the clients do
it themselves. (Clients can calculate gravatar URLs based on
emails with just a little bit of code.)
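That client-side computation is just the standard Gravatar recipe;
a quick sketch:
```
import hashlib

def gravatar_url(email):
    # Standard Gravatar scheme: MD5 of the trimmed, lowercased email.
    digest = hashlib.md5(email.strip().lower().encode('utf-8')).hexdigest()
    return 'https://secure.gravatar.com/avatar/%s?d=identicon' % (digest,)
```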
This refactoring doesn't change behavior, but it sets us up
to more easily handle a register setting for `client_gravatar`,
which will allow clients to tell us they're going to compute
their own gravatar URLs.
The `client_gravatar` flag already exists in our code, but it
is only used for Django views (users/messages) but not for
Zulip events.
The main change is to move the call to `set_sender_avatar` into
`finalize_payload`, which adds the boolean `client_gravatar`
parameter to that function. And then we update various callers
to supply that flag.
One small performance benefit of this change is that we now
lazily compute the client message payloads in
`event_queue.process_message_event`, so this will improve
performance if all interested clients have the same value of
`apply_markdown`. But the change here is really preparing us
for the additional boolean parameter, which will cause us to
have four variations of the payload.
The main limitation of this version is that it's controlled entirely
from settings, with nothing in the database and no web UI or even
management command to control it. That makes it a bit more of a
burden for the server admins than it'd ideally be, but that's fine
for now.
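For concreteness, a hypothetical example of the settings-only control
(the subdomain and alias domain here are made up):
```
# /etc/zulip/settings.py
# Serve the realm whose subdomain is "zulip" at an alias domain.
REALM_HOSTS = {
    'zulip': 'chat.example.com',
}
```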
Relatedly, the web flow for realm creation still requires choosing a
subdomain even if the realm is destined to live at an alias domain.
Specific to the dev environment, there is an annoying quirk: the
special dev login flow doesn't work on a REALM_HOSTS realm. Also,
in this version the `add_new_realm` and `add_new_user` management
commands, which are intended for use in development environments only,
don't support this feature.
In manual testing, I've confirmed that a REALM_HOSTS realm works for
signup and login, with email/password, Google SSO, or GitHub SSO.
Most of that was in dev; I used zulipstaging.com to also test
* logging in with email and password;
* logging in with Google SSO... far enough to correctly determine
that my email address is associated with some other realm.
The original PR to allow generic bots to be mentioned had
some merge issues that we detected about a week after the
fact. This commit restores the logic from the original PR.
The reason we didn't detect this bug earlier is that the
merge issues didn't break any existing behavior. Instead,
they made it so that only UserMessage rows got written for
bots, but no events were being sent. The part of the commit
that got lost is restored here, so now events get sent as
well.
Thanks to @derAnfaenger for reporting this and being patient
as we tracked it down.
Fixes #7140
This adds the data model and bugdown support for the new UserGroup
mention feature.
Before it'll be fully operational, we'll still need:
* A backend API for making these.
* A UI for interacting with that API.
* Typeahead on the frontend.
* CSS to make them look pretty and see who's in them.
Because this is for tests, a heuristic like this that's right in most
situations is actually fine; we can override it in the few cases where
a test might set up a situation where it fails.
So just make it clear for the next reader that that's what's going on,
and also adjust the helper's interface slightly so that its callers
do have that flexibility.
We extract get_bulk_stream_subscriber_info() from this
function to remove some of the complexity. Also, in that
new function we avoid a hop to the database by querying
on stream ids instead of recipient ids. The query that
gets changed here does require a join to the recipient
table (to get the stream id), so it's a little bit of a
tradeoff.
There's an implicit assumption in bulk_remove_subscriptions
that all users belong to the same realm. We use the realm
for things like comparing occupied streams before and
after our main operation of deactivating streams.
Before this change, we just used the user_profile variable
that leaked from some prior loop to look up the realm, which
was super brittle.
Now we're a bit more explicit.
We were using Google's diff-match-patch library to diff HTML. The
problem with that approach is that it is a text differ, not an HTML
differ and so it ends up messing up the HTML tags. `lxml` is a safer
option.
Fixes: #7219.
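A quick illustration of the lxml HTML differ this switches to (the
wrapper around it is not shown here):
```
from lxml.html.diff import htmldiff

old = '<p>The cat sat on the mat.</p>'
new = '<p>The cat sat on the red mat.</p>'
# htmldiff is HTML-aware: it marks changes with <ins>/<del> tags
# instead of splicing raw text through the markup.
print(htmldiff(old, new))
```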
Note that this code leads to a slightly different query, because
we join to one row in the small Recipient table to match
stream_id to recipient.type_id.
The first method we extract to this library is
get_active_subscriptions_for_stream_id().
We also move num_subscribers_for_stream_id() to here, which
is slightly annoying (having the method on Stream was nice)
but avoids some circular dependency issues.
FuncT was unused in decorator.py, and only imported into profile.py.
The @profiled decorator is now more strongly typed on return-type.
Annotations were converted to python3 format.
This extraction moves all the huddle logic into models.py, which
hopefully can reduce friction for things like re-organizing our
caches (there are two cache entries for every huddle) and/or
just putting huddle_id on Message directly.
Do you call get_recipient(Recipient.STREAM, stream_id) or
get_recipient(stream_id, Recipient.STREAM)? I could never
remember, and it was not very type safe, since both parameters
are integers.
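A hedged sketch of the tighter interface this suggests (function names
illustrative):
```
def get_stream_recipient(stream_id):
    return Recipient.objects.get(type=Recipient.STREAM, type_id=stream_id)

def get_personal_recipient(user_profile_id):
    return Recipient.objects.get(type=Recipient.PERSONAL, type_id=user_profile_id)
```
With one function per recipient type, there is only one integer
argument, so there is no order to misremember.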
Almost all callers to do_create_user were trying to
create active users, except for one test. The
active=False codepath was kind of broken (things
like sending welcome messages had sort of undefined
behavior there), so instead of trying to maintain it,
we just update the one test (`test_people`) to flip the
`is_active` flag manually.
Fixes #7197
Along with fixing some minor bugs, this requires extracting out the
default functions so that we can do type: ignores on them properly.
While we're at it, we switch to the Python 3 syntax.
If a Zulip install at example.org got a request at an HTTP `Host`
like foo.example.org.evil.com (or even foo.example.orgevil.com),
we would accept it as subdomain foo. This isn't likely to happen
in practice because it shouldn't pass ALLOWED_HOSTS, and it's not
obvious to me that anything untoward could be done with it even
if ALLOWED_HOSTS were set wide open, but if nothing else it
multiplies the cases in analyzing this logic.
The reason we had a loose match like this, I assume, is to allow
the user to come from arbitrary ports -- especially in development.
So tighten the pattern to allow just that, and add some tests for
that behavior and a comment explaining why this complication is
needed.
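A hypothetical version of the tightened pattern, with a couple of the
new tests inlined as asserts:
```
import re

def subdomain_from_host(host, external_host='example.org'):
    # The parent domain must match exactly, optionally followed by a
    # port (so development hosts like foo.example.org:9991 still work).
    pattern = r'^([\w-]+)\.%s(:\d+)?$' % (re.escape(external_host),)
    match = re.match(pattern, host)
    return match.group(1) if match else None

assert subdomain_from_host('foo.example.org') == 'foo'
assert subdomain_from_host('foo.example.org:9991') == 'foo'
assert subdomain_from_host('foo.example.org.evil.com') is None
assert subdomain_from_host('foo.example.orgevil.com') is None
```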
The cookie mechanism only works when passing the login token to a
subdomain. URLs work across domains, which is why they're the
standard transport for SSO on the web. Switch to URLs.
Tweaked by tabbott to add a test for an expired token.
Lets administrators view a list of open (unconfirmed) invitations and
resend or revoke a chosen invitation.
There are a few changes that we can expect for the future:
* It is currently possible to invite an email that you have already
invited; it might make sense to change this behavior.
* Resend currently sends an invite reminder instead of resending the
original invite; this is because 'custom_body' was not stored when
the first invite was sent.
Tweaked in various minor ways, primarily in the backend, by tabbott,
mostly for style consistency with the rest of the codebase.
Fixes: #1180.
Tweaked by tabbott to have the field before the invitation is
completed be called invite_as_admins, not invited_as_admins, for
readability.
Fixes #6834.
The tighter interface prevents the need to specify
Recipient.PERSONAL (which can often be inaccurate in the
huddle case, anyway), and it prevents tests from confusingly
specifying a "subject" field for PMs.
Having send_stream_message() avoids the need to supply
Recipient.STREAM as a parameter, and it also uses the more
modern name of `topic_name` for topics. Under the hood, it
avoids some annoying steps for re-formatting the recipients,
since we just have a single stream name.
This change allows normal bots to get UserMessage rows when
they are mentioned on a stream, even if they are not actually
subscribed to the stream.
Fixes #7140.
We now find all (possibly) relevant service bots for a message
in the call to get_recipient_info. This allows us to eliminate
some code that would patch them after we rendered.
The get_service_bot_events() function will ignore any service
bots that weren't actually mentioned in the message (due to
backticks) or part of the active user ids.
We now have a MentionData class that encapsulates
the users who are possibly mentioned in a message.
Note that the rendering code may not keep all the mentions,
since things like backticks will suppress the mention.
We populate this now in do_send_messages, so that we can use
the info earlier in the message-sending process. This info
now gets passed down the call stack as an optional parameter.
Note that bugdown.convert() still populates the data when its
callers decline to pass in a MentionData object.
This is mostly a preparatory commit, as we don't take advantage
of the data yet in do_send_messages.
In do_send_messages, we only produce one dictionary for
the event queues, instead of different flavors for text
vs. html. This prevents two unnecessary queries to the
database.
It also means we only put one dictionary on the "message"
event queue instead of two, albeit a wider one that has
some values that won't be sent to the actual clients.
This wider dictionary from MessageDict.wide_dict is also
used for the `feedback_messages` queue and service bot
queues. Since the extra fields are possibly useful down
the road, and they'll just be ignored for now, we don't
bother to remove them. Also, those queue processors won't
have access to `content_type`, which they shouldn't need.
Fixes #6947
Before this change, we populated two cache entries for each
message that we sent. The entries were largely redundant,
with the only difference being whether we sent the content
as raw markdown or as the rendered HTML.
This commit makes it so we only have one cache entry per
message, and it includes both content and rendered_content.
One legacy source of confusion here is that `content`
changes meaning when you're on the front end. Here is the
situation going forward:
database:
content = raw
rendered_content = rendered
cache entry:
content = raw
rendered_content = rendered
payload for the frontend:
content = raw (for apply_markdown=False)
content = rendered (for apply_markdown=True)
Wherever possible, we always want to move checking for error
conditions to the views code, so that we don't need to worry about
handling failures with (in this case) a user that's half-created
because a DefaultStreamGroup doesn't exist.
This effectively implements the feature of default stream groups,
except for a UI, nice styling, etc.
Note that we're careful to not have this do anything in an
organization that doesn't have any default stream groups.
These are just instances that jumped out at me while working on the
subdomains code, mostly while grepping for get_subdomain call sites.
I haven't attempted a comprehensive search, and there are likely
still others left.
Now that the old `check_subdomain` has no callers except in
implementing the new, improved interface `user_matches_subdomain`,
inline it into that. Also simplify the Boolean logic a bit.
Now that every call site of check_subdomain produces its second
argument in exactly the same way, push that shared bit of logic
into a new wrapper for check_subdomain.
Also give that new function a name that says more specifically what
it's checking -- which I think is easier to articulate for this
interface than for that of check_subdomain.
This fixes an exception occurring when engaging an embedded
bot in a PM, makes it respond as itself instead of the sender,
and makes it respond to the PM conversation it is engaged in.
Every time we updated a UserProfile object, we were calling
delete_display_recipient_cache(), which churns the cache and
does an extra database hop to find subscriptions. This was
due to saying `updated_fields` instead of `update_fields`.
This made us prone to cache churn for fields like UserProfile.pointer
that are fairly volatile.
Now we use the helper function changed(). To prevent the
opposite problem, we use all the fields that could invalidate
the cache.
We now add `realm_non_active_users` to the result of
`do_events_register` (and thus `page_params`). It has
the same structure as `realm_users`, but it's for
non-active users. Clients need data on non-active users
when they process old messages that were sent by those
users when they were active. Clients can currently get
most of the data they need in the message events, but it
makes for ugly client code.
Fixes #4322
This is a preparatory commit that adds non-active users to
the realm user cache. It mostly involves name changes and
removing an `is_active` filter from the relevant DB query.
The only consumer of this cache is `get_raw_user_data`, which
now filters on `is_active` in a dictionary comprehension (but
this will get moved around a bit in a subsequent commit).
We make a few things cleaner for populating `realm_users`
in `do_event_register` and `apply_events`:
* We have a `raw_users` intermediate dictionary that
makes event updates O(1) and cleaner to read.
* We extract an `is_me` section for all updates that
apply to the current user.
* For `update` events, we do a more surgical copying
of fields from the event into our dict. This
prevents us from mutating fields in the event,
which was sketchy (at least in test mode). In
particular, this allowed us to remove some ugly
`del` code related to avatars.
* We introduce local vars `was_admin` and `now_admin`.
The cleanup had two test implications:
* We no longer need to normalize `realm_users`, since
`apply_events` now sees `raw_users` instead. Since
`raw_users` is a dict, there is no need to normalize
it, unlike lists with possibly random order.
* We updated the schema for avatar updates to include
the two fields that we used to hackily delete from
an event.
If an organization doesn't have the EmailAuthBackend (which allows
password auth) enabled, then our password reset form doesn't do
anything, so we should hide it in the UI.
This new test solves the problem that when we
made changes to the page-load codepath in the past,
it's been hard to identify what new code caused
more database queries. Now you can see query
counts broken out by event type.
This requires a small, harmless change to extract
an `always_want` function in `lib/events.py`.
Clients fetching messages can now specify that they are able
to compute their avatar, and if they set client_gravatar to
True in the request (with our normal encoding scheme), then the
backend will not compute it, and the payload will be smaller.
The fix starts with get_messages_backend. The flag gets
passed down through these functions:
* MessageDict.post_process_dicts.
* MessageDict.set_sender_avatar.
We also fix up the callers for post_process_dicts to explicitly
pass in the client_gravatar path, but for now they all just hard
code the value to False.
We have been assigning the locale to the language code. Mostly the code
and locale are the same, but for languages like zh-Hans, the locale is
zh_Hans while the code is zh-hans.
After this commit, compilemessages command should be run.
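Django ships a helper for exactly this conversion; for illustration:
```
from django.utils.translation import to_locale

# Language codes and locales differ in separator and casing:
assert to_locale('en-us') == 'en_US'
assert to_locale('zh-hans') == 'zh_Hans'
```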
This replaces the former non-functional StateHandler
stub with a dictionary-like state object. Accessing it
will read and store strings in the BotUserStateData model.
Each bot has a limited state size. To enforce this limit while
keeping data updates efficient, StateHandler caches the expensive
query for getting a bot's total state size. Assignments to a key
then only need to fetch that entry's previous size, if any, and
compare it to the new entry's size.
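A hedged sketch of that bookkeeping; the helper functions and limit
are hypothetical stand-ins for the real storage layer:
```
class StateHandler:
    MAX_STATE_SIZE = 1000  # hypothetical per-bot limit, in characters

    def __init__(self, bot_profile):
        self.bot_profile = bot_profile
        # The expensive total-size query runs once and is cached here.
        self.total_size = get_bot_state_size(bot_profile)  # hypothetical

    def __getitem__(self, key):
        return get_bot_state(self.bot_profile, key)  # hypothetical

    def __setitem__(self, key, value):
        old_size = get_bot_state_entry_size(self.bot_profile, key)  # hypothetical
        new_size = len(key) + len(value)
        if self.total_size - old_size + new_size > self.MAX_STATE_SIZE:
            raise StateSizeError('state size limit exceeded')  # hypothetical
        set_bot_state(self.bot_profile, key, value)  # hypothetical
        self.total_size += new_size - old_size
```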
I think an hour after signup is not the right time to try to get someone to
re-engage with a product.
This also makes the day1 email clearly a transactional email, both in
the user's experience of the product and in the eyes of various
anti-spam laws, and allows us to remove the unsubscribe link.
The rules here are fuzzy, and it's quite possible none of Zulip's emails
need an address at all. Every country has its own rules though, which makes
it hard to tell. In general, transactional emails do not need an address,
and marketing emails do.
This modifies the realm creation form to (1) support a
realm_in_root_domain flag and (2) clearly check whether the root
domain is available inside check_subdomain_available before trying to
create a realm with it; this should avoid IntegrityErrors.
This removes the utterly unnecessary `triggers` dict (which always was
a dict with exactly one value True) in favor of a single field,
'trigger'.
Inspired by Kunal Gupta's work in #6659.
This should mean that maintaining two Zulip development environments
using the same Git checkout no longer has caching problems keeping
track of the migration status.
Previously, to check whether a logo file existed, we simply took
the static/ URL for the logo and treated it as a file path. This
led to problems when static/* was not the correct parent directory
for our static files (for example, when settings.PRODUCTION = True).
Now, we treat URLs and file paths differently and the logo file
path is constructed by joining settings.STATIC_ROOT and the
relative path to the logo file.
Fixes #7018.
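A sketch of the described fix; the logo path here is illustrative:
```
import os
from django.conf import settings

# Build a filesystem path from STATIC_ROOT instead of treating the
# static URL as a path.
logo_relative_path = 'images/logo/logo.png'
logo_file_path = os.path.join(settings.STATIC_ROOT, logo_relative_path)
logo_exists = os.path.exists(logo_file_path)
```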
This change makes the cache entries smaller for message
dictionaries. It also ensures we get valid data put into
message dictionaries if, for example, the sender's avatar
changes.
After this change, all of the attributes for a message
sender are only fetched during post-processing with two
exceptions:
* We get sender_id for "free" from the message,
and it's the primary key that we need to figure
out which data to fetch in post-processing.
* We need sender_realm_id to be able to cache topic
links, and a sender's realm id will never change,
so it's not a concern for invalidating cache rows.
All the other attributes are either likely to change (e.g.
sender avatar_version) and/or impact the size of cache
entries more severely than the two small id fields above.
This change should improve our overall system performance
by reducing the amount of memory used by every N message
rows we cache, and typically N will be in the thousands or
so on a large realm.
The other major implication of this change is that when
a user changes their avatar, and then later messages that
the user sent are fetched, all of the fields that go into
computing the avatar url will be pulled from the database,
not from cache.
Message.get_raw_db_rows is moved to MessageDict, since its
implementation details are highly coupled to other methods
in MessageDict.
And then sew_messages_and_reactions comes along for the
ride.
We eventually want to move Reaction.get_raw_db_rows to there
as well.
We now populate the avatar url as part of the post
processing step of building message dictionaries,
so that the avatar url is no longer in cache.
This change makes the cache slimmer, because instead
of caching the avatar url (which often includes a long
hash), we just cache the smaller fields that are used
to compute the url.
Note that this commit still has the problem that we're
essentially computing the avatar url from cached fields
that can be invalid. We will address that a few commits
later.
An immediate benefit of this change is that how we compute
avatar urls (or whether we compute them all) is now decoupled
from caching concerns. We will address this later as
well. (Some clients will be capable of computing their
own gravatar urls, for example.)
We're about to have multiple post-processing stages for building
message dictionaries. Rather than having individual "hydration"
methods remove intermediate values, we just wait until the end.
This decouples the hydration steps. The potential problem
here is that we may have a field like sender_is_mirror_dummy
that isn't part of the final payload, but we need it for
calculating display recipients and avatars. We don't want to
delete it too early from the objects.
This makes tests of queue processors more realistic,
by adding a parameter to `queue_json_publish` that
calls a queue's consumer function if accessed in a test.
Fixes part of #6542.
Nobody has used this feature in years, and it causes certain types of
markdown issues in development to completely DoS the development
environment by making it possible for the "Bugdown timeout" exception
handler to timeout in bugdown.
Since we already send an email to the server administrators, there's
no need to replace this feature with anything.
This function is designed to replace avatar_url() and
avatar_url_from_dict() over time.
There are a few things new about it:
* We make the parameters more explicit, rather than
passing in an opaque dictionary or requiring a
UserProfile object. (A lot of our callers want
to use `values()` for efficiency sake, since we
are often doing bulk user operations.)
* We start to support the client_gravatar option.
The `is_mentioned` flag in message events was buggy. We now
look directly at flags.
We will kill off `is_mentioned` in a subsequent commit.
We also remove some debugging code in the test that was failing
before this fix. The test would only fail when `is_mentioned`
was wrong, which never happened when you ran a single test, and
which would happen randomly when you ran multiple tests.
Adding this field to the Stream model will prevent us from having
to look at realm data for several types of stream operations, which
can be prone to either doing extra database lookups or making
our cached data bloated.
Going forward, we'll set stream.is_zephyr to True whenever the
realm's string id is "zephyr".
This removes sender names from the message cache, since
they aren't guaranteed to be valid, and they're inexpensive
to add.
This commit will make the message cache entries smaller
by removing sender__full_name and sender__short_name
fields.
Then we add in the sender fields to the message payloads
by doing a query against the unique sender ids of the
messages we are processing.
This change leads to 2 extra database hops for most of
our message-related codepaths. The reason there are 2 hops
instead of 1 is that we basically re-calculate way too
much data to get a no-markdown dictionary.
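A sketch of the post-processing hop, with illustrative field names:
```
def hydrate_sender_info(rows):
    # One query over the distinct sender ids adds sender data back
    # to the cached message rows.
    sender_ids = {row['sender_id'] for row in rows}
    senders = UserProfile.objects.filter(id__in=sender_ids).values(
        'id', 'full_name', 'email')
    sender_map = {sender['id']: sender for sender in senders}
    for row in rows:
        row['sender_full_name'] = sender_map[row['sender_id']]['full_name']
        row['sender_email'] = sender_map[row['sender_id']]['email']
```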
Introducing MessageDict.post_process_dicts() will allow us
to do the following:
* use less memory in the cache for repeated data
* prevent cache invalidation
* format data according to different client needs
The first use of this function is pretty inconsequential, but
it sets us up for more consequential changes.
In this commit we defer the MessageDict.hydrate_recipient_info
step until after we pull data out of the cache. This impacts
cache size as follows:
* streams - negligibly bigger
* PMs/huddles - slimmer due to not needing to repeat
sender data like email/full_name
Again, the main point of this change is to start setting up
the infrastructure to do post-processing.
This is a first step to eventually slimming the message cache,
but there are still some moving parts there to be worked through.
The more immediate benefit of extracting this function is that
we can put tests on it. Also, it isolates some functionality
that may go away as our clients get smarter.
Since subscribed_to_stream is only doing an id lookup
on the Stream model to find out if a user is subscribed to
a stream, there's no reason to require a full Stream object.
It's currently the case that all callers do have full Stream
objects handy to pass in to this function, but it's still a
good practice to have functions only ask for objects that they
need.
We now return user_ids for subscribers to streams in add-stream
events. This allows us to eliminate the UserLite class for
both bulk adds and bulk removes. It also simplifies some JS
code that already wanted to use user_ids, not emails.
Fixes #6898
This function truncates the textual content at correct length.
(It will be updated later to handle corner cases of unicode
combining characters and tags when we start supporting them.)
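A minimal sketch of such a helper (the exact marker text is up to the
caller):
```
def truncate_content(content, max_length, truncation_message):
    # Cut at max_length, leaving room for a marker such as "..." at the end.
    if len(content) > max_length:
        return content[:max_length - len(truncation_message)] + truncation_message
    return content
```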
Using lightweight objects will speed up adding new users
to realms.
We also sort the query results, which lets us use itertools.groupby
to build the data structure more efficiently.
Profiling on a large data set shows about a 25x speedup for this
function, and before the optimization, this function accounts
for most of the time spent in bulk_add_subscriptions.
There's a lot less memory to allocate. I didn't measure
the memory difference.
When we test-deployed this to chat.zulip.org, we got about a 6x
speedup.
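The sort-then-groupby pattern, sketched with a made-up row shape:
```
from itertools import groupby
from operator import itemgetter

sub_rows = [
    {'recipient_id': 7, 'user_id': 1},
    {'recipient_id': 7, 'user_id': 2},
    {'recipient_id': 9, 'user_id': 1},
]
# groupby only groups adjacent rows, so sort on the same key first.
sub_rows.sort(key=itemgetter('recipient_id'))
subs_by_recipient = {
    recipient_id: list(rows)
    for recipient_id, rows in groupby(sub_rows, key=itemgetter('recipient_id'))
}
```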
Sort of a hacky hammer, but
* The original design of the analytics system mistakenly attempted to play
nicely with non-UTC datetimes.
* Timezone errors are really hard to find and debug, and don't jump out that
easily when reading code.
I don't know of any outstanding errors, but putting a few "assert this
timezone is in UTC" around will hopefully reduce the chance that there are
any current or future timezone errors.
Note that none of these functions are called outside of the analytics code
(and tests). This commit also doesn't change any current behavior, assuming
a database where all datetimes have been being stored in UTC.
Previously, entering a non-UTC end time for a daily stat would give you
incorrect results. This is because:
* All daily stats are collected at and have end_times in the database in
midnight UTC.
* For daily stats, time_range returns a list of datetimes at midnight in the
timezone of its end argument. These datetimes are the only ones we look
for when looking for rows corresponding to the stat in the database.
* Previously, we passed on the end argument from the API to time_range,
without modification.
The logic to apply events to page_params['unread_msgs'] was
complicated due to the aggregated data structures that we pass
down to the client.
Now we defer the aggregation logic until after we apply the
events. This leads to some simplifications in that codepath,
as well as some performance enhancements.
The intermediate data structure has sets and dictionaries that
generally are keyed by message_id, so most message-related
updates are O(1) in nature.
Also, by waiting to compute the counts until the end, it's a
bit less messy to try to keep track of increments/decrements.
Instead, we just update the dictionaries and sets during the
event-apply phase.
This change also fixes some corner cases:
* We now respect mutes when updating counts.
* For message updates, instead of bluntly updating
the whole topic bucket, we update individual
message ids.
Unfortunately, this change doesn't seem to address the pesky
test that fails sporadically on Travis, related to mention
updates. It will change the symptom, slightly, though.
We now have two helper functions:
* get_raw_unread_data
* aggregate_unread_data
Separating the concerns is nice. The first function does
all the data collection. The second function should be fast,
and it only re-organizes the data into an aggregated form
that makes the page_params payload smaller and easier for
clients to work with.
For the first function, we try to return data structures
that are easier to manipulate than the end result. This
will allow us to apply events more easily, in a subsequent
commit.
Instead of using the `unified_reactions` mapping, start using the
`name_to_codepoint` mapping for converting an emoji name to
codepoints. We were using the `unified_reactions` mapping because,
prior to the emoji web PR, the `name_to_codepoint` mapping was
generated using emoji_map.json, which contained old codepoints,
while reactions required new codepoints to display them using
sprite sheets.
Create a new custom email backend that automatically logs the
emails sent in the dev environment, as well as printing a friendly
message in the console to visit /emails for accessing all the
emails sent in the dev environment.
Since django.core.mail.backends.console.EmailBackend is no longer
used, emails will not be printed to the console anymore.
We now do push notifications and missed message emails
for offline users who are subscribed to the stream for
a message that has been edited, but we short circuit
the offline-notification logic for any user who presumably
would have already received a notification on the original
message.
This effectively boils down to sending notifications to newly
mentioned users. The motivating use case here is that you
forget to mention somebody in a message, and then you edit
the message to mention the person. If they are offline, they
will now get pushed notifications and missed message emails,
with some minor caveats.
We try to mostly use the same techniques here as the
send-message code path, and we share common code with the
send-message path once we get to the Tornado layer and call
maybe_enqueue_notifications.
The major places where we differ are in a function called
maybe_enqueue_notifications_for_message_update, and the top
of that function short circuits a bunch of cases where we
can mostly assume that the original message had an offline
notification.
We can expect a couple changes in the future:
* Requirements may change here, and it might make sense
to send offline notifications on the update side even
in circumstances where the original message had a
notification.
* We may track more notifications in a DB model, which
may simplify our short-circuit logic.
In the view/action layer, we already had two separate codepaths
for send-message and update-message, but this mostly echoes
what the send-message path does in terms of collecting data
about recipients.
They're rarely useful, usually displayed invisibly in most tools
anyway, and this helps make sure the message makes it into Zulip
rather than being rejected.
Postgres doesn't like them, we don't have an obvious way to escape
them, and they tend to be sent by buggy tools where it'd be better for
the user to get an error.
This fixes a 500 we were getting occasionally.
We have two different concepts of "idle", and this function
is based on the "presence" aspect of idleness. There is also
idleness in terms of a user having no current client
descriptors accepting messages, and we check that later in
the process for things like sending missed message emails.
check_send_stream_message is a simpler version of
check_send_message for sending messages where the addressee is
a stream. Instead of relying on Addressee.legacy_build,
check_send_stream_message uses Addressee.for_stream. Consequently,
it eschews many of check_send_message's kwargs that aren't needed
when the intended recipient of a message is a stream.
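A hedged sketch of the shape of the helper (argument names
illustrative):
```
def check_send_stream_message(sender, client, stream_name, topic, body):
    addressee = Addressee.for_stream(stream_name, topic)
    message = check_message(sender, client, addressee, body)
    return do_send_messages([message])[0]
```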
This isn't something that a user can ever modify, so it doesn't belong
in DEFAULT_SETTINGS. While we're at it, we align the appearance of
the email gateway in the docs with whether this setting
will be valid.
This commit switches to use sprite sheets for rendering emojis
in all the remaining places, i.e., message bodies and composebox
typeahead. This commit also includes some changes to notifications.py
file so that the spans used for rendering emojis can be converted
to corresponding image tags so that we don't break the emoji rendering
in missed message emails since we can't use sprite sheets there.
As part of switching the bugdown system to use sprite sheets, we need
to switch the name_to_codepoint mappings to match the new sprite
sheets. This has the side effect of fixing a bunch of emoji like
numbers and flag emoji in the emoji pickers.
Fixes: #3895.
Fixes: #3972.
These are long enough to still be self-explanatory (the only one I'm
at all in doubt about there is DEBG; I avoided "DBUG" because it reads
"BUG" which suggests a high-priority message, and those are the
opposite of that), while saving a good bit of horizontal space
vs. padding everything to the 8 characters of "CRITICAL".
Also add a linter exception to allow easy-to-read alignment here,
similar to several existing exceptions for other alignment cases.
This also gives us a place to hang the originating module, if we write a bit
of logic to work that out; sadly it doesn't come out of the box, only
the filename (which is likely to have a bunch of noise that just shows the
path to the deployment or virtualenv.)
This doesn't yet do much, but it gives us a suitable place to
add code to customize how log messages are displayed, beyond what
a format string passed to the default formatter can do.
Having Addressee take care of setting stream_name to
sender.default_sending_stream.name makes us able to have
the invariant that stream_name is never None when the
message type is 'stream', which will help for mypy, among
other things.
One thing to be aware of is that Addressee does do a little
bit of validation work, and this adds yet another JsonableError
exception. I don't view this as a bad thing, just something to
know.
This is just enough of a quick fix to work with a stock Zulip 1.6
server. We should really also make this robust to arbitrary input
from the remote Zulip server, even though it'll be a little tedious.
The dictionary result for get_user_info_for_message_updates()
now has a `mention_user_ids` field that is a set of user ids
who were mentioned in a message.
There are several reasons to extract this function:
* It's easy to unit test without extensive mocking.
* It will show up when we profile code.
* It is something that you can mostly ignore for
most messages.
The main reason to extract this, though, is that we are about
to do some fairly complex splicing of data for the use case
of mentioning service bots on streams they are not subscribed to,
and we want to localize the complexity.
It's unlikely to be of any real consequence, but this code bugged me
in that it makes a whole set before throwing it away to make nearly
the same set.
Sadly Python's comprehensions lack a way to write these cleanly as one
comprehension; but with no extra code complexity we can make the
temporary a genexp, which does the job.
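For illustration, the shape of the change:
```
rows = [{'id': 1}, {'id': 2}, {'id': 3}]

# Before: a set comprehension builds a set, which set() then copies.
before = set({row['id'] for row in rows})

# After: the temporary is a genexp, so only one set is built.
after = set(row['id'] for row in rows)

assert before == after
```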
This fixes a bug where the internal_prep_message code path would
incorrectly ignore the `realm` that was passed into it. As a result,
attempts to send messages using the system bots with this code path
would crash.
As a sidenote, we really need to make our test system consistent with
production in terms of whether the user's realm is the same as the
system realm.
We don't access any attributes of the sender other than the realm, and
as it turns out, we in some cases want to use a different realm than
the sender's.
Previously, this accessed realm.uri via trying to use
zulip_default_context. That doesn't make any sense, because
zulip_default_context expects an HttpRequest object, and those are
nowhere in sight in the code path. We do, however, have the outgoing
webhook bot user involved in the event, and that's the object to
access realm.uri from here.
This commit implements support for rendering static files
under static/generated/bots/ in the same manner as we render
our webhooks/integration documentation. Said static files are
generated by tools/setup/generate_zulip_bots_static_files.py
during provisioning.
Previously, invitation reminder emails were only being cleared after a
successful signup if newsletter_data was available, since that was the
circumstance in which we were calling the relevant queue processor
code. Now, we (1) clear them when a human user finishes signing up
and (2) correctly clear them using the 'address' field of
ScheduleEmail, not user_id.
We don't need full Realm objects to find DefaultStream
objects for a realm. So now a few functions related to
adding/removing default streams use realm_id for lookups.
Similarly, we don't need a full Stream object to find
out if a stream exists in DefaultStream, so we do id
lookups there as well.
This sets us up to use thinner objects in callers.
We want to convert stream names to stream ids as close
to the "edges" of our system as possible, so we let our
caller do the work of finding the stream id for a stream
narrow.
We now have a dedicated cache for active_user_ids() that only
stores a list of user_ids.
Before this commit, active_user_ids() used a cache of UserProfile
dictionaries, so it incurred unnecessary deserialization costs for
all the user fields that it sliced away in a list comprehension.
Because the cache is skinnier here, we also need to invalidate it
less frequently. Basically, all we care about is new users, realm
deactivations, and user deactivations.
It's hard to measure how much this will improve performance, because
the speedup for any operation here is pretty minor, but we use this
function a lot, so hopefully it will make the overall system more
healthy.
This is mostly a preparatory commit for an upcoming optimization
related to stream data, but it probably does save us an
occasional DB hop to the realm table.
Previously, this was its own separate test script; now it's a normal
part of the test suite.
Tweaked by tabbott to use a proper test method.
Fixes #6327.
This leads to more than a 2x speedup when tested with
20k+ total subscribers. (For large realms with lots of default
streams, this function deals with LOTS of data, so it is important
to optimize.)
This class encapsulates the mapping of stream ids to
recipient ids, and it is optimized for bulk use and
repeated use (i.e. it remembers values it already fetched).
This particular commit barely improves the performance
of gather_subscriptions_helper, but it sets us up for
further optimizations.
Long term, we may try to denormalize stream_id on to the
Subscriber table or otherwise modify the database so we
don't have to jump through hoops to do this kind of mapping.
This commit will help enable those changes, because we
isolate the mapping to this one new class.
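A hedged sketch of the class (method names and details illustrative):
```
class StreamRecipientMap:
    def __init__(self):
        self.stream_to_recip = {}

    def populate_for(self, stream_ids):
        # Remember values already fetched; only hit the DB for new ids.
        new_ids = [sid for sid in stream_ids
                   if sid not in self.stream_to_recip]
        if new_ids:
            rows = Recipient.objects.filter(
                type=Recipient.STREAM, type_id__in=new_ids,
            ).values('id', 'type_id')
            for row in rows:
                self.stream_to_recip[row['type_id']] = row['id']

    def recipient_id_for(self, stream_id):
        return self.stream_to_recip[stream_id]
```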
Moves SEND_ALL to inside get_next_hotspots, since it is not something other
files should call.
Also changes the delay to 0s, and gates the code behind an
`if settings.DEVELOPMENT`.
We were mostly excluding inactive users before this fix, but
now we completely ignore them.
This potentially changes some of the data we return from
get_recipient_info(), but the extra user ids before this fix
were effectively ignored by the caller.
The prior code would queue up feedback messages even if the
feedback bot was deactivated, which was just due to oversight
most likely. (People probably rarely disable the feedback bot,
but they should have that option.)
We now triage message content for possible mentions before
going to the cache/DB to get name info. This will create an
extra data hop for messages with mentions, but it will save
a fairly expensive cache lookup for most messages. (This will
be especially helpful for large realms.)
[Note that we need a subsequent commit to actually make the speedup
happen here, since avatars also cause us to look up all users in
the realm.]
This sets us up for a subsequent commit where we need more data
from the Subscription table to build recipient info, so the
function boundary doesn't work any more for get_recipient_info,
which is part of the heavily optimized send-message
path.
We used to share code here with typing notifications, but
typing notifications need a lot less data than the
send-message path, so it's useful to decouple these two
things. The idioms that are duplicated here are pretty simple
one-liners.
compilemessages command now does all the heavy lifting by creating a
language_name_map.json file under the locale directory. This file is
used by get_language_list to retrieve the required information.
Fixes: #6486
This commit makes get_recipient_info() faster by never creating
Django ORM objects. We use the ORM to create a values query
instead, and then we iterate over the rows to create various
collections of ids.
In order to avoid lots of code duplication, this commit unifies
how we query UserProfile for PMs and streams. Prior to this
commit we were getting "wide" UserProfile objects out of
our memcached cache. Now we just go to the database with our
list of userids. The new approach at worst adds one hop to the
database for PMs, which aren't really a performance bottleneck
(compared to streams). And the new approach actually saves a
hop when both partners aren't in cache (plus we don't pay the
penalty of hitting the cache itself).
The performance improvement here is easy to measure for messages
to streams with many users, even with all the other activity
that goes on inside do_send_messages(). I took test_performance()
in test_messages.py, set num_extra_users to 3000, and consistently
measured a ~20% speedup in do_send_messages().
This commit also eliminates fetching of emails. We probably
could have done that in a prior commit, but in this commit it
is very explicit that we don't need it. While removing email
from the query is a no-brainer, it actually had a negligible
impact on performance. Almost all the savings here comes from
not creating UserProfile objects.
This function returns a summary of recipient data for a message
that's being sent. It's mostly just moving code into the
old function called get_recipient_user_profiles().
This commit is necessary to prevent bringing back emails from the
DB for all N recipients of a message just to see if the feedback
bot is being invoked.
We calculate `service_bot_tuples` earlier in the function, so that
we don't need "full" UserProfile objects later in the function.
This is part of consolidating code that basically just needs to
triage user_ids.
This starts to phase out the need for UserProfile objects in
do_send_messages(). UserProfile objects are expensive to create
for large streams with lots of users. The objects in the code
before this commit aren't even full UserProfile objects.
This change mostly sets up future performance improvements, but
we also get a minor speedup here when we run a test with 3000
stream subscribers.
There is no reason for either render_incoming_message() or
render_markdown() to require full UserProfile objects just to
triage alert words.
By only asking for user_ids, we save extra queries in two
callpaths and we make it easier to start using user_ids in
do_send_messages().
This function is essentially a copy of get_recipient_user_profiles,
which is about to go away. The new function enforces the contract of
typing indicators, which is that they don't apply to streams, which
allows us to use a relatively simple approach for getting user
profile objects.
We are diverging this code, because the send-message path needs
more optimizations.
This change introduces an extra hop to the database, but it is
generally faster due to nuances of the DB and the ORM. It
also sets us up to optimize get_recipient_user_profiles() by
avoiding creating ORM objects.
I measured the impact of this using a stream with 3000
subscribers, half of whom were idle, and it speeds things up
by 10%.
The commit() call in fix() breaks migrations and tests (unless you
mock) due to outer transactions.
We now explicitly call commit() from the management command.
Usually a small minority of users are eligible to receive missed
message emails or mobile notifications.
We now filter users first before hitting UserPresence to find idle
users. We also simply check for the existence of recent activity
rather than borrowing the more complicated data structures that we
use for the buddy list.
This commit completely switches us over to using a
dedicated model called MutedTopic to track which topics
a user has muted.
This includes the necessary migrations to create the
table and populate it from legacy data in UserProfile.
A subsequent commit will actually remove the old field
in UserProfile.