Do you call get_recipient(Recipient.STREAM, stream_id) or
get_recipient(stream_id, Recipient.STREAM)? I could never
remember, and it was not very type safe, since both parameters
are integers.
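A sketch of the kind of type-specific wrappers this points toward (function names are illustrative; the Recipient fields follow Zulip's model, but treat this as an assumption):

```python
# Hypothetical wrappers; one function per recipient type removes the
# ambiguous (type, type_id) argument order entirely.
def get_stream_recipient(stream_id: int) -> Recipient:
    return Recipient.objects.get(type=Recipient.STREAM, type_id=stream_id)

def get_personal_recipient(user_profile_id: int) -> Recipient:
    return Recipient.objects.get(type=Recipient.PERSONAL, type_id=user_profile_id)
```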
Almost all callers to do_create_user were trying to
create active users; the only exception was one test. The
active=False codepath was somewhat broken (things
like sending welcome messages had essentially undefined
behavior there), so instead of trying to maintain it,
we just update the one test (`test_people`) to flip the
`is_active` flag manually.
Fixes #7197
We mostly introduce these functions (as part of a big
code sweep):
send_stream_message
send_personal_message
send_huddle_message
In two cases, where we want to specifically manipulate
queue ids, we now call check_send_message directly. (The
above three functions deliberately don't support kwargs
to ensure simple code and better type safety.)
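Hypothetical usage in a test, for illustration (exact signatures may differ):

```python
# Each helper bakes in its recipient type, so callers can't mix up
# positional arguments or pass stray kwargs.
message_id = send_stream_message("hamlet@zulip.com", "Denmark",
                                 content="hello", topic_name="castle")
message_id = send_personal_message("hamlet@zulip.com", "othello@zulip.com",
                                   content="hi")
message_id = send_huddle_message("hamlet@zulip.com",
                                 ["othello@zulip.com", "cordelia@zulip.com"],
                                 content="hi all")
```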
If a Zulip install at example.org got a request at an HTTP `Host`
like foo.example.org.evil.com (or even foo.example.orgevil.com),
we would accept it as subdomain foo. This isn't likely to happen
in practice because it shouldn't pass ALLOWED_HOSTS, and it's not
obvious to me that anything untoward could be done with it even
if ALLOWED_HOSTS were set wide open, but if nothing else it
multiplies the cases in analyzing this logic.
The reason we had a loose match like this, I assume, is to allow
the user to come from arbitrary ports -- especially in development.
So tighten the pattern to allow just that, and add some tests for
that behavior and a comment explaining why this complication is
needed.
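A minimal sketch of the tightened matching (helper name and the external-host value are illustrative):

```python
import re
from typing import Optional

def get_subdomain_from_host(host: str,
                            external_host: str = "example.org") -> Optional[str]:
    # Allow an optional :port after the configured host, but nothing else;
    # "foo.example.org.evil.com" and "foo.example.orgevil.com" no longer match.
    pattern = r'^([a-z0-9-]+)\.' + re.escape(external_host) + r'(:\d+)?$'
    match = re.match(pattern, host)
    return match.group(1) if match else None
```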
The cookie mechanism only works when passing the login token to a
subdomain. URLs work across domains, which is why they're the
standard transport for SSO on the web. Switch to URLs.
Tweaked by tabbott to add a test for an expired token.
This makes the tests a little cleaner in itself, and also prepares
them to adjust with less churn when we change how
redirect_and_log_into_subdomain passes the signed token.
Most of these have more to do with authentication in general than with
registering a new account. `create_preregistration_user` could go
either way; we move it to `auth` so we can make the imports go only in
one direction.
Lets administrators view a list of open (unconfirmed) invitations and
resend or revoke a chosen invitation.
There are a few changes that we can expect for the future:
* It is currently possible to invite an email that you have already
invited; it might make sense to change this behavior.
* Resend currently sends an invite reminder instead of resending the
original invite; this is because 'custom_body' was not stored when
the first invite was sent.
Tweaked in various minor ways, primarily in the backend, by tabbott,
mostly for style consistency with the rest of the codebase.
Fixes: #1180.
Tweaked by tabbott to have the field before the invitation is
completed be called invite_as_admins, not invited_as_admins, for
readability.
Fixes #6834.
The tighter interface prevents the need to specify
Recipient.PERSONAL (which can often be inaccurate in the
huddle case, anyway), and it prevents tests from confusingly
specifying a "subject" field for PMs.
Having send_stream_message() avoids the need to supply
Recipient.STREAM as a parameter, and it also uses the more
modern name of `topic_name` for topics. Under the hood, it
avoids some annoying steps for re-formatting the recipients,
since we just have a single stream name.
When possible, we want to use direct APIs for sending
stream messages.
This changes the codepath slightly, by not using
forwarded_user_profile, but it doesn't impact the number
of queries, and it's a simple check.
We also remove a couple "subject" references here.
This change allows normal bots to get UserMessage rows when
they are mentioned on a stream, even if they are not actually
subscribed to the stream.
Fixes #7140.
We now find all (possibly) relevant service bots for a message
in the call to get_recipient_info. This allows us to eliminate
some code that would patch them after we rendered.
The get_service_bot_events() function will ignore any service
bots that weren't actually mentioned in the message (due to
backticks) or part of the active user ids.
In do_send_messages, we only produce one dictionary for
the event queues, instead of different flavors for text
vs. html. This prevents two unnecessary queries to the
database.
It also means we only put one dictionary on the "message"
event queue instead of two, albeit a wider one that has
some values that won't be sent to the actual clients.
This wider dictionary from MessageDict.wide_dict is also
used for the `feedback_messages` queue and service bot
queues. Since the extra fields are possibly useful down
the road, and they'll just be ignored for now, we don't
bother to remove them. Also, those queue processors won't
have access to `content_type`, which they shouldn't need.
Fixes #6947
Before this change, we populated two cache entries for each
message that we sent. The entries were largely redundant,
with the only difference being whether we sent the content
as raw markdown or as the rendered HTML.
This commit makes it so we only have one cache entry per
message, and it includes both content and rendered_content.
One legacy source of confusion here is that `content`
changes meaning when you're on the front end. Here is the
situation going forward:

    database:
        content = raw
        rendered_content = rendered

    cache entry:
        content = raw
        rendered_content = rendered

    payload for the frontend:
        content = raw (for apply_markdown=False)
        content = rendered (for apply_markdown=True)
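In other words, a sketch of the payload-building step (helper name assumed):

```python
# The cache keeps both flavors; we pick one per client at payload time.
def content_for_client(cache_entry: dict, apply_markdown: bool) -> str:
    if apply_markdown:
        return cache_entry['rendered_content']
    return cache_entry['content']
```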
Adds support for adding "Embedded bot" Service objects. This service
handles every embedded bot.
Extracted from "Embedded bots: Add support to add embedded bots from
UI" by Robert Honig.
Tweaked by tabbott to be disabled by default.
Every time we updated a UserProfile object, we were calling
delete_display_recipient_cache(), which churns the cache and
does an extra database hop to find subscriptions. This was
due to saying `updated_fields` instead of `update_fields`.
This made us prone to cache churn for fields like UserProfile.pointer
that are fairly volatile.
Now we use the helper function changed(). To prevent the
opposite problem, we use all the fields that could invalidate
the cache.
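A sketch of the pattern in the post_save cache-flush receiver (names approximate; `delete_display_recipient_cache` is the function mentioned above):

```python
# The bug was reading kwargs['updated_fields'] (always absent) instead of
# 'update_fields', which made every save look like it could have changed
# anything and churned the cache on volatile fields like pointer.
def flush_user_profile(sender, **kwargs):
    user_profile = kwargs['instance']
    update_fields = kwargs['update_fields']

    def changed(fields):
        if update_fields is None:
            # A full .save() with no update_fields may have changed anything.
            return True
        return any(field in update_fields for field in fields)

    # Only fields that feed the display_recipient cache should churn it.
    if changed(['full_name', 'short_name', 'email', 'is_active']):
        delete_display_recipient_cache(user_profile)
```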
This test had a little bug, where we weren't actually
verifying `realm_bots` before, because we weren't using
`field` to look it up.
This commit fixes that bug and adds additional checks,
particularly for the recently added `realm_non_active_users`.
We now add `realm_non_active_users` to the result of
`do_events_register` (and thus `page_params`). It has
the same structure as `realm_users`, but it's for
non-active users. Clients need data on non-active users
when they process old messages that were sent by those
users when they were active. Clients can currently get
most of the data they need in the message events, but it
makes for ugly client code.
Fixes #4322
We make a few things cleaner for populating `realm_users`
in `do_events_register` and `apply_events`:
* We have a `raw_users` intermediate dictionary that
makes event updates O(1) and cleaner to read.
* We extract an `is_me` section for all updates that
apply to the current user.
* For `update` events, we do a more surgical copying
of fields from the event into our dict. This
prevents us from mutating fields in the event,
which was sketchy (at least in test mode). In
particular, this allowed us to remove some ugly
`del` code related to avatars.
* We introduce local vars `was_admin` and `now_admin`.
The cleanup had two test implications:
* We no longer need to normalize `realm_users`, since
`apply_events` now sees `raw_users` instead. Since
`raw_users` is a dict, there is no need to normalize
it, unlike lists with possibly random order.
* We updated the schema for avatar updates to include
the two fields that we used to hackily delete from
an event.
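Concretely, a sketch of the `raw_users` shape and an O(1) update (field names illustrative):

```python
# Build the intermediate dict once, keyed by user id.
raw_users = {user['user_id']: user for user in state['realm_users']}

# Applying a realm_user 'update' event is then a surgical copy of fields,
# without mutating the event itself:
person = event['person']
raw_users[person['user_id']].update(person)

# Only at the end do we flatten back to the list the client expects.
state['realm_users'] = sorted(raw_users.values(), key=lambda u: u['user_id'])
```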
If an organization doesn't have the EmailAuthBackend (which allows
password auth) enabled, then our password reset form doesn't do
anything, so we should hide it in the UI.
While our recent change to hide /register means we don't need a
pretty error message here, eventually we'll want to clean up the error
message.
Fixes #7047.
Historically, we'd just use the default Django version of this
function. However, since we did the big subdomains migration, it's
now the case that we have to pass in the subdomain to authenticate
(i.e. there's no longer a fallback to just looking up the user by
email).
This fixes a problem with user creation in an LDAP realm, because
previously, the user creation flow would just pass in the username and
password (after validating the subdomain).
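Callers now look something like this (a sketch; the `realm_subdomain` keyword and the `get_subdomain` helper are assumptions about the interface described here):

```python
from django.contrib.auth import authenticate

# The subdomain must be passed explicitly; there is no longer a
# fallback to looking the user up by email alone.
user_profile = authenticate(username=email, password=password,
                            realm_subdomain=get_subdomain(request))
```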
This new test solves the problem that when we
made changes to the page-load codepath in the past,
it's been hard to identify what new code caused
more database queries. Now you can see query
counts broken out by event type.
This requires a small, harmless change to extract
an `always_want` function in `lib/events.py`.
Clients fetching messages can now specify that they are able
to compute their avatar, and if they set client_gravatar to
True in the request (with our normal encoding scheme), then the
backend will not compute it, and the payload will be smaller.
The fix starts with get_messages_backend. The flag gets
passed down through these functions:
* MessageDict.post_process_dicts.
* MessageDict.set_sender_avatar.
We also fix up the callers for post_process_dicts to explicitly
pass in the client_gravatar path, but for now they all just hard
code the value to False.
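Conceptually, the avatar step becomes something like this (a sketch; the field names and the 'G'-for-gravatar convention are assumptions):

```python
# Sketch of set_sender_avatar: skip server-side gravatar computation
# when the client said it can do it itself.
def set_sender_avatar(obj: dict, client_gravatar: bool) -> None:
    if client_gravatar and obj['sender_avatar_source'] == 'G':
        obj['avatar_url'] = None  # client derives the gravatar from the email
    else:
        obj['avatar_url'] = compute_avatar_url(obj)  # hypothetical helper
```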
Some bots have class names that differ from their module name,
e.g. `helloworld.py` vs. `HelloWorld`. Our tests should accept
all of these, as long as a handler class is present.
I think an hour after signup is not the right time to try to get someone to
re-engage with a product.
This also makes the day1 email clearly a transactional email both in
experiencing the product and in the eyes of various anti-spam laws, and
allows us to remove the unsubscribe link.
This modifies the realm creation form to (1) support a
realm_in_root_domain flag and (2) clearly check whether the root
domain is available inside check_subdomain_available before trying to
create a realm with it; this should avoid IntegrityErrors.
We were doing an unnecessary database query on every user registration
checking the availability of the user's subdomain, when in fact this
is only required for realm creation.
This removes the utterly unnecessary `triggers` dict (which always was
a dict with exactly one value True) in favor of a single field,
'trigger'.
Inspired by Kunal Gupta's work in #6659.
While the missedmessage_hook logic originally did a reasonably good
job of avoiding double-sending notifications, there was a corner case
it didn't handle, namely a user who had been presence-idle when a
message was sent and then also became event-queue-idle within the
next 10 minutes. Those users got a notification at message
send time, and the missedmessage_hook would deliver it a second time.
We fix this by just checking the conveniently available push_notified
and email_notified variables that indicate whether the message already
had a notification triggered.
Fixes #7031.
Message.get_raw_db_rows is moved to MessageDict, since its
implementation details are highly coupled to other methods
in MessageDict.
And then sew_messages_and_reactions comes along for the
ride.
We eventually want to move Reaction.get_raw_db_rows to there
as well.
We're about to have multiple post-processing stages for building
message dictionaries. Rather than having individual "hydration"
methods remove intermediate values, we just wait until the end.
This decouples the hydration steps. The potential problem
here is that we may have a field like sender_is_mirror_dummy
that isn't part of the final payload, but we need it for
calculating display recipients and avatars. We don't want to
delete it too early from the objects.
By default, Django sets up two handlers on this logger, one of them
its AdminEmailHandler. We have our own handler for sending email on
error, and we want to stick to that -- we like the format somewhat
better, and crucially we've given it some rate-limiting through
ZulipLimiter.
Since we cleaned out our logging config in e0a5e6fad, though, we've
been sending error emails through both paths. The config we'd had
before that for `django` was redundant with the config on the root --
but having *a* config there was essential for causing
`logging.config.dictConfig`, when Django passes it our LOGGING dict,
to clear out that logger's previous config. So, give it an empty
config.
Django by default configures two loggers: `django` and
`django.server`. We have our own settings for `django.server`
anyway, so this is the only one we need to add.
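Concretely, the relevant piece of the LOGGING dict looks something like this (a sketch):

```python
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    # ... root logger, our handlers, etc. ...
    'loggers': {
        # An entry must exist so that dictConfig reconfigures the 'django'
        # logger and drops Django's default handlers (notably
        # AdminEmailHandler); an empty config is all that's needed.
        'django': {},
    },
}
```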
The stdlib `logging` and `logging.config` docs aren't 100% clear, and
while the source of `logging` is admirably straightforward the source
of `logging.config` is a little twisty, so it's not easy to become
totally confident that this has the right effect just by reading.
Fortunately we can put some of that source-diving to work in writing
a test for it.
This function is designed to replace avatar_url() and
avatar_url_from_dict() over time.
There are a few things new about it:
* We make the parameters more explicit, rather than
passing in an opaque dictionary or requiring a
UserProfile object. (A lot of our callers want
to use `values()` for efficiency sake, since we
are often doing bulk user operations.)
* We start to support the client_gravatar option.
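The new function's signature plausibly looks like this (a sketch based on the description above; the parameter list and the 'G' avatar-source convention are assumptions):

```python
from typing import Optional

def get_avatar_field(user_id: int, realm_id: int, email: str,
                     avatar_source: str, avatar_version: int,
                     medium: bool, client_gravatar: bool) -> Optional[str]:
    # With client_gravatar, the server returns None for gravatar-backed
    # avatars and lets the client compute the URL from the email.
    if client_gravatar and avatar_source == 'G':
        return None
    return _avatar_url_from_parts(user_id, realm_id, email, avatar_source,
                                  avatar_version, medium)  # hypothetical helper
```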
This works around a bug in Django in handling the error case of a
client sending an inappropriate HTTP `Host:` header. Various
internal Django machinery expects to be able to casually call
`request.get_host()`, which will attempt to parse that header, so an
exception will be raised. The exception-handling machinery attempts
to catch that exception and just turn it into a 400 response... but
in a certain case, that machinery itself ends up trying to call
`request.get_host()`, and we end up with an uncaught exception that
causes a 500 response, a chain of tracebacks in the logs, and an email
to the server admins. See example below.
That `request.get_host` call comes in the midst of some CSRF-related
middleware, which doesn't even serve any function unless you have a
form in your 400 response page that you want CSRF protection for.
We use the default 400 response page, which is a 26-byte static
HTML error message. So, just send that with no further ado.
Example exception from server logs (lightly edited):
    2017-10-08 09:51:50.835 ERR [django.security.DisallowedHost] Invalid HTTP_HOST header: 'example.com'. You may need to add 'example.com' to ALLOWED_HOSTS.
    2017-10-08 09:51:50.835 ERR [django.request] Internal Server Error: /loginWithSetCookie
    Traceback (most recent call last):
      File ".../django/core/handlers/exception.py", line 41, in inner
        response = get_response(request)
      File ".../django/utils/deprecation.py", line 138, in __call__
        response = self.process_request(request)
      File ".../django/middleware/common.py", line 57, in process_request
        host = request.get_host()
      File ".../django/http/request.py", line 113, in get_host
        raise DisallowedHost(msg)
    django.core.exceptions.DisallowedHost: Invalid HTTP_HOST header: 'example.com'. You may need to add 'example.com' to ALLOWED_HOSTS.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File ".../django/core/handlers/exception.py", line 109, in get_exception_response
        response = callback(request, **dict(param_dict, exception=exception))
      File ".../django/utils/decorators.py", line 145, in _wrapped_view
        result = middleware.process_view(request, view_func, args, kwargs)
      File ".../django/middleware/csrf.py", line 276, in process_view
        good_referer = request.get_host()
      File ".../django/http/request.py", line 113, in get_host
        raise DisallowedHost(msg)
    django.core.exceptions.DisallowedHost: Invalid HTTP_HOST header: 'example.com'. You may need to add 'example.com' to ALLOWED_HOSTS.
This field would get overwritten with an improper value when
we looped over multiple clients, due to not making full copies
of the message dictionary. This failure would be somewhat
random depending on how clients were ordered in the loop.
The only consumers of this field were the mobile app and the
apply-events-to-unread-counts logic. Both of these will now
use `flags` instead.
The `is_mentioned` flag in message events was buggy. We now
look directly at flags.
We will kill off `is_mentioned` in a subsequent commit.
We also remove some debugging code in the test that was failing
before this fix. The test would only fail when `is_mentioned`
was wrong, which never happened when you ran a single test, and
which would happen randomly when you ran multiple tests.
Adding this field to the Stream model will prevent us from having
to look at realm data for several types of stream operations, which
can be prone to either doing extra database lookups or making
our cached data bloated.
Going forward, we'll set stream.is_zephyr to True whenever the
realm's string id is "zephyr".
This removes sender names from the message cache, since
they aren't guaranteed to be valid, and they're inexpensive
to add back in.
This commit will make the message cache entries smaller
by removing sender__full_name and sender__short_name
fields.
Then we add in the sender fields to the message payloads
by doing a query against the unique sender ids of the
messages we are processing.
This change leads to 2 extra database hops for most of
our message-related codepaths. The reason there are 2 hops
instead of 1 is that we basically re-calculate way too
much data to get a no-markdown dictionary.
Introducing MessageDict.post_process_dicts() will give us
the ability to do the following:
* use less memory in the cache for repeated data
* prevent cache invalidation
* format data according to different client needs
The first use of this function is pretty inconsequential, but
it sets us up for more consequential changes.
In this commit we defer the MessageDict.hydrate_recipient_info
step until after we pull data out of the cache. This impacts
cache size as follows:
* streams - negligibly bigger
* PMs/huddles - slimmer due to not needing to repeat
sender data like email/full_name
Again, the main point of this change is to start setting up
the infrastructure to do post-processing.
This is a first step to eventually slimming the message cache,
but there are still some moving parts there to be worked through.
The more immediate benefit of extracting this function is that
we can put tests on it. Also, it isolates some functionality
that may go away as our clients get smarter.
This endpoint is about to become an API-style route and have the legacy
decorator removed from its view. The json/fetch_api_key endpoint will be
used in tests instead of it.
We now use a `.values` query to get just the fields we need
in order to fulfill '/json/users' requests.
The main benefit is that we don't do O(N) queries for bot
owners, but we also have less data on UserProfile to process.
On receiving a request to delete a reaction, just check whether such
a reaction exists. If it exists, delete it; otherwise, send an error
message saying that such a reaction doesn't exist. There is no need
to check whether the emoji name is valid.
This commit prepares us to introduce a StreamLite class. For
these tests, we don't care about the actual contents of the
Stream, just that the right stream is there.
The original "quality score" was invented purely for populating
our password-strength progress bar, and isn't expressed in terms
that are particularly meaningful. For configuration and the core
accept/reject logic, it's better to use units that are readily
understood. Switch to those.
I considered using "bits of entropy", defined loosely as the log
of this number, but both the zxcvbn paper and the linked CACM
article (which I recommend!) are written in terms of the number
of guesses. And reading (most of) those two papers made me
less happy about referring to "entropy" in our terminology.
I already knew that notion was a little fuzzy if looked at
too closely, and I gained a better appreciation of how it's
contributed to confusion in discussing password policies and
to adoption of perverse policies that favor "Password1!" over
"derived unusual ravioli raft". So, "guesses" it is.
And although the log is handy for some analysis purposes
(certainly for a graph like those in the zxcvbn paper), it adds
a layer of abstraction, and I think makes it harder to think
clearly about attacks, especially in the online setting. So
just use the actual number, and if someone wants to set a
gigantic value, they will have the pleasure of seeing just
how many digits are involved.
(Thanks to @YJDave for a prototype that the code changes in this
commit are based on.)
We now return user_ids for subscribers to streams in add-stream
events. This allows us to eliminate the UserLite class for
both bulk adds and bulk removes. It also simplifies some JS
code that already wanted to use user_ids, not emails.
Fixes #6898
This test suite works by using the expected_output and new text_output
fields in the bugdown test cases to verify that each syntax is
correctly translated by this new function.
Some of these translations, like strikethrough, are kinda poor; but
this framework should make it easy to iterate on the formatting.
Fixes: #6720.
It's fairly difficult to debug tests that use
EventsRegisterTest.do_test, and when they fail on
Travis, it's particularly challenging. Now we make
the main diff less noisy, and we also include
the events that were applied.
This reverts commit ba8dc62132.
As best I can tell, the old configuration was correct for what Django
wanted. Further testing is required, but this at least brings
.tx/config to match the actual filenames; I think our Chinese
translations have been broken until now.
This endpoint is part of the old tutorial, which we've removed, and
has some security downsides as well.
This includes a minor refactoring of the tests.
Sort of a hacky hammer, but
* The original design of the analytics system mistakenly attempted to play
nicely with non-UTC datetimes.
* Timezone errors are really hard to find and debug, and don't jump out that
easily when reading code.
I don't know of any outstanding errors, but putting a few "assert this
timezone is in UTC" around will hopefully reduce the chance that there are
any current or future timezone errors.
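A minimal version of such an assertion helper, using only the standard library:

```python
from datetime import datetime, timedelta

def assert_timezone_aware_utc(dt: datetime) -> None:
    # "Assert this timezone is in UTC": the datetime must be aware
    # and have a zero UTC offset.
    assert dt.tzinfo is not None and dt.utcoffset() == timedelta(0), \
        '%r is not a timezone-aware UTC datetime' % (dt,)
```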
Note that none of these functions are called outside of the analytics code
(and tests). This commit also doesn't change any current behavior, assuming
a database where all datetimes have been stored in UTC.
Previously, entering a non-UTC end time for a daily stat would give you
incorrect results. This is because:
* All daily stats are collected at and have end_times in the database in
midnight UTC.
* For daily stats, time_range returns a list of datetimes at midnight in the
timezone of its end argument. These datetimes are the only ones we look
for when looking for rows corresponding to the stat in the database.
* Previously, we passed on the end argument from the API to time_range,
without modification.
The logic to apply events to page_params['unread_msgs'] was
complicated due to the aggregated data structures that we pass
down to the client.
Now we defer the aggregation logic until after we apply the
events. This leads to some simplifications in that codepath,
as well as some performance enhancements.
The intermediate data structure has sets and dictionaries that
generally are keyed by message_id, so most message-related
updates are O(1) in nature.
Also, by waiting to compute the counts until the end, it's a
bit less messy to try to keep track of increments/decrements.
Instead, we just update the dictionaries and sets during the
event-apply phase.
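A sketch of the intermediate structure (keys and field names illustrative):

```python
# Sets and dicts keyed by message id make most event updates O(1).
state = {
    'unmuted_stream_msgs': set(),   # unread stream message ids
    'stream_dict': {},              # message_id -> (stream_id, topic)
    'pm_dict': {},                  # message_id -> sender info
    'mentions': set(),              # unread message ids with mentions
}

# e.g., marking messages as read during apply_events:
for message_id in event['messages']:
    state['unmuted_stream_msgs'].discard(message_id)
    state['mentions'].discard(message_id)

# The aggregated counts the client wants are computed once, at the end.
```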
This change also fixes some corner cases:
* We now respect mutes when updating counts.
* For message updates, instead of bluntly updating
the whole topic bucket, we update individual
message ids.
Unfortunately, this change doesn't seem to address the pesky
test that fails sporadically on Travis, related to mention
updates. It will change the symptom, slightly, though.
Instead of using the `unified_reactions` mapping, start using the
`name_to_codepoint` mapping for converting emoji names to
codepoints. We were using `unified_reactions` because, prior to
the emoji web PR, `name_to_codepoint` was generated from
emoji_map.json, which contained old codepoints, whereas reactions
required the new codepoints in order to be displayed using sprite
sheets.
Create a new custom email backend which automatically logs the
emails sent in the dev environment, as well as printing a
friendly message in the console to visit /emails for accessing
all the emails that are sent in the dev environment.
Since django.core.mail.backends.console.EmailBackend is no longer
used, emails will no longer be printed to the console.
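A sketch of such a backend (the `log_email` helper is an assumption):

```python
from django.core.mail.backends.base import BaseEmailBackend

class EmailLogBackEnd(BaseEmailBackend):
    def send_messages(self, email_messages):
        for message in email_messages:
            # Hypothetical helper: appends the message to the log
            # that the /emails page serves.
            log_email(message)
        print("Emails sent in the dev environment are logged; "
              "visit /emails to view them.")
        return len(email_messages)
```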
We now do push notifications and missed message emails
for offline users who are subscribed to the stream for
a message that has been edited, but we short circuit
the offline-notification logic for any user who presumably
would have already received a notification on the original
message.
This effectively boils down to sending notifications to newly
mentioned users. The motivating use case here is that you
forget to mention somebody in a message, and then you edit
the message to mention the person. If they are offline, they
will now get pushed notifications and missed message emails,
with some minor caveats.
We try to mostly use the same techniques here as the
send-message code path, and we share common code with the
send-message path once we get to the Tornado layer and call
maybe_enqueue_notifications.
The major places where we differ are in a function called
maybe_enqueue_notifications_for_message_update, and the top
of that function short circuits a bunch of cases where we
can mostly assume that the original message had an offline
notification.
We can expect a couple changes in the future:
* Requirements may change here, and it might make sense
to send offline notifications on the update side even
in circumstances where the original message had a
notification.
* We may track more notifications in a DB model, which
may simplify our short-circuit logic.
In the view/action layer, we already had two separate codepaths
for send-message and update-message, but this mostly echoes
what the send-message path does in terms of collecting data
about recipients.
Postgres doesn't like them, we don't have an obvious way to escape
them, and they tend to be sent by buggy tools where it'd be better for
the user to get an error.
This fixes a 500 we were getting occasionally.
This commit switches to use sprite sheets for rendering emojis
in all the remaining places, i.e., message bodies and composebox
typeahead. This commit also includes some changes to notifications.py
file so that the spans used for rendering emojis can be converted
to corresponding image tags so that we don't break the emoji rendering
in missed message emails since we can't use sprite sheets there.
As part of switching the bugdown system to use sprite sheets, we need
to switch the name_to_codepoint mappings to match the new sprite
sheets. This has the side effect of fixing a bunch of emoji like
numbers and flag emoji in the emoji pickers.
Fixes: #3895.
Fixes: #3972.
This should make it a little easier to understand our logging config
and make changes to it with confidence.
Many of these items that are now redundant used to be required when we
were setting disable_existing_loggers to True (before 500d81bf2), in
order to exempt those loggers from being cleared out. Now they're not.
One bit of test code needed a tweak to how it got its hands on the
AdminZulipHandler instance; it can do it from the list on the root
logger just as well as on the `django` logger.
Most of the paths leading through this except clause were cut in
73e8bba37 "ldap auth: Reassure django_auth_ldap". The remaining one
had no test coverage -- the case that leads to it had a narrow unit
test, but no test had the exception actually propagate here. As a
result, the clause was mistakenly cut, in commit
8d7f961a6 "LDAP: Remove now-impossible except clause.", which could
lead to an uncaught exception in production.
Restore the except clause, and add a test for it.
This is just enough of a quick fix to work with a stock Zulip 1.6
server. We should really also make this robust to arbitrary input
from the remote Zulip server, even though it'll be a little tedious.
The dictionary result for get_user_info_for_message_updates()
now has a `mention_user_ids` field that is a set of user ids
who were mentioned in a message.
This checks what arguments it passes into the enqueuing function.
Note, however, that the arguments are wrong for various cases; we'll
update the tests as we fix those bugs.
This ensures that as we expand the logic for under what circumstances
email and push notifications should be sent, we can be confident about
this code path always doing the right thing.
This makes GoogleSubdomainLoginTest consistently access subdomains the
standard way, replacing the original hacky approach it had that
predated the library.
There are several reasons to extract this function:
* It's easy to unit test without extensive mocking.
* It will show up when we profile code.
* It is something that you can mostly ignore for
most messages.
The main reason to extract this, though, is that we are about
to do some fairly complex splicing of data for the use case
of mentioning service bots on streams they are not subscribed to,
and we want to localize the complexity.
The plan is to have everything expect subdomains, so it makes sense to
move these tests to the subdomains-only test class and style.
Most of the remaining GoogleLoginTest tests are now either duplicates
or basic API-level tests where subdomains are irrelevant.
Previously, this accessed realm.uri via trying to use
zulip_default_context. That doesn't make any sense, because
zulip_default_context expects an HttpRequest object, and those are
nowhere in sight in the code path. We do, however, have the outgoing
webhook bot user involved in the event, and that's the object to
access realm.uri from here.
These arguments are only intended to be used for realm creation, and
they make the code more confusing.
We need to make a few changes after doing this, because some tests
were relying on these extra arguments causing the form to not submit
for their error handling.
We don't apply these changes to the LDAP tests, since fixing those
seems complicated.
This commit implements support for rendering static files in
under static/generated/bots/ in the same manner as we render
our webhooks/integration documentation. Said static files are
generated by tools/setup/generate_zulip_bots_static_files.py
during provisioning.
This commit implements support for copying over static files
for all bots in the zulip_bots package to
static/generated/bots/ during provisioning. This directory
isn't tracked by Git. This allows us to have access to files
stored in an arbitrary zulip_bots package directory somewhere
on the system. For now, logo.* and doc.md files are copied over.
This commit should act as a starting point for extending our
macro-based Markdown framework to our bots/API packages'
documentation and eventually rendering these static files
alongside our webhooks' documentation.
This enforces our use of a consistent style in how we access Python
modules; "from os.path import dirname" is a particularly popular
abbreviation inconsistent with our style, and so it deserves a lint
rule.
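In a pattern-based lint framework, such a rule is roughly (a sketch; the exact rule format is an assumption):

```python
# Flag "from os.path import ..." in favor of "import os" + os.path.*.
python_rules = [
    {'pattern': r'^from os.path import',
     'description': "Use 'import os' and call os.path.dirname(...) "
                    "instead of importing names from os.path."},
]
```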
Commit message and error text tweaked by tabbott.
Fixes #6543.
Previously, invitation reminder emails were only being cleared after a
successful signup if newsletter_data was available, since that was the
circumstance in which we were calling the relevant queue processor
code. Now, we (1) clear them when a human user finishes signing up
and (2) correctly clear them using the 'address' field of
ScheduledEmail, not user_id.
We don't need full Realm objects to find DefaultStream
objects for a realm. So now a few functions related to
adding/removing default streams use realm_id for lookups.
Similarly, we don't need a full Stream object to find
out if a stream exists in DefaultStream, so we do id
lookups there as well.
This sets us up to use thinner objects in callers.
We now have a dedicated cache for active_user_ids() that only
stores a list of user_ids.
Before this commit, active_user_ids() used a cache of UserProfile
dictionaries, so it incurred unnecessary deserialization costs for
all the user fields that it sliced away in a list comprehension.
Because the cache is skinnier here, we also need to invalidate it
less frequently. Basically, all we care about is new users, realm
deactivations, and user deactivations.
It's hard to measure how much this will improve performance, because
the speedup for any operation here is pretty minor, but we use this
function a lot, so hopefully it will make the overall system more
healthy.
This is mostly a preparatory commit for an upcoming optimization
related to stream data, but it probably does save us an
occasional DB hop to the realm table.
Previously, this was its own separate test script; now it's a normal
part of the test suite.
Tweaked by tabbott to use a proper test method.
Fixes #6327.
Previously, the bot domain was calculated correctly in most
circumstances, but if you were using the root domain, it would be
e.g. ".chat.zulip.org", not "chat.zulip.org". We fix this, with
perhaps more use of setting REALMS_HAVE_SUBDOMAINS than would be ideal
if we weren't about to set that True unconditionally.
This class encapsulates the mapping of stream ids to
recipient ids, and it is optimized for bulk use and
repeated use (i.e. it remembers values it already fetched).
This particular commit barely improves the performance
of gather_subscriptions_helper, but it sets us up for
further optimizations.
Long term, we may try to denormalize stream_id on to the
Subscriber table or otherwise modify the database so we
don't have to jump through hoops to do this kind of mapping.
This commit will help enable those changes, because we
isolate the mapping to this one new class.
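A sketch of the class (method names assumed), showing the bulk fetch and memoization:

```python
from typing import Dict, List

class StreamRecipientMap:
    def __init__(self) -> None:
        self.stream_to_recip: Dict[int, int] = {}

    def populate_for(self, stream_ids: List[int]) -> None:
        # Only hit the database for ids we haven't seen yet.
        new_ids = [sid for sid in stream_ids
                   if sid not in self.stream_to_recip]
        if not new_ids:
            return
        rows = Recipient.objects.filter(
            type=Recipient.STREAM, type_id__in=new_ids,
        ).values('id', 'type_id')
        for row in rows:
            self.stream_to_recip[row['type_id']] = row['id']

    def recipient_id_for(self, stream_id: int) -> int:
        return self.stream_to_recip[stream_id]
```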
This commit enables users to authenticate with any attribute set in
AUTH_LDAP_USER_SEARCH, given that LDAP_EMAIL_ATTR is set to an email
attribute in the LDAP server. Thus email and username can be
completely unrelated.
With some tweaks by tabbott to squash in the documentation and make it
work on older servers.
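For example, an illustrative configuration using django-auth-ldap's standard API (the DN and attribute names are placeholders):

```python
import ldap
from django_auth_ldap.config import LDAPSearch

# Users log in with a uid that need not resemble their email...
AUTH_LDAP_USER_SEARCH = LDAPSearch(
    "ou=users,dc=example,dc=com",
    ldap.SCOPE_SUBTREE,
    "(uid=%(user)s)",
)
# ...while Zulip reads the account's email from this LDAP attribute.
LDAP_EMAIL_ATTR = "mail"
```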
We now triage message content for possible mentions before
going to the cache/DB to get name info. This will create an
extra data hop for messages with mentions, but it will save
a fairly expensive cache lookup for most messages. (This will
be especially helpful for large realms.)
[Note that we need a subsequent commit to actually make the speedup
happen here, since avatars also cause us to look up all users in
the realm.]
This commit makes get_recipient_info() faster by never creating
Django ORM objects. We use the ORM to create a values query
instead, and then we iterate over the rows to create various
collections of ids.
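A sketch of the shape of that query (the field list is abbreviated and partly assumed):

```python
# One values() query; no ORM objects are instantiated.
rows = UserProfile.objects.filter(id__in=user_ids).values(
    'id', 'is_active', 'is_bot', 'bot_type',
    'long_term_idle', 'enable_online_push_notifications',
)
active_user_ids = {row['id'] for row in rows if row['is_active']}
push_notify_user_ids = {
    row['id'] for row in rows
    if row['enable_online_push_notifications']
}
```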
In order to avoid lots of code duplication, this commit unifies
how we query UserProfile for PMs and streams. Prior to this
commit we were getting "wide" UserProfile objects out of
our memcached cache. Now we just go to the database with our
list of userids. The new approach at worst adds one hop to the
database for PMs, which aren't really a performance bottleneck
(compared to streams). And the new approach actually saves a
hop when both partners aren't in cache (plus we don't pay the
penalty of hitting the cache itself).
The performance improvement here is easy to measure for messages
to streams with many users, even with all the other activity
that goes on inside do_send_messages(). I took test_performance()
in test_messages.py, set num_extra_users to 3000, and consistently
measured a ~20% speedup in do_send_messages().
This commit also eliminates fetching of emails. We probably
could have done that in a prior commit, but in this commit it
is very explicit that we don't need it. While removing email
from the query is a no-brainer, it actually had a negligible
impact on performance. Almost all the savings here comes from
not creating UserProfile objects.
There is no reason for either render_incoming_message() or
render_markdown() to require full UserProfile objects just to
triage alert words.
By only asking for user_ids, we save extra queries in two
callpaths and we make it easier to start using user_ids in
do_send_messages().
The commit() call in fix() breaks migrations and tests (unless you
mock) due to outer transactions.
We now explicitly call commit() from the management command.
Usually a small minority of users are eligible to receive missed
message emails or mobile notifications.
We now filter users first before hitting UserPresence to find idle
users. We also simply check for the existence of recent activity
rather than borrowing the more complicated data structures that we
use for the buddy list.
This commit completely switches us over to using a
dedicated model called MutedTopic to track which topics
a user has muted.
This includes the necessary migrations to create the
table and populate it from legacy data in UserProfile.
A subsequent commit will actually remove the old field
in UserProfile.
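The dedicated model is roughly (a sketch; the exact fields and constraints are assumptions):

```python
class MutedTopic(models.Model):
    user_profile = models.ForeignKey(UserProfile, on_delete=models.CASCADE)
    stream = models.ForeignKey(Stream, on_delete=models.CASCADE)
    topic_name = models.CharField(max_length=MAX_TOPIC_NAME_LENGTH)

    class Meta:
        # One row per (user, stream, topic) mute.
        unique_together = ('user_profile', 'stream', 'topic_name')
```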
Instead of peeking directly at the DB to verify our mutes are
set correctly, we now use the library function. This prepares
us to modify the DB internals while preserving the tests.
Use this new variable to determine whether the user already exists
during registration. When logging in through GitHub, if we press
*Go back to login*, we pass the email using the `email` variable. As a
result, the login page starts showing the "user already exists" error
if we don't change the variable.
Admins need to know about private streams to delete them, even
if they are not subscribed. We send the minimal info possible
to the client to allow them to have a UI for that.