This includes:
* Merging the pull request and issue subject creation functions
* Factoring out pull request and issue content creation
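A rough sketch of the resulting shape (function names and formatting here are illustrative, not the actual integration code):
    def github_subject(repo, noun, number, title):
        # One subject builder shared by pull requests and issues.
        return '%s: %s %d: %s' % (repo, noun, number, title)
    def github_content(user, action, url, description):
        # One content builder shared by pull requests and issues.
        content = '%s %s [%s](%s)' % (user, action, url, url)
        if description:
            content += '\n\n> %s' % (description,)
        return content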
(imported from commit 9cf90e999482a1998431e6483788522101607167)
It makes bootstrap look up the href as a DOM element, which caused
a browser error because the path is not a CSS selector.
(imported from commit 196a5983ae6a31716b14ae577239fe7ab1416226)
Some bots created by us do not have owners. Don't try to send a
message to the nonexistent owner.
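Roughly the guard being added (the helper and surrounding names are assumptions):
    # Bots we create ourselves can have bot_owner set to None.
    if bot_profile.bot_owner is None:
        return
    notify_bot_owner(bot_profile.bot_owner, message)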
(imported from commit ab952eccd7d6c4728e9477a106142214b5c81ca9)
We were previously using the true count of unread messages for HTML
emails but the number of conversations for plaintext. Make them
consistent.
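In other words, both bodies should be rendered from the same number, along these lines (template and variable names are assumptions, not the actual ones):
    from django.template import loader
    context = {'unread_count': unread_message_count}
    text_body = loader.render_to_string('missed_message_email.txt', context)
    html_body = loader.render_to_string('missed_message_email.html', context)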
(imported from commit efb140abcc95faf00631c03580f518ca4d8ef58c)
Instead just rely on the 2-minute delay in the management command to
batch conversations.
We've had people report being confused or thinking the feature was
broken when they didn't get e-mails because of our rate-limiting, so
let's see whether this ends up being too overwhelming.
(imported from commit 706ddb07b906b5c2edea1159c04acc2ee6f06e29)
Warn inside these functions when you get data on streams that you
are not subscribed to:
add_subscriber
remove_subscriber
user_is_subscribed
The back end should be smart enough not to spam us with subscriber
info that we don't care about.
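The guard looks something like this Python-style sketch (these are frontend functions, so this is only a sketch of the pattern, and the lookup helper is an assumption):
    def add_subscriber(stream_name, user_id):
        sub = get_sub(stream_name)
        if sub is None or not sub.subscribed:
            # The server shouldn't be sending us subscriber data for this stream.
            report_unexpected_data('subscriber info for unsubscribed stream ' + stream_name)
            return
        sub.subscribers.add(user_id)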
(imported from commit b27644be2abc37c11ddff884ef392ea208bd1bd3)
Don't send peer_add notifications to users who are already
getting add notifications, because they will already know
about subscribers.
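Concretely, the selection looks something like this sketch (the notification helper and variable names are assumptions):
    # Users in the bulk add get the full 'add' event; only the rest of the
    # stream's subscribers need the lighter 'peer_add' notification.
    peer_user_ids = set(current_subscriber_ids) - set(new_user_ids)
    for user_id in peer_user_ids:
        notify_peer_add(user_id, stream_name, new_user_ids)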
(imported from commit 726b54ae0e30b71440b17d9c51b026872ea96218)
It only grabs the user_profile_id column now. Fetching small
1-column dictionaries instead of large ORM objects gives a
speedup of about 16x.
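With the Django ORM, that amounts to something like this sketch (the exact filter at the call site, and the recipient variable, are assumed):
    from zerver.models import Subscription
    # Fetch only the one column as small dictionaries instead of materializing
    # full Subscription model instances.
    rows = Subscription.objects.filter(recipient=recipient, active=True).values('user_profile_id')
    user_ids = [row['user_profile_id'] for row in rows]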
(imported from commit 95150bff3fdcbe250b04f014062224af42a6644f)
Splitting out notify_peers() will give us flexibility for cleaning
up how we notify peers for bulk adds.
(imported from commit e108fa2c432cc1fe54d788c58c82c983e0f2394e)
If you expand subscribers on your settings page, you will now see
a query like this in your postgres logs:
SELECT "zerver_userprofile"."email"
FROM "zerver_subscription" INNER JOIN "zerver_recipient" ON ("zerver_subscription"."recipient_id" = "zerver_recipient"."id") INNER JOIN "zerver_userprofile" ON ("zerver_subscription"."user_profile_id" = "zerver_userprofile"."id") WHERE ("zerver_recipient"."type" = 2 AND "zerver_subscription"."active" = true AND "zerver_recipient"."type_id" = 40 AND "zerver_userprofile"."is_active" = true )
The join's still complicated, but the list of fields is one instead of 40+.
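For reference, the ORM call that generates a query of that shape looks roughly like this (a sketch; stream_id stands in for the 40 above, and the real call site may differ):
    from zerver.models import Recipient, Subscription
    Subscription.objects.filter(
        recipient__type=Recipient.STREAM,
        recipient__type_id=stream_id,
        active=True,
        user_profile__is_active=True,
    ).values('user_profile__email')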
(imported from commit 48de1f888193a4d23fcea52d0b633d134e4a3ff7)
get_subscribers_backend() now calls the new get_subscriber_emails()
function, which just queries the email field:
"zerver_userprofile"."email"
...instead of querying about 40 fields that it never uses.
I was able to verify the query slimming by watching my postgres server log.
Also, you can verify that the ORM does roughly 16x less work using values():
>>> import time
>>> from zerver.models import Subscription
>>> def f(): return [sub.user_profile.email for sub in list(Subscription.objects.all().select_related())]
...
>>> def g(): return [row['user_profile__email'] for row in list(Subscription.objects.all().values('user_profile__email'))]
...
>>> def timeit(func): t = time.time(); func(); return time.time() - t
...
>>> timeit(f)
0.045198917388916016
>>> timeit(g)
0.002752065658569336
(imported from commit a69f690a96d076b323fdfc2f4821b0548bdfac7f)
LinkPattern returned a string which contained a placeholder if the URL was
considered invalid. AtomicLinkPattern wrapped this in an AtomicString,
where the placeholder doesn't get removed properly.
m.group(0) is always incorrect because python-markdown modifies your regex
to include more than you specified (this is why part of the message got
duplicated).
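You can see the effect with a plain re experiment; the wrapping template below is an approximation of what python-markdown 2.x does to inline patterns, not a copy of its source:
    import re
    my_pattern = r'\[(?P<text>[^\]]*)\]\((?P<url>[^\)]*)\)'
    # python-markdown compiles your pattern wrapped in extra catch-all groups:
    wrapped = re.compile('^(.*?)%s(.*?)$' % my_pattern, re.DOTALL)
    m = wrapped.match('before [link](http://example.com) after')
    m.group(0)       # the whole line, not just the link -- hence the duplication
    m.group('text')  # 'link'; named groups still point at just your pattern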
(imported from commit 576bdf09c2b677cf4bc56484c363eb05f2110158)