Arguably the nl2br extension should be doing this for us. Given that
we're using nl2br, the "two spaces at the end of a line makes a line
break" rule doesn't make any sense (since every newline leads to a
linebreak), so we disable it.
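Roughly what the change amounts to, sketched against Python-Markdown 2.x
internals (the "linebreak" pattern name is an assumption about that library,
not something defined by this commit):

    import markdown

    # Sketch only: with nl2br enabled every newline already becomes a <br/>,
    # so the "two trailing spaces" hard-break pattern is redundant; drop it.
    # "linebreak" is the name Python-Markdown 2.x registers for that pattern.
    md = markdown.Markdown(extensions=["nl2br"])
    if "linebreak" in md.inlinePatterns:
        del md.inlinePatterns["linebreak"]

    print(md.convert("first line\nsecond line"))  # the newline still renders as <br/>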
(imported from commit 5ffa2ac8a825642ad31e085c532091e076665710)
This fixes the following two closely related tabs:
Integrations by domain
Integrations by client
They now blacklist clients instead of whitelisting them, so
we can see newcomers like Hubot and Giphy bot. Our naming
convention still leaves a lot to be desired.
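The gist of the filtering change, as an illustrative sketch (the names below
are made up, not the real stats code):

    # Hypothetical sketch: exclude the clients we don't care about instead of
    # enumerating the ones we do, so new integrations show up automatically.
    EXCLUDED_CLIENTS = {"website", "desktop app", "API"}

    def integration_counts(counts_by_client):
        # counts_by_client: dict mapping client name -> message count
        return {
            client: count
            for client, count in counts_by_client.items()
            if client not in EXCLUDED_CLIENTS
        }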
(imported from commit 66cbd07160d93e4b745a1439261330d854700a5c)
There were a couple of bugs in the security checks that resulted in
IRC mirroring of stream messages not working.
(imported from commit 31ac732461a733c1c993f77356053d4f88c67177)
Use the new count_full_messages_between instead of subtraction in
message_list_view.append. The subtraction returned a count higher than
it should have been when summarized messages were present, which meant
that, under certain conditions, new messages weren't added until the
pointer moved.
(imported from commit c10d9c1a0d23891acce88bf8d79866c08cb75681)
Previously we were having the database do the matching on sender email
address, which resulted in an unnecessary join.
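Roughly the shape of the change, sketched with Django ORM calls (the model
and field names mirror Zulip's but the snippet is illustrative, and it
assumes a configured Django project):

    # Before (assumed): matching on the sender's email joins zerver_userprofile.
    messages = Message.objects.filter(sender__email__iexact=email)

    # After (assumed): resolve the sender once, then filter on the foreign key,
    # so the message query touches only zerver_message.
    sender = UserProfile.objects.get(email__iexact=email)
    messages = Message.objects.filter(sender_id=sender.id)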
(imported from commit 70bf791a00b7d5965ef977e45b4a0eccbd3402a0)
Currently, code blocks end up with scrollbars annoyingly often --
even with a maximum-width window, you can't fit a standard 80-character
terminal's worth of text without needing a scrollbar. This change
makes our code block text the same size as normal text and inline
code blocks.
(imported from commit c2fc7e008cc514e90387f8f0db2b49e357cf4f62)
I believe that with this change the log lines will fit much better into
Zulip, and I suspect the Client string was rarely important for
responding to slow queries (and it is always available in the main log
anyway).
(imported from commit ad56f446bf3fb96a14a56b825f46c1dad9b6babe)
Summary blocks can contain hundreds of messages. Because the rendering
window code didn't take this into account, it led to all kinds of
unpleasant behavior when you scrolled.
Trac #1888
Unfortunately, this replaces a subtraction with a function that iterates
through all the messages.
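A rough sketch of what that counting helper does (Python pseudocode for the
JavaScript in message_list_view; the in_summary flag is an assumption):

    def count_full_messages_between(messages, start, end):
        # Count only messages rendered in full; messages collapsed into a
        # summary block don't occupy rows of their own, so plain index
        # subtraction (end - start) overcounts whenever summaries are present.
        return sum(1 for message in messages[start:end] if not message.get("in_summary"))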
(imported from commit 9259a246946cd968a8725c38ff5ef2d4b4793717)
* Disable for search-like narrows (whitelist stream and home instead of
blacklisting topics and PMs)
* Use home view summarization flag for All Messages
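In pseudocode, the whitelist described above looks something like this (the
function and flag names are hypothetical):

    def should_summarize(narrow, summarize_home_view):
        # All Messages / home view: honor the home view summarization flag.
        if not narrow:
            return summarize_home_view
        # Only plain stream narrows qualify; anything involving search, topic,
        # pm-with, etc. operators is search-like and never summarized.
        return {operator for operator, _ in narrow} == {"stream"}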
(imported from commit 48bd10ae5da7c7564c2efe86a40078f1a7e96e20)
The queued email gets deleted if the user signs up before it gets sent.
Otherwise, they are reminded in 2 days that they still haven't signed up.
This addresses Trac #1812.
(imported from commit c1bdc09c03ac576b08986e56994de72d52fd293b)
clear_followup_emails_queue now filters by from_email too
send_local_email_template_with_delay passes the template_payload into the subject template
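A loose sketch of the first point (the queue representation here is invented
purely for illustration):

    def clear_followup_emails_queue(to_email, from_email, queue):
        # Drop only the follow-up reminders addressed to this user from this
        # sender; other queued mail stays untouched.
        queue[:] = [
            job for job in queue
            if not (job["to_email"] == to_email and job["from_email"] == from_email)
        ]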
(imported from commit 8044fe2ebad90a9d6d5c67cdfdd08801760fd7f7)
The current version should only be used for testing; for example,
if you want to create a bunch of streams for stress testing, you
can run this in a loop.
(imported from commit ec51a431fb9679fc18379e4c6ecdba66bc75a395)
We need to run the schema migration manually using
"CREATE INDEX CONCURRENTLY upper_stream_name_idx ON zerver_stream ((upper(name)));"
since we need CONCURRENTLY and I seem to recall that doesn't work with South.
This significantly improves the uncached performance of get_stream()
(e.g. from 32ms to 9ms). At present, this codepath is not used
particularly heavily since we do cache the stream names and do most of
our filtering by recipient ID, but the index isn't expensive and does
provide a significant improvement in the uncached case.
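For context on why a plain index on name wouldn't help here: a
case-insensitive lookup compiles to a comparison on upper(name), which only
an expression index can serve. Sketched with the ORM (assuming get_stream()
boils down to an iexact filter):

    # A case-insensitive lookup like this...
    stream = Stream.objects.get(realm=realm, name__iexact=stream_name)
    # ...emits SQL along the lines of
    #   SELECT ... FROM zerver_stream
    #    WHERE realm_id = %s AND UPPER(name::text) = UPPER(%s)
    # which can use upper_stream_name_idx but not an ordinary index on name.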
(imported from commit 4d28dc2e9a02d0602861b165393d90ed18f5f4c8)
We need to run the schema migration manually using
"CREATE INDEX CONCURRENTLY upper_subject_idx ON zerver_message ((upper(subject)));"
since we need CONCURRENTLY and I seem to recall that doesn't work with South.
Apparently our existing indexes on subject/topic weren't being used in
our narrowing queries, because we do case-insensitive search.
This substantially improves our database performance around
stream+topic narrows. See before and after query plans below from my
test instance.
humbug=# explain analyze SELECT "zerver_message"."id" FROM "zerver_message" WHERE ("zerver_message"."recipient_id" = 38 AND UPPER(zerver_message.subject) = 'TEST' AND "zerver_message"."id" <= 348495 ) ORDER BY "zerver_message"."id" DESC LIMIT 50;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=13510.61..13510.71 rows=41 width=4) (actual time=32.952..32.958 rows=2 loops=1)
   ->  Sort  (cost=13510.61..13510.71 rows=41 width=4) (actual time=32.946..32.947 rows=2 loops=1)
         Sort Key: id
         Sort Method: quicksort  Memory: 25kB
         ->  Bitmap Heap Scan on zerver_message  (cost=237.99..13509.51 rows=41 width=4) (actual time=2.357..32.912 rows=2 loops=1)
               Recheck Cond: (recipient_id = 38)
               Filter: ((id <= 348495) AND (upper((subject)::text) = 'TEST'::text))
               ->  Bitmap Index Scan on zephyr_message_recipient_id  (cost=0.00..237.98 rows=8221 width=0) (actual time=1.178..1.178 rows=10354 loops=1)
                     Index Cond: (recipient_id = 38)
 Total runtime: 33.049 ms
(10 rows)
humbug=# explain analyze SELECT "zerver_message"."id" FROM "zerver_message" WHERE ("zerver_message"."recipient_id" = 38 AND UPPER(zerver_message.subject) = 'TEST' AND "zerver_message"."id" <= 348495 ) ORDER BY "zerver_message"."id" DESC LIMIT 50;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=435.11..435.22 rows=41 width=4) (actual time=4.998..4.999 rows=2 loops=1)
   ->  Sort  (cost=435.11..435.22 rows=41 width=4) (actual time=4.997..4.997 rows=2 loops=1)
         Sort Key: id
         Sort Method: quicksort  Memory: 25kB
         ->  Bitmap Heap Scan on zerver_message  (cost=275.63..434.02 rows=41 width=4) (actual time=4.981..4.984 rows=2 loops=1)
               Recheck Cond: ((upper((subject)::text) = 'TEST'::text) AND (recipient_id = 38))
               Filter: (id <= 348495)
               ->  BitmapAnd  (cost=275.63..275.63 rows=41 width=0) (actual time=4.954..4.954 rows=0 loops=1)
                     ->  Bitmap Index Scan on upper_subject_idx  (cost=0.00..37.38 rows=1744 width=0) (actual time=2.972..2.972 rows=27457 loops=1)
                           Index Cond: (upper((subject)::text) = 'TEST'::text)
                     ->  Bitmap Index Scan on zephyr_message_recipient_id  (cost=0.00..237.98 rows=8221 width=0) (actual time=0.855..0.855 rows=10354 loops=1)
                           Index Cond: (recipient_id = 38)
 Total runtime: 5.049 ms
(13 rows)
(imported from commit 1f4815ccb0691053ff8d505149482dbc74153fb3)
Don't warn when @-mentioning a bot that does not appear to be
subscribed to the public stream you're sending to. It may be receiving
those messages anyway.
(imported from commit 4a00694942a721897a01736f48033c71048e0b16)
It makes the event queue return all messages on public streams, rather
than only messages on the streams the user is subscribed to. It's meant
for use with chat bots.
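For example, a bot might register its event queue along these lines
(all_public_streams and event_types are parameters to the register endpoint;
treat the exact request shape here as an assumption):

    import requests

    # Hedged example: ask for every message sent to a public stream, not just
    # messages on streams the bot is subscribed to.
    response = requests.post(
        "https://zulip.example.com/api/v1/register",
        auth=("mybot@example.com", "BOT_API_KEY"),
        data={"event_types": '["message"]', "all_public_streams": "true"},
    )
    queue_id = response.json()["queue_id"]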
(imported from commit 12d7e9e9586369efa7e7ff9eb060f25360327f71)
This doesn't address the more complicated case of someone @-mentioning
you on a muted topic, which the consensus is you do want to be told
about, but we need to develop some infrastructure to present that case
to users clearly.
(imported from commit a4bc1e89c108fa8ba6eccc0a198eabf2231326ab)
By far the most common case for get_old_messages is the home view
loading queries, for which we have raw queries. This patch substantially
improves those queries using the observation that we weren't actually
using the zerver_message table that we were joining with.
I actually expect this to result in a noticeable performance
improvement for loading of the homepage.
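Schematically (a simplified illustration, not the actual raw query), the
home view fetch only needs the per-user rows, so the join can be dropped:

    # Illustrative only: everything the home view needs lives on
    # zerver_usermessage, so joining zerver_message was pure overhead.
    HOME_VIEW_QUERY = """
        SELECT message_id, flags
          FROM zerver_usermessage
         WHERE user_profile_id = %s AND message_id <= %s
         ORDER BY message_id DESC
         LIMIT %s
    """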
(imported from commit 12807e5a74eb63275b2523a5f62fd901ab632f0f)
Deployment instructions: I think all the queue workers get
restarted automatically, so there is probably nothing special
to do here in the deploy itself, but we will want to monitor
it closely, and the change should make our number of locks go
down.
QueueProcessingWorker.start() now calls consume_and_commit(),
which ensures that we don't hold locks after work actions
by using Django's commit_on_success() decorator.
Obviously, workers that override start() will not call consume_and_commit()
through this code path. SlowQueryWorker calls commit_on_success()
in its start() method now, and I hope to address MissedMessageWorker soon.
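The core of consume_and_commit() is roughly this (a sketch against Django
1.5's transaction API, not the literal worker code):

    from django.db import transaction

    class QueueProcessingWorker:
        def consume(self, *args, **kwargs):
            raise NotImplementedError  # each worker implements its own consume()

        def consume_and_commit(self, *args, **kwargs):
            # Wrap each unit of work in commit_on_success so the transaction
            # (and any row locks it holds) is closed as soon as the event has
            # been processed, instead of lingering until the next query.
            with transaction.commit_on_success():
                self.consume(*args, **kwargs)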
(imported from commit f3f38a7f45730eee8f3b5794371ba5b994017676)