This has a small bug where we don't actually filter the message out of
the home view; fixing that requires adding an index on the "flags"
field of UserMessage.
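If that ends up as a manual schema change like the ones later in this log, it
would presumably be something along the lines of
"CREATE INDEX CONCURRENTLY zerver_usermessage_flags_idx ON zerver_usermessage (flags);"
(the index and table names here are guesses based on Django's naming, not the
actual migration).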
(imported from commit 492c99d0a8e87b253e577be6564bec12099bd8e9)
There seems to be some sort of bug involving PhantomJS and XHR
streaming messages. When successive pages are loaded that use XHR
streaming, PhantomJS seems to think the second one never finishes
loading and therefore hangs.
(imported from commit db93b4cab816f1fdc3f3f543c9394b1cba8abedb)
We really should be setting a variable in JavaScript to indicate that
we've finished loading, but this hasn't bitten us yet.
(imported from commit ee1f7c76d9f3c482561cc5c44b81537c7e9636be)
Because our authentication system reads cookies from the initial
connection attempt, several SockJS transports can't be used.
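As an illustration only (the setting name and transport list are assumptions
about sockjs-tornado's configuration, not what this commit actually does),
transports can be turned off when the router is constructed:

    from sockjs.tornado import SockJSConnection, SockJSRouter

    class EventsConnection(SockJSConnection):
        def on_message(self, msg):
            pass  # placeholder handler for this sketch

    # sketch: drop transports whose requests don't reliably present our
    # session cookie on the initial connection attempt
    router = SockJSRouter(EventsConnection, "/sockjs",
                          user_settings={"disabled_transports":
                                         ["jsonp-polling", "xhr-polling"]})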
(imported from commit 34b9571225d39072985b8223fb12c43c7235841f)
New dependency: sockjs-tornado
One known limitation is that we don't clean up sessions for
non-websocket transports. This is a bug in sockjs-tornado, so I'm going
to look at upgrading us to the latest version:
https://github.com/mrjoes/sockjs-tornado/issues/47
(imported from commit 31cdb7596dd5ee094ab006c31757db17dca8899b)
gather_subscriptions_helper() now does a separate query to
get emails from user_ids, and it returns an email_dict to its
caller.
This may seem like a step backward, since gather_subscriptions()
now needs to do an additional query, but there is some benefit
in passing fewer redundant emails over the wire from the DB.
The real payoff, though, will come in subsequent commits, where
we will reduce the amount of data going over the wire to the browser,
which will benefit users with slow connections.
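A minimal sketch of the extra query (the surrounding names are assumptions;
the id-to-email mapping is the point):

    from zerver.models import UserProfile  # model name assumed from the codebase

    def build_email_dict(user_ids):
        # one query mapping user_ids to emails, instead of dragging the
        # email column through every subscriber row
        return dict(UserProfile.objects.filter(id__in=user_ids)
                                       .values_list("id", "email"))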
(imported from commit bf1cc5828a4c5f68cafd052ea29a177837970206)
I am about to change the behavior of the internal API, and it's really more
important to have test coverage on the external API anyway.
(imported from commit 8a0723cbcb4ac1819a63397584aa40e69ceb827d)
The Mirror and iPhone tabs were either unused or misleading
for realm-specific pages of the /activity report.
(imported from commit 8d0a99eac6657fbfd9e6a32f22739eed66e03fbf)
Arguably the nl2br extension should be doing this for us. Given that
we're using nl2br, the "two spaces at the end of a line makes a line
break" rule doesn't make any sense (since every newline already leads
to a line break), so we disable it.
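A hedged sketch of what disabling that rule can look like with Python-Markdown
(the 'linebreak' pattern name is an assumption about the library version in use):

    import markdown

    md = markdown.Markdown(extensions=["nl2br"])
    # with nl2br, every newline already becomes <br/>, so the
    # "two trailing spaces" hard-break pattern is redundant
    if "linebreak" in md.inlinePatterns:
        del md.inlinePatterns["linebreak"]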
(imported from commit 5ffa2ac8a825642ad31e085c532091e076665710)
This fixes the following two closely related tabs:
Integrations by domain
Integrations by client
They now blacklist clients instead of whitelisting them, so
we can see newcomers like Hubot and Giphy bot. Our naming
convention still leaves a lot to be desired.
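Roughly, the queries behind those tabs switch from an inclusion list to an
exclusion list; a sketch under assumed names (UserActivity and the client
lists here are illustrative, not the actual code):

    from zerver.models import UserActivity  # assumed model name

    # before: only clients we explicitly listed ever showed up
    whitelisted = UserActivity.objects.filter(
        client__name__in=["website", "API", "Android"])
    # after: everything except known-noise clients, so Hubot, Giphy bot,
    # and other newcomers appear without a code change
    blacklisted = UserActivity.objects.exclude(
        client__name__in=["internal", "test suite"])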
(imported from commit 66cbd07160d93e4b745a1439261330d854700a5c)
There were a couple of bugs in the security checks that resulted in
IRC mirroring of stream messages not working.
(imported from commit 31ac732461a733c1c993f77356053d4f88c67177)
Previously we were having the database do the matching on sender email
address, which resulted in an unnecessary join.
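A sketch of the shape of the change (model and field names assumed from the
Django conventions used elsewhere in this log):

    from zerver.models import Message  # assumed model name

    def messages_from(user_profile):
        # before: matching on the sender's email forced a join to zerver_userprofile
        #   Message.objects.filter(sender__email__iexact=user_profile.email)
        # after: we already have the sender, so filter on the foreign key
        # directly, which stays entirely within zerver_message
        return Message.objects.filter(sender_id=user_profile.id)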
(imported from commit 70bf791a00b7d5965ef977e45b4a0eccbd3402a0)
I believe with this change the log lines will fit much better into
Zulip, and I suspect the Client string was rarely important for
responding to slow queries (and it is always available in the main log
anyway).
(imported from commit ad56f446bf3fb96a14a56b825f46c1dad9b6babe)
Summary blocks can contain hundreds of messages. Because the rendering
window code didn't take this into account, scrolling led to all kinds
of unpleasant behavior.
Trac #1888
Unfortunately, this replaces a subtraction with a function that iterates
through all the messages.
(imported from commit 9259a246946cd968a8725c38ff5ef2d4b4793717)
The queued email gets deleted if the user signs up before it gets sent.
Otherwise, they are reminded in 2 days that they still haven't signed up.
This addresses Trac #1812.
(imported from commit c1bdc09c03ac576b08986e56994de72d52fd293b)
clear_followup_emails_queue now filters by from_email too
send_local_email_template_with_delay passes the template_payload into the subject template
(imported from commit 8044fe2ebad90a9d6d5c67cdfdd08801760fd7f7)
The current version should only be used for testing; for example,
if you want to create a bunch of streams for stress testing, you
can run this in a loop.
(imported from commit ec51a431fb9679fc18379e4c6ecdba66bc75a395)
We need to run the schema migration manually using
"CREATE INDEX CONCURRENTLY upper_stream_name_idx ON zerver_stream ((upper(name)));"
since we need CONCURRENTLY and I seem to recall that doesn't work with South.
This significantly improves the uncached performance of get_stream()
(e.g. from 32ms to 9ms). At present, this codepath is not used
particularly heavily since we do cache the stream names and do most of
our filtering by recipient ID, but the index isn't expensive and does
provide a significant improvement in the uncached case.
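For context, a sketch of why an expression index is what get_stream() needs
(not the actual implementation): on PostgreSQL, Django's __iexact lookup
compiles to UPPER("name") = UPPER(%s), which only an index on upper(name)
can serve.

    from zerver.models import Stream  # assumed model name

    def get_stream(stream_name, realm):
        try:
            # __iexact becomes UPPER("name") = UPPER(%s), so this lookup
            # can use upper_stream_name_idx
            return Stream.objects.get(name__iexact=stream_name, realm=realm)
        except Stream.DoesNotExist:
            return None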
(imported from commit 4d28dc2e9a02d0602861b165393d90ed18f5f4c8)
We need to run the schema migration manually using
"CREATE INDEX CONCURRENTLY upper_subject_idx ON zerver_message ((upper(subject)));"
since we need CONCURRENTLY and I seem to recall that doesn't work with South.
Apparently our existing indexes on subject/topic weren't being used in
our narrowing queries, because we do case-insensitive search.
This substantially improves our database performance around
stream+topic narrows. See before and after query plans below from my
test instance.
Before (no upper(subject) index):
humbug=# explain analyze SELECT "zerver_message"."id" FROM "zerver_message" WHERE ("zerver_message"."recipient_id" = 38 AND UPPER(zerver_message.subject) = 'TEST' AND "zerver_message"."id" <= 348495 ) ORDER BY "zerver_message"."id" DESC LIMIT 50;
                                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=13510.61..13510.71 rows=41 width=4) (actual time=32.952..32.958 rows=2 loops=1)
   ->  Sort  (cost=13510.61..13510.71 rows=41 width=4) (actual time=32.946..32.947 rows=2 loops=1)
         Sort Key: id
         Sort Method: quicksort  Memory: 25kB
         ->  Bitmap Heap Scan on zerver_message  (cost=237.99..13509.51 rows=41 width=4) (actual time=2.357..32.912 rows=2 loops=1)
               Recheck Cond: (recipient_id = 38)
               Filter: ((id <= 348495) AND (upper((subject)::text) = 'TEST'::text))
               ->  Bitmap Index Scan on zephyr_message_recipient_id  (cost=0.00..237.98 rows=8221 width=0) (actual time=1.178..1.178 rows=10354 loops=1)
                     Index Cond: (recipient_id = 38)
 Total runtime: 33.049 ms
(10 rows)
After (with upper_subject_idx):
humbug=# explain analyze SELECT "zerver_message"."id" FROM "zerver_message" WHERE ("zerver_message"."recipient_id" = 38 AND UPPER(zerver_message.subject) = 'TEST' AND "zerver_message"."id" <= 348495 ) ORDER BY "zerver_message"."id" DESC LIMIT 50;
                                                                           QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=435.11..435.22 rows=41 width=4) (actual time=4.998..4.999 rows=2 loops=1)
   ->  Sort  (cost=435.11..435.22 rows=41 width=4) (actual time=4.997..4.997 rows=2 loops=1)
         Sort Key: id
         Sort Method: quicksort  Memory: 25kB
         ->  Bitmap Heap Scan on zerver_message  (cost=275.63..434.02 rows=41 width=4) (actual time=4.981..4.984 rows=2 loops=1)
               Recheck Cond: ((upper((subject)::text) = 'TEST'::text) AND (recipient_id = 38))
               Filter: (id <= 348495)
               ->  BitmapAnd  (cost=275.63..275.63 rows=41 width=0) (actual time=4.954..4.954 rows=0 loops=1)
                     ->  Bitmap Index Scan on upper_subject_idx  (cost=0.00..37.38 rows=1744 width=0) (actual time=2.972..2.972 rows=27457 loops=1)
                           Index Cond: (upper((subject)::text) = 'TEST'::text)
                     ->  Bitmap Index Scan on zephyr_message_recipient_id  (cost=0.00..237.98 rows=8221 width=0) (actual time=0.855..0.855 rows=10354 loops=1)
                           Index Cond: (recipient_id = 38)
 Total runtime: 5.049 ms
(13 rows)
(imported from commit 1f4815ccb0691053ff8d505149482dbc74153fb3)
It makes the event queue return all messages on public streams, rather
than only the user's subscriptions. It's meant for use with chat bots.
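As an illustration of how a bot might request such a queue (the parameter
name all_public_streams is an assumption about how the flag is exposed, and
the credentials and URL are placeholders):

    import requests

    BOT_EMAIL = "bot@example.com"        # placeholder
    BOT_API_KEY = "0123456789abcdef"     # placeholder

    resp = requests.post("https://zulip.example.com/api/v1/register",
                         auth=(BOT_EMAIL, BOT_API_KEY),
                         data={"event_types": '["message"]',
                               "all_public_streams": "true"})
    queue_id = resp.json()["queue_id"]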
(imported from commit 12d7e9e9586369efa7e7ff9eb060f25360327f71)
By far the most common case for get_old_messages is the home view loading
queries, for which we have raw queries. This patch substantially
improves those queries using the observation that we weren't actually
using the zerver_message table that we were joining with.
I actually expect this to result in a noticeable performance
improvement for loading of the homepage.
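A sketch of the shape of the improvement (column names follow Django's
conventions for these tables, not the exact raw SQL in the patch):

    # before: joined zerver_message even though no column from it was selected
    # after: the home view load can be answered from zerver_usermessage alone
    HOME_VIEW_QUERY = """
        SELECT message_id, flags
          FROM zerver_usermessage
         WHERE user_profile_id = %s
         ORDER BY message_id DESC
         LIMIT %s
    """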
(imported from commit 12807e5a74eb63275b2523a5f62fd901ab632f0f)
Deployment instructions: I think all the queue workers get
restarted automatically, so there is probably nothing special
to do here in the deploy itself. We will want to monitor it
closely, though; the change should make our number of locks
go down.
QueueProcessingWorker.start() now calls consume_and_commit(),
which ensures that we don't hold locks after work actions
by using Django's commit_on_success() decorator.
Obviously, workers that override start() will not call consume_and_commit()
through this code path. SlowQueryWorker calls commit_on_success()
in its start() method now, and I hope to address MissedMessageWorker soon.
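A minimal sketch of that pattern (not the actual worker code):

    from django.db import transaction

    class QueueProcessingWorker(object):
        def consume(self, *args):
            raise NotImplementedError  # subclasses do the real work

        def consume_and_commit(self, *args):
            # commit_on_success() commits (or rolls back) as soon as the
            # work item is handled, so we don't sit on row locks while
            # idle waiting for the next queue item
            with transaction.commit_on_success():
                self.consume(*args)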
(imported from commit f3f38a7f45730eee8f3b5794371ba5b994017676)