This prevents the ones with external side effects (like sending real
email) from being accidentally run in dev instances.
(imported from commit 6d9861d721abb29136bfff974de01a9264051436)
Before we were removing items individually from the queue. We now
directly use RabbitMQ's queue purging mechanism.
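For reference, the purge is a single broker-side operation; with pika it
looks roughly like this (queue name hypothetical):

    import pika

    # One server-side purge replaces consuming and acking items one by one.
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_purge(queue='test_suite_queue')  # hypothetical queue name
    connection.close()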
(imported from commit 62ab52c724c5a221b4c81a967154a4046a579f84)
This commit must be simultaneously deployed on both staging and
prod0. It also requires completely taking down the app.
To deploy these changes, do:
* check out this commit at /root/zulip on postgres0, postgres1, staging, and prod0
* stop the process_fts_updates job on postgres0 and postgres1
* stop the app on staging and prod0
* do a puppet apply on postgres0, postgres1, staging, and prod0
* move the new client certificates into place on staging and app
* move the new server certificates into place on postgres0 and postgres1
* reload the database config on postgres0 and postgres1 (this might
actually require a restart)
* run tools/migrate-db on postgres0 as root
* do a deploy through this commit on staging and prod0
* start the process_fts_updates job on postgres0 and postgres1
* do a puppet apply on nagios
(imported from commit 819bdd14326c1425e2d3041a491a8ca3b9716506)
This will allow us to redirect clients to the correct local site.
To apply this migration, just run:
python manage.py migrate zilencer 0002
(imported from commit 7bd39b5f035145b6b52e1b2cb2ad5f6720d598ce)
Here we introduce a new Django app, zilencer. The intent is to not have
this app enabled on LOCALSERVER instances, and for it to grow to include
all the functionality we want to have in our central server that isn't
relevant for local deployments.
Currently we have to modify functions in zerver/* to match; in the
future, it would be cool to have the relevant shared code broken out
into a separate library.
This commit includes both the migration to create the models and a data
migration that (for non-LOCALSERVER) creates a single default Deployment
for zulip.com.
To apply this migration to your system, run:
./manage.py migrate zilencer
(imported from commit 86d5497ac120e03fa7f298a9cc08b192d5939b43)
This test has been broken for a couple months, and nobody has taken
ownership of fixing it. It's always slow, sometimes it fails
randomly, sometimes it fails for things that aren't really problems,
and it's generally been way more trouble than it's worth.
(imported from commit 8080e81b226a372e763a2558f4e5668c3a4d087c)
Use rest_dispatch for upload auth redirect so it doesn't send the
long URL to user_activity.
(imported from commit ab327bbd529412e43eee6d109f8550180544dbbb)
Trac #1734
This is implemented by bouncing uploaded file links through a view
that checks authentication and redirects to an expiring S3 URL.
This makes file uploads return a domain-relative URI. The client converts
this to an absolute URI when it's in the composebox, then back to relative
when it's submitted to the server.
We need the relative URI because the same message may be viewed across
{staging,www,zephyr}.zulip.com, which have different cookies.
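A rough sketch of the bouncing view, using boto3 and hypothetical names
(the actual code uses whatever S3 client and auth decorator we have in
tree):

    import boto3
    from django.http import HttpResponseRedirect

    def serve_s3_upload(request, user_profile, path):
        # The decorator enforcing authentication is elided; by the time we
        # get here the user is known to be allowed to see the file.
        url = boto3.client('s3').generate_presigned_url(
            'get_object',
            Params={'Bucket': 'zulip-user-uploads', 'Key': path},  # bucket hypothetical
            ExpiresIn=60,  # the S3 URL expires shortly after the redirect
        )
        return HttpResponseRedirect(url)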
(imported from commit 33acb2abaa3002325f389d5198fb20ee1b30f5fa)
As it turns out, some of these tests used message IDs 1 and 2, which
Hamlet didn't even necessarily receive as the messages to update --
which meant that they previously updated 0 messages and returned
success. So those tests started failing when I added a check for not
updating anything in the update_message_flags backend -- and this
commit fixes the tests to actually update a nonempty set of messages.
(imported from commit 9034b415d4862216a266416a8e509d987050ffd7)
This has a small bug where we don't actually filter the message out of
the home view; fixing that requires adding an index on the "flags"
field of UserMessage.
(imported from commit 492c99d0a8e87b253e577be6564bec12099bd8e9)
There seems to be some sort of bug involving PhantomJS and XHR
streaming messages. When successive pages are loaded that use XHR
streaming, PhantomJS seems to think the second one never finishes
loading and therefore hangs.
(imported from commit db93b4cab816f1fdc3f3f543c9394b1cba8abedb)
We really should be setting a variable in JavaScript to indicate that
we've finished loading, but this hasn't bitten us yet.
(imported from commit ee1f7c76d9f3c482561cc5c44b81537c7e9636be)
Because our authentication system reads cookies from the initial
connection attempt, several SockJS transports can't be used.
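For reference, sockjs-tornado lets us turn off the offending transports
via its settings dict; a minimal sketch (connection class and transport
list illustrative, not the actual configuration):

    from sockjs.tornado import SockJSConnection, SockJSRouter

    class ZulipSockJSConnection(SockJSConnection):  # hypothetical name
        def on_message(self, msg):
            pass  # hand off to the normal message-send path

    # Disable transports whose requests don't carry our auth cookies.
    router = SockJSRouter(ZulipSockJSConnection, '/sockjs',
                          dict(disabled_transports=['eventsource', 'htmlfile']))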
(imported from commit 34b9571225d39072985b8223fb12c43c7235841f)
New dependency: sockjs-tornado
One known limitation is that we don't clean up sessions for
non-websocket transports. This is a bug in Tornado, so I'm going to
look at upgrading us to the latest version:
https://github.com/mrjoes/sockjs-tornado/issues/47
(imported from commit 31cdb7596dd5ee094ab006c31757db17dca8899b)
gather_subscriptions_helper() does a separate query to
get emails from user_ids, and it returns an email_dict to its
caller.
This may seem like a step backward, since gather_subscriptions()
now needs to do an additional query, but there is some benefit
in passing fewer redundant emails over the wire from the DB.
The real payoff, though, will come in subsequent commits, where
we will reduce the amount of data going over the wire to the browser,
which will benefit users with slow connections.
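Roughly, the helper's extra lookup (shape approximate):

    from zerver.models import UserProfile

    def emails_by_user_id(user_ids):
        # values() pulls just the two columns needed for the id -> email map.
        rows = UserProfile.objects.filter(id__in=user_ids).values('id', 'email')
        return dict((row['id'], row['email']) for row in rows)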
(imported from commit bf1cc5828a4c5f68cafd052ea29a177837970206)
I am about to change the behavior of the internal API, and it's really more
important to have test coverage on the external API anyway.
(imported from commit 8a0723cbcb4ac1819a63397584aa40e69ceb827d)
The Mirror and iPhone tabs were either unused or misleading
for realm-specific pages of the /activity report.
(imported from commit 8d0a99eac6657fbfd9e6a32f22739eed66e03fbf)
Arguably the nl2br extension should be doing this for us. Given that
we're using nl2br, the "two spaces at the end of a line makes a line
break" rule doesn't make any sense (since every newline leads to a
line break), so we disable it.
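To illustrate the nl2br behavior (output shown approximately):

    import markdown

    # With nl2br active, every newline already renders as a <br/>, which
    # is why the trailing-two-spaces rule becomes meaningless.
    md = markdown.Markdown(extensions=['nl2br'])
    print(md.convert('one\ntwo'))  # roughly: <p>one<br />\ntwo</p>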
(imported from commit 5ffa2ac8a825642ad31e085c532091e076665710)
This fixes the following two closely related tabs:
Integrations by domain
Integrations by client
They now blacklist clients instead of whitelisting them, so
we can see newcomers like Hubot and Giphy bot. Our naming
convention still leaves a lot to be desired.
(imported from commit 66cbd07160d93e4b745a1439261330d854700a5c)
There were a couple of bugs in the security checks that resulted in
IRC mirroring of stream messages not working.
(imported from commit 31ac732461a733c1c993f77356053d4f88c67177)
Previously we were having the database do the matching on sender email
address, which resulted in an unnecessary join.
(imported from commit 70bf791a00b7d5965ef977e45b4a0eccbd3402a0)
I believe with this change the log lines will fit much better into
Zulip, and the Client string was, I suspect, rarely important for
responding to slow queries (and is always available in the main log
anyway).
(imported from commit ad56f446bf3fb96a14a56b825f46c1dad9b6babe)
Summary blocks can contain hundreds of messages. When the rendering window
code didn't take this into account, it would lead to all kinds of
unpleasant behavior when you scroll.
Trac #1888
Unfortunately, this replaces a subtraction with a function that iterates
through all the messages.
(imported from commit 9259a246946cd968a8725c38ff5ef2d4b4793717)
The queued email gets deleted if the user signs up before it gets sent.
Otherwise, they are reminded in 2 days that they still haven't signed up.
This addresses Trac #1812
(imported from commit c1bdc09c03ac576b08986e56994de72d52fd293b)
clear_followup_emails_queue now filters by from_email too
send_local_email_template_with_delay passes the template_payload into the subject template
(imported from commit 8044fe2ebad90a9d6d5c67cdfdd08801760fd7f7)
The current version should only be used for testing; for example,
if you want to create a bunch of streams for stress testing, you
can run this in a loop.
(imported from commit ec51a431fb9679fc18379e4c6ecdba66bc75a395)
We need to run the schema migration manually using
"CREATE INDEX CONCURRENTLY upper_stream_name_idx ON zerver_stream ((upper(name)));"
since we need CONCURRENTLY and I seem to recall that doesn't work with South.
This significantly improves the uncached performance of get_stream()
(e.g. from 32ms to 9ms). At present, this codepath is not used
particularly heavily since we do cache the stream names and do most of
our filtering by recipient ID, but the index isn't expensive and does
provide a significant improvement in the uncached case.
(imported from commit 4d28dc2e9a02d0602861b165393d90ed18f5f4c8)
We need to run the schema migration manually using
"CREATE INDEX CONCURRENTLY upper_subject_idx ON zerver_message ((upper(subject)));"
since we need CONCURRENTLY and I seem to recall that doesn't work with South.
Apparently our existing indexes on subject/topic weren't being used in
our narrowing queries, because we do case-insensitive search.
This substantially improves our database performance around
stream+topic narrows. See before and after query plans below from my
test instance.
humbug=# explain analyze SELECT "zerver_message"."id" FROM "zerver_message" WHERE ("zerver_message"."recipient_id" = 38 AND UPPER(zerver_message.subject) = 'TEST' AND "zerver_message"."id" <= 348495 ) ORDER BY "zerver_message"."id" DESC LIMIT 50;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=13510.61..13510.71 rows=41 width=4) (actual time=32.952..32.958 rows=2 loops=1)
-> Sort (cost=13510.61..13510.71 rows=41 width=4) (actual time=32.946..32.947 rows=2 loops=1)
Sort Key: id
Sort Method: quicksort Memory: 25kB
-> Bitmap Heap Scan on zerver_message (cost=237.99..13509.51 rows=41 width=4) (actual time=2.357..32.912 rows=2 loops=1)
Recheck Cond: (recipient_id = 38)
Filter: ((id <= 348495) AND (upper((subject)::text) = 'TEST'::text))
-> Bitmap Index Scan on zephyr_message_recipient_id (cost=0.00..237.98 rows=8221 width=0) (actual time=1.178..1.178 rows=10354 loops=1)
Index Cond: (recipient_id = 38)
Total runtime: 33.049 ms
(10 rows)
humbug=# explain analyze SELECT "zerver_message"."id" FROM "zerver_message" WHERE ("zerver_message"."recipient_id" = 38 AND UPPER(zerver_message.subject) = 'TEST' AND "zerver_message"."id" <= 348495 ) ORDER BY "zerver_message"."id" DESC LIMIT 50;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=435.11..435.22 rows=41 width=4) (actual time=4.998..4.999 rows=2 loops=1)
-> Sort (cost=435.11..435.22 rows=41 width=4) (actual time=4.997..4.997 rows=2 loops=1)
Sort Key: id
Sort Method: quicksort Memory: 25kB
-> Bitmap Heap Scan on zerver_message (cost=275.63..434.02 rows=41 width=4) (actual time=4.981..4.984 rows=2 loops=1)
Recheck Cond: ((upper((subject)::text) = 'TEST'::text) AND (recipient_id = 38))
Filter: (id <= 348495)
-> BitmapAnd (cost=275.63..275.63 rows=41 width=0) (actual time=4.954..4.954 rows=0 loops=1)
-> Bitmap Index Scan on upper_subject_idx (cost=0.00..37.38 rows=1744 width=0) (actual time=2.972..2.972 rows=27457 loops=1)
Index Cond: (upper((subject)::text) = 'TEST'::text)
-> Bitmap Index Scan on zephyr_message_recipient_id (cost=0.00..237.98 rows=8221 width=0) (actual time=0.855..0.855 rows=10354 loops=1)
Index Cond: (recipient_id = 38)
Total runtime: 5.049 ms
(13 rows)
(imported from commit 1f4815ccb0691053ff8d505149482dbc74153fb3)
It makes the event queue return all messages on public streams, rather
than only the user's subscriptions. It's meant for use with chat bots.
(imported from commit 12d7e9e9586369efa7e7ff9eb060f25360327f71)
By far the common case for get_old_messages is the home view loading
queries, for which we have raw queries. This patch substantially
improves those queries using the observation that we weren't actually
using the zerver_message table that we were joining with.
I actually expect this to result in a noticeable performance
improvement for loading of the homepage.
(imported from commit 12807e5a74eb63275b2523a5f62fd901ab632f0f)
Deployment instructions: I think all the queue workers get
restarted automatically, so there is probably nothing special
to do here in the deploy itself, but we will want to monitor
it closely, and the change should make our number of locks go
down.
QueueProcessingWorker.start() now calls consume_and_commit(),
which ensures that we don't hold locks after work actions
by using Django's commit_on_success() decorator.
Obviously, workers that override start() will not call consume_and_commit()
through this code path. SlowQueryWorker calls commit_on_success()
in its start() method now, and I hope to address MissedMessageWorker soon.
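Roughly, the new code path (give or take the actual signatures):

    from django.db import transaction

    class QueueProcessingWorker(object):
        def consume_and_commit(self, *args):
            # Commit (or roll back) as soon as one work item finishes, so
            # no transaction -- and none of its locks -- outlives consume().
            with transaction.commit_on_success():
                self.consume(*args)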
(imported from commit f3f38a7f45730eee8f3b5794371ba5b994017676)
These are some queries on API usage, desktop usage, and
Android usage that would be of interest to Waseem. These
will eventually be subsumed into /activity, but some interim
data issues may make them easier to keep separate for now.
(imported from commit 697a8496cbf4447d557a3fc89f64c1c4d3e67e70)
In order to support iOS Push Notifications, we need to keep track
of a device's unique APNS Token. These are delivered to our iOS
code after registering for remote notifications.
(imported from commit bbe34483e1380dc20a1c93e3ffa1fcfdb9087e67)
Use the commit_on_success() context manager around the call
to internal_send_message() inside of SlowQueryWorker's polling
loop, so that the pending SELECT statement from
get_status_dict_by_realm() gets committed. If we don't do
this, postgres will hold locks on zerver_userprofile, and other
tables, for a long time, which can interfere with migrations.
This is an interim solution until we switch postgres's default
commit behavior. Right now the default transaction isolation
is "read committed," so SELECT statements lead to AccessShareLocks
that do not get closed until the transaction finishes.
(imported from commit f72aeffbbe71a731e327459f15bd7dbebaf9e0b8)
Trac #1162
The process_fence method replaces code blocks with placeholders, so
indexes stored before the replacement are incorrect. However, because
the closed code blocks have been replaced, we can simply search the
whole string for any remaining opening code block markers.
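A sketch of the check (the real fence syntax is whatever process_fence
accepts; this pattern is an approximation):

    import re

    # Closed fences are already placeholders, so any marker still present
    # must be an unclosed opening fence.
    FENCE_RE = re.compile(r'^\s*(`{3,}|~{3,})', re.MULTILINE)

    def has_open_fence(text):
        return FENCE_RE.search(text) is not None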
(imported from commit 6a9e6924840f8f3ca5175da7c52a905e27c1fabd)
I added filter() statements to do_update_message_flags().
Here is some context:
Steve Howell: Case 1, have AND clause to reduce work for DB.
humbug=> update zerver_usermessage set flags = (flags & ~1) where id > 9000;
UPDATE 382
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
382
(1 row)
humbug=> explain analyze update zerver_usermessage set flags = (flags | 1) where (flags & 1) = 0;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Update on zerver_usermessage (cost=0.00..266.85 rows=47 width=27) (actual time=5.727..5.727 rows=0 loops=1)
-> Seq Scan on zerver_usermessage (cost=0.00..266.85 rows=47 width=27) (actual time=0.045..2.751 rows=382 loops=1)
Filter: ((flags & 1::bigint) = 0)
Rows Removed by Filter: 9000
Total runtime: 5.759 ms
(5 rows)
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
0
(1 row)
Leo Franchi: Sounds reasonable, but I know way less than zev about DBs so I'll defer to his judgement :)
Steve Howell: Case 2, how the code works now:
humbug=> update zerver_usermessage set flags = (flags & ~1) where id > 9000;
UPDATE 382
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
382
(1 row)
humbug=> explain analyze update zerver_usermessage set flags = (flags | 1);
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
Update on zerver_usermessage (cost=0.00..243.28 rows=9382 width=27) (actual time=362.075..362.075 rows=0 loops=1)
-> Seq Scan on zerver_usermessage (cost=0.00..243.28 rows=9382 width=27) (actual time=0.008..6.138 rows=9382 loops=1)
Total runtime: 362.105 ms
(3 rows)
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
0
(1 row)
Steve Howell: In both trials, we set it up so that only 382 of 9382 rows need to be updated. The first trial runs about 63x as fast. The second trial, if my theory is correct, is doing 24x as many writes as it needs. Both trials are reading all 9382 rows.
Steve Howell: The expense of the update statement seems to be proportional to the number of rows you "update", not the number of rows that you actually change.
Steve Howell: For now I created #1869.
Zev Benjamin: That sounds like a reasonable explanation. The disk IO can be expensive.
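In ORM terms, Case 1 looks roughly like this (names approximate; the
where clause mirrors the SQL above):

    from django.db.models import F
    from zerver.models import UserMessage

    def mark_all_as_read(user_profile):
        # Restrict the UPDATE to rows whose read bit (1) is actually
        # unset, instead of rewriting every row just to flip a few.
        msgs = UserMessage.objects.filter(user_profile=user_profile)
        msgs = msgs.extra(where=['(flags & 1) = 0'])
        msgs.update(flags=F('flags').bitor(1))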
(imported from commit d9090daee1f81cad76c430de0956f9bd504da075)
Handled by the queue processor for signups. Added a management command
that accomplishes the same task, in case it's needed for manually added users,
or in case we goof and need to remove queued emails for a given user.
This addresses Trac #1807
(imported from commit 6727b82a07fa6a3ea3d827860c9e60fd0602297a)
We want to avoid opening a DB connection in the markdown thread,
as that connection might live for a long time.
(imported from commit 7700b2ca793ee5e9add7f071b92f22a4bf576b3d)
This will hopefully incentivize people to click one and get back into
the app.
We'll also need this for digest emails.
(imported from commit 57191c3fcca3b12df93a81e4692bb7eb8ccc83b2)
This requires renaming the account in Google Apps at the time we
deploy this; we'll probably want to do this during off hours to avoid
any user-visible downtime.
This also updates some related email addresses.
(imported from commit fce7629b359a4f278bbf7815c8d177a8fa0484fe)
This may require just doing an mv on the home directory, plus changing
the home directory in /etc/passwd. It should of course be done carefully.
(imported from commit 660997d897ee6d33563af74f0fc5d4267a911755)
A few "slow" tests aren't as slow any more, for whatever reason,
so we're setting a higher bar going forward.
(imported from commit 642137cebb7826f4512b5635da9d7b75bd5c35f4)
The text of manual links is already an AtomicString, so linkified
strings should be too.
Moves emoji detection to happen after linkification, so the emoji rule
won't look at links.
(imported from commit 9c56bce6a0e873b398255e0762dfb312a4a9a64e)
InlinePatterns should return None on failure, not text that may
have placeholders in it.
(imported from commit f9d8d22b2b8cfa7a92ecf3e52a6c76b48e6f0175)
We want the UserActivity.query field to reflect the name of the
function for REST calls, not the URL, and we accomplish this by
setting request._query to target_function.__name__.
(imported from commit 9df05fef0dffb34483b182b95f8cbc4409083eed)
If request._query is set in the call to update_user_activity(),
we will use that instead of request.META['PATH_INFO'] for
the query field of the UserActivity row we write.
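The lookup itself is one line; a sketch:

    def query_name(request):
        # Prefer the explicit name rest_dispatch stashed on the request;
        # fall back to the raw URL path.
        return getattr(request, '_query', request.META['PATH_INFO'])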
(imported from commit fcee30098e1c7c5cb4195a1e5905fc7b88af804f)
This results in some small behavior changes. First, if a user
has both malformed JSON and an invalid API key, they will now
be informed of the invalid API key, not the malformed JSON,
because the decorator wrapping code executes first. Second,
we call process_client(), which basically builds us a
UserActivity record with the client "API".
(imported from commit fadb523db9bdc82984bdae61833c5c99f1ebd1c0)
This is for webhook API endpoints that only get passed in an api_key,
not an email. An example would be api_jira_webhook, and some of
the code is borrowed from there. The rest of the code is from
authenticated_api_view().
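A sketch of the decorator's shape (error handling and helper names
approximate):

    from functools import wraps

    from django.http import HttpResponseForbidden
    from zerver.models import UserProfile

    def authenticated_api_key_view(view_func):
        @wraps(view_func)
        def wrapper(request, *args, **kwargs):
            # Webhook callers send only an api_key, never an email.
            try:
                profile = UserProfile.objects.get(api_key=request.POST['api_key'])
            except (KeyError, UserProfile.DoesNotExist):
                return HttpResponseForbidden('Invalid API key')
            return view_func(request, profile, *args, **kwargs)
        return wrapper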
(imported from commit b5b2a4ea52f9b317f00357ef3142c76534fabf20)
Add the number of person-minutes for the last 24 hours to the
realm report on the main tab of /activity.
(imported from commit 2ff46eacc4c8276ab0407fc6ff9f28f5137f1ed2)
When decoding an operand, a + can be converted to a space
only if the operand is not an email address.
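The rule, roughly:

    def decode_operand(operand):
        # foo+bar@example.com must keep its plus sign; elsewhere '+' is
        # an encoded space.
        if '@' not in operand:
            operand = operand.replace('+', ' ')
        return operand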
(imported from commit 08fc36a579bbe6409137c60c0fa9579fe3ab2c43)
This tab shows how long each user has been on during the last 24
hours, using data from UserActivityInterval. Much of the code
is borrowed from analyze_user_activity.py, but in this version
we set the time interval to be the last 24 hours and sort by
realm and email. I also ensure that it only executes one
query to get all the data (and there's test coverage for that).
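Roughly how the aggregation goes (field and attribute names approximate):

    from datetime import timedelta

    from django.utils import timezone
    from zerver.models import UserActivityInterval

    def duration_by_user_last_24_hours():
        cutoff = timezone.now() - timedelta(hours=24)
        # select_related keeps this to a single query despite touching
        # the user and realm rows for each interval.
        intervals = (UserActivityInterval.objects.filter(end__gte=cutoff)
                     .select_related('user_profile__realm'))
        totals = {}
        for interval in intervals:
            start = max(interval.start, cutoff)
            key = (interval.user_profile.realm.domain, interval.user_profile.email)
            totals[key] = totals.get(key, 0) + (interval.end - start).total_seconds()
        return totals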
(imported from commit 7a2b80f52679054b03c5f5f42b2cda07d5599432)
Waseem is ok with removing the client-specific tabs on the
main /activity page. This reduces the number of queries from
25 to 1. We might eventually restore some of that logic, but
we will do it more efficiently. A lot of the data for
non-website clients is kind of unreliable, anyway.
The page looks kind of funny with only one tab, but that
will be fixed in the next commit.
(imported from commit 54f08f89d5242ad3e045d8ca0d97b86617c15380)
When we don't already have old messages in cache, we need to
fetch data from the database and create dictionaries for the
cache. This commit makes that process work in 50ms, instead
of 130ms, for the data set in test_bulk_message_fetching(),
which is 602 records. Before this commit we had about 132
microseconds of unnecessary churn per message, because we
were fetching DB fields we didn't need and incurring the cost
of the Django ORM. Now we use values() to get only the columns
we need, and we take advantage of previous commits that make
our code less OO and more function-driven, so we can pass the
values directly to build_message_dict() without having to create
objects.
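Schematically (field list and call shape approximate; build_message_dict
is the helper described above):

    from zerver.models import Message

    def fetch_message_dicts(needed_ids):
        # values() skips ORM object construction and pulls only the
        # columns build_message_dict() needs; each row dict feeds it
        # directly.
        fields = ['id', 'sender_id', 'recipient_id', 'subject', 'content',
                  'rendered_content', 'rendered_content_version', 'pub_date']
        rows = Message.objects.filter(id__in=needed_ids).values(*fields)
        return [build_message_dict(**row) for row in rows]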
A couple caveats on this commit:
1) I haven't been able to get good measurements on the overall
effect on get_old_messages_backend(). If you kill the cache to
force DB queries, you introduce noise related to sessions and
user profiles.
2) Look at the long comment in this commit related to
re-rendering messages in this codepath. The problem precedes
this commit.
(imported from commit dcb64aa9416f0e9583355ddd6dc3adfa746b9fc7)
Only call a function on the message object in the unfortunate
situation that we are rendering new content in to_dict_uncached().
Long term, it would be nice if this function didn't have side
effects, and we had a better strategy for upgrading rendered
content when bugdown versions change.
(imported from commit 2a323f52af37a6d651c171cb8234fbfa3d25d561)
This function doesn't require the whole UserProfile object to
create the avatar url, and we call it from Message.to_dict_uncached().
(imported from commit e814caab101c4fedd1ba66df041a3408014e4085)
For a bunch of self-dot references, move them to the top. (This
is kind of funny out of context, but it sets us up for future
refactorings.)
(imported from commit 4ebc1c44a633d86772df1828c51180707769c3dc)
If this line of code were ever called, it would crash anyway,
because it would be an unknown type, and Recipient.type_name()
would raise a KeyError.
(imported from commit db38c5f71fb2f0b044a832eb88e53fceb0d8a9cf)
This is a variation of get_display_recipient that takes
values instead of an object, so that it is decoupled from
the Django object system.
(imported from commit 25bed43ecd62f1fe0176d517b7003e7f4c78bc37)
If it's ok for the tests to use memcached, it should be ok
for them to use the in-process cache too.
(imported from commit be43879c3c48f3780317fd5b4139b44d4a1f0ed3)
This is a harmless extraction designed to allow subclasses to override
the behavior of how rendered content gets saved.
(imported from commit 9df4ed9f86c857897fcb5f2b6781bfc5a0813766)
The realm should always be the realm of the stream, and we should
always pass in a stream rather than sometimes passing in a stream name
and other times passing in a stream.
(imported from commit a098d6ed3db218a37c1b6b7c956e847c316c2d13)
There is a scenario where we call process_read_message()
for a message that we haven't recorded as unread before.
I'm not sure how it happens, but I put back code to
guard against crashing. The regression happened in
5752458c821.
(imported from commit 5ce15d2e236b738b445ed88f1733aa0612be0ff3)
We have been persisting muting preferences on the back end for
a while, but we haven't been adding them to page_params for the
client to have at reload/startup time.
(imported from commit d9ca68aa0e4d22bfb0e6ce67fc0bc63981175c8b)
Update get_counts() so that it ignores counts for muted topics
when calculating stream/home unread counts.
(imported from commit 9b4e4da4346c225c535e97d709d3dee032603cc5)
The indirection was more confusing than helpful, especially
since the function had side effects, despite its getter-like
name.
(imported from commit 85d9cf642b4177f62488136f0e0f7f6c9304942e)
Empirically, we only get these for malformed emails where the charset
specified in a message part header does not match the true encoding of
the part. I checked what the resulting Zulip looked like for the
original offender, and it looked fine when ignoring errors.
(imported from commit ac6ba65b611cb22d4ec547b75a585abce6fc50b0)