Commit 89a2765553 didn't include the database migration corresponding
to the change, so the change didn't take full effect when it was
merged.
I noticed this because `manage.py makemigrations` would generate these
migrations, which suggests a good test to add.
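A rough sketch of such a test (illustrative only, not the actual code;
assumes Django >= 1.7's makemigrations and this codebase's Python 2):

from StringIO import StringIO  # Python 2

from django.core.management import call_command
from django.test import TestCase

class MigrationsTest(TestCase):
    def test_no_missing_migrations(self):
        # When models and migrations agree, `makemigrations --dry-run`
        # prints "No changes detected"; anything else means someone
        # forgot to commit a migration.
        out = StringIO()
        call_command("makemigrations", dry_run=True, stdout=out)
        self.assertIn("No changes detected", out.getvalue())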
This fixes a performance issue with looking up UserProfile objects in
realms with a large number of users, in the case where the UserProfile
object is not already in the cache.
Thanks to @dbiollo for the suggestion!
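For context, the lookup in question has roughly this cache-aside shape
(a sketch, not the commit's actual code; names assumed):

from django.core.cache import cache
from zerver.models import UserProfile  # module path assumed

def get_user_profile_by_email(email):
    key = "user_profile_by_email:" + email
    profile = cache.get(key)
    if profile is None:
        # On a cache miss we fall through to the database; this is the
        # query that was slow in realms with many users.
        profile = UserProfile.objects.select_related().get(email__iexact=email)
        cache.set(key, profile)
    return profile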
These are the result of either the upgrade to Django 1.8 itself
(username max length increased to 254), or the changes needed for
Django 1.8 compatibility.
(imported from commit 6b1d7e73c85e9a2f7de9e5b91d851977eb4959e8)
This commit loses some indexes, unique constraints etc. that were
manually added by the old migrations. I plan to add them to a new
migration in a subsequent commit.
(imported from commit 4bcbf06080a7ad94788ac368385eac34b54623ce)
Include new field on Realm to control whether e-mail invitations are required
separately from whether the e-mail domain must match.
Allow control of these fields from admin panel.
Update logic in registration page to use these fields.
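A sketch of how the registration check combines the two flags (the
field and helper names are assumptions, not the actual code):

def user_can_register(realm, email, has_invitation):
    # The two checks are now independent: a realm can require
    # invitations, a matching e-mail domain, both, or neither.
    if realm.invite_required and not has_invitation:
        return False
    if realm.restricted_to_domain and email_to_domain(email) != realm.domain:
        return False
    return True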
(imported from commit edc7f0a4c43b57361d9349e258ad4f217b426f88)
Apply this commit after hours!
To apply this commit, first run the migration and then run the following as the
zulip user on staging:
$ echo 'VACUUM zerver_message' | python manage.py dbshell
The above VACUUM is needed to clean out the existing fastupdate pending list.
It might take a long time and block new message inserts!
See discussion near Zulip message 18377486 for why we're turning off the
fastupdate mechanism for zephyr_message_search_tsvector.
The high-level overview is:
As a consequence of the high work_mem setting on our postgres server, the
fastupdate pending list for zephyr_message_search_tsvector can grow very large.
This leads to the occasional INSERT or UPDATE taking inordinately long (many
minutes) as the pending list is flushed, blocking other inserts.
One other possible solution for preventing the list from growing too large is to
set the autovacuum storage parameters on the table such that the autovacuum
process will run after a reasonable number of INSERTs or UPDATEs. However, the
table is mostly INSERT-only. Therefore, only the autovacuum_analyze_*
parameters will actually do anything to affect when the autovacuumer will run,
but when it does, it will do a VACUUM ANALYZE instead of a plain VACUUM. We
don't particularly need the table to be re-analyzed that often.
Turning off fastupdate will eventually cause the index to become less
efficient, but we can always rebuild it later if we notice it starting
to get too slow.
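For reference, turning fastupdate off is a one-line storage-parameter
change; a South migration could apply it roughly like this (sketch):

from south.db import db
from south.v2 import SchemaMigration

class Migration(SchemaMigration):
    def forwards(self, orm):
        # Disable the GIN fastupdate mechanism for the search index.
        db.execute("ALTER INDEX zephyr_message_search_tsvector SET (fastupdate = OFF)")

    def backwards(self, orm):
        db.execute("ALTER INDEX zephyr_message_search_tsvector SET (fastupdate = ON)")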
(imported from commit f280c193c3bc0a3f312960510c5a7dcf97f30c3d)
Allow bot owners to set which streams their bots will receive events
for without needing to change a configuration file.
(imported from commit 2b69e519dbc12ffbdba072031a7f7196c9e50e33)
This allows bot owners to configure which streams messages are delivered
to without needing to change webhook URLs or configuration files.
(imported from commit 32a0c26657c145b001cd8cb3ce0a0364d48902ce)
This migration will do nothing on staging/prod since the indices already exist.
It is only for creating the indices in dev.
(imported from commit ac26a23641191ba73fbccc2eebc4a261ece6c624)
We will need to run these commands manually when deploying to staging:
CREATE INDEX CONCURRENTLY "zerver_message_has_attachment" ON "zerver_message" ("has_attachment");
CREATE INDEX CONCURRENTLY "zerver_message_has_image" ON "zerver_message" ("has_image");
CREATE INDEX CONCURRENTLY "zerver_message_has_link" ON "zerver_message" ("has_link");
(imported from commit 84808dc6b1af887ddf784cb8a875ae462f4df985)
From a user's perspective, the stream has been deleted. From the
database's perspective, the stream has been deactivated -- the stream
messages still exist.
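In code terms this is a soft delete, roughly (sketch; assumes a
`deactivated` boolean on Stream):

def do_deactivate_stream(stream):
    # Flip the flag rather than deleting the row, so the stream's
    # messages remain in the database.
    stream.deactivated = True
    stream.save()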
(imported from commit b08b30b2a822663e17d64182af1fb160c2193344)
A description was added to streams and is now displayed on the
subscriptions page. It cannot be set in the UI yet.
(imported from commit 81d08b65eee42dba87cd99dd5bd30106c4eb6c6a)
Added a default_desktop_notifications boolean to UserProfile, with a UI
in Zulip Labs. This flag is used to default the notification flag on new
subscriptions.
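Roughly, when a new subscription is created (sketch; the Subscription
field names are assumed):

# Seed the per-subscription notification setting from the user's
# new default.
sub = Subscription(user_profile=user_profile, recipient=recipient,
                   notifications=user_profile.default_desktop_notifications)
sub.save()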
(imported from commit a25223cc5ecf09980cf877991e25034bb3fd4046)
This is for the CUSTOMER28 folks, so that they can make Zulip behave more like a traditional "chat client".
(imported from commit 373a8afae4998fce5560e7b2bd13804c8fbb39fc)
This replaces the AppleDeviceToken table with a generic
PushDeviceToken with a `kind` field to make it easier to add functionality
like per-device/per-stream settings that share code between Android and
iOS devices.
The schema must continue to work on prod with the old table name, so we
add the new table in parallel and can drop the old table once this code
hits prod and any necessary data is copied.
(imported from commit 0209a7013f2850ac6311f23c3d6f92c65ffd19e3)
ScheduledJobs with type Email displace the usual Mandrill codepaths
in Zulip Enterprise deploys:
* Email-specific helper functions will appear in deliver_email.py (see
  the sketch after this list)
* New migration: 0058_auto__add_scheduledjob.py
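The delivery side might look roughly like this (a sketch of
deliver_email.py's shape; the model fields and helper are assumed):

import time
from django.utils import timezone

def run_email_delivery_loop():
    while True:
        due = ScheduledJob.objects.filter(type=ScheduledJob.EMAIL,
                                          scheduled_timestamp__lte=timezone.now())
        for job in due:
            send_email_job(job)  # hypothetical Email-specific helper
            job.delete()
        time.sleep(10)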
(imported from commit 8db08d8a279600322acfdbed792dc1a676f7a0ab)
This was a precursor to UserMessage.flags.read that never got used
because we decided to use django-bitfield.
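For contrast, the django-bitfield approach looks roughly like this
(sketch):

from bitfield import BitField
from django.db import models

class UserMessage(models.Model):
    # One integer column holds all per-message booleans; each named
    # flag occupies a single bit.
    flags = BitField(flags=('read', 'starred', 'collapsed'), default=0)

# Querying for messages with the read bit set:
# UserMessage.objects.filter(flags=UserMessage.flags.read)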
(imported from commit 868754723c07ee9b85ae951aee785e571ccfef97)
This commit must be simultaneously deployed on both staging and
prod0. It also requires completely taking down the app.
To deploy these changes, do:
* check out this commit at /root/zulip on postgres0, postgres1, staging, and prod0
* stop the process_fts_updates job on postgres0 and postgres1
* stop the app on staging and prod0
* do a puppet apply on postgres0, postgres1, staging, and prod0
* move the new client certificates into place on staging and app
* move the new server certificates into place on postgres0 and postgres1
* reload the database config on postgres0 and postgres1 (this might
actually require a restart)
* run tools/migrate-db on postgres0 as root
* do a deploy through this commit on staging and prod0
* start the process_fts_updates job on postgres0 and postgres1
* do a puppet apply on nagios
(imported from commit 819bdd14326c1425e2d3041a491a8ca3b9716506)
We need to run the schema migration manually using
"CREATE INDEX CONCURRENTLY upper_stream_name_idx ON zerver_stream ((upper(name)));"
since we need CONCURRENTLY and I seem to recall that doesn't work with
South (CREATE INDEX CONCURRENTLY can't run inside the transaction South
wraps each migration in).
This significantly improves the uncached performance of get_stream()
(e.g. from 32ms to 9ms). At present, this codepath is not used
particularly heavily since we do cache the stream names and do most of
our filtering by recipient ID, but the index isn't expensive and does
provide a significant improvement in the uncached case.
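The index matches the SQL Django generates for case-insensitive lookups,
roughly (sketch):

def get_stream(stream_name, realm):
    # On PostgreSQL, __iexact compiles to UPPER("name") = UPPER(%s),
    # which the new expression index on upper(name) serves directly.
    return Stream.objects.get(name__iexact=stream_name, realm=realm)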
(imported from commit 4d28dc2e9a02d0602861b165393d90ed18f5f4c8)
We need to run the schema migration manually using
"CREATE INDEX CONCURRENTLY upper_subject_idx ON zerver_message ((upper(subject)));"
since we need CONCURRENTLY and I seem to recall that doesn't work with
South (CREATE INDEX CONCURRENTLY can't run inside the transaction South
wraps each migration in).
Apparently our existing indexes on subject/topic weren't being used in
our narrowing queries, because we do case-insensitive search.
This substantially improves our database performance around
stream+topic narrows. See before and after query plans below from my
test instance.
Before:
humbug=# explain analyze SELECT "zerver_message"."id" FROM "zerver_message" WHERE ("zerver_message"."recipient_id" = 38 AND UPPER(zerver_message.subject) = 'TEST' AND "zerver_message"."id" <= 348495 ) ORDER BY "zerver_message"."id" DESC LIMIT 50;
                                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=13510.61..13510.71 rows=41 width=4) (actual time=32.952..32.958 rows=2 loops=1)
   ->  Sort  (cost=13510.61..13510.71 rows=41 width=4) (actual time=32.946..32.947 rows=2 loops=1)
         Sort Key: id
         Sort Method: quicksort  Memory: 25kB
         ->  Bitmap Heap Scan on zerver_message  (cost=237.99..13509.51 rows=41 width=4) (actual time=2.357..32.912 rows=2 loops=1)
               Recheck Cond: (recipient_id = 38)
               Filter: ((id <= 348495) AND (upper((subject)::text) = 'TEST'::text))
               ->  Bitmap Index Scan on zephyr_message_recipient_id  (cost=0.00..237.98 rows=8221 width=0) (actual time=1.178..1.178 rows=10354 loops=1)
                     Index Cond: (recipient_id = 38)
 Total runtime: 33.049 ms
(10 rows)
After:
humbug=# explain analyze SELECT "zerver_message"."id" FROM "zerver_message" WHERE ("zerver_message"."recipient_id" = 38 AND UPPER(zerver_message.subject) = 'TEST' AND "zerver_message"."id" <= 348495 ) ORDER BY "zerver_message"."id" DESC LIMIT 50;
                                                                           QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=435.11..435.22 rows=41 width=4) (actual time=4.998..4.999 rows=2 loops=1)
   ->  Sort  (cost=435.11..435.22 rows=41 width=4) (actual time=4.997..4.997 rows=2 loops=1)
         Sort Key: id
         Sort Method: quicksort  Memory: 25kB
         ->  Bitmap Heap Scan on zerver_message  (cost=275.63..434.02 rows=41 width=4) (actual time=4.981..4.984 rows=2 loops=1)
               Recheck Cond: ((upper((subject)::text) = 'TEST'::text) AND (recipient_id = 38))
               Filter: (id <= 348495)
               ->  BitmapAnd  (cost=275.63..275.63 rows=41 width=0) (actual time=4.954..4.954 rows=0 loops=1)
                     ->  Bitmap Index Scan on upper_subject_idx  (cost=0.00..37.38 rows=1744 width=0) (actual time=2.972..2.972 rows=27457 loops=1)
                           Index Cond: (upper((subject)::text) = 'TEST'::text)
                     ->  Bitmap Index Scan on zephyr_message_recipient_id  (cost=0.00..237.98 rows=8221 width=0) (actual time=0.855..0.855 rows=10354 loops=1)
                           Index Cond: (recipient_id = 38)
 Total runtime: 5.049 ms
(13 rows)
(imported from commit 1f4815ccb0691053ff8d505149482dbc74153fb3)
In order to support iOS push notifications, we need to keep track
of a device's unique APNS token. These are delivered to our iOS
code after registering for remote notifications.
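A sketch of the storage (field names assumed; UserProfile is assumed to
come from zerver.models):

from django.db import models

class AppleDeviceToken(models.Model):
    # One row per registered iOS device; the APNS token uniquely
    # identifies the device to Apple's push service.
    user = models.ForeignKey(UserProfile)
    token = models.CharField(max_length=4096, unique=True)
    last_updated = models.DateTimeField(auto_now=True)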
(imported from commit bbe34483e1380dc20a1c93e3ffa1fcfdb9087e67)