Adds a new default language field to the zerver_realm model.
This realm-level default language will be used as the default language
for newly created users, and it can be changed from the administration
page.
Fixes #1372.
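A minimal sketch of the shape of this change, assuming the field is a
CharField named default_language with an English default (names and options
are illustrative, not the exact diff):

    # zerver/models.py (sketch)
    from django.db import models

    class Realm(models.Model):
        ...
        # Used as the default language for newly created users.
        default_language = models.CharField(max_length=50, default='en')

    # At user-creation time (sketch):
    user_profile.default_language = realm.default_language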
The MitUser model caused a constant series of little problems for
users with mit.edu email addresses trying to sign up for different
Zulip servers.
The new implementation just uses conditionals on the realm object when
selecting the confirmation template to use.
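A rough sketch of the conditional approach, with illustrative template names:

    # Sketch only: pick the confirmation template off the realm itself,
    # rather than consulting a separate MitUser table.
    def confirmation_template(realm):
        if realm is not None and realm.domain == 'mit.edu':
            return 'confirmation/mituser_confirmation_email.txt'
        return 'confirmation/confirmation_email.txt'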
This is controlled through the admin tab and a new field in the Realms table.
Notes:
* The admin tab setting takes a value in minutes, whereas the backend stores it
in seconds.
* This setting is unused when allow_message_editing is false.
* There is some generosity in how the limit is enforced: if the user sees the
hovering edit button, we relax the limit to ensure they have at least 5
seconds to click it, and if the user gets to the message edit form, we ensure
they have at least 10 seconds to make the edit (see the sketch below).
* This commit also includes a countdown timer in the message edit form.
Resolves #903.
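A sketch of the server-side check this implies (constant and field names are
assumptions, not the exact code):

    # Sketch: the limit is stored in seconds (0 means "no limit") and is
    # relaxed so a user who starts an edit in time is not cut off.
    # The admin form takes minutes; the backend stores seconds:
    #   realm.message_content_edit_limit_seconds = minutes * 60
    MIN_SECONDS_TO_SAVE_EDIT = 10

    def can_edit_content(message_age_seconds, realm):
        if not realm.allow_message_editing:
            return False
        limit_seconds = realm.message_content_edit_limit_seconds
        if limit_seconds == 0:
            return True
        return message_age_seconds <= limit_seconds + MIN_SECONDS_TO_SAVE_EDIT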
This is controlled through the admin tab and a new field in the Realms
table. This mirrors the behavior of the old hardcoded setting
feature_flags.disable_message_editing. Partially resolves#903.
Squash the AlterField on UserProfile.groups in 0002 into the
AddField in 0001. This is done to avoid a probable bug in Django,
where running migrations in Python 3 sometimes led to a KeyError.
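For reference, the squash simply means the field appears once in 0001 with its
final definition, e.g. (field options are illustrative, not the exact diff):

    # zerver/migrations/0001_initial.py (sketch)
    migrations.AddField(
        model_name='userprofile',
        name='groups',
        field=models.ManyToManyField(
            to='auth.Group', blank=True,
            related_name='user_set', related_query_name='user'),
    ),
    # ...and the now-redundant AlterField is dropped from 0002.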
This commit adds the capability to keep track of and remove uploaded
files. Unclaimed attachments are files that have been uploaded to the
server but are not referenced by any message. A management command to
remove old unclaimed files after a week is also included.
Tests for getting the files referenced in messages are also included.
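A sketch of what the cleanup amounts to, assuming an Attachment model with a
many-to-many link to messages and a creation timestamp (names are illustrative):

    # remove_old_unclaimed_attachments (sketch)
    from datetime import timedelta
    from django.utils import timezone

    cutoff = timezone.now() - timedelta(weeks=1)
    for attachment in Attachment.objects.filter(messages=None, create_time__lt=cutoff):
        delete_attachment_file(attachment.path_id)  # hypothetical storage helper
        attachment.delete()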
As documented in https://github.com/zulip/zulip/issues/441, Guardian
has quite poor performance, and in fact almost 50% of the time spent
running the Zulip backend test suite on my laptop was inside Guardian.
As part of this migration, we also clean up the old API_SUPER_USERS
variable used to mark EMAIL_GATEWAY_BOT as an API super user; now that
permission is managed entirely via the database.
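The database-backed version of that permission is roughly of this shape (the
field name is an assumption for illustration):

    # Sketch: the flag lives on the UserProfile row rather than in Guardian.
    UserProfile.objects.filter(email=settings.EMAIL_GATEWAY_BOT) \
                       .update(is_api_super_user=True)

    def is_super_user(user_profile):
        return user_profile.is_api_super_user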
When rebasing past this commit, developers will need to do a
`manage.py migrate` in order to apply the migration changes before the
server will run again.
We can't yet remove Guardian from INSTALLED_APPS, requirements.txt,
etc. in this release, because otherwise the reverse migration won't
work.
Fixes #441.
89a2765553 didn't include the database
migration corresponding to the change, which means it didn't take full
effect when it was merged.
I noticed this because `manage.py makemigrations` would generate these
migrations; that suggests a good idea for a test to add.
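One way to write such a test, assuming a Django version with
makemigrations --check (added in 1.10); older versions would have to inspect
the --dry-run output instead:

    from django.core.management import call_command
    from django.test import TestCase

    class MigrationsUpToDateTest(TestCase):
        def test_no_missing_migrations(self):
            # Exits non-zero (failing the test) if model changes would
            # still cause makemigrations to generate new migrations.
            call_command('makemigrations', '--check', '--dry-run', verbosity=0)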
This fixes a performance issue looking up UserProfile objects for
realms with a large number of users in the case that a UserProfile
object is not in the cache.
Thanks to @dbiollo for the suggestion!
These are the result of either the upgrade to Django 1.8 itself
(username max length increased to 254), or the changes needed for
Django 1.8 compatibility.
(imported from commit 6b1d7e73c85e9a2f7de9e5b91d851977eb4959e8)
This commit loses some indexes, unique constraints etc. that were
manually added by the old migrations. I plan to add them to a new
migration in a subsequent commit.
(imported from commit 4bcbf06080a7ad94788ac368385eac34b54623ce)
Include new field on Realm to control whether e-mail invitations are required
separately from whether the e-mail domain must match.
Allow control of these fields from admin panel.
Update logic in registration page to use these fields.
(imported from commit edc7f0a4c43b57361d9349e258ad4f217b426f88)
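A sketch of the registration-time logic these two fields enable (field and
helper names here are assumptions, not the exact code):

    # Sketch: the two realm-level switches are independent.
    if realm.invite_required and not invitation_exists(email):
        raise RegistrationError("This organization is invite-only.")
    if realm.restricted_to_domain and not email_matches_domain(email, realm.domain):
        raise RegistrationError("Email domain does not match this organization.")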
Apply this commit after hours!
To apply this commit, first run the migration and then run the following as the
zulip user on staging:
$ echo 'VACUUM zerver_message' | python manage.py dbshell
The above VACUUM is needed to clean out the existing fast update pending list.
It might take a long time and block new message inserts!
See discussion near Zulip message 18377486 for why we're turning off the fast
update mechanism for zephyr_message_search_tsvector.
The high level overview is:
As a consequence of the high work_mem setting on our postgres server, the
fastupdate pending list for zephyr_message_search_tsvector can grow very large.
This leads to the occasional INSERT or UPDATE taking inordinately long (many
minutes) as the pending list is flushed, blocking other inserts.
One other possible solution for preventing the list from growing too large is to
set the autovacuum storage parameters on the table such that the autovacuum
process will run after a reasonable number of INSERTs or UPDATEs. However, the
table is mostly INSERT-only. Therefore, only the autovacuum_analyze_*
parameters will actually do anything to affect when the autovacuumer will run,
but when it does, it will do a VACUUM ANALYZE instead of a plain VACUUM. We
don't particularly need the table to be re-analyzed that often.
Turning off fast update will eventually cause the index to become less
efficient, but we can always rebuild it later if we notice it starting to get
too slow.
(imported from commit f280c193c3bc0a3f312960510c5a7dcf97f30c3d)
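For reference, turning fast update off is a single storage-parameter change on
the GIN index (index name taken from the discussion above; the real migration
may differ):

    # Sketch of the core of the migration:
    from django.db import connection

    with connection.cursor() as cursor:
        cursor.execute(
            "ALTER INDEX zephyr_message_search_tsvector SET (fastupdate = off)")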
Allow bot owners to set which streams their bots will receive events for
without needing to change a configuration file.
(imported from commit 2b69e519dbc12ffbdba072031a7f7196c9e50e33)
This allows bot owners to configure which streams messages are delivered
to without needing to change webhook URLs or configuration files.
(imported from commit 32a0c26657c145b001cd8cb3ce0a0364d48902ce)
This migration will do nothing on staging/prod since the indices already exist.
It is only for creating the indices in dev.
(imported from commit ac26a23641191ba73fbccc2eebc4a261ece6c624)
We will need to run these commands manually when deploying to staging:
CREATE INDEX CONCURRENTLY "zerver_message_has_attachment" ON "zerver_message" ("has_attachment");
CREATE INDEX CONCURRENTLY "zerver_message_has_image" ON "zerver_message" ("has_image");
CREATE INDEX CONCURRENTLY "zerver_message_has_link" ON "zerver_message" ("has_link");
(imported from commit 84808dc6b1af887ddf784cb8a875ae462f4df985)
From a user's perspective, the stream has been deleted. From the
database's perspective, the stream has been deactivated -- the stream
messages still exist.
(imported from commit b08b30b2a822663e17d64182af1fb160c2193344)
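In other words, deletion is a soft flag on the stream row (sketch; names are
illustrative):

    def do_deactivate_stream(stream):
        # Messages keep pointing at the same stream; we only flip a flag.
        stream.deactivated = True
        stream.save(update_fields=["deactivated"])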
A description field was added to streams and it is now displayed on the
subscriptions page. It cannot be set in the UI yet.
(imported from commit 81d08b65eee42dba87cd99dd5bd30106c4eb6c6a)
Added a default_desktop_notifications boolean to UserProfile with a UI
in Zulip Labs. This flag is used to default the notification flag on new
subscriptions.
(imported from commit a25223cc5ecf09980cf877991e25034bb3fd4046)
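A sketch of how the flag gets consumed when a new subscription is created
(field names are illustrative):

    # Sketch: new subscriptions default their notification setting from the user.
    subscription = Subscription(
        user_profile=user_profile,
        recipient=recipient,
        notifications=user_profile.default_desktop_notifications,
    )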
This is for the CUSTOMER28 folks, so that they can turn Zulip into a more "chat client" thing.
(imported from commit 373a8afae4998fce5560e7b2bd13804c8fbb39fc)
This replaces the AppleDeviceToken table with a generic
PushDeviceToken with a `kind` field to make it easier to add functionality
like per-device/per-stream settings that share code between Android and
iOS devices.
The schema must continue to work on prod with the old table name, so we
add the new table in parallel and can drop the old table once this code
hits prod and any necessary data is copied.
(imported from commit 0209a7013f2850ac6311f23c3d6f92c65ffd19e3)
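A sketch of the generic table's shape, based on the description above (exact
fields and choices are assumptions):

    class PushDeviceToken(models.Model):
        APNS = 1
        GCM = 2
        KINDS = ((APNS, 'apns'), (GCM, 'gcm'))

        kind = models.PositiveSmallIntegerField(choices=KINDS)
        # The device token (e.g. an APNS token for iOS devices).
        token = models.CharField(max_length=4096, unique=True)
        user = models.ForeignKey(UserProfile)
        last_updated = models.DateTimeField(auto_now=True)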
ScheduledJobs with type Email displace the usual Mandrill codepaths
in Zulip Enterprise deploys.
* Email-specific helper functions will appear in deliver_email.py
* 0058_auto__add_scheduledjob.py
(imported from commit 8db08d8a279600322acfdbed792dc1a676f7a0ab)
This was a precursor to UserMessage.flags.read that never got used
because we decided to use django-bitfield.
(imported from commit 868754723c07ee9b85ae951aee785e571ccfef97)
This commit must be simultaneously deployed on both staging and
prod0. It also requires completely taking down the app.
To deploy these changes, do:
* check out this commit at /root/zulip on postgres0, postgres1, staging, and prod0
* stop the process_fts_updates job on postgres0 and postgres1
* stop the app on staging and prod0
* do a puppet apply on postgres0, postgres1, staging, and prod0
* move the new client certificates into place on staging and app
* move the new server certificates into place on postgres0 and postgres1
* reload the database config on postgres0 and postgres1 (this might
actually require a restart)
* run tools/migrate-db on postgres0 as root
* do a deploy through this commit on staging and prod0
* start the process_fts_updates job on postgres0 and postgres1
* do a puppet apply on nagios
(imported from commit 819bdd14326c1425e2d3041a491a8ca3b9716506)
We need to run the schema migration manually using
"CREATE INDEX CONCURRENTLY upper_stream_name_idx ON zerver_stream ((upper(name)));"
since we need CONCURRENTLY and I seem to recall that doesn't work with South.
This significantly improves the uncached performance of get_stream()
(e.g. from 32ms to 9ms). At present, this codepath is not used
particularly heavily since we do cache the stream names and do most of
our filtering by recipient ID, but the index isn't expensive and does
provide a significant improvement in the uncached case.
(imported from commit 4d28dc2e9a02d0602861b165393d90ed18f5f4c8)
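For context, on PostgreSQL Django's __iexact lookup compiles to
UPPER("name") = UPPER(%s), which is exactly the form this expression index can
serve (sketch):

    # get_stream()-style lookup (sketch); generates
    #   WHERE UPPER("zerver_stream"."name"::text) = UPPER(%s)
    stream = Stream.objects.get(name__iexact=stream_name, realm_id=realm_id)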
We need to run the schema migration manually using
"CREATE INDEX CONCURRENTLY upper_subject_idx ON zerver_message ((upper(subject)));"
since we need CONCURRENTLY and I seem to recall that doesn't work with South.
Apparently our existing indexes on subject/topic weren't being used in
our narrowing queries, because we do case-insensitive search.
This substantially improves our database performance around
stream+topic narrows. See before and after query plans below from my
test instance.
humbug=# explain analyze SELECT "zerver_message"."id" FROM "zerver_message" WHERE ("zerver_message"."recipient_id" = 38 AND UPPER(zerver_message.subject) = 'TEST' AND "zerver_message"."id" <= 348495 ) ORDER BY "zerver_message"."id" DESC LIMIT 50;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=13510.61..13510.71 rows=41 width=4) (actual time=32.952..32.958 rows=2 loops=1)
-> Sort (cost=13510.61..13510.71 rows=41 width=4) (actual time=32.946..32.947 rows=2 loops=1)
Sort Key: id
Sort Method: quicksort Memory: 25kB
-> Bitmap Heap Scan on zerver_message (cost=237.99..13509.51 rows=41 width=4) (actual time=2.357..32.912 rows=2 loops=1)
Recheck Cond: (recipient_id = 38)
Filter: ((id <= 348495) AND (upper((subject)::text) = 'TEST'::text))
-> Bitmap Index Scan on zephyr_message_recipient_id (cost=0.00..237.98 rows=8221 width=0) (actual time=1.178..1.178 rows=10354 loops=1)
Index Cond: (recipient_id = 38)
Total runtime: 33.049 ms
(10 rows)
humbug=# explain analyze SELECT "zerver_message"."id" FROM "zerver_message" WHERE ("zerver_message"."recipient_id" = 38 AND UPPER(zerver_message.subject) = 'TEST' AND "zerver_message"."id" <= 348495 ) ORDER BY "zerver_message"."id" DESC LIMIT 50;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=435.11..435.22 rows=41 width=4) (actual time=4.998..4.999 rows=2 loops=1)
-> Sort (cost=435.11..435.22 rows=41 width=4) (actual time=4.997..4.997 rows=2 loops=1)
Sort Key: id
Sort Method: quicksort Memory: 25kB
-> Bitmap Heap Scan on zerver_message (cost=275.63..434.02 rows=41 width=4) (actual time=4.981..4.984 rows=2 loops=1)
Recheck Cond: ((upper((subject)::text) = 'TEST'::text) AND (recipient_id = 38))
Filter: (id <= 348495)
-> BitmapAnd (cost=275.63..275.63 rows=41 width=0) (actual time=4.954..4.954 rows=0 loops=1)
-> Bitmap Index Scan on upper_subject_idx (cost=0.00..37.38 rows=1744 width=0) (actual time=2.972..2.972 rows=27457 loops=1)
Index Cond: (upper((subject)::text) = 'TEST'::text)
-> Bitmap Index Scan on zephyr_message_recipient_id (cost=0.00..237.98 rows=8221 width=0) (actual time=0.855..0.855 rows=10354 loops=1)
Index Cond: (recipient_id = 38)
Total runtime: 5.049 ms
(13 rows)
(imported from commit 1f4815ccb0691053ff8d505149482dbc74153fb3)
In order to support iOS Push Notifications, we need to keep track
of a device's unique APNS Token. These are delivered to our iOS
code after registering for remote notifications.
(imported from commit bbe34483e1380dc20a1c93e3ffa1fcfdb9087e67)
These engagement data will be useful both for making pretty graphs of
how addicted our users are and for allowing us to check whether
a new deployment is actually using the product or not.
This measures "number of minutes during which each user had checked
the app within the previous 15 minutes". It should correctly not
count server-initiated reloads.
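A rough sketch of the aggregation being described (names are hypothetical; the
real management script may differ):

    from datetime import timedelta

    def engaged_minutes(activity_times, start, end):
        # Count minutes m in [start, end) such that the user produced some
        # user-initiated activity in the window (m - 15 minutes, m].
        minutes = 0
        t = start
        while t < end:
            if any(t - timedelta(minutes=15) < a <= t for a in activity_times):
                minutes += 1
            t += timedelta(minutes=1)
        return minutes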
It's possible that we should use something less aggressive than
mousemove; I'm a little torn on that because you really can check the
app for new messages without doing anything active.
This is somewhat tested but there are a few outstanding issues:
* Mobile apps don't report these data. It should be as easy as having
them send in update_active_status queries with new_user_input=true.
* The semantics of this should be better documented (e.g. the
management script should print out the spec above).
(imported from commit ec8b2dc96b180e1951df00490707ae916887178e)
This commit CANNOT be deployed until the previous schema change
([schema] models: add an email_token field to Streams) is on prod.
Before applying this schema change, run the populate-stream-tokens
management command to generate tokens for streams that need them.
(imported from commit 7adc81c8c317ec5d59dd59ba42a4dc1a46174007)
The e-mail forwarder will use this. Set it to nullable temporarily to
accommodate existing streams; later commits will a) provide a script to
give all streams a token, and b) make the field non-null.
Realm administrators will eventually have a UI to regenerate stream
tokens.
(imported from commit a084d0a7012eb9665e4da095cbc46aa9ef354eaa)
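The script in (a) presumably amounts to something like the following (sketch;
helper names are assumptions):

    # populate-stream-tokens (sketch): give every token-less stream a token.
    for stream in Stream.objects.filter(email_token__isnull=True):
        stream.email_token = generate_random_token(32)  # hypothetical helper
        stream.save(update_fields=["email_token"])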
This needs to be deployed to both staging and prod at the same
off-peak time (and the schema migration run).
At the time it is deployed, we need to make a few changes directly in
the database:
(1) UPDATE django_content_type set app_label='zerver' where app_label='zephyr';
(2) UPDATE south_migrationhistory set app_name='zerver' where app_name='zephyr';
(imported from commit eb3fd719571740189514ef0b884738cb30df1320)