Also use psql -e (--echo-queries) in scripts that use ‘set -x’, so
errors can be traced to a specific query from the output.
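As a sketch of the pattern (the script body and the query here are
illustrative, not taken from our tools):

#!/usr/bin/env bash
set -x  # trace each shell command as it runs
# -e (--echo-queries) makes psql print each statement before running
# it, so an error in the trace can be matched to the exact query.
psql -e -v ON_ERROR_STOP=1 zulip <<'EOF'
SELECT count(*) FROM zephyr_userprofile;
EOF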
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This should mean that maintaining two Zulip development environments
using the same Git checkout no longer has caching problems keeping
track of the migration status.
This reuses the work we did some time ago to avoid regenerating the
test database unnecessarily.
In addition to being a nice convenience for developers (since any
accumulated test data is still available), this also saves about half
the time consumed in a no-op provision.
Fixes #5182.
This fixes an annoying issue where one tries to rebuild the database,
and it fails because of existing connections.
The one thing that is potentially scary about this implementation is
that it means it's now a lot easier to accidentally drop your
production database by running the wrong script; might be worth adding
a "--force" flag controlling this behavior or something.
Thanks to Nemanja Stanarevic and Neeraj Wahi for prototypes of this
implementation! They did most of the work and testing for this.
This fixes some issues that we've had where commands will fail in
confusing ways after the database is rebuilt because data from before
the database was dropped is still in the memcached cache.
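(A full flush can be as simple as the following; the host, port, and
netcat flags are assumptions about our setup:)

# Drop every memcached key so stale pre-rebuild data can't be read back.
echo 'flush_all' | nc -q 1 localhost 11211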
We do this by creating a new zulip{_test}_base database that only has
the zulip schema and the tsearch_extras extension. We then use that
as a template when creating zulip{_test}.
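Conceptually, the rebuild then looks something like this (the exact
statements in our scripts may differ):

psql -v ON_ERROR_STOP=1 postgres <<'EOF'
-- One-time setup: a pristine template holding just the schema and
-- the extension.
CREATE DATABASE zulip_base;
\c zulip_base
CREATE SCHEMA zulip;
CREATE EXTENSION tsearch_extras SCHEMA zulip;
\c postgres
-- Each rebuild is then a cheap copy of the template.
DROP DATABASE IF EXISTS zulip;
CREATE DATABASE zulip TEMPLATE zulip_base;
EOF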
(imported from commit 8adb4b98410e4042a0187902e89c99561eac8c8f)
This commit must be simultaneously deployed on both staging and
prod0. It also requires completely taking down the app.
To deploy these changes (a shell sketch follows the list), do:
* check out this commit at /root/zulip on postgres0, postgres1, staging, and prod0
* stop the process_fts_updates job on postgres0 and postgres1
* stop the app on staging and prod0
* do a puppet apply on postgres0, postgres1, staging, and prod0
* move the new client certificates into place on staging and app
* move the new server certificates into place on postgres0 and postgres1
* reload the database config on postgres0 and postgres1 (this might
actually require a restart)
* run tools/migrate-db on postgres0 as root
* do a deploy through this commit on staging and prod0
* start the process_fts_updates job on postgres0 and postgres1
* do a puppet apply on nagios
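As a rough shell transcription of the database-host steps (the
supervisor job name and puppet invocation are assumptions, not taken
from our repo):

# On postgres0 and postgres1, before taking down the app:
supervisorctl stop process-fts-updates   # assumed supervisor job name
puppet apply /root/zulip/puppet/site.pp  # assumed manifest path
# ... certificates, config reload, tools/migrate-db, deploy ...
# On postgres0 and postgres1, once the deploy is done:
supervisorctl start process-fts-updates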
(imported from commit 819bdd14326c1425e2d3041a491a8ca3b9716506)
South doesn't properly deal with removing the Django User model, so
this commit redoes our South history to instead start after that
migration has already been applied. This allows us to get rid of some
annoying hacks.
Note that developers and staging will need to run
./manage.py migrate --delete-ghost-migrations zephyr
in order to clear out the old versions of the migrations.
(imported from commit 7f45ea601b809dde33720f76e7dfb0ab348b0e65)
Django's South migrations support for setting up a new database
doesn't properly handle AUTH_USER_MODEL changing over time. Fix this
by having the initial migration be run with AUTH_USER_MODEL set to the
default value.
(imported from commit c373db9edc61f26527c486c741f8e870614600e3)
We accidentally lost this index when we did the User/UserProfile merge (this
commit also deletes the old code to add the auth_user index in
do-destroy-rebuild-database).
What follows is mostly notes for future reference, but when deploying
this change to staging, we should consider running the following SQL
instead of using the migration directly:
CREATE UNIQUE INDEX CONCURRENTLY zephyr_userprofile_email_uniq ON zephyr_userprofile(email);
ALTER TABLE zephyr_userprofile ADD CONSTRAINT zephyr_userprofile_email_uniq UNIQUE USING INDEX zephyr_userprofile_email_uniq;
CREATE INDEX CONCURRENTLY zephyr_userprofile_email ON zephyr_userprofile(email);
But it may be fine to just run the migration directly, since the
ALTER TABLE step seems to hang anyway if there's an open transaction
working on a UserProfile object.
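If we want to check for open transactions first, something like this
works on PostgreSQL 9.2+ (column names differ on older releases):

psql zulip <<'EOF'
-- List sessions with an open transaction, oldest first.
SELECT pid, state, now() - xact_start AS xact_age, query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start;
EOF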
(imported from commit 1bf34ce242de51e97c91c8bab86b6b273e17fb43)
This should substantially improve the repeat-rendering time for pages
with large numbers of tweets, since we don't need a round trip to
twitter.com (which can take around a second) to render tweets
properly.
To deploy this commit properly, one needs to run
./manage.py createcachetable third_party_api_results
(imported from commit 01b528e61f9dde2ee718bdec0490088907b6017e)