This fixes user-visible browser errors caused by trying to read the id
of a message from an empty message list.
One error could be triggered by trying to go to the end of your feed
with the End key during a reload.
Another could be triggered by trying to narrow to a stream or subject
using hotkeys while in an empty narrow.
(imported from commit a0e5456fd3b475aecac6eddd7104772baaf3aeb8)
This also changes the API for GET /json/subscriptions/property to
only retrieve the property for a particular stream instead of
returning all streams and their properties. We weren't using the
all-streams form anywhere, and the change makes the API more consistent.
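As a hedged illustration only (the parameter names and authentication
shown here are assumptions, not necessarily what the handler expects),
the new form asks about one stream at a time:

import requests

# Hypothetical sketch of the narrowed endpoint; the "stream" and
# "property" parameter names and the basic-auth credentials are
# placeholders.
resp = requests.get(
    "https://example.com/json/subscriptions/property",
    params={"stream": "social", "property": "in_home_view"},
    auth=("user@example.com", "API_KEY"),
)
# Previously this returned every subscribed stream with its properties;
# now it returns the requested property for the single named stream.
print(resp.json())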
(imported from commit 2799aec2550fd0558e2282beb19734d60801bdb8)
I noticed that on Chrome, calling narrow.deactivate() actually ended
up calling itself recursively, because the hashchange code didn't
correctly handle the fact that in Chrome, if you set
window.location.hash = '#';
and then read the value back out, you get ''.
(imported from commit 9b5047fbe0e2ac1846e5325d066c72306634c523)
What was happening is that if you un-narrowed immediately after
receiving a message (e.g. because you just sent it), the autoscroll
animation from the zfilt table would still be running after you return
to the home view, resulting in the viewport being scrolled to an
apparently random point in the home view (even though the pointer was
still in the right place).
This cancels the autoscroll animations whenever you do one of:
(1) hashchange (e.g. to go to the settings page)
(2) select a message (covers narrowing/unnarrowing as well as keyboard hotkeys)
(3) mousewheel scroll
since those are basically the cases where we set the viewport
scrolltop directly.
Arguably we should instead somehow detect which scroll events are
triggered by what, and only cancel for scroll events that don't come
from the animation or re-rendering, but that seems hard.
(imported from commit f776021303404c87b36241c733b3d1bcb083163b)
The previous code for adding users to default streams wouldn't do so
if the user didn't have a PreregistrationUser row.
(imported from commit 25f1383f6771319542d07660b29d891368889212)
Now that our plugin is in the Jenkins marketplace thing,
we don't need to have the user laboriously download it
from us and upload it themselves.
(imported from commit 25e9926f7f2314db8f3ea6c00c40514b6fd546c3)
For our primary measures of user engagement, messages sent by bots can
confuse the picture (e.g. a realm could be dead, but not appear to be,
because they didn't bother uninstalling their GitHub and Jenkins
hooks). So it's best to leave those out of our main stats.
(imported from commit 4d0f0e6442093daab164d0ed016fff1d1aa906c7)
When testing locally, this progress bar sort of lies, because the
actual bottleneck is Django→S3.
In prod, our connection to S3 will supposedly be really fast, so this
won't matter.
(imported from commit c9f4b4882cbfdf3bbb8180f1500f35d8481c1f39)
This allows users to drag and drop content onto the compose box, storing
their data in Amazon S3.
New dependencies:
- python-boto
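A minimal sketch of the S3 side using python-boto (the bucket name, key
layout, and credentials below are placeholders, not what the real code
uses):

from boto.s3.connection import S3Connection
from boto.s3.key import Key

# Hedged sketch only: bucket, key path, and credentials are placeholders.
conn = S3Connection("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY")
bucket = conn.get_bucket("example-user-uploads")

def upload_dropped_file(filename, file_data, content_type):
    # Store the dropped file under its own key and return a URL that
    # can be inserted into the compose box.
    key = Key(bucket)
    key.key = "uploads/%s" % (filename,)
    key.set_metadata("Content-Type", content_type)
    key.set_contents_from_string(file_data)
    return key.generate_url(expires_in=3600)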
(imported from commit 339874e483db5c36312c9ceae56db29da6ca0d99)
This creates a new management command, subscribe_new_users, which should be
run as a daemon process. When new users are created, an event is passed to
RabbitMQ including the following data:
* Email
* Full name
* IP address of the person who confirmed registration
* Time of registration confirmation
MailChimp strongly encourages the collection of the last two to enable
responses to abuse requests, and providing more data lowers the chance that
we could get banned from their service if complaints do occur.
To use this commit, you need to install the "postmonkey" module from
PyPI.
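A hedged sketch of what the daemon does with each event (the MailChimp
list id, merge-var names, and event field names are placeholders; only
the four fields above come from the actual event):

from postmonkey import PostMonkey

pm = PostMonkey("MAILCHIMP_API_KEY")

def handle_signup_event(event):
    # Subscribe the new user via MailChimp API 1.3's listSubscribe,
    # which postmonkey proxies; field names here are illustrative.
    pm.listSubscribe(
        id="EXAMPLE_LIST_ID",
        email_address=event["email"],
        merge_vars={
            "NAME": event["full_name"],
            "OPTIN_IP": event["ip_address"],
            "OPTIN_TIME": event["timestamp"],
        },
        double_optin=False,
    )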
(imported from commit 20c628c3fa8bb985aaead85a80ad3b38bf94b9dc)
Apparently it no longer coalesces adjacent blank lines in a code block (which
seems like an improvement). The new test case doesn't have adjacent blank
lines and will work on old and new versions alike (tested on staging).
(imported from commit e49902be041cf1e7d6fbe489685b966cf4eae108)
Django's South migrations support for setting up a new database
doesn't properly handle AUTH_USER_MODEL changing over time. Fix this
by having the initial migration be run with AUTH_USER_MODEL set to the
default value.
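A minimal sketch of the idea, not the actual migration code (the app
label and migration number are made up): pin AUTH_USER_MODEL to
Django's default while the initial migration runs.

from django.conf import settings
from django.core.management import call_command

def run_initial_migration():
    # Run the very first South migration with AUTH_USER_MODEL pinned to
    # Django's default, so a fresh database build doesn't depend on what
    # the user model was later changed to.
    original = settings.AUTH_USER_MODEL
    settings.AUTH_USER_MODEL = "auth.User"
    try:
        call_command("migrate", "zephyr", "0001")
    finally:
        settings.AUTH_USER_MODEL = original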
(imported from commit c373db9edc61f26527c486c741f8e870614600e3)
We accidentally lost this when we did the User/UserProfile merge (this
commit also deletes the old code to add the auth_user index in
do-destroy-rebuild-database).
What follows is mostly just notes for future reference, but when
deploying this change to staging, we should consider running the
following instead of using the migration directly:
CREATE UNIQUE INDEX CONCURRENTLY zephyr_userprofile_email_uniq ON zephyr_userprofile(email);
ALTER TABLE zephyr_userprofile ADD CONSTRAINT zephyr_userprofile_email_uniq UNIQUE USING INDEX zephyr_userprofile_email_uniq;
CREATE INDEX CONCURRENTLY zephyr_userprofile_email ON zephyr_userprofile(email);
But it might be fine to just run the migration directly, since the
ALTER TABLE part seems to hang if there's an open transaction working
on a UserProfile object anyway.
(imported from commit 1bf34ce242de51e97c91c8bab86b6b273e17fb43)
This is preparation for removing the StreamColor model, so we also
arrange for anything that changes the StreamColor model to update the
Subscription model too.
The manual task is to run the copy_colors.py management command after
deployment to each of staging and prod.
(imported from commit 1be7523ca59f5266eb2c4dc2009e31209ed49635)