This decouples us from Chrome notifications, which gives us cross-platform
support in at least modern browsers.
We log this action so it's replayable in our message logs.
This implements the model change indicated by the previous schema commit.
(imported from commit b21213cdde54f43670bbb0bf1f607147fc732b38)
Previously, we were fetching Message.objects.select_related() from the
database, even if we actually ended up fetching the message dicts from
memcached and thus not actually using them. Especially in the cached
case, this resulted in a lot of overhead where the Django ORM put
together Message objects with lots of data in them that were never
used. This commit switches to fetching full message objects from the
database only for those messages that are not found in the memcached
cache.
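The shape of the change is roughly the sketch below; the function,
cache-key, and serializer names are illustrative, not the actual
Humbug code:

    from django.core.cache import cache

    def fetch_message_dicts(message_ids):
        # Try memcached first; only the misses pay the ORM cost.
        keys = ["message_dict:%d" % (mid,) for mid in message_ids]
        cached = cache.get_many(keys)
        found = dict((int(key.split(":")[1]), value)
                     for key, value in cached.items())
        missing = [mid for mid in message_ids if mid not in found]
        if missing:
            # Message is the Django model discussed above.
            for message in Message.objects.select_related().filter(id__in=missing):
                message_dict = message.to_dict()  # illustrative serializer
                found[message.id] = message_dict
                cache.set("message_dict:%d" % (message.id,), message_dict)
        return [found[mid] for mid in message_ids]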
Here are the timings for get_old_messages before this patch was applied:
(cached)
127ms (db: 42ms/2q) /json/get_old_messages (starnine@mit.edu via website)
385ms (db: 105ms/1q) /json/get_old_messages (starnine@mit.edu via website)
(uncached)
315ms (mem: 6ms/41) (db: 90ms/22q) /json/get_old_messages (starnine@mit.edu via website)
507ms (db: 94ms/14q) /json/get_old_messages (starnine@mit.edu via website)
Here are the timings for get_old_messages after this patch was applied:
(cached)
80ms (db: 9ms/2q) /json/get_old_messages (starnine@mit.edu via website)
133ms (db: 4ms/1q) /json/get_old_messages (starnine@mit.edu via website)
(uncached)
230ms (mem: 9ms/41) (db: 48ms/23q) /json/get_old_messages (starnine@mit.edu via website)
385ms (db: 55ms/15q) /json/get_old_messages (starnine@mit.edu via website)
(imported from commit c4748513392a906393314aa7cd41d98a69865411)
This fixes a bug where if you were narrowed to a search and received
a new message that belonged in that search, the message would appear
to have an empty subject and content.
(imported from commit fe1dbf584d3659d57c5b70c7eb45cb22bbc9732f)
This can be useful for debugging what sort of narrow is happening, in
addition to the URI-decoding bug we're currently experiencing.
(imported from commit 0cb55fec4ac1afa986c747eb79236b4300c9e636)
We HTML-escape the subject in Postgres to avoid a server round-trip.
Unlike the rendered_content, which is already escaped and cached on
zephyr_message, we normally escape subjects client-side. Escaping in
Django would require fetching the messages that match the query,
escaping the subjects, and then making a second query to Postgres to
insert the markup. We could instead fetch the messages with subjects
marked up using non-HTML (some unique string) that is later converted
into the correct markup either in Django or client-side, but then the
escaping problem would just be with some random string instead of
HTML. Since the function is pretty simple, doing the escaping in
Postgres itself is the least painful option.
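As a hedged sketch, one way to do this from Django is to push a nested
replace() chain into the SELECT with extra(); the actual expression
this commit uses may differ:

    # Illustrative only: escape &, <, and > inside Postgres so rows come
    # back with the subject already HTML-safe.
    query = query.extra(select={
        "escaped_subject":
            "replace(replace(replace(subject, '&', '&amp;'),"
            " '<', '&lt;'), '>', '&gt;')",
    })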
(imported from commit 004931d8e496697c18650aee97b1a74c55a04cb2)
In the case where we're getting old messages for a narrowed view, the
anchor message ID might not actually be in the result set, so there's
no reason to fetch an extra message.
(imported from commit e610d1f2cb95be3ff9fce6dc95e40c560bc5bf84)
This allows users on signup-eligible domains to sign up for Humbug using
Google Apps.
As part of this, we wrap the openid done view in our own code in order to
handle the "Unknown user" error. Therein, we create a PreregistrationUser
and then shunt the user through the rest of the confirmation process,
prefilling their name.
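A hedged sketch of the wrapping, assuming django-openid-auth's
login_complete/render_failure hooks; start_prefilled_registration is a
hypothetical stand-in for our side of the flow:

    from django_openid_auth.views import default_render_failure, login_complete

    def openid_failure_handler(request, message, status=403, **kwargs):
        if message == "Unknown user":
            # Hypothetical helper: create a PreregistrationUser and shunt
            # the user into the confirmation flow with their name prefilled.
            return start_prefilled_registration(request)
        return default_render_failure(request, message, status=status, **kwargs)

    def openid_done(request):
        return login_complete(request, render_failure=openid_failure_handler)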
(imported from commit 066d9a1021384a6da2662352e62a701451bd6f44)
Having a message ID range significantly improves the query
performance because the number of messages Postgres has to consider
is much smaller.
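For example (an illustrative queryset, not the actual code), bounding
the id lets Postgres walk a small index slice instead of considering
every message for the recipient:

    query = Message.objects.filter(recipient=recipient,
                                   id__gte=first_message_id,
                                   id__lte=last_message_id)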
(imported from commit 9b007457712f1c1502d526abea1b6fd742bd911d)
On my laptop, this saves about 80 milliseconds per 1000 messages
requested via get_old_messages queries. Since we only have one
memcached process and it does not run with special priority, this
might have a significant impact on load during server restarts.
(imported from commit 06ad13f32f4a6d87a0664c96297ef9843f410ac5)
See PEP 328[1] for details. This feature was introduced in Python 2.5 and
will become mandatory in Python 3.
[1]: http://www.python.org/dev/peps/pep-0328
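Concretely, each module opts in with the __future__ import, after
which intra-package imports must be written explicitly:

    from __future__ import absolute_import

    # "import models" no longer finds a sibling module; spell it out:
    from . import models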
(imported from commit 7444eeba8a08d5f91b94c7921848f2274979bd76)
Amazingly, this saves about 250ms on every get_old_messages query in
my testing on postgres.humbughq.com (previously, we were scanning all
rows in the zephyr_usermessage table rather than using an index).
(imported from commit 566a5ef0bbf3c2198fa9e0b63d34e38ac9c57d18)
This works by using (prefix + "recipient"), where prefix is either
"message__" or "", rather than hardcoding e.g. "message__recipient".
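A minimal sketch of the trick (names illustrative): the same filter
then works on Message rows directly or on UserMessage rows joined
through the message foreign key:

    def filter_by_recipient(query, recipient, prefix):
        # prefix is "" when query is over Message, and "message__" when
        # it is over UserMessage.
        return query.filter(**{prefix + "recipient": recipient})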
(imported from commit 3a27d6499bc869d6dd389b074cb7d7cf286760aa)
This commit will incorrectly list past-online users as active, a
shortcoming that is addressed in the next commit.
(imported from commit b018767df686f88c0ca939c067c573e4d7cea357)
Boto usually handles this for us, but can't do autodetection like it
normally would because the file path we tell Boto isn't the original name
of the file.
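A hedged boto (v2) sketch of passing the Content-Type ourselves,
guessed from the original filename; the bucket and variable names are
made up:

    import mimetypes

    from boto.s3.connection import S3Connection
    from boto.s3.key import Key

    conn = S3Connection(aws_access_key_id, aws_secret_access_key)
    bucket = conn.get_bucket("humbug-user-uploads")  # illustrative name
    key = Key(bucket)
    key.key = target_path  # not the original filename, so no autodetection
    content_type = (mimetypes.guess_type(original_filename)[0]
                    or "application/octet-stream")
    key.set_contents_from_string(file_data,
                                 headers={"Content-Type": content_type})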
(imported from commit 1ad4b04baf39be8887c86f7238438580651874ff)
This avoids tens of seconds of delay when you invite several people at
once through the web UI.
(imported from commit 75acdbdb04caf62bbb08affc7796330246d8a00e)
This also changes the API for GET /json/subscriptions/property to
only retrieve the property for a particular stream instead of
returning all streams and their properties. We weren't using this
functionality anywhere and the change makes the API more consistent.
(imported from commit 2799aec2550fd0558e2282beb19734d60801bdb8)
This allows users to drag and drop content onto the compose box, storing
their data in Amazon S3.
New dependencies:
- python-boto
(imported from commit 339874e483db5c36312c9ceae56db29da6ca0d99)
This creates a new management command, subscribe_new_users, which should be
run as a daemon process. When new users are created, an event is passed to
RabbitMQ including the following data:
* Email
* Full name
* IP address of the person who confirmed registration
* Time of registration confirmation
MailChimp strongly encourages the collection of the last two to enable
responses to abuse requests, and providing more data lowers the chance that
we could get banned from their service if complaints do occur.
To use this commit, you need to install the "postmonkey" module from
PyPI; see the sketch below.
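On the consumer side, the subscription call would look roughly like
this sketch, assuming postmonkey's listSubscribe proxy and MailChimp's
standard OPTIN_IP/OPTIN_TIME merge vars; the list id and event field
names are illustrative:

    from postmonkey import PostMonkey

    pm = PostMonkey(MAILCHIMP_API_KEY)
    pm.listSubscribe(
        id=MAILCHIMP_LIST_ID,
        email_address=event["email"],
        merge_vars={"NAME": event["full_name"],
                    "OPTIN_IP": event["ip_address"],
                    "OPTIN_TIME": event["signup_time"]},
        double_optin=False,
    )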
(imported from commit 20c628c3fa8bb985aaead85a80ad3b38bf94b9dc)