Wrapping render_to_response never actually worked correctly. On the
login page, mixpanel_token would be missing, but we wouldn't get an
error because the value is wrapped in double quotes in the template,
so the result was still valid JavaScript.
(imported from commit 820ee42fab8f679983e5a3a4309a2feaf690f20f)
Show user-uploaded avatars on the website for users who have
UserProfile.avatar_source == 'U'. (Continue to show gravatars
for other users.) This includes the home page, the visible-phone
div, and the settings page.
This fix does NOT address a few things:
* There is no GUI to actually upload user images yet on the website.
* The !gravatar syntax in bugdown will continue to show gravatar images
only.
* We are not changing identicon behavior.
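For reference, a minimal sketch of the selection logic described above;
the helper name and the uploaded-avatar URL scheme are assumptions, not
the actual implementation:

    import hashlib

    def avatar_url(user_profile):
        if user_profile.avatar_source == 'U':
            # Hypothetical layout for user-uploaded avatar storage.
            user_key = hashlib.sha1(user_profile.email.lower()).hexdigest()
            return "https://humbug-user-avatars.s3.amazonaws.com/%s.png" % (user_key,)
        # Everyone else keeps the gravatar (md5 of the lowercased email).
        digest = hashlib.md5(user_profile.email.lower()).hexdigest()
        return "https://secure.gravatar.com/avatar/%s?d=identicon" % (digest,)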
(imported from commit 9f5ac0bbe21ba56528048233aab2430e4dd431aa)
Treat "mentioned" messages like "starred" messages for narrowing.
Lots of ugly copy/paste here. There might be an opportunity for
some cleanup in places.
(imported from commit e7629890d42643c0000e1cc85422b2a0690f2cc4)
Messages that get sent out when someone subscribes many people to a new
stream each cause individual database queries (and their associated
transactions). With the patched bulk_create (which sets the .id on created
objects), we can reduce this to a constant number of queries on the
Message and UserMessage tables.
Note for deployment (local dev, staging, and prod):
you must be running a patched Django, found here: https://github.com/acrefoot/django/branches
Use this branch: acrefoot-bulk_create_with_id-1.5.1
Relevant sha1: ac6d885b811f7e2e34f0db0da217983f7dfd357f
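As an illustration of the pattern the patched bulk_create enables
(a sketch; the field and variable names are assumptions):

    from zephyr.models import Message, UserMessage

    def send_subscribe_messages(new_messages, recipients):
        # One INSERT for the messages; with the patch, .id is set on each object.
        messages = Message.objects.bulk_create(new_messages)
        # One INSERT for all of the UserMessage rows, instead of one per row.
        user_messages = [UserMessage(user_profile=user_profile, message=message)
                         for message in messages
                         for user_profile in recipients]
        UserMessage.objects.bulk_create(user_messages)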
(imported from commit b0dab9dac784d3ff47751e65bf22c2dddc22edf5)
The only known outstanding bug with this is that it doesn't properly
handle the updating of a message's highlighting/presence in a narrowed
view (e.g. in theory, a message should disappear if it is edited such
that its subject doesn't match your narrow or it no longer matches
your search). I think I'll just open a trac ticket about that once
this is merged, since it's a little hairy to deal with and kind of a
marginal use case.
Also it's not pretty, but that should be easy to tweak once we get the
framework merged.
Conflicts:
tools/jslint/check-all.js
(imported from commit 2d0e3a440bcd885546bd8e28aff97bf379649950)
Currently the interface for editing messages is limited to a
command-line API tool; it's great for testing with e.g.:
./api/examples/edit-message --message=348135 --content="test $(date +%s)" --site=http://localhost:9991 --subject="test"
The next commit will add a user interface for actually doing the editing.
(imported from commit bdd408cec2946f31c2292e44f724f96ed5938791)
For Beanstalk we need to provide a decorator that converts %40 to @ in the
HTTP basic auth part of the URL. However, if we put our own wrapper around
rest_dispatch, the Django CSRF protection kicks in. This requires us to put
@csrf_exempt on our extra dispatch function, at which point we might as well
have avoided rest_dispatch in the first place and put a @csrf_exempt decorator
on our api_beanstalk_webhook.
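For reference, a sketch of the shape that wrapper would have taken (not the
actual code); the @csrf_exempt here is exactly what forces the trade-off
described above:

    import base64
    from functools import wraps
    from django.views.decorators.csrf import csrf_exempt

    def beanstalk_basic_auth(view_func):
        # Hypothetical sketch: Beanstalk leaves %40 in the basic auth
        # username, so convert it back to @ before dispatching.
        @csrf_exempt
        @wraps(view_func)
        def wrapper(request, *args, **kwargs):
            auth = request.META.get('HTTP_AUTHORIZATION', '')
            if auth.startswith('Basic '):
                email, _, api_key = base64.b64decode(auth[len('Basic '):]).partition(':')
                email = email.replace('%40', '@')
                request.META['HTTP_AUTHORIZATION'] = \
                    'Basic ' + base64.b64encode('%s:%s' % (email, api_key))
            return view_func(request, *args, **kwargs)
        return wrapper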
(imported from commit b1f459aad26a5b80cce93f6c859240a53c11cc22)
This, combined with acrefoot's work on sending the notifications in
bulk, resolves trac #1142 -- we do only 10 database queries and the
whole operation completes in about 300ms on my laptop.
(imported from commit 36b5bb836bc6c713903d1ca72e39af87775dc469)
This does the simple thing, which will work as long as the phrase
doesn't have any punctuation and contains the exact text being
searched for. We really want phrase search (check if the lexemes in
the query occur in sequence in the input lexeme string), but Postgres
doesn't support that natively.
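Conceptually, the check we want (given already-extracted lexemes) is just:

    def lexemes_in_sequence(query_lexemes, message_lexemes):
        # True if the query's lexemes occur consecutively, in order,
        # somewhere in the message's lexeme list.
        n = len(query_lexemes)
        return any(message_lexemes[i:i + n] == query_lexemes
                   for i in range(len(message_lexemes) - n + 1))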
(imported from commit 67bf36883ed21743fcad3f02ad5b319ab188f816)
THE CENSUS OF HUMBUG
Now it happened that at this time Waseem Daher issued a decree that
a census should be made of the whole users of Humbug.
This census -- the first -- took place while Faraone was governor of
MailChimp, and everyone went to be registered, each to his own realm.
So Alice Humbugger set out from the town of MIT for MailChimp in order to
be registered.
(imported from commit fca7714ebffd0b39b9b1337058f67975985f4039)
Previously a default was missing from this function, which resulted in
users being unable to change their settings if they tried to disable
sounds.
(imported from commit 2dae67dcb2e8cb986abb6dee9659be2192993dd9)
It's strictly more functional, and having a single arguments
extraction decorator makes our codebase less confusing.
(imported from commit 2a5618c04b486268a462a24a1481ac030f15eac4)
This decouples from Chrome notifications, which gives us cross-platform
support in at least modern browsers.
We log this action so it's replayable in our message logs.
This implements the model change indicated by the previous schema commit.
(imported from commit b21213cdde54f43670bbb0bf1f607147fc732b38)
Previously, we were fetching Message.objects.select_related() from the
database, even if we actually ended up fetching the message dicts from
memcached and thus not actually using them. Especially in the cached
case, this resulted in a lot of overhead where the Django ORM put
together Message objects with lots of data in them that were never
used. This commit switches the model to only fetch the full message
objects from the database for those messages which are not found in
the memcached caches.
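The fetch path now looks roughly like the following sketch (the cache key
format and the to_dict() helper are assumptions, not the exact code):

    from django.core.cache import cache
    from zephyr.models import Message

    def get_message_dicts(message_ids):
        keys = ['message_dict:%d' % (message_id,) for message_id in message_ids]
        found = dict((int(key.split(':')[1]), value)
                     for key, value in cache.get_many(keys).items())
        missing = [message_id for message_id in message_ids if message_id not in found]
        # Only the cache misses pay for select_related() and ORM object construction.
        for message in Message.objects.select_related().filter(id__in=missing):
            found[message.id] = message.to_dict()
            cache.set('message_dict:%d' % (message.id,), found[message.id])
        return [found[message_id] for message_id in message_ids]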
Here are the timings for get_old_messages before this patch was applied:
(cached)
127ms (db: 42ms/2q) /json/get_old_messages (starnine@mit.edu via website)
385ms (db: 105ms/1q) /json/get_old_messages (starnine@mit.edu via website)
(uncached)
315ms (mem: 6ms/41) (db: 90ms/22q) /json/get_old_messages (starnine@mit.edu via website)
507ms (db: 94ms/14q) /json/get_old_messages (starnine@mit.edu via website)
Here are the timings for get_old_messages after this patch was applied:
(cached)
80ms (db: 9ms/2q) /json/get_old_messages (starnine@mit.edu via website)
133ms (db: 4ms/1q) /json/get_old_messages (starnine@mit.edu via website)
(uncached)
230ms (mem: 9ms/41) (db: 48ms/23q) /json/get_old_messages (starnine@mit.edu via website)
385ms (db: 55ms/15q) /json/get_old_messages (starnine@mit.edu via website)
(imported from commit c4748513392a906393314aa7cd41d98a69865411)
This fixes a bug where if you were narrowed to a search and received
a new message that belonged in that search, the message would appear
to have an empty subject and content.
(imported from commit fe1dbf584d3659d57c5b70c7eb45cb22bbc9732f)
This can be useful for debugging what sort of narrow is happening, in
addition to the URI decoding bug we're currently experiencing.
(imported from commit 0cb55fec4ac1afa986c747eb79236b4300c9e636)
We HTML-escape the subject in Postgres to avoid a server round-trip.
Unlike the rendered_content, which is already escaped and cached on
zephyr_message, we normally escape subjects client-side. Escaping in
Django would require fetching the messages that match the query,
escaping the subjects, and then making a second query to Postgres to
insert the markup. We could instead fetch the messages with subjects
marked up using non-HTML (some unique string) that is later converted
into the correct markup either in Django or client-side, but then the
escaping problem would just be with some random string instead of
HTML. Since the function is pretty simple, doing the escaping in
Postgres itself is the least painful option.
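The effect is roughly the following (a sketch using Django's .extra();
the actual SQL may differ):

    def with_escaped_subjects(query):
        # Escape &, < and > inside Postgres so the subject can be inserted
        # into the result markup without a second round-trip through Django.
        return query.extra(select={
            'escaped_subject':
                "replace(replace(replace(subject, '&', '&amp;'), '<', '&lt;'), '>', '&gt;')",
        })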
(imported from commit 004931d8e496697c18650aee97b1a74c55a04cb2)
In the case where we're getting old messages for a narrowed view, the
anchor message id might not actually be in the result set so there's
no reason to fetch an extra message.
(imported from commit e610d1f2cb95be3ff9fce6dc95e40c560bc5bf84)
This allows users on signup-eligible domains to sign up for Humbug using
Google Apps.
As part of this, we wrap the OpenID "done" view in our own code in order to
handle the "Unknown user" error. Therein, we create a PreregistrationUser
and then shunt the user through the rest of the confirmation process,
pre-filling their name.
(imported from commit 066d9a1021384a6da2662352e62a701451bd6f44)
Having a message ID range significantly improves the query
performance because the number of messages Postgres has to consider
is much smaller.
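Concretely (a sketch; the names are illustrative):

    def bound_query(query, first_message_id, last_message_id):
        # Explicit id bounds sharply reduce the number of rows Postgres
        # has to consider for the rest of the filter.
        return query.filter(message__id__gte=first_message_id,
                            message__id__lte=last_message_id)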
(imported from commit 9b007457712f1c1502d526abea1b6fd742bd911d)
On my laptop, this saves about 80 milliseconds per 1000 messages
requested via get_old_messages queries. Since we only have one
memcached process and it does not run with special priority, this
might have significant impact on load during server restarts.
(imported from commit 06ad13f32f4a6d87a0664c96297ef9843f410ac5)
See PEP 328[1] for details. This feature was introduced in Python 2.5 and
will become mandatory in Python 3.
[1]: http://www.python.org/dev/peps/pep-0328
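Concretely, this is the future import that turns on the PEP 328 behavior
on Python 2:

    from __future__ import absolute_import

    # With this in effect, "import json" always refers to the top-level json
    # module, never to a sibling json.py; intra-package imports must be
    # written explicitly, e.g. "from . import models".
    import json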
(imported from commit 7444eeba8a08d5f91b94c7921848f2274979bd76)
Amazingly, this saves about 250ms on every get_old_messages query in
my testing on postgres.humbughq.com (previously, we were scanning all
rows in the zephyr_usermessage table rather than using an index).
(imported from commit 566a5ef0bbf3c2198fa9e0b63d34e38ac9c57d18)
This works by using (prefix + "recipient") rather than hardcoding
e.g. "message__recipient", where prefix is either "message__" or "".
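For example (a sketch; the names are illustrative):

    def filter_by_recipient(query, recipient, prefix):
        # prefix is "message__" when filtering UserMessage rows through their
        # related Message, and "" when filtering Message rows directly.
        return query.filter(**{prefix + "recipient": recipient})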
(imported from commit 3a27d6499bc869d6dd389b074cb7d7cf286760aa)
This commit will incorrectly list past-online users as active, a
shortcoming that is addressed in the next commit.
(imported from commit b018767df686f88c0ca939c067c573e4d7cea357)
Boto usually handles this for us, but can't do autodetection like it
normally would because the file path we tell Boto isn't the original name
of the file.
(imported from commit 1ad4b04baf39be8887c86f7238438580651874ff)
This avoids tens of seconds of delay when you invite several people at
once through the web UI.
(imported from commit 75acdbdb04caf62bbb08affc7796330246d8a00e)
This also changes the API for GET /json/subscriptions/property to
only retrieve the property for a particular stream instead of
returning all streams and their properties. We weren't using this
functionality anywhere and the change makes the API more consistent.
(imported from commit 2799aec2550fd0558e2282beb19734d60801bdb8)
This allows users to drag and drop content onto the compose box, storing
their data in Amazon S3.
New dependencies:
- python-boto
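A minimal sketch of the upload path with python-boto (the bucket name, key
layout, and credential handling are assumptions):

    from boto.s3.connection import S3Connection
    from boto.s3.key import Key

    def upload_to_s3(access_key, secret_key, bucket_name, path, data, content_type):
        conn = S3Connection(access_key, secret_key)
        bucket = conn.get_bucket(bucket_name)
        key = Key(bucket)
        key.key = path
        # Set Content-Type explicitly rather than relying on autodetection.
        key.set_contents_from_string(data, headers={'Content-Type': content_type})
        return key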
(imported from commit 339874e483db5c36312c9ceae56db29da6ca0d99)
This creates a new management command, subscribe_new_users, which should be
run as a daemon process. When new users are created, an event is passed to
RabbitMQ including the following data:
* Email
* Full name
* IP address of the person who confirmed registration
* Time of registration confirmation
MailChimp strongly encourages the collection of the last two to enable
responses to abuse requests, and providing more data lowers the chance that
we could get banned from their service if complaints do occur.
To use this commit, you need to install the "postmonkey" module from
PyPI.
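For reference, the MailChimp call via postmonkey looks roughly like this
(the list id and merge var names are assumptions):

    from postmonkey import PostMonkey

    def subscribe_to_mailchimp(api_key, list_id, email, full_name, ip_address, signup_time):
        pm = PostMonkey(api_key)
        # listSubscribe is the MailChimp API 1.3 method wrapped by postmonkey.
        pm.listSubscribe(id=list_id,
                         email_address=email,
                         merge_vars={'NAME': full_name,
                                     'OPTIN_IP': ip_address,
                                     'OPTIN_TIME': signup_time},
                         double_optin=False)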
(imported from commit 20c628c3fa8bb985aaead85a80ad3b38bf94b9dc)
Beanstalk integration uses webhooks that use HTTP basic auth to
authenticate the sending user.
(imported from commit bd65f5b2d052a3c1eb04da64d055a3640a384892)
I think the only thing needed to deploy this commit is to run
`generate-fixtures --force` on developer laptops.
(imported from commit 34916341435fef0875b5a2c7f53c2f5606cd16cd)
When this is deployed to staging, we need to run
./manage.py logout_all_users --realm=humbughq.com
When this is deployed to prod, we need to run
./manage.py logout_all_users
(imported from commit d6c6ea4b1c347f3d9122742db23c7b67767a7349)