QTBUG-3467 prevents @font-face fonts from being used when they are declared
with a non-normal face in CSS. To work around this, the desktop applications now
ship the Humbug font themselves, and this commit causes the server to no
longer send the problematic CSS rules to those clients.
We have some duplication insofar as we now have two minified CSS files, but
this is better than conditionally applying the CSS at page runtime.
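As a rough sketch of the idea (the user-agent check, bundle names, and
template path here are hypothetical, not the actual implementation):

    from django.shortcuts import render_to_response

    def home(request):
        # Hypothetical: desktop clients ship the Humbug font themselves,
        # so serve them the minified bundle without the @font-face rules.
        is_desktop_app = "Humbug Desktop" in request.META.get("HTTP_USER_AGENT", "")
        css_bundle = "app-no-fontface.min.css" if is_desktop_app else "app.min.css"
        return render_to_response("home.html", {"css_bundle": css_bundle})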
(imported from commit 9a887f9fb8002d44171d366d1249ebbf21cc9c77)
Showing a user's bots in alphabetical order leads to a mildly
confusing experience: we append a new bot to the end of the table,
but on refresh it jumps to a different position. Since any given
user is unlikely to have zillions of bots, I don't think we need
alphabetical order to help them find old bots.
(imported from commit 4f19dbd7a016e7d867e88248190849dcd52c6d71)
Since in the future we might want requests to add subscriptions to
include things like colors, in_home_view, etc., we're changing the
data format for the add_subscriptions API call to pass each stream as
a dictionary, giving a convenient place to put any added options.
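For illustration, the payload changes along these lines (the extra option
names mentioned above are hypothetical examples of what each dictionary
leaves room for):

    import json

    # Old format: a JSON list of stream names.
    old_payload = {"subscriptions": json.dumps(["social", "engineering"])}

    # New format: each stream is a dictionary, so later API versions can
    # attach per-stream options (e.g. color, in_home_view) to each entry.
    new_payload = {"subscriptions": json.dumps([
        {"name": "social"},
        {"name": "engineering"},
    ])}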
The manual step required here is updating the version of the API
available in AFS for use with the zephyr_mirror.py system.
(imported from commit 364960cca582a0658f0d334668822045c001b92c)
Previously, it always failed because we had hooked up the API endpoint
to a function that didn't exist.
(imported from commit b5269f6d8e385facae4362742fe69a422f6315b7)
This way we can return properties of the streams other than just their
names in future versions of the API without breaking old clients.
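Roughly, the response shape changes like this (illustrative, not the exact
wire format):

    # Before: just a list of stream names.
    old_response = {"result": "success", "streams": ["social", "engineering"]}

    # After: a list of dictionaries, so future API versions can add
    # per-stream properties without breaking old clients.
    new_response = {"result": "success", "streams": [
        {"name": "social"},
        {"name": "engineering"},
    ]}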
The manual step required is to deploy the updated version of
sync-public-streams on zmirror.humbughq.com when we deploy this code
to prod.
(imported from commit 42b86d8daa5729f52c9961dd912c5776a25ab0b4)
I believe this should require no special work on deploy, since some
grepping of logs suggests we are not currently using this API query.
(imported from commit 240086f900c6680cbc90bf6a2f334a9e1f172df6)
Whereas `SITE_ROOT` referred to the directory where settings.py is
located, *all* actual uses of `SITE_ROOT` were joining it with `..` to
get the root of the git checkout, a much more useful value.
`DEPLOY_ROOT` now represents the root of the git checkout.
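In settings.py terms, the change looks roughly like this (a sketch, not the
literal diff; the joined path is just an example):

    import os

    # Before: SITE_ROOT was the directory containing settings.py, and every
    # caller immediately joined it with ".." to reach the checkout root.
    SITE_ROOT = os.path.dirname(os.path.abspath(__file__))
    template_dir = os.path.join(SITE_ROOT, "..", "templates")

    # After: DEPLOY_ROOT is the root of the git checkout itself.
    DEPLOY_ROOT = os.path.normpath(os.path.join(os.path.dirname(__file__), ".."))
    template_dir = os.path.join(DEPLOY_ROOT, "templates")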
(imported from commit 351437f9a5801e5c7c08a3a97619e863144e5cc8)
This improves the performance of the get_old_messages queries when
loading the home page by about 2.5x.
(imported from commit 581840a6ed7b391c9d9fde67f368fce816e567e2)
This is in preparation for having a case in which we query the
database directly to get the message flags, without going through a
UserMessage object.
(imported from commit d5218974680b0c4b028a84f3aae1c8242ceb08ce)
The memcached bindings pickle the objects we send them, which is very
slow. We work around this by sending memcached strings (i.e. JSON
dumps); pickling doesn't slow things down much when all it is
handling is a string.
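A minimal sketch of the idea, assuming a python-memcached-style client (the
key format and helper names are illustrative):

    import json

    def cache_message_dict(cache, message_id, message_dict):
        # Store a JSON string, so the bindings only ever pickle a str.
        cache.set("message_dict:%d" % (message_id,), json.dumps(message_dict))

    def get_cached_message_dict(cache, message_id):
        data = cache.get("message_dict:%d" % (message_id,))
        return json.loads(data) if data is not None else None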
(imported from commit 0f0e534182eccb76c5731198e05a9324a1cef316)
This saves something like 15ms on our 1000 message get_old_messages
queries, and will save even more when we start sending JSON dumps into
our memcached system.
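ujson is essentially a faster drop-in for the json calls we make; for
illustration:

    import ujson

    # ujson.dumps/loads mirror json.dumps/loads for our purposes, just faster.
    message_dict = {"id": 1, "content": "hello"}
    assert ujson.loads(ujson.dumps(message_dict)) == message_dict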
We need to install python-ujson on servers and dev instances before
pushing this to prod.
(imported from commit 373690b7c056d00d2299a7588a33f025104bfbca)
Wrapping render_to_response never actually worked correctly. On the
login page, mixpanel_token would be missing, but we wouldn't get an
error because it was surrounded by double quotes, which meant the
result was still valid JavaScript.
(imported from commit 820ee42fab8f679983e5a3a4309a2feaf690f20f)
Show user-uploaded avatars on the website for users who have
UserProfile.avatar_source == 'U'. (Continue to show gravatars
for other users.) This includes the home page, the visible-phone
div, and the settings page.
This fix does NOT address a few things:
* There is no GUI to actually upload user images yet on the website.
* The !gravatar syntax in bugdown will continue to show gravatar images
only.
* We are not changing identicon behavior.
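A rough sketch of the avatar selection described above (the helper and URL
details are illustrative):

    import hashlib

    def avatar_url(user_profile):
        if user_profile.avatar_source == 'U':
            # User-uploaded avatar; hypothetical helper for the stored URL.
            return uploaded_avatar_url(user_profile)
        # Everyone else keeps their gravatar, keyed on the email address.
        digest = hashlib.md5(user_profile.email.lower()).hexdigest()
        return "https://secure.gravatar.com/avatar/%s" % (digest,)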
(imported from commit 9f5ac0bbe21ba56528048233aab2430e4dd431aa)
Treat "mentioned" messages like "starred" messages for narrowing.
Lots of ugly copy/paste here. There might be opportunity for
some cleanup in places.
(imported from commit e7629890d42643c0000e1cc85422b2a0690f2cc4)
Messages that get sent out when someone subscribes many people to a new
stream each cause individual database queries (and their associated
transactions). With the patched bulk_create (which sets the .id on created
objects), we can reduce this to a constant number of queries on the
Message and UserMessage tables.
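Roughly, the shape of the change (model fields elided; pending_messages and
recipients_for are placeholders):

    # One bulk insert for the Messages, then one for all of the UserMessage
    # rows, instead of a round-trip per recipient.  This relies on the
    # patched bulk_create setting .id on the created objects.
    messages = Message.objects.bulk_create(pending_messages)
    user_messages = []
    for message in messages:
        for user_profile in recipients_for(message):
            user_messages.append(UserMessage(user_profile=user_profile,
                                             message=message))
    UserMessage.objects.bulk_create(user_messages)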
Note for deployment (local dev, staging, and prod): you must be running a
patched Django, found at https://github.com/acrefoot/django/branches, on
the branch acrefoot-bulk_create_with_id-1.5.1
(relevant sha1: ac6d885b811f7e2e34f0db0da217983f7dfd357f).
(imported from commit b0dab9dac784d3ff47751e65bf22c2dddc22edf5)
The only known outstanding bug with this is that it doesn't properly
handle updating a message's highlighting/presence in a narrowed view
(e.g. in theory, a message should disappear if it is edited such that
its subject no longer matches your narrow or it no longer matches your
search). I think I'll just open a trac ticket about that once this is
merged, since it's a little hairy to deal with and kind of a marginal
use case.
Also it's not pretty, but that should be easy to tweak once we get the
framework merged.
Conflicts:
tools/jslint/check-all.js
(imported from commit 2d0e3a440bcd885546bd8e28aff97bf379649950)
Currently the interface for editing messages is limited to a
command-line API tool; it's great for testing with e.g.:
./api/examples/edit-message --message=348135 --content="test $(date +%s)" --site=http://localhost:9991 --subject="test"
The next commit will add a user interface for actually doing the editing.
(imported from commit bdd408cec2946f31c2292e44f724f96ed5938791)
For Beanstalk we need to provide a decorator that converts %40 to @ in the
HTTP basic auth part of the URL. However, if we put our own wrapper around
rest_dispatch, the Django CSRF protection jumps in. This requires us to put
@csrf_exempt on our extra dispatch function, at which point we might as well
have avoided rest_dispatch in the first place and put a @csrf_exempt decorator
on our api_beanstalk_webhook.
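A rough sketch of what the %40-decoding step might look like (illustrative
only, not the actual code):

    import base64
    from functools import wraps
    from urllib import unquote  # Python 2

    def decode_basic_auth_email(view_func):
        # Beanstalk percent-encodes the "@" in the basic-auth username
        # (user%40example.com); undo that before authentication runs.
        @wraps(view_func)
        def wrapper(request, *args, **kwargs):
            auth = request.META.get("HTTP_AUTHORIZATION", "")
            if auth.startswith("Basic "):
                email, api_key = base64.b64decode(auth[len("Basic "):]).split(":", 1)
                credentials = "%s:%s" % (unquote(email), api_key)
                request.META["HTTP_AUTHORIZATION"] = "Basic " + base64.b64encode(credentials)
            return view_func(request, *args, **kwargs)
        return wrapper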
(imported from commit b1f459aad26a5b80cce93f6c859240a53c11cc22)
This, combined with acrefoot's work on sending the notifications in
bulk, resolves trac #1142 -- we do only 10 database queries and the
whole operation completes in about 300ms on my laptop.
(imported from commit 36b5bb836bc6c713903d1ca72e39af87775dc469)
This does the simple thing, which will work as long as the phrase
doesn't have any punctuation and contains the exact text being
searched for. We really want phrase search (check if the lexemes in
the query occur in sequence in the input lexeme string), but Postgres
doesn't support that natively.
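The "simple thing" amounts to ANDing the words of the phrase together in the
tsquery, something like the sketch below (column and function names are
illustrative):

    # Illustrative only: this can also match messages that merely contain
    # all of the words, since there is no native phrase operator to enforce
    # that the lexemes appear in sequence.
    def phrase_tsquery_clause(phrase):
        tsquery = " & ".join(phrase.split())
        return ("search_tsvector @@ to_tsquery(%s)", [tsquery])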
(imported from commit 67bf36883ed21743fcad3f02ad5b319ab188f816)
THE CENSUS OF HUMBUG
Now it happened that at this time Waseem Daher issued a decree that
a census should be made of the whole users of Humbug.
This census -- the first -- took place while Faraone was governor of
MailChimp, and everyone went to be registered, each to his own realm.
So Alice Humbugger set out from the town of MIT for MailChimp in order to
be registered.
(imported from commit fca7714ebffd0b39b9b1337058f67975985f4039)
Previously a default was missing from this function, which resulted in
users being unable to change their settings if they tried to disable
sounds.
(imported from commit 2dae67dcb2e8cb986abb6dee9659be2192993dd9)
It's strictly more functional, and having a single arguments
extraction decorator makes our codebase less confusing.
(imported from commit 2a5618c04b486268a462a24a1481ac030f15eac4)
This decouples from Chrome notifications, which gives us cross-platform
support in at least modern browsers.
We log this action so it's replayable in our message logs.
This implements the model change indicated by the previous schema commit.
(imported from commit b21213cdde54f43670bbb0bf1f607147fc732b38)
Previously, we were fetching Message.objects.select_related() from the
database, even if we actually ended up fetching the message dicts from
memcached and thus not actually using them. Especially in the cached
case, this resulted in a lot of overhead where the Django ORM put
together Message objects with lots of data in them that were never
used. This commit switches the model to only fetch the full message
objects from the database for those messages which are not found in
the memcached caches.
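In outline (the cache helpers and dict builder are illustrative):

    def fetch_message_dicts(message_ids):
        # Serve what we can from memcached; only the misses get the full
        # select_related() treatment from the database.
        message_dicts = cache_get_many(message_ids)      # hypothetical helper
        missing = [mid for mid in message_ids if mid not in message_dicts]
        if missing:
            for message in Message.objects.select_related().filter(id__in=missing):
                message_dicts[message.id] = build_message_dict(message)  # hypothetical
                cache_set(message.id, message_dicts[message.id])         # hypothetical
        return [message_dicts[mid] for mid in message_ids]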
Here are the timings for get_old_messages before this patch was applied:
(cached)
127ms (db: 42ms/2q) /json/get_old_messages (starnine@mit.edu via website)
385ms (db: 105ms/1q) /json/get_old_messages (starnine@mit.edu via website)
(uncached)
315ms (mem: 6ms/41) (db: 90ms/22q) /json/get_old_messages (starnine@mit.edu via website)
507ms (db: 94ms/14q) /json/get_old_messages (starnine@mit.edu via website)
Here are the timings for get_old_messages after this patch was applied:
(cached)
80ms (db: 9ms/2q) /json/get_old_messages (starnine@mit.edu via website)
133ms (db: 4ms/1q) /json/get_old_messages (starnine@mit.edu via website)
(uncached)
230ms (mem: 9ms/41) (db: 48ms/23q) /json/get_old_messages (starnine@mit.edu via website)
385ms (db: 55ms/15q) /json/get_old_messages (starnine@mit.edu via website)
(imported from commit c4748513392a906393314aa7cd41d98a69865411)
This fixes a bug where if you were narrowed to a search and received
a new message that belonged in that search, the message would appear
to have an empty subject and content.
(imported from commit fe1dbf584d3659d57c5b70c7eb45cb22bbc9732f)
This can be useful for debugging what sort of narrow is happening in
addition to the URI decoding bug we're currently experiencing.
(imported from commit 0cb55fec4ac1afa986c747eb79236b4300c9e636)
We HTML-escape the subject in Postgres to avoid a server round-trip.
Unlike the rendered_content, which is already escaped and cached on
zephyr_message, we normally escape subjects client-side. Escaping in
Django would require fetching the messages that match the query,
escaping the subjects, and then making a second query to Postgres to
insert the markup. We could instead fetch the messages with subjects
marked up using non-HTML (some unique string) that is later converted
into the correct markup either in Django or client-side, but then the
escaping problem would just be with some random string instead of
HTML. Since the function is pretty simple, doing the escaping in
Postgres itself is the least painful option.
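Concretely, the escaping is just the three basic replacements, nested so
that '&' is handled first (shown here as the SQL fragment built in Python;
column name illustrative):

    # Escape &, <, and > inside Postgres so the subject markup comes back
    # already HTML-safe; '&' must be replaced before the other two.
    ESCAPED_SUBJECT_SQL = (
        "replace(replace(replace(subject, '&', '&amp;'), "
        "'<', '&lt;'), '>', '&gt;')"
    )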
(imported from commit 004931d8e496697c18650aee97b1a74c55a04cb2)
In the case where we're getting old messages for a narrowed view, the
anchor message id might not actually be in the result set, so there's
no reason to fetch an extra message.
(imported from commit e610d1f2cb95be3ff9fce6dc95e40c560bc5bf84)
This allows users on signup-eligible domains to sign up for Humbug using
Google Apps.
As part of this, we wrap the openid done view in our own code in order to
handle the "Unknown user" error. Therein, we create a PreregistrationUser
and then shunt the user through the rest of the confirmation process,
pre-filling in their name.
(imported from commit 066d9a1021384a6da2662352e62a701451bd6f44)
Having a message ID range significantly improves the query
performance because the number of messages Postgres has to consider
is much smaller.
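Illustratively, this just means bounding the query by message id (names
hypothetical):

    # Postgres only has to consider rows whose ids fall inside the range.
    query = query.filter(message__id__gte=first_id, message__id__lte=last_id)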
(imported from commit 9b007457712f1c1502d526abea1b6fd742bd911d)
On my laptop, this saves about 80 milliseconds per 1000 messages
requested via get_old_messages queries. Since we only have one
memcached process and it does not run with special priority, this
might have significant impact on load during server restarts.
(imported from commit 06ad13f32f4a6d87a0664c96297ef9843f410ac5)
See PEP 328[1] for details. This feature was introduced in Python 2.5 and
will become mandatory in Python 3.
[1]: http://www.python.org/dev/peps/pep-0328
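Concretely, this is the __future__ import at the top of each module:

    from __future__ import absolute_import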
(imported from commit 7444eeba8a08d5f91b94c7921848f2274979bd76)
Amazingly, this saves about 250ms on every get_old_messages query in
my testing on postgres.humbughq.com (previously, we were scanning all
rows in the zephyr_usermessage table rather than using an index).
(imported from commit 566a5ef0bbf3c2198fa9e0b63d34e38ac9c57d18)
Rather than hardcoding e.g. "message__recipient", this works by using
(prefix + "recipient"), where prefix is either "message__" or "".
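For example (sketch):

    # The same narrow-building code can run against Message rows directly
    # (prefix = "") or through UserMessage (prefix = "message__").
    def by_recipient(query, recipient, prefix):
        return query.filter(**{prefix + "recipient": recipient})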
(imported from commit 3a27d6499bc869d6dd389b074cb7d7cf286760aa)
This commit will incorrectly list past-online users as active, a shortcoming that is
addressed in the next commit.
(imported from commit b018767df686f88c0ca939c067c573e4d7cea357)
Boto usually handles this for us, but can't do autodetection like it
normally would because the file path we tell Boto isn't the original name
of the file.
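A hedged sketch of the fix, assuming the autodetection in question is the
Content-Type (the key, local_path, and original_filename variables are
placeholders):

    import mimetypes

    # Guess the type from the *original* filename and pass it explicitly,
    # since boto can't infer it from the temporary path we hand it.
    content_type = mimetypes.guess_type(original_filename)[0] or "application/octet-stream"
    key.set_contents_from_filename(local_path,
                                   headers={"Content-Type": content_type})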
(imported from commit 1ad4b04baf39be8887c86f7238438580651874ff)
This avoids tens of seconds of delay when you invite several people at
once through the web UI.
(imported from commit 75acdbdb04caf62bbb08affc7796330246d8a00e)
This also changes the API for GET /json/subscriptions/property to
only retrieve the property for a particular stream instead of
returning all streams and their properties. We weren't using this
functionality anywhere and the change makes the API more consistent.
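Illustrative shape of the narrowed-down call (parameter names may not be
exact):

    # Before: GET /json/subscriptions/property returned every stream's value.
    # After: the client asks about a single stream and property, e.g.:
    params = {"stream_name": "social", "property": "in_home_view"}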
(imported from commit 2799aec2550fd0558e2282beb19734d60801bdb8)
This allows users to drag and drop content onto the compose box, storing
their data in Amazon S3.
New dependencies:
- python-boto
(imported from commit 339874e483db5c36312c9ceae56db29da6ca0d99)