* This makes bugdown.convert take a `message` parameter. Properties
for parsed mentions are added to the message object by the `Pattern`
for use in do_send_messages (see the sketch below).
* Refactor repeated markdown rendering code into `Message` model methods.
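A minimal sketch of the kind of pattern this describes, using
python-markdown's legacy `Pattern` API; the mention syntax, the class
name, and the `mentioned_names` attribute are illustrative rather than
the actual implementation:

    import xml.etree.ElementTree as etree
    from markdown.inlinepatterns import Pattern

    MENTION_RE = r"@\*\*(?P<name>[^*]+)\*\*"  # assumed mention syntax

    class MentionPattern(Pattern):
        def __init__(self, pattern, md, message):
            super(MentionPattern, self).__init__(pattern, md)
            self.message = message  # the Message passed to bugdown.convert

        def handleMatch(self, m):
            name = m.group("name")
            # Stash the mention on the message so do_send_messages can
            # act on it after rendering.
            if not hasattr(self.message, "mentioned_names"):
                self.message.mentioned_names = set()
            self.message.mentioned_names.add(name)
            el = etree.Element("span")
            el.set("class", "user-mention")
            el.text = "@" + name
            return el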
(imported from commit 4f0ed5570104c0210f984b6de21e9048e2b53fa0)
Previously, we only used bulk queries when adding many users to a
single stream, resulting in very slow performance when subscribing
users to large numbers of streams (as happens when setting up a new
MIT realm user).
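The fix amounts to a pattern like the following sketch, where a single
bulk_create replaces one INSERT per stream (the models are the app's,
simplified; `recipient_id` stands in for however streams map to
recipients):

    from django.db import transaction

    def bulk_add_subscriptions(user_profile, streams):
        recipient_ids = [stream.recipient_id for stream in streams]
        # One query to find the subscriptions that already exist.
        existing = set(Subscription.objects.filter(
            user_profile=user_profile,
            recipient_id__in=recipient_ids,
        ).values_list("recipient_id", flat=True))
        # One query to insert all of the missing rows at once.
        with transaction.atomic():
            Subscription.objects.bulk_create([
                Subscription(user_profile=user_profile, recipient_id=rid)
                for rid in recipient_ids if rid not in existing])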
(imported from commit 849fa7b2a1a146c0a9adc1c727c20c9fbfb7b425)
This improves the performance of the get_old_messages queries when
loading the home page by about 2.5x.
(imported from commit 581840a6ed7b391c9d9fde67f368fce816e567e2)
This is in preparation for having a case in which we query the
database directly to get the message flags, without going through a
UserMessage object.
(imported from commit d5218974680b0c4b028a84f3aae1c8242ceb08ce)
This decreases the average size of the message dicts in memcached by
about half, without any significant change in the overall performance
of the query. Since these message dicts are a significant fraction of
what we put in memcached, this seems like a worthwhile optimization.
(imported from commit 3896328074aa4344b8ac7c7ba7685f0a167ec7ad)
Our memcached client pickles the objects we send it, which is very
slow. We work around this by sending memcached strings (i.e., JSON
dumps); pickling doesn't slow things down much when all it is handling
is a string.
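Roughly, the pattern is as follows (client setup, key names, and
timeouts are illustrative):

    import json
    import memcache

    cache = memcache.Client(["127.0.0.1:11211"])

    def cache_set_json(key, obj, timeout=3600):
        # The client still pickles what we hand it, but pickling a str
        # is nearly free compared with pickling a nested dict.
        cache.set(key, json.dumps(obj), time=timeout)

    def cache_get_json(key):
        val = cache.get(key)
        return json.loads(val) if val is not None else None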
(imported from commit 0f0e534182eccb76c5731198e05a9324a1cef316)
This saves something like 15ms on our 1000 message get_old_messages
queries, and will save even more when we start sending JSON dumps into
our memcached system.
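ujson is close to a drop-in replacement for the stdlib json calls
involved; a sketch of the swap:

    import ujson

    # Same call shape as json.dumps/json.loads for plain dicts and
    # lists, but substantially faster on large message payloads.
    payload = ujson.dumps({"id": 42, "content": "hello"})
    assert ujson.loads(payload)["id"] == 42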
We need to install python-ujson on servers and dev instances before
pushing this to prod.
(imported from commit 373690b7c056d00d2299a7588a33f025104bfbca)
Show user-uploaded avatars on the website for users who have
UserProfile.avatar_source == 'U'. (Continue to show gravatars
for other users.) This includes the home page, the visible-phone
div, and the settings page.
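A minimal sketch of the dispatch this implies; the upload path scheme
is hypothetical, while the gravatar URL format is the standard one:

    import hashlib

    def avatar_url(user_profile):
        if user_profile.avatar_source == 'U':
            # User-uploaded avatar; this path scheme is invented.
            return "/user_avatars/%d.png" % (user_profile.id,)
        # Everyone else keeps getting gravatars.
        digest = hashlib.md5(
            user_profile.email.strip().lower().encode("utf-8")).hexdigest()
        return "https://secure.gravatar.com/avatar/%s?d=identicon" % (digest,)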
This fix does NOT address a few things:
* There is no GUI to actually upload user images yet on the website.
* The !gravatar syntax in bugdown will continue to show gravatar images
only.
* We are not changing identicon behavior.
(imported from commit 9f5ac0bbe21ba56528048233aab2430e4dd431aa)
I've tried to do this in a way that's scalable and easily configured,
so that we can add new such filters for customers on-demand without
needing to add anything other than a bit of configuration.
Once we're confident in the arguments to this system, I think we'll
want to move the regular expression lists into the database so that we
don't need to do a prod push to modify the regular expression lists.
The initial set of regular expressions is as follows (a configuration
sketch appears after the list):
(1) Linkifying e.g. "trac #224" in the Humbug realm, so we're exercising this code.
(2) The various ticket number things CUSTOMER7 uses for the CUSTOMER7 realm.
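A sketch of the shape of that configuration; the data structure and
the URL are invented for illustration, and the CUSTOMER7 patterns are
omitted:

    import re

    # Map realm domain -> list of (regex, URL template) pairs.
    REALM_FILTERS = {
        "humbughq.com": [
            (r"trac #(?P<id>\d+)", "https://trac.humbughq.com/ticket/%(id)s"),
        ],
    }

    def linkify(realm_domain, text):
        for pattern, url_template in REALM_FILTERS.get(realm_domain, []):
            text = re.sub(
                pattern,
                lambda m: '<a href="%s">%s</a>' % (
                    url_template % m.groupdict(), m.group(0)),
                text)
        return text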
(imported from commit 992b0937b9012c15a7c2f585eb0aacb221c52e01)
We also record the historical edits to the message in this JSON format:
[{"prev_content": "new test message 14", "timestamp": 1369157249},
{"prev_content": "new test message 13", "timestamp": 1369157118}]
but we don't actually do anything with this information yet.
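Recording an edit in that format might look like this sketch (the
field names are assumed; note the newest entry goes first, as in the
example above):

    import json
    import time

    def record_edit(message, new_content):
        history = (json.loads(message.edit_history)
                   if message.edit_history else [])
        history.insert(0, {"prev_content": message.content,
                           "timestamp": int(time.time())})
        message.edit_history = json.dumps(history)
        message.content = new_content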
(imported from commit 2d5ca449b87b33ad035ab0e076a22e150c8e7267)
This decouples us from Chrome-specific notifications, which gives us
cross-platform support in at least modern browsers.
We log this action so it's replayable in our message logs.
This implements the model change indicated by the previous schema commit.
(imported from commit b21213cdde54f43670bbb0bf1f607147fc732b38)
Previously, we were fetching Message.objects.select_related() from the
database, even if we actually ended up fetching the message dicts from
memcached and thus not actually using them. Especially in the cached
case, this resulted in a lot of overhead where the Django ORM put
together Message objects with lots of data in them that were never
used. This commit switches the model to only fetch the full message
objects from the database for those messages which are not found in
the memcached caches.
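In outline, the new fetch path does something like the following
sketch; the key format, to_dict method, and cache client are
stand-ins:

    def fetch_message_dicts(message_ids):
        keys = ["message:%d" % (mid,) for mid in message_ids]
        cached = cache.get_multi(keys)  # one memcached round trip

        missed = [mid for mid in message_ids
                  if "message:%d" % (mid,) not in cached]
        if missed:
            # Pay the ORM cost only for the messages we didn't find.
            for message in Message.objects.select_related().filter(
                    id__in=missed):
                mdict = message.to_dict()
                key = "message:%d" % (message.id,)
                cached[key] = mdict
                cache.set(key, mdict)

        return [cached["message:%d" % (mid,)] for mid in message_ids]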
Here are the timings for get_old_messages before this patch was applied:
(cached)
127ms (db: 42ms/2q) /json/get_old_messages (starnine@mit.edu via website)
385ms (db: 105ms/1q) /json/get_old_messages (starnine@mit.edu via website)
(uncached)
315ms (mem: 6ms/41) (db: 90ms/22q) /json/get_old_messages (starnine@mit.edu via website)
507ms (db: 94ms/14q) /json/get_old_messages (starnine@mit.edu via website)
Here are the timings for get_old_messages after this patch was applied:
(cached)
80ms (db: 9ms/2q) /json/get_old_messages (starnine@mit.edu via website)
133ms (db: 4ms/1q) /json/get_old_messages (starnine@mit.edu via website)
(uncached)
230ms (mem: 9ms/41) (db: 48ms/23q) /json/get_old_messages (starnine@mit.edu via website)
385ms (db: 55ms/15q) /json/get_old_messages (starnine@mit.edu via website)
(imported from commit c4748513392a906393314aa7cd41d98a69865411)
On my laptop, this saves about 80 milliseconds per 1000 messages
requested via get_old_messages queries. Since we only have one
memcached process and it does not run with special priority, this
might have significant impact on load during server restarts.
(imported from commit 06ad13f32f4a6d87a0664c96297ef9843f410ac5)
It's subtle, but the slice was in the wrong place and wasn't
actually truncating the stream name at all, so the client and
server disagreed about where the tutorial messages should go.
(It might be the case that we should accept the tutorial stream
name from the client directly, rather than computing it in two
places.)
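Schematically, the bug was of this shape (the names and the length
limit are invented):

    MAX_STREAM_NAME_LENGTH = 30  # hypothetical limit

    def tutorial_stream_name(username):
        # Buggy: the slice applied only to the short prefix, which is
        # already under the limit, so nothing was ever truncated:
        #   "tutorial-"[:MAX_STREAM_NAME_LENGTH] + username
        # Fixed: truncate the fully assembled name:
        return ("tutorial-" + username)[:MAX_STREAM_NAME_LENGTH]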
(imported from commit 8273223f182e8ad36eaea1cbf75e1426fcfdfbab)
See PEP 328[1] for details. This feature was introduced in Python 2.5 and
will become mandatory in Python 3.
[1]: http://www.python.org/dev/peps/pep-0328
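Concretely, each module gains the future import, after which bare
imports are always absolute:

    from __future__ import absolute_import

    # "import json" now always means the stdlib module, never a sibling
    # json.py in the same package; intra-package imports must be spelled
    # explicitly, e.g. "from . import bugdown".
    import json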
(imported from commit 7444eeba8a08d5f91b94c7921848f2274979bd76)
This commit will incorrectly list users who were previously online as
active, a shortcoming that is addressed in the next commit.
(imported from commit b018767df686f88c0ca939c067c573e4d7cea357)
This avoids tens of seconds of delay when you invite several people at
once through the web UI.
(imported from commit 75acdbdb04caf62bbb08affc7796330246d8a00e)
We accidentally lost this when we did the User/UserProfile merge (this
commit also deletes the old code to add the auth_user index in
do-destroy-rebuild-database).
What follows is mostly notes for future reference: when deploying
this change to staging, we should consider running the following
instead of using the migration directly:
CREATE UNIQUE INDEX CONCURRENTLY zephyr_userprofile_email_uniq ON zephyr_userprofile(email);
ALTER TABLE zephyr_userprofile ADD CONSTRAINT zephyr_userprofile_email_uniq UNIQUE USING INDEX zephyr_userprofile_email_uniq;
CREATE INDEX CONCURRENTLY zephyr_userprofile_email ON zephyr_userprofile(email);
But I suspect it's fine to just run the migration directly, since the
ALTER TABLE part seems to hang if there's an open transaction working
on a UserProfile object anyway.
(imported from commit 1bf34ce242de51e97c91c8bab86b6b273e17fb43)
This is preparation for removing the StreamColor model, so we also
set things up so that anything changing the StreamColor model updates
the Subscription model too.
The manual task is to run the copy_colors.py management command after
deployment to each of staging and prod.
(imported from commit 1be7523ca59f5266eb2c4dc2009e31209ed49635)
I think all one needs to do to deploy this commit is run
`generate-fixtures --force` on developer laptops.
(imported from commit 34916341435fef0875b5a2c7f53c2f5606cd16cd)
And keep the fields updated: copy them on UserProfile creation,
update the UserProfile object whenever we update the User object, and
add management commands to (1) initially ensure that the fields match
and (2) check that they still match (i.e., that the updating code is
working).
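One way to wire up the 'keep them updated' half is a post-save hook
on User; a sketch, with the mirrored field list abbreviated:

    from django.contrib.auth.models import User
    from django.db.models.signals import post_save
    from django.dispatch import receiver

    @receiver(post_save, sender=User)
    def copy_user_to_userprofile(sender, instance, **kwargs):
        # Mirror the duplicated fields onto the linked UserProfile.
        UserProfile.objects.filter(user=instance).update(
            email=instance.email,
            is_active=instance.is_active)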
The copy_user_to_userprofile migration needs to be run after this is
deployed to prod.
(imported from commit 0a598d2e10b1a7a2f5c67dd5140ea4bb8e1ec0b8)
Our testing code had a number of places where it was using User
objects where it should have been using UserProfile objects --
e.g. using a User id as the type_id in a Recipient table. This commit
addresses this in the filter_by_subscriptions code paths.
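That is, for a personal Recipient the type_id must be the
UserProfile's id, not the User's; schematically:

    # Wrong: points the recipient at the auth User row.
    Recipient.objects.create(type=Recipient.PERSONAL, type_id=user.id)

    # Right: points it at the UserProfile row.
    Recipient.objects.create(type=Recipient.PERSONAL,
                             type_id=user_profile.id)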
(imported from commit e305bc8e2a8bdbfd04c93c59d56955e7971552af)
The previous situation was bad for two reasons:
(1) It had a lot of copies of the code, some of them missing pieces:
UserProfile.objects.get(user__email__iexact=foo)
This was going to be particularly inconvenient since we are dropping
the user__ part of that lookup.
(2) It didn't take advantage of our memcached caching (see the sketch
below).
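The consolidated helper looks roughly like this sketch; the cache
client and key scheme are assumed, and the lookup no longer goes
through user__:

    def get_user_profile_by_email(email):
        key = "user_profile_by_email:%s" % (email.strip().lower(),)
        user_profile = cache.get(key)
        if user_profile is None:
            user_profile = UserProfile.objects.select_related().get(
                email__iexact=email.strip())
            cache.set(key, user_profile)
        return user_profile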
(imported from commit 2325795f288a7cf306cdae191f5d3080aac0651a)