This would have made reactivations hard, and wouldn't really have
bought us much additional security.
During deactivation, all of a user's current sessions are deactivated
and the user is marked as inactive. This prevents them from logging in
via the web UI, and makes their API key unusable.
Randomizing their password is probably gratuitous, especially as we
start to allow authorized end-users to deactivate others.
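A minimal sketch of the deactivation flow described above; the helper
name is illustrative, and the session scan assumes Django's
database-backed sessions:

```python
from django.contrib.sessions.models import Session

def do_deactivate_user(user_profile):
    # Mark the account inactive: this blocks web login and API-key auth.
    user_profile.is_active = False
    user_profile.save()

    # End every live session belonging to this user.
    for session in Session.objects.all():
        if str(session.get_decoded().get("_auth_user_id")) == str(user_profile.id):
            session.delete()
```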
(imported from commit c63d23816da0452a1df821f2fa6c1db2761733da)
Prior to this commit, populate_db would crash if you had ever deactivated
a user in your development instance's message log.
(imported from commit 227b2c0226a46ef5680443d3dbf62a13ce961e64)
The JS tests would fail on the second run due to memcache having
dirty data. This change sets a new KEY_PREFIX whenever you launch
a server in test mode.
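A minimal sketch of the mechanism; the helper name and prefix format
are illustrative:

```python
import base64
import os

# Module-level prefix prepended to every memcached key; regenerating it
# effectively gives the process a fresh, empty cache namespace.
KEY_PREFIX = base64.b16encode(os.urandom(8)).decode("ascii") + ":"

def bounce_key_prefix_for_test(test_name):
    # Called whenever a server is launched in test mode, so dirty data
    # left over from a previous run can never be read back.
    global KEY_PREFIX
    KEY_PREFIX = "%s:%s:" % (test_name, base64.b16encode(os.urandom(8)).decode("ascii"))
```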
(imported from commit 4d41e6b79ab3bb7cb4c96b37050f0b1c9abc6b5e)
* This makes bugdown.convert take a `message` parameter. Properties
for parsed mentions are added to the message object by the `Pattern`
for use in do_send_messages (see the sketch after this list).
* Refactor repeated markdown rendering code into `Message` model methods.
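A rough sketch of the shape of the change; the class and attribute
names here are illustrative, not the actual bugdown internals:

```python
class UserMentionPattern:
    # When the markdown engine matches an @-mention, record the mentioned
    # user on the message object so do_send_messages can pick it up after
    # rendering finishes.
    def handle_match(self, message, mentioned_user_profile):
        if not hasattr(message, "mentions_user_ids"):
            message.mentions_user_ids = set()
        message.mentions_user_ids.add(mentioned_user_profile.id)
```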
(imported from commit 4f0ed5570104c0210f984b6de21e9048e2b53fa0)
This uses a new configuration that enables memcache, but we have
to be careful to bounce KEY_PREFIX on every new test, since data
gets rolled back in the databases between tests, but not in
memcached. We had to break up one test to work around UserProfile
objects actually being cached.
(imported from commit f201cf9cd9e0e4c61d3c384fa8d2bbd5134161e8)
After fixing the high numbers of database queries earlier in this
branch, I found that sending 500 RabbitMQ messages for a bulk change
in subscriptions was consuming more than half of the request time (and
then we'd end up with 500 events in a queue). To handle this, we
create a "user X subscribed to these N streams" event, rather than
sending one event for each individual subscription.
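Schematically; the event fields are illustrative, and `publish` stands
in for whatever enqueues a JSON event onto RabbitMQ:

```python
def notify_subscriptions_added(user_profile, streams, publish):
    # One "user X subscribed to these N streams" event, instead of N
    # single-stream events flooding the queue.
    event = {
        "type": "subscription",
        "op": "add",
        "user_id": user_profile.id,
        "subscriptions": [stream.name for stream in streams],
    }
    publish("notify_tornado", event)
```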
(imported from commit 44a34a9fab9b67e9f0da6fee53335d8c5030392b)
This improves the performance of unsubscribing from N streams by more
than a factor of 10 for large N.
(imported from commit a529e6d3ac4452f49c2294908d275280019bbd05)
Previously we only used bulk queries when adding many users to a
single stream, resulting in very slow performance when subscribing
users to large numbers of streams (as happens when setting up a new
MIT realm user).
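The core of the fix, schematically, assuming a `Subscription` model
whose rows link a user to a stream's recipient:

```python
def bulk_add_subscriptions(streams, users):
    # Build every missing (user, stream) row in memory, then write them
    # all in one INSERT via bulk_create, instead of one save() per
    # subscription.
    new_subs = [
        Subscription(user_profile=user, recipient=stream.recipient)
        for user in users
        for stream in streams
    ]
    Subscription.objects.bulk_create(new_subs)
```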
(imported from commit 849fa7b2a1a146c0a9adc1c727c20c9fbfb7b425)
This comment was only ever accurate for prototype versions of
bulk_add_subscriptions prior to it being committed to master.
(imported from commit 89b9dc49423c45553cb6c810d97eea4583ff0f69)
This change removes an "if True:" that was
introduced to make the prior commit a bit more readable.
It also combines two loops, since the second loop is no
longer conditional.
(imported from commit df58f1e5de72d5669f6468fbff54fb62cd22cedb)
The tests in GetUpdatesTest had some callback logic that has
been dead code for at least three months. We now fully exercise
the callback codepath and make sure that the callbacks do happen.
(imported from commit f5d8fbab28ecc34dc81d3d0c29058b66c10f378f)
Compare two user objects by id to prevent false negatives
when the objects are fetched through different paths.
(imported from commit a41f30d27e2b8021600d89f32d6526f48677fd95)
Trying to check whether a Django model object is inside a set of other
Django models is not correct in general, e.g.:
UserProfile.objects.only("id").get(id=17) in set([UserProfile.objects.get(id=17)])
returns False.
This bug appears twice in the function, once when computing which
users were mentioned and again when pushing the flags through to
Tornado.
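The safe pattern is to compare primary keys rather than model
instances; given some iterable `mentioned_users` of UserProfile rows
fetched elsewhere:

```python
user = UserProfile.objects.only("id").get(id=17)

# Deferred instances don't compare equal to full instances, so build a
# set of primary keys instead of a set of model objects:
mentioned_ids = set(u.id for u in mentioned_users)
is_mentioned = user.id in mentioned_ids
```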
(imported from commit b09ed550258f9df2611e1b0a60f87c48a51830f8)
(Before, it had been disabled only on prod/staging, but we are
removing it everywhere, primarily to make tests run faster.
In particular, the call to embedly_client.is_supported() was
expensive, as it went over the Internet.)
(imported from commit ea12bf6e7ae84ce7e8023a0d314ecc4c07cbc0a8)
Whereas `SITE_ROOT` referred to the directory where settings.py is
located, *all* actual uses of `SITE_ROOT` were joining it with `..` to
get the root of the git checkout, a much more useful value.
`DEPLOY_ROOT` now represents the root of the git checkout.
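In settings.py terms, the two values are roughly:

```python
import os

# Directory containing settings.py (the old SITE_ROOT)...
SITE_ROOT = os.path.dirname(os.path.abspath(__file__))
# ...and the root of the git checkout, which is what callers actually want.
DEPLOY_ROOT = os.path.normpath(os.path.join(SITE_ROOT, ".."))
```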
(imported from commit 351437f9a5801e5c7c08a3a97619e863144e5cc8)
Previously, every other update_active_status request for a particular
realm would redo the expensive query that computes the list of active
users in that realm. It turned
out this was because on every update_active_status request, we'd queue
an event that would have the effect of clearing the cache, even if
nobody's status changed. This fixes that issue by only clearing the
cache for a realm if someone's status actually changed (or the 60s
timeout expires).
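Schematically; the model and helper names here are illustrative:

```python
def do_update_active_status(user_profile, status, now, timeout=60):
    presence = UserPresence.objects.get(user_profile=user_profile)
    # Only flush the realm's active-user cache when something actually
    # changed, or when the old entry has aged past the timeout.
    if presence.status != status or (now - presence.timestamp).total_seconds() > timeout:
        presence.status = status
        presence.timestamp = now
        presence.save()
        flush_realm_active_user_cache(user_profile.realm)
```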
(imported from commit d5b829fe255a31c8cecb58458738f1e72a2cf6de)
Our memcached client pickles objects sent to it, which is very
slow. We work around this by sending memcached strings (i.e. JSON
dumps); pickling doesn't slow things down too much if all it is
getting is a string.
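For example; the cache key format is illustrative:

```python
import ujson
from django.core.cache import cache

def cache_set_message_dict(message_id, message_dict):
    # Pickling a plain str is cheap, so serialize to JSON ourselves
    # rather than handing the client a rich Python object.
    cache.set("message_dict:%d" % (message_id,), ujson.dumps(message_dict))

def cache_get_message_dict(message_id):
    raw = cache.get("message_dict:%d" % (message_id,))
    return ujson.loads(raw) if raw is not None else None
```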
(imported from commit 0f0e534182eccb76c5731198e05a9324a1cef316)
This saves something like 15ms on our 1000-message get_old_messages
queries, and will save even more when we start sending JSON dumps into
our memcached system.
We need to install python-ujson on servers and dev instances before
pushing this to prod.
(imported from commit 373690b7c056d00d2299a7588a33f025104bfbca)
We had a few bugs where we were using a raw Django database query to
get a UserProfile object. This might seem OK, but going through
memcached is more efficient, and also guarantees that we get back the
.select_related() version of the object, so that if we later access
related fields like user_profile.realm.domain, we don't end up doing a
second database query as well.
Fixing these should in practice save a substantial number of database
queries on handling update_status_list requests, which happen very
often and access user_profile.realm.domain.
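The pattern being enforced, schematically; the key format is
illustrative, and cache_with_key is the decorator discussed below:

```python
@cache_with_key(lambda email: "user_profile_by_email:%s" % (email,))
def get_user_profile_by_email(email):
    # On a cache miss, return the .select_related() version so that
    # later accesses like user_profile.realm.domain are free.
    return UserProfile.objects.select_related().get(email__iexact=email)
```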
(imported from commit 0a2027da1b5bbc7a4f6c6927aca498530d7a4977)
The refactoring to use the cache_get() method failed to remove the
addition of KEY_PREFIX inside cache_with_key. The result
was that the KEY_PREFIX was being added twice, once by cache_with_key
and again inside cache_get.
This caused pointer saves to not take effect,
because our attempts to update the memcached cache when we save the
UserProfile object were using the correct cache key, but the actual
code reading values out of the cache wasn't.
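After the fix, the prefix is applied in exactly one place, roughly
(KEY_PREFIX being the module-level prefix described above):

```python
from django.core.cache import cache as djcache

def cache_get(key):
    # KEY_PREFIX is prepended here, and only here; cache_with_key now
    # passes the bare key instead of prefixing it a second time.
    return djcache.get(KEY_PREFIX + key)
```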
(imported from commit dcea000833f00622bdc0249488de3b186a7417b2)
Otherwise code paths that use those keys, like get_old_messages, will
incorrectly use the prefix-included keys.
This bug in our KEY_PREFIX system results in our memcached caching for
get_old_messages always missing.
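Roughly, for the multi-key case:

```python
from django.core.cache import cache as djcache

def cache_get_many(keys):
    # Query memcached with the prefixed keys, but hand callers back the
    # bare keys they passed in, so get_old_messages sees the keys it
    # expects.
    values = djcache.get_many([KEY_PREFIX + key for key in keys])
    return dict((key[len(KEY_PREFIX):], value) for key, value in values.items())
```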
(imported from commit 506c13e06d6f266596ead0b381c324c256e576c3)
Wrapping render_to_response never actually worked correctly. On the
login page, mixpanel_token would be missing, but we wouldn't get an
error, because the missing value was interpolated inside double
quotes, so the result was still valid JavaScript.
(imported from commit 820ee42fab8f679983e5a3a4309a2feaf690f20f)
Show user-uploaded avatars on the website for users who have
UserProfile.avatar_source == 'U'. (Continue to show gravatars
for other users; see the sketch after the list below.) This includes
the home page, the visible-phone div, and the settings page.
This fix does NOT address a few things:
* There is no GUI to actually upload user images yet on the website.
* The !gravatar syntax in bugdown will continue to show gravatar images
only.
* We are not changing identicon behavior.
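A sketch of the dispatch described above; the URL formats are
illustrative:

```python
import hashlib

def avatar_url(user_profile):
    if user_profile.avatar_source == "U":
        # User-uploaded avatar served by us.
        return "/user_avatars/%d.png" % (user_profile.id,)
    # Everyone else keeps their gravatar.
    digest = hashlib.md5(user_profile.email.strip().lower().encode("utf-8")).hexdigest()
    return "https://secure.gravatar.com/avatar/%s" % (digest,)
```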
(imported from commit 9f5ac0bbe21ba56528048233aab2430e4dd431aa)
The new file can't be called logging.py because then it would be
annoying to import the system logging module within it.
(imported from commit 71d116e4be98d45b09dda049a43142a82647b727)