This is intended to be used for logging out users during our deployment of
the UserProfile merge, but it could be useful for other things too.
(imported from commit bfe896d854f997f7a4d06e5bc0f19ec5b1aa5e69)
Previously, we weren't clearing users out of memcached (we just killed
them in the database), so users were not actually logged out until up to
an hour after we deactivated them, when the memcached caches expired.
(imported from commit 0f0a2f70e003c184106c73b22b876f57c1ef3371)
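A minimal sketch of the kind of cache invalidation this commit is about, assuming Django's cache framework in front of memcached and an illustrative cache key scheme (the real key names may differ):

    from django.core.cache import cache

    def deactivate_user(user):
        """Deactivate the account and clear its cached copy, so the user is
        logged out immediately instead of when memcached expires the entry."""
        user.is_active = False
        user.set_unusable_password()
        user.save()
        # Illustrative key name; without this delete, the stale cached User
        # keeps serving authenticated requests for up to an hour.
        cache.delete("user_by_email:%s" % (user.email.lower(),))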
And keep the fields updated: copy them on UserProfile creation, update
the UserProfile object whenever we update the User object, and add
management commands to (1) initially ensure that the two match and (2)
check that they still match (i.e. that the updating code is working).
The copy_user_to_userprofile migration needs to be run after this is
deployed to prod.
(imported from commit 0a598d2e10b1a7a2f5c67dd5140ea4bb8e1ec0b8)
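A rough sketch of what the checking command could look like, assuming the models live in zephyr.models and that fields like email and is_active are the mirrored ones (the actual field list may differ):

    from django.core.management.base import BaseCommand

    from zephyr.models import UserProfile

    # Illustrative list of fields assumed to be duplicated on both models.
    MIRRORED_FIELDS = ["email", "is_active"]

    class Command(BaseCommand):
        help = "Check that the User and UserProfile copies of mirrored fields match."

        def handle(self, *args, **options):
            mismatches = 0
            for profile in UserProfile.objects.select_related("user").all():
                for field in MIRRORED_FIELDS:
                    if getattr(profile, field) != getattr(profile.user, field):
                        mismatches += 1
                        self.stderr.write("Mismatch on %s for %s\n"
                                          % (field, profile.user.email))
            if mismatches == 0:
                self.stdout.write("All mirrored fields match.\n")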
We were incorrectly using User objects, rather than UserProfile
objects, for fetching Recipient objects for generated messages.
(imported from commit c3dfe52f4e0a68400e22ca49293b5bf2d6986402)
This way we're not directly manipulating user.password() in random
management commands.
(imported from commit e6e32ae422015ab55184d5d8111148793a8aca36)
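For illustration, the kind of helper this points toward -- a single, hypothetical do_change_password function that management commands call instead of touching the password field themselves:

    def do_change_password(user, password, commit=True):
        """One place that knows how to update a password; set_password takes
        care of hashing, so callers never handle the raw hash."""
        user.set_password(password)
        if commit:
            user.save()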
The previous situation was bad for two reasons:
(1) It had a lot of copies of the code, some of them missing pieces:
UserProfile.objects.get(user__email__iexact=foo)
This was in particular going to be inconvenient since we are dropping
the user__ part of that lookup.
(2) It didn't take advantage of our memcached caching.
(imported from commit 2325795f288a7cf306cdae191f5d3080aac0651a)
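A minimal sketch of the consolidated, cached lookup this describes, assuming Django's cache framework in front of memcached, a zephyr.models import path, and an illustrative key/timeout:

    from django.core.cache import cache

    from zephyr.models import UserProfile

    def get_user_profile_by_email(email):
        """Single cached implementation of the lookup that used to be
        copy-pasted all over as UserProfile.objects.get(user__email__iexact=...)."""
        key = "user_profile_by_email:%s" % (email.lower(),)
        profile = cache.get(key)
        if profile is None:
            profile = UserProfile.objects.select_related().get(user__email__iexact=email)
            cache.set(key, profile, 3600)
        return profile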
Only a few of them took a User as an argument anyway.
This is preparatory work for merging the User and UserProfile models.
(imported from commit 65b2bd2453597531bcf135ccf24d2a4615cd0d2a)
The new nginx configuration file needs to be copied to
/etc/nginx/humbug-include and nginx needs to be restarted when this
commit is deployed.
(imported from commit 6c43f3c2c7a6acee6a852c672c96a38bda01dd0d)
This version has several limitations that are addressed in later
commits in this series.
(imported from commit 5d452b312d4204935059c4d602af0b9a8be1a009)
When we added rabbitmq usage within Tornado, we inadvertently caused
the Tornado ioloop to be initialized in runtornado.py's imports,
before we overwrote the _poll method. The end result was that we
weren't running our instrumented Tornado poll function.
Fix this by moving that code to its own file which we import at the
top of runtornado.py, and adding comments documenting the situation so
we don't break this in some future import reorganization.
(imported from commit 016717476f10566fef4ed2b656f29f865d2084db)
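The actual change patches the ioloop's _poll method, but the import-ordering principle it relies on can be sketched generically: put the monkey-patch in its own module and import that module before anything that constructs the event loop. The wrapper below (around select.select, purely for illustration) stands in for the real instrumentation:

    # instrumented_poll.py -- must be imported before tornado.ioloop is used,
    # because patching after the loop has been constructed is too late.
    import logging
    import select
    import time

    _orig_select = select.select

    def _timed_select(rlist, wlist, xlist, timeout=None):
        start = time.time()
        try:
            return _orig_select(rlist, wlist, xlist, timeout)
        finally:
            logging.debug("event loop blocked for %.1fms",
                          (time.time() - start) * 1000)

    select.select = _timed_select

runtornado.py would then start with "import instrumented_poll" ahead of its other imports, with a comment explaining why the ordering matters.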
This saves 2 database queries per user in the huddle when sending the
first message to a particular huddle.
(imported from commit f71aa32df846fb4b82651a93ff9608087ffcaa5a)
Also improve the display of elapsed times -- we now display short times
in milliseconds for easier reading.
(imported from commit 08e1e7e6acbef48453080864946f7602a3395e7c)
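For illustration, a tiny formatter along these lines (the real output format may differ):

    def format_timedelta(seconds):
        """Show sub-second durations in milliseconds so they are easier to read."""
        if seconds < 1:
            return "%.1fms" % (seconds * 1000,)
        return "%.1fs" % (seconds,)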
Previously we had around 4 copies of the logic for deciding whether we
should publish data via a SimpleQueueClient queue, a TornadoQueueClient
queue, or handle the operation directly, which resulted in them getting
out of sync and buggy (see e.g. the previous commit).
We need to add a lock around adding things to the queue to work around
a bug with pika's BlockingConnection.
I should note that the previous logic in some places had a bunch of
tests of the form "elif settings.TEST_SUITE" for doing the work that
would have been done by the queue processor directly; these should
have just been "else" clauses -- since we generally want that code to
run in development environments whether or not the test suite is
currently running.
(imported from commit 16bdbed4fff04b1bda6fde3b16bee7359917720b)
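A condensed sketch of the consolidated dispatch logic, with illustrative names (queue_json_publish, settings.USING_RABBITMQ, SimpleQueueClient) that may not match the code exactly:

    import threading

    from django.conf import settings

    # Serialize publishes to work around the pika BlockingConnection bug
    # mentioned above.
    queue_lock = threading.Lock()
    queue_client = None

    def queue_json_publish(queue_name, event, processor):
        """The one place that decides between publishing to rabbitmq and
        handling the event directly (e.g. in development without rabbitmq)."""
        global queue_client
        with queue_lock:
            if settings.USING_RABBITMQ:
                if queue_client is None:
                    from zephyr.lib.queue import SimpleQueueClient
                    queue_client = SimpleQueueClient()
                queue_client.json_publish(queue_name, event)
            else:
                # Not an "elif settings.TEST_SUITE": this should run in any
                # environment that is not using rabbitmq.
                processor(event)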
They are more meaningful this way -- the fact that your bots that never
log in, or your inactive users, don't have colored streams shouldn't
impact that statistic.
(imported from commit b39debda338cbbad06957bc969b42862a888026a)
But discard any changes the Django response middleware may have made
to anything other than the content.
This allows us to, for example, output our nice database query logging
for get_updates requests.
(imported from commit e1d2fd38ceb4d73ff50bdfaad7c72ddb24d0fe16)
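Roughly, the idea is to run each middleware's process_response hook over the response we are about to send, and then keep only the content it returns; a hedged sketch, with the middleware list and attribute handling as assumptions:

    def apply_response_middleware(request, response, middleware_chain):
        """Run Django response middleware (e.g. our database query logging)
        over a Tornado-produced response, but discard any changes it makes
        other than to the content."""
        processed = response
        for middleware in reversed(middleware_chain):
            if hasattr(middleware, "process_response"):
                processed = middleware.process_response(request, processed)
        # Only the body survives; headers etc. stay as Tornado built them.
        response.content = processed.content
        return response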
This is preparatory for running the Django response middleware on
our Tornado responses.
(imported from commit 05da8ea9cb663a928b2f98a928f3992aae4f067c)
This will mainly be useful in the event system branch, where we want
to actually send a response from a file other than tornadoviews.py.
(imported from commit b7ae9bb9b062215ab44eb5f0a3a72d6baeee1d07)
Since we flush memcached when we do a server restart, the flurry of
get_updates requests that flies in afterwards consists entirely of cache
misses for the User/UserProfile objects, so Tornado ends up spending
around 70ms per get_updates request rather than the usual 1-2ms.
So this should substantially improve our Tornado performance around
server restarts.
(imported from commit 07b8126bdfd4ff14e4c3362f9eda1fe5fd571c5b)
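A rough sketch of the cache-filling idea, with illustrative key names and a zephyr.models import path:

    from django.core.cache import cache

    from zephyr.models import UserProfile

    def fill_user_caches():
        """Warm the User/UserProfile entries that get_updates needs, so the
        requests arriving right after a restart are not all cache misses."""
        for profile in UserProfile.objects.select_related("user").all():
            email = profile.user.email.lower()
            cache.set("user_by_email:%s" % (email,), profile.user, 3600)
            cache.set("user_profile_by_email:%s" % (email,), profile, 3600)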
Our previous code could in theory end up clearing the caches it had
just filled, if Tornado's cache filling work happened to be faster
than the memcached flush.
(imported from commit 48174aadad398fb7a7c917a1df765c1261b12a55)
This is required because our migration is going to go in two phases.
When we do the database migration (on pushing to master), we update
all messages at that point. But prod doesn't know about the new
flags field, so any new messages sent on prod will not have the
read bit set.
When we push to prod, we want to re-run the bit of the migration script
that automatically sets read flags on messages older than the user's
pointer.
(imported from commit 961d33e972eac9ada80089bf1b1269c7fb42d56b)
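The re-runnable piece might look something like this sketch, assuming models named UserMessage/UserProfile, a pointer field, and an illustrative read-flag bit:

    from django.db.models import F

    from zephyr.models import UserMessage, UserProfile

    # Illustrative bit value; the real schema stores flags as a bitfield.
    READ_FLAG = 1

    def set_read_flags_before_pointer():
        """Mark messages older than each user's pointer as read.  Safe to
        re-run on the prod push, since it only sets the read bit."""
        for profile in UserProfile.objects.all():
            UserMessage.objects.filter(
                user_profile=profile,
                message__id__lte=profile.pointer,
            ).update(flags=F("flags").bitor(READ_FLAG))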