I think all one needs to do to deploy this commit is run
`generate-fixtures --force` on developer laptops.
(imported from commit 34916341435fef0875b5a2c7f53c2f5606cd16cd)
When this is deployed to staging, we need to run
./manage.py logout_all_users --realm=humbughq.com
When this is deployed to prod, we need to run
./manage.py logout_all_users
(imported from commit d6c6ea4b1c347f3d9122742db23c7b67767a7349)
This is intended to be used for logging out users during our
deployment of the UserProfile merge, but it could be useful for other
things too.
(imported from commit bfe896d854f997f7a4d06e5bc0f19ec5b1aa5e69)
Previously, we weren't clearing users out of memcached (we just
killed them in the database), so in fact users were not logged out
until an hour after we deactivated them (when the memcached caches
would expire).
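A sketch of the shape of the fix (the cache key format and function
name here are assumptions, not the exact code):

    from django.core.cache import cache

    def do_deactivate(user):
        user.is_active = False
        user.set_unusable_password()
        user.save()
        # Flush the cached profile too, so sessions die immediately
        # rather than living until the cache entry expires.
        cache.delete('user_profile_by_email:' + user.email.lower())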
(imported from commit 0f0a2f70e003c184106c73b22b876f57c1ef3371)
And keep the fields updated, by copying them on UserProfile creation
and updating the UserProfile object whenever we update the User
object, and add management commands to (1) initially ensure that the
two match and (2) check that they still match (i.e., that the
updating code is working).
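The syncing might look roughly like this (a sketch; the exact field
list is an assumption):

    def copy_user_to_userprofile(user, user_profile):
        # Keep the duplicated columns on UserProfile in sync with User.
        user_profile.email = user.email
        user_profile.password = user.password
        user_profile.date_joined = user.date_joined
        user_profile.save()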
The copy_user_to_userprofile migration needs to be run after this is
deployed to prod.
(imported from commit 0a598d2e10b1a7a2f5c67dd5140ea4bb8e1ec0b8)
This way we're not directly manipulating user.password() in random
management commands.
(imported from commit e6e32ae422015ab55184d5d8111148793a8aca36)
The previous situation was bad for two reasons:
(1) It had a lot of copies of the code, some of them missing pieces:
UserProfile.objects.get(user__email__iexact=foo)
This was in particular going to be inconvenient since we are dropping
the user__ part of that lookup.
(2) It didn't take advantage of our memcached caching.
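The replacement is one memcached-backed accessor, roughly (a sketch;
the key format is an assumption):

    from django.core.cache import cache

    def get_user_profile_by_email(email):
        key = 'user_profile_by_email:' + email.lower()
        user_profile = cache.get(key)
        if user_profile is None:
            user_profile = UserProfile.objects.get(email__iexact=email)
            cache.set(key, user_profile)
        return user_profile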
(imported from commit 2325795f288a7cf306cdae191f5d3080aac0651a)
Only a few of them took a User as an argument anyway.
This is preparatory work for merging the User and UserProfile models.
(imported from commit 65b2bd2453597531bcf135ccf24d2a4615cd0d2a)
This can result in a significant performance benefit, because we only
need to update the columns that changed.
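With Django's update_fields support, for example (a sketch):

    # Writes only the pointer column instead of the whole row.
    user_profile.pointer = new_pointer
    user_profile.save(update_fields=['pointer'])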
(imported from commit 42bef1fcc58ad79bd864f89263fe82e90743ee5b)
The policy this implements is:
* 1 week for most persistent data (Clients, etc.)
* 1 day for messages
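In cache.set() terms, that's roughly (a sketch; the key names are
illustrative):

    WEEK = 7 * 24 * 60 * 60  # timeouts are in seconds
    DAY = 24 * 60 * 60

    cache.set('get_client:' + client_name, client, WEEK)
    cache.set('message:%d' % (message.id,), message, DAY)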
(imported from commit d57bb2c6b9626ffa2155c6d0ef9b60827d1f2381)
This saves 2 database queries per user in the huddle when sending the
first message to a particular huddle.
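The idea is to cache the huddle lookup keyed on its set of user ids,
roughly (a sketch; names are assumptions):

    def get_huddle(user_ids):
        key = 'huddle:' + ','.join(str(i) for i in sorted(user_ids))
        huddle = cache.get(key)
        if huddle is None:
            huddle = get_huddle_backend(user_ids)  # the 2 saved queries
            cache.set(key, huddle)
        return huddle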
(imported from commit f71aa32df846fb4b82651a93ff9608087ffcaa5a)
Previously we had around 4 copies of the logic for deciding whether
we should publish data via a SimpleQueueClient queue, a
TornadoQueueClient queue, or by handling the operation directly,
which resulted in the copies getting out of sync and buggy (see e.g.
the previous commit).
We need to add a lock around adding things to the queue to work around
a bug with pika's BlockingConnection.
I should note that the previous logic in some places had a bunch of
tests of the form "elif settings.TEST_SUITE" for doing the work that
would have been done by the queue processor directly; these should
have just been "else" clauses -- since we generally want that code to
run in development environments whether or not the test suite is
currently running.
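The consolidated helper looks roughly like this (a sketch; the names
are assumptions):

    import threading

    from django.conf import settings

    # pika's BlockingConnection isn't safe to share without locking.
    queue_lock = threading.Lock()

    def queue_json_publish(queue_name, event, processor):
        with queue_lock:
            if settings.USING_RABBITMQ:
                get_queue_client().json_publish(queue_name, event)
            else:
                # In development, do the consumer's work inline -- an
                # "else", not an "elif settings.TEST_SUITE".
                processor(event)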
(imported from commit 16bdbed4fff04b1bda6fde3b16bee7359917720b)
Previously we had several files which initialized SimpleQueueClient()
for sending items to the UserActivity queue, even though those code
paths aren't used outside Tornado. This resulted in slower Tornado
startup times.
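E.g., create the client lazily on first use (a sketch):

    queue_client = None

    def get_queue_client():
        # Defer the rabbitmq connection until something actually
        # publishes, so merely importing this module stays cheap.
        global queue_client
        if queue_client is None:
            queue_client = SimpleQueueClient()
        return queue_client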
(imported from commit ad97021ec18d3927233744037c548c22db33c321)
This will automatically fix bugs such as one in which
internal_send_message didn't properly strip() the subject argument
before sending a message.
We change the recipient_type argument to internal_send_message to take
the recipient type name (e.g. 'stream') both to better fit the API and
also because the previous code incorrectly handled huddles.
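So a call now looks something like this (a sketch; the argument order
and the example values are assumptions):

    # Before, the call took a Recipient type constant; now it takes
    # the type name, e.g.:
    internal_send_message('tutorial-bot@example.com', 'stream',
                          'tutorial', 'welcome', 'Practice message')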
(imported from commit 78c2596d328f6bb1ce2eaa3eed9a9e48146e3b6a)
This cache should save 2 database queries whenever we send a private
message. However, previously it was per-process (which meant it was
mostly useless) and also buggy: it never stored anything in the
cache, which made it completely useless. Switching this to our
standard memcached setup will address both problems.
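The bug was of this shape (an illustrative sketch, not the exact
code):

    def get_recipient(recipient_type, type_id):
        key = 'get_recipient:%s:%s' % (recipient_type, type_id)
        recipient = cache.get(key)
        if recipient is None:
            recipient = Recipient.objects.get(type=recipient_type,
                                              type_id=type_id)
            cache.set(key, recipient)  # the old code omitted this line
        return recipient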
(imported from commit 1d807f30704bccf28de33a80523488aedc58a9be)
Prior to this change, any stream message sent by internal_send_message
could only be in the realm of the sender.
This was a problem most notably for... the tutorial bot, with the
hilarious consequence that the tutorial worked fine in humbughq.com,
but failed to start anywhere else.
(imported from commit 33a904a28e3a57e1a2cf9172c2e2a75b50967a50)
Previously we were doing around 100 queries for each new message sent
to the main Hacker School channels; these were faster than many
similar instances because they were all done within 1 transaction,
but even so, with the previous code send_message_backend would spend
up to 200ms (and 148 queries) querying the database on prod. This new
version should do a fixed number of database queries per message.
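The key change is fetching everything in bulk rather than
per-subscriber, roughly (a sketch; model names are assumptions):

    # One query for all subscribers, rather than one query per user:
    subscribers = [sub.user_profile for sub in
                   Subscription.objects.select_related().filter(
                       recipient=recipient, active=True)]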
(imported from commit 3799e63aebb6f017932ddb0fe1f6209281c0ddcf)
To work around the issue we're having with queue draining between
parallel blocking connections, use the same rabbitmq queue for both
activity and presence events, keyed on a 'type' flag in the message
itself.
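Both kinds of events then share one queue, and the consumer
dispatches on the flag, e.g. (a sketch; names are assumptions):

    # Producer side: tag each event with its kind.
    queue_json_publish('user_activity',
                       dict(type='presence', email=email, status=status),
                       process_event)

    # Consumer side: route on the 'type' flag.
    def process_event(event):
        if event['type'] == 'presence':
            handle_presence_event(event)
        else:
            handle_activity_event(event)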
(imported from commit 188e8fda1695734e52c5740db2195072cfc81479)
This cuts the MIT Zephyr home page load time and also #subscriptions
database-sourced load time for me from about a second to about 50ms on
my laptop. The home page still takes about 600ms to load due to
templating.
(imported from commit 1cfd8c750301abe6b6a881be4b2857532be947ec)
Note: when deploying this, the process-user-activity-commandline
script needs to be restarted.
(imported from commit 63ee795c9c7a7db4a40170cff5636dc1dd0b46a8)
Adds a new db table for storing presences, and an API for setting an
individual user's idleness as well as fetching the idle status of all
users in a realm.
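A sketch of the shape of the table (the field names are assumptions):

    from django.db import models

    class UserPresence(models.Model):
        user_profile = models.ForeignKey(UserProfile)
        client = models.ForeignKey(Client)
        status = models.PositiveSmallIntegerField()  # e.g. active vs. idle
        timestamp = models.DateTimeField()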
(imported from commit 5aad3510d4c90c49470c130d6dfa80f0d36b0057)
This will give us flexibility in the future to add new properties to the
list.
In order to support that, we now do a list comprehension rather than just
returning the gather_subscriptions list in get_stream_colors.
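I.e., roughly (a sketch; the dictionary keys are assumptions):

    def get_stream_colors(user_profile):
        # A comprehension makes it easy to add more per-stream
        # properties later without changing the callers.
        return [{'name': sub['name'], 'color': sub['color']}
                for sub in gather_subscriptions(user_profile)]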
(imported from commit a3c0f749a3320f647440f800105942434da08111)
Trying to add a user to an invite-only stream that already exists
will result in an error.
(imported from commit 910750580a122cee92096d7e83457cb0b8cce616)
We need this so that we can safely expunge old events without interfering with
the running server. See #414.
(imported from commit 4739e59e36ea69f877c158c13ee752bf6a2dacfe)
Before this is deployed, we need to install rabbitmq and pika on the
target server (see the puppet part of this commit for how).
When this is deployed, we need to start the new user activity bot:
./manage.py process_user_activity
in the screen session on the relevant server, or user_activity logs
won't be processed (which will eventually result in all users getting
notifications about how their mirrors are out of date).
(imported from commit 44d605aca0290bef2c94fb99267e15e26b21673b)
internal_send_message now has the ability to send personals as well as
stream messages.
This change was accidentally lost during a rebase.
(imported from commit 153a3929c5c64be82288293c1f0cc02fcc03c08d)
This allows us to handle the return_messages_immediately part of
get_updates requests without having to talk to the database.
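One way to picture it (a sketch; the real structure may differ):
Tornado keeps recent messages in memory, so the immediate case never
touches the database:

    def get_updates(user_id, last_message_id):
        # Serve the immediate case from the in-memory cache.
        cached = user_message_cache.get(user_id, [])
        new_messages = [m for m in cached if m.id > last_message_id]
        if new_messages:
            return new_messages  # no database query needed
        return wait_for_new_messages(user_id)  # longpoll path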
(imported from commit ed0b7742d359efb21a0a4960f4fc25f4337e9ad4)