Otherwise you could encounter errors if you POST to a method
with this decorator applied.
(imported from commit bcb31f336ea2a1eeee6b9e3e9dfeed1d205ae26a)
I think all one needs to do to deploy this commit is run
`generate-fixtures --force` on developer laptops.
(imported from commit 34916341435fef0875b5a2c7f53c2f5606cd16cd)
When this is deployed to staging, we need to run
./manage.py logout_all_users --realm=humbughq.com
When this is deployed to prod, we need to run
./manage.py logout_all_users
(imported from commit d6c6ea4b1c347f3d9122742db23c7b67767a7349)
The previous situation was bad for two reasons:
(1) It had a lot of copies of the code, some of them missing pieces:
UserProfile.objects.get(user__email__iexact=foo)
This was going to be particularly inconvenient since we are dropping
the __user part of that lookup.
(2) It didn't take advantage of our memcached caching.
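The consolidated helper could look roughly like this sketch (the helper
name, cache key format, timeout, and import path are illustrative
assumptions, not necessarily what this commit actually adds):

    from django.core.cache import cache

    from zephyr.models import UserProfile  # assumed import path

    def get_user_profile_by_email(email):
        # One canonical, case-insensitive lookup shared by all callers,
        # backed by memcached.
        key = 'user_profile_by_email:%s' % (email.lower(),)
        user_profile = cache.get(key)
        if user_profile is None:
            user_profile = UserProfile.objects.get(user__email__iexact=email)
            cache.set(key, user_profile, 3600 * 24 * 7)
        return user_profile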
(imported from commit 2325795f288a7cf306cdae191f5d3080aac0651a)
This is to allow flexibility in functions that we think should be callable
via either GET or POST.
As part of this, POSTRequestMock was extended to populate the REQUEST
dict.
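A rough sketch of what the extended mock might look like (attributes
other than REQUEST are assumptions about the existing class):

    class POSTRequestMock(object):
        method = "POST"

        def __init__(self, post_data, user):
            self.POST = post_data
            self.GET = {}
            # REQUEST merges GET and POST, so code that accepts either
            # verb can read its parameters from a single dict.
            self.REQUEST = dict(self.GET, **self.POST)
            self.user = user
            self.META = {'PATH_INFO': 'test'}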
(imported from commit b9d32d2b65ff8a25885452992cf7dd37b9664246)
This includes a process_patch_as_post decorator which enables this view
to be invoked as a PATCH on an object.
Hopefully this decorator can go away once POST values are correctly parsed
in Django for PATCH verb invocations.
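The decorator presumably does something like the following sketch,
re-parsing the request body as form data when the verb is PATCH (the
details are assumptions, not the exact implementation):

    from functools import wraps

    from django.http import QueryDict

    def process_patch_as_post(view_func):
        @wraps(view_func)
        def _wrapped_view_func(request, *args, **kwargs):
            if request.method == "PATCH":
                # Django doesn't populate request.POST for PATCH, so parse
                # the raw body the way it would have been for a POST.
                request.POST = QueryDict(request.body,
                                         encoding=request.encoding)
            return view_func(request, *args, **kwargs)
        return _wrapped_view_func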
(imported from commit 6cf9d69cfb9dea5354ea37408566146757b5be54)
The policy this implements is:
* 1 week for most persistent data (Clients, etc.)
* 1 day for messages
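In terms of cache timeouts, that policy boils down to constants along
these lines (names are illustrative, not the real ones):

    from django.core.cache import cache

    WEEK_SECONDS = 60 * 60 * 24 * 7   # persistent data such as Clients
    DAY_SECONDS = 60 * 60 * 24        # messages

    def cache_set_persistent(key, value):
        cache.set(key, value, WEEK_SECONDS)

    def cache_set_message(key, value):
        cache.set(key, value, DAY_SECONDS)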
(imported from commit d57bb2c6b9626ffa2155c6d0ef9b60827d1f2381)
Previously we had around 4 copies of the logic for deciding whether we
should publish data via a SimpleQueueClient queue, a TornadoQueueClient
queue, or handle the operation directly, which resulted in them getting
out of sync and buggy (see e.g. the previous commit).
We need to add a lock around adding things to the queue to work around
a bug with pika's BlockingConnection.
I should note that the previous logic in some places had a bunch of
tests of the form "elif settings.TEST_SUITE" for doing the work that
would have been done by the queue processor directly; these should
have just been "else" clauses -- since we generally want that code to
run in development environments whether or not the test suite is
currently running.
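Consolidated, the decision lives in one helper, roughly like this
sketch (the settings flag, client object, and processor-callback
fallback are assumptions about the shape of the code):

    import json
    import threading

    from django.conf import settings

    # The lock works around the pika BlockingConnection bug noted above.
    queue_lock = threading.Lock()

    def queue_json_publish(queue_client, queue_name, event, processor):
        with queue_lock:
            if getattr(settings, 'USING_RABBITMQ', False):
                queue_client.json_publish(queue_name, event)
            else:
                # No rabbitmq in this environment (development, whether or
                # not the test suite is running): do the work directly.
                processor(json.dumps(event))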
(imported from commit 16bdbed4fff04b1bda6fde3b16bee7359917720b)
Previously we only used these caches for Tornado requests, because we
were not updating memcached when e.g. the user's pointer changed, and
so functions like update_pointer would not work correctly.
Now that we are updating memcached when the User and UserProfile
objects change, we can use these for all requests.
This saves 2 database queries on every Django request to the server.
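One way to keep memcached in sync on writes is a post_save hook along
these lines (a sketch only; the commit may well update the cache at
explicit call sites instead):

    from django.core.cache import cache
    from django.db.models.signals import post_save
    from django.dispatch import receiver

    from zephyr.models import UserProfile  # assumed import path

    @receiver(post_save, sender=UserProfile)
    def update_user_profile_cache(sender, instance, **kwargs):
        # Refresh the cached copy whenever the row changes (e.g. a pointer
        # update), so any request can safely read it from memcached.
        cache.set('user_profile:%d' % (instance.user_id,), instance,
                  3600 * 24 * 7)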
(imported from commit aa5bffd885d14bde38b95e80a226bd5ab66f253d)
To work around the issue we're having with queue draining between
parallel blocking connections, use the same rabbitmq queue for both
activity and presence events, keyed on a 'type' flag in the message
itself.
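Concretely, both kinds of events go through one queue and the consumer
dispatches on that flag, roughly like this sketch (the handler
callbacks are placeholders):

    import json

    def handle_shared_queue_event(message_body, activity_handler,
                                  presence_handler):
        # One consumer drains the shared queue and dispatches on 'type'.
        event = json.loads(message_body)
        if event['type'] == 'user_activity':
            activity_handler(event)
        elif event['type'] == 'user_presence':
            presence_handler(event)
        else:
            raise ValueError("Unknown event type: %r" % (event['type'],))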
(imported from commit 188e8fda1695734e52c5740db2195072cfc81479)
This should make it much easier to debug issues where a particular
user is hosing our API, for example.
(imported from commit cbea49fd1e11805cadf564bd9160d3d6bf7e0eca)
Note: when deploying, the process-user-activity-commandline script needs to be restarted
(imported from commit 63ee795c9c7a7db4a40170cff5636dc1dd0b46a8)
Previously we only got the user ID for /json requests, not /api
requests, and also only got the user ID, not the email address.
(imported from commit c3625f9c1a48430e35183be6c90a7855f3714948)
Before this is deployed, we need to install rabbitmq and pika on the
target server (see the puppet part of this commit for how).
When this is deployed, we need to start the new user activity bot:
./manage.py process_user_activity
in the screen session on the relevant server, or user_activity logs
won't be processed (which will eventually result in all users getting
notifications about how their mirrors are out of date).
(imported from commit 44d605aca0290bef2c94fb99267e15e26b21673b)
This commit has the effect of eliminating all of the non-UserActivity
database queries from the Tornado process -- at least in the uncached
case.
This is safe to do, if a bit fragile, since our Tornado code only
accesses these objects (as opposed to their IDs) in a few places that
are all fine with old data, and I don't expect us to add any new ones
soon:
* UserActivity logging, which I plan to move out of Tornado entirely
* Checking whether we're authenticated in our decorators (which could
be simplified -- the actual security check is just whether the
Django session object has a particular field)
* Checking the user realm for whether we should sync notices to the
client about their Zephyr mirror being up to date, which is quite
static and I think we can move out of this code path.
But implementation constraints around mapping the user_ids to
user_profile_ids mean that it makes sense to get the actual objects
for now.
This code is not what I want to do long-term. I expect we'll be able
to clean up the dual User/UserProfile nonsense once we integrate the
upcoming Django 1.5 release, with its support for pluggable User
models, and after that change, I expect it'll be fairly easy to make
the Tornado code only work with the user ID, not the actual objects.
(imported from commit 82e25b62fd0e3af7c86040600c63a4deec7bec06)
This should save a database query when we later need to access fields
such as the user's realm name in format_updates_response.
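This presumably amounts to fetching the related realm row up front,
e.g. via select_related, so the later attribute access needs no extra
query (a sketch, not the exact call site):

    from zephyr.models import UserProfile  # assumed import path

    def fetch_user_profile(email):
        # Pull the realm in the same query; reading user_profile.realm.name
        # later in format_updates_response then costs no extra round trip.
        return (UserProfile.objects
                .select_related('realm')
                .get(user__email__iexact=email))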
(imported from commit ceef726db9e917cfb0b47061130d7299ee64890d)
The transaction.commit() line inside the except IntegrityError clause
doesn't work unless we've entered transaction management.
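With the transaction API of the Django version in use at the time,
that means the surrounding code needs commit_manually (or
commit_on_success) for the commit() to be legal; a sketch of the
pattern:

    from django.db import IntegrityError, transaction

    @transaction.commit_manually  # enters transaction management
    def insert_or_fetch(model, **kwargs):
        try:
            row = model.objects.create(**kwargs)
            transaction.commit()
        except IntegrityError:
            # Without the decorator above, this commit() raises
            # TransactionManagementError instead of resolving the race.
            transaction.commit()
            row = model.objects.get(**kwargs)
        return row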
(imported from commit 2ae520e05c9a19ec35af7c244631b01d4b9598d6)
Adding a positional argument caused a problem when
@authenticated_api_view started using @has_request_variables
internally. The 'handler' argument used to be passed through
positionally to the wrapped function, but when using
@has_request_variables, the wrapper inside @authenticated_api_view
had to take additional arguments. The handler argument was then
assigned to one of those parameters instead of being passed through.
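A stripped-down illustration of the bug (these decorator bodies are
stand-ins, not the real implementations):

    from functools import wraps

    def has_request_variables(view_func):
        # Stand-in: the inner wrapper grows an extra positional parameter.
        @wraps(view_func)
        def wrapper(request, client, *args, **kwargs):
            return view_func(request, *args, **kwargs)
        return wrapper

    def authenticated_api_view(view_func):
        view_func = has_request_variables(view_func)
        @wraps(view_func)
        def wrapper(request, *args, **kwargs):
            return view_func(request, *args, **kwargs)
        return wrapper

    @authenticated_api_view
    def api_view(request, handler=None):
        return handler

    # The caller passes the Tornado handler positionally; it binds to
    # 'client' inside has_request_variables' wrapper and never reaches
    # api_view's 'handler' parameter, which stays None.
    assert api_view('fake-request', 'tornado-handler') is None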
(imported from commit 66240bd465c803ddcbf4a603509051fca7381468)