I don't fully understand the need for this, but I have seen some
tracebacks on the app server that complain:
File "/home/humbug/humbug-deployments/2013-07-11-19-28-10/zephyr/lib/actions.py", line 1289, in handle_missedmessage_emails
timestamp - user_profile.last_reminder < waitperiod):
TypeError: can't subtract offset-naive and offset-aware datetimes
Since timestamp in this case comes from timestamp_to_datetime
that explicitly sets the tzinfo, we know it's tz-aware. The only
other possibility is that user_profile.last_reminder is **not**
tz-aware, though I am not sure why that would be the case.
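Roughly, the defensive fix could look like the sketch below; the
helper name is hypothetical and assumes naive values are UTC:

    import pytz

    def make_aware_utc(dt):
        # Hypothetical helper: interpret a naive datetime as UTC so
        # subtracting it from a tz-aware datetime can't raise TypeError.
        if dt is not None and dt.tzinfo is None:
            return dt.replace(tzinfo=pytz.utc)
        return dt

    # e.g., before the comparison in handle_missedmessage_emails:
    #   last_reminder = make_aware_utc(user_profile.last_reminder)
    #   if timestamp - last_reminder < waitperiod: ...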
(imported from commit 67e33f4510e91fa9de504f0c610515581312c98b)
It seems that even though we set From: to <noreply@humbughq.com>,
sending mail via Google can rewrite the From: field to
humbug@humbughq.com. Here we explicitly set Reply-To to noreply@ in
all cases in order to avoid having replies sent to our inboxes.
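For illustration, pinning Reply-To with Django's email API looks
roughly like this (the addresses match this commit; the surrounding
function and subject line are a sketch):

    from django.core.mail import EmailMessage

    def send_missedmessage_email(user_email, body):
        msg = EmailMessage(
            subject='Missed Humbug messages',
            body=body,
            from_email='Humbug <noreply@humbughq.com>',
            to=[user_email],
            # Google may rewrite From: to the authenticated account
            # (humbug@humbughq.com), so pin Reply-To explicitly.
            headers={'Reply-To': 'noreply@humbughq.com'},
        )
        msg.send()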
(imported from commit 5fa643be2b78fd632e310836bf1be862d6f1d333)
This would have made reactivations hard, and didn't really buy us
much additional security.
During deactivation, all a user's current sessions are deactivated and
they are marked as not active. This prevents them from logging in via
the web UI, and makes their API key unusable.
Randomizing their password is probably gratuitous, especially as we
start to allow authorized end-users to deactivate others.
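A minimal sketch of deactivation under this scheme (function and
field names are illustrative, not necessarily the real ones):

    from django.contrib.sessions.models import Session

    def delete_user_sessions(user_profile):
        # Sketch: Django keeps the user id inside the session payload,
        # so scan the sessions table and drop this user's sessions.
        for session in Session.objects.all():
            uid = session.get_decoded().get('_auth_user_id')
            if str(uid) == str(user_profile.id):
                session.delete()

    def do_deactivate(user_profile):
        # Marking the account inactive blocks web login and makes the
        # API key unusable; no password randomization needed.
        user_profile.is_active = False
        user_profile.save()
        delete_user_sessions(user_profile)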
(imported from commit c63d23816da0452a1df821f2fa6c1db2761733da)
Prior to this commit, populate_db would crash if your development
instance's message log included a user you had ever deactivated.
(imported from commit 227b2c0226a46ef5680443d3dbf62a13ce961e64)
* This makes bugdown.convert take a `message` parameter. Properties
for parsed mentions are added to the message object by the `Pattern`
for use in do_send_messages (see the sketch after this list).
* Refactor repeated markdown rendering code into `Message` model methods.
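A rough sketch of the idea using the python-markdown 2.x Pattern API;
MENTION_RE and message.mentioned_names are illustrative names, not
the real ones:

    import markdown
    from markdown.inlinepatterns import Pattern

    MENTION_RE = r'@\*\*(?P<name>[^*]+)\*\*'

    class MentionPattern(Pattern):
        def __init__(self, pattern, md, message):
            Pattern.__init__(self, pattern, md)
            self.message = message  # stash so handleMatch can annotate it

        def handleMatch(self, m):
            name = m.group('name')
            # Property consumed later by do_send_messages (illustrative).
            self.message.mentioned_names.add(name)
            el = markdown.util.etree.Element('span')
            el.set('class', 'user-mention')
            el.text = '@' + name
            return el

    def build_engine(message):
        message.mentioned_names = set()  # illustrative property
        md = markdown.Markdown()
        md.inlinePatterns.add('mention',
                              MentionPattern(MENTION_RE, md, message),
                              '_begin')
        return md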
(imported from commit 4f0ed5570104c0210f984b6de21e9048e2b53fa0)
After fixing the high numbers of database queries earlier in this
branch, I found that sending 500 RabbitMQ messages for a bulk change
in subscriptions was consuming more than half the total time (and
we'd also end up with 500 events in a queue). To handle this, we
create a single "user X subscribed to these N streams" event, rather
than sending one event for each individual subscription.
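In sketch form, with the exact event shape treated as an assumption
and queue_json_publish standing in for our queue helper (treat its
signature as an assumption too):

    def notify_subscriptions_added(user_profile, streams):
        # One combined event instead of N per-subscription events.
        event = dict(type='subscriptions_added',
                     user_id=user_profile.id,
                     stream_ids=[stream.id for stream in streams])
        queue_json_publish('notify_tornado', event, lambda e: None)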
(imported from commit 44a34a9fab9b67e9f0da6fee53335d8c5030392b)
This improves the performance of unsubscribing from N streams by
more than a factor of 10 for large N.
(imported from commit a529e6d3ac4452f49c2294908d275280019bbd05)
Previously we used bulk queries only when adding many users to a
single stream, resulting in very slow performance when subscribing
users to large numbers of streams (as happens when setting up a new
MIT realm user).
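A sketch of the bulk direction; model and helper names follow the
codebase loosely and should be treated as assumptions:

    def bulk_add_subscriptions(user_profile, streams):
        # (The get_recipient lookups can themselves be batched;
        # elided here for brevity.)
        recipients = [get_recipient(Recipient.STREAM, stream.id)
                      for stream in streams]
        # One INSERT for all the new rows, instead of one per stream.
        Subscription.objects.bulk_create([
            Subscription(user_profile=user_profile,
                         recipient=recipient,
                         active=True)
            for recipient in recipients
        ])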
(imported from commit 849fa7b2a1a146c0a9adc1c727c20c9fbfb7b425)
This comment was only ever accurate for prototype versions of
bulk_add_subscriptions prior to it being committed to master.
(imported from commit 89b9dc49423c45553cb6c810d97eea4583ff0f69)
This change removes an "if True:" that was
introduced to make the prior commit a bit more readable.
It also combines two loops, since the second loop is no
longer conditional.
(imported from commit df58f1e5de72d5669f6468fbff54fb62cd22cedb)
The tests in GetUpdatesTest had some callback logic that has
been dead code for at least three months. We now fully exercise
the callback codepath and make sure that the callbacks do happen.
(imported from commit f5d8fbab28ecc34dc81d3d0c29058b66c10f378f)
Compare two user objects by id to prevent false negatives
when the objects are fetched through different code paths.
(imported from commit a41f30d27e2b8021600d89f32d6526f48677fd95)
Trying to check whether a Django model object is inside a set of other
Django models is not correct in general, e.g.:
UserProfile.objects.only("id").get(id=17) in set([UserProfile.objects.get(id=17)])
returns False.
This bug appears twice in the function, once when computing which
users were mentioned and again when pushing the flags through to
Tornado.
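The False result above is likely because .only() hands back an
instance of a dynamically generated deferred subclass, which trips
the exact-class check in Django's Model.__eq__. Comparing by id
sidesteps this; a sketch (the 'mentioned' flag name is illustrative):

    # Instead of checking model objects against a set of model objects:
    mentioned_ids = set(user.id for user in mentioned_users)
    if user_profile.id in mentioned_ids:
        flags.append('mentioned')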
(imported from commit b09ed550258f9df2611e1b0a60f87c48a51830f8)
Previously we had an issue that every other update_active_status
request for a particular realm would result in doing the expensive
query to compute the list of active users in that realm. It turned
out this was because on every update_active_status request, we'd queue
an event that would have the effect of clearing the cache, even if
nobody's status changed. This fixes that issue by only clearing the
cache for a realm if someone's status actually changed (or the 60s
timeout expires).
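Roughly, with all names illustrative:

    def update_active_status(presence, new_status, clear_realm_user_cache):
        # Only a real transition makes the cached per-realm list of
        # active users stale; skip the cache-clearing event otherwise.
        if presence.status == new_status:
            return
        presence.status = new_status
        presence.save()
        clear_realm_user_cache(presence.realm_id)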
(imported from commit d5b829fe255a31c8cecb58458738f1e72a2cf6de)
The Python memcached bindings pickle objects sent to them, which is
very slow. We work around this by sending memcached strings (i.e.,
JSON dumps); pickling doesn't slow things down much when all it is
handling is a string.
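The pattern looks roughly like this; the helper names are mine, and
it uses ujson, which the next commit introduces:

    import ujson
    from django.core.cache import cache

    def cache_set_json(key, obj, timeout=3600):
        # Hand memcached a string: pickling a str is nearly free,
        # while pickling a rich object graph is what was slow.
        cache.set(key, ujson.dumps(obj), timeout)

    def cache_get_json(key):
        data = cache.get(key)
        return ujson.loads(data) if data is not None else None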
(imported from commit 0f0e534182eccb76c5731198e05a9324a1cef316)
This saves something like 15ms on our 1000-message get_old_messages
queries, and will save even more when we start sending JSON dumps into
our memcached system.
We need to install python-ujson on servers and dev instances before
pushing this to prod.
(imported from commit 373690b7c056d00d2299a7588a33f025104bfbca)
We had a few bugs where we were using a raw Django database query to
get a UserProfile object. This might seem OK, but going through
memcached is more efficient, and also guarantees that we get back the
.select_related() version of the object, so that if we later access
related fields like user_profile.realm.domain, we don't end up doing a
second database query as well.
Fixing these should in practice save a substantial number of database
queries on handling update_status_list requests, which happen very
often and access user_profile.realm.domain.
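The pattern we want everywhere is roughly the following; the module
path, cache key format, and timeout are assumptions, not the real
helper's details:

    from django.core.cache import cache
    from zephyr.models import UserProfile

    def get_user_profile_by_id(uid):
        key = 'user_profile_by_id:%s' % (uid,)
        user_profile = cache.get(key)
        if user_profile is None:
            # select_related() so that later accesses like
            # user_profile.realm.domain don't cost a second query.
            user_profile = UserProfile.objects.select_related().get(id=uid)
            cache.set(key, user_profile, 3600 * 24 * 7)
        return user_profile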
(imported from commit 0a2027da1b5bbc7a4f6c6927aca498530d7a4977)
I've tried to do this in a way that's scalable and easily configured,
so that we can add new such filters for customers on-demand without
needing to add anything other than a bit of configuration.
Once we're confident in the arguments to this system, I think we'll
want to move the regular expression lists into the database so that we
don't need to do a prod push to modify the regular expression lists.
The initial set of regular expressions are:
(1) Linkifying e.g. "trac #224" in the Humbug realm, so we're exercising this code.
(2) The various ticket number things CUSTOMER7 uses for the CUSTOMER7 realm.
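Conceptually each filter is just a (realm, regex, URL format) triple;
a sketch of applying one, with the trac pattern and URL illustrative
rather than the shipped configuration:

    import re

    # (realm domain, regex, URL format) triples; the plan is to move
    # these into the database eventually.
    realm_filters = [
        ('humbughq.com', r'trac #(?P<id>\d+)',
         'https://trac.humbughq.com/ticket/%(id)s'),
    ]

    def apply_realm_filters(realm_domain, text):
        # Linkify matches for this realm's configured patterns.
        for domain, pattern, url_format in realm_filters:
            if domain != realm_domain:
                continue
            text = re.sub(
                pattern,
                lambda m: '<a href="%s">%s</a>'
                          % (url_format % m.groupdict(), m.group(0)),
                text)
        return text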
(imported from commit 992b0937b9012c15a7c2f585eb0aacb221c52e01)
I didn't use red and green for fear that they would not be
distinguishable to color-blind users. We may need to tweak the colors.
(imported from commit 59c4f1dac549a248783e4c3b3ec472d8cb690df5)
Each message sent out when someone subscribes many people to a new
stream causes individual database queries (and their associated
transactions). With the patched bulk_create (which sets the .id on
created objects), we can reduce this to a constant number of queries
on the Message and UserMessage tables.
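With ids available on the created rows, the fan-out collapses to
roughly the following; field names and the recipients_by_message
mapping are assumptions for the sketch:

    def bulk_send_messages(messages, recipients_by_message):
        # One INSERT for all the messages; the patched bulk_create
        # fills in .id on each created object, so the fan-out rows
        # can reference them immediately.
        Message.objects.bulk_create(messages)
        # One INSERT for all the UserMessage rows.
        UserMessage.objects.bulk_create([
            UserMessage(user_profile=profile, message=message)
            for message in messages
            for profile in recipients_by_message[message.id]
        ])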
Note for deployment (local dev, staging, and prod): you must be
running a patched Django, found here:
https://github.com/acrefoot/django/branches
Use branch: acrefoot-bulk_create_with_id-1.5.1
Relevant sha1: ac6d885b811f7e2e34f0db0da217983f7dfd357f
(imported from commit b0dab9dac784d3ff47751e65bf22c2dddc22edf5)