I didn't use red and green, for fear that they would not be visible to
color-blind users. We may need to tweak the colors.
(imported from commit 59c4f1dac549a248783e4c3b3ec472d8cb690df5)
I would really like to parse the HTML this library produces, to ensure
that we don't generate malformed HTML. This is unfortunately hard,
because we want both pretty strict parsing and the ability to parse
HTML5 fragments. For now, we just do a basic sanity check.
We may also want to switch to Google Diff-Match-Patch, as that can
clean up the resulting diffs.
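A minimal sketch of the kind of basic sanity check meant here, using
Python 3's built-in html.parser for illustration (the names and the
exact check are assumptions, not the actual implementation):

    from html.parser import HTMLParser

    VOID_TAGS = {"br", "img", "hr", "input", "meta", "link"}

    class TagBalanceChecker(HTMLParser):
        # Tracks open tags and flags mismatched or stray close tags.
        def __init__(self):
            super().__init__()
            self.stack = []
            self.balanced = True

        def handle_starttag(self, tag, attrs):
            if tag not in VOID_TAGS:
                self.stack.append(tag)

        def handle_endtag(self, tag):
            if not self.stack or self.stack.pop() != tag:
                self.balanced = False

    def looks_well_formed(fragment):
        checker = TagBalanceChecker()
        checker.feed(fragment)
        checker.close()
        return checker.balanced and not checker.stack

For example, looks_well_formed("<p><em>hi</em></p>") is True, while a
stray "</p>" or an unclosed "<em>" makes it False.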
(imported from commit 3772f92135cfd7423c335335f861f2c11462a8db)
Previously we were generating API keys deterministically using a hash
of the user's email address; this is clearly not a good long-term
approach.
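For contrast, a minimal sketch of a randomly generated key (the
function name is hypothetical; secrets.token_hex is just one way to do
this):

    import secrets

    def generate_api_key():
        # 32 hex characters of cryptographically secure randomness,
        # with no derivable relationship to the user's email address.
        return secrets.token_hex(16)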
(imported from commit 14d0c7c9edbc45b3ae1d17a43765ad9726338d4d)
We get too many error reports from it, which makes it harder to
actually fix the other errors that we do have.
(imported from commit 8442fe4251adb15a01b4e61ebcd07bc270b08631)
The messages that get sent out when someone subscribes many people to a
new stream each caused individual database queries (and their
associated transactions). With the patched bulk_create (which sets .id
on the created objects), we can reduce this to a constant number of
queries on the Message and UserMessage tables.
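Schematically, the pattern this enables looks like the following (model
fields and variable names are illustrative, not the exact code):

    # One INSERT for all the messages; the patched bulk_create fills
    # in .id on each object, so they can be used as foreign keys below.
    messages = [Message(sender=sender, content=content)
                for content in contents]
    Message.objects.bulk_create(messages)

    # One more INSERT for all the UserMessage rows, instead of a
    # separate query (and transaction) per recipient per message.
    user_messages = [UserMessage(user_profile=recipient, message=message)
                     for message in messages
                     for recipient in recipients]
    UserMessage.objects.bulk_create(user_messages)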
Note for deployment (local dev, staging, and prod): you must be running
a patched Django, found at https://github.com/acrefoot/django/branches
on the branch acrefoot-bulk_create_with_id-1.5.1
(relevant sha1: ac6d885b811f7e2e34f0db0da217983f7dfd357f)
(imported from commit b0dab9dac784d3ff47751e65bf22c2dddc22edf5)
We also record the historical edits to the message in this JSON format:
[{"prev_content": "new test message 14", "timestamp": 1369157249},
{"prev_content": "new test message 13", "timestamp": 1369157118}]
but we don't actually do anything with this information yet.
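For illustration, recording an edit in that format might look like this
(the edit_history field name and the helper are assumptions, not the
actual code); note the newest edit goes first, as in the example above:

    import json
    import time

    def record_edit(message, new_content):
        # Prepend the outgoing content, so the history stays
        # newest-first as in the example above.
        history = json.loads(message.edit_history or "[]")
        history.insert(0, {"prev_content": message.content,
                           "timestamp": int(time.time())})
        message.edit_history = json.dumps(history)
        message.content = new_content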
(imported from commit 2d5ca449b87b33ad035ab0e076a22e150c8e7267)
There was no benefit to our various link processors all doing
independent scans through the list of messages. This makes it much
easier to understand the logic of how each link will be handled, and
also makes policies like "don't process links if there are more than 5
of them" easier to implement coherently.
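As a sketch of why the single scan helps (names hypothetical): once one
pass has collected every link, such a cap becomes a single check rather
than per-processor logic.

    def handle_links(links, process_link, max_links=5):
        # One decision point for all link-handling policy.
        if len(links) > max_links:
            return  # too many links; skip processing entirely
        for link in links:
            process_link(link)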
(imported from commit 4affdeab889ba89b99eec905fdf871e78bbc3dd4)
Currently the interface for editing messages is limited to a
command-line API tool; it's great for testing, e.g.:
./api/examples/edit-message --message=348135 --content="test $(date +%s)" --site=http://localhost:9991 --subject="test"
The next commit will add a user interface for actually doing the editing.
(imported from commit bdd408cec2946f31c2292e44f724f96ed5938791)
This, combined with acrefoot's work on sending the notifications in
bulk, resolves trac #1142 -- we do only 10 database queries and the
whole operation completes in about 300ms on my laptop.
(imported from commit 36b5bb836bc6c713903d1ca72e39af87775dc469)
Since we log our cache lookup timings to statsd by cache key, using a
unique tweet id for each lookup was just filling up statsd with
single-use stats without being useful.
Also, log database cache lookups under a further namespace, to
distinguish them from memcached cache lookups.
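Illustratively (stat names hypothetical, using the Python statsd
client's StatsClient.timing):

    from statsd import StatsClient

    statsd_client = StatsClient()  # defaults to localhost:8125

    def log_cache_timing(backend, key_class, elapsed_ms):
        # One shared stat per class of key (e.g. "tweet_data"), not
        # one per tweet id, and a per-backend namespace so database
        # cache lookups are distinguishable from memcached ones.
        statsd_client.timing("cache.%s.get.%s" % (backend, key_class),
                             elapsed_ms)

    log_cache_timing("memcached", "tweet_data", 6)
    log_cache_timing("database", "message_dicts", 4)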
(imported from commit a2a16b777fb7ab8cd066feee7344f9c8a3c107f5)
Users can send to any stream except invite-only streams that they
aren't subscribed to. Bots can send to any stream except invite-only
streams that neither they nor their owner is subscribed to.
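As a sketch of the policy (the function and the is_subscribed helper
are hypothetical names, not the actual code):

    def can_send_to_stream(sender, stream):
        if not stream.invite_only:
            return True  # anyone can send to a public stream
        if is_subscribed(sender, stream):
            return True
        # Bots may also rely on their owner's subscription.
        return (sender.is_bot and sender.bot_owner is not None
                and is_subscribed(sender.bot_owner, stream))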
(imported from commit 623d34d249d923611ca7ca781b5b55205cd3e548)
After this change, the memcached time consumed by doing
get_old_messages for 200 and 1000 messages respectively looks like
this:
200 63ms (mem: 6ms/3) (db: 4ms/2q) /json/get_old_messages
200 178ms (mem: 67ms/2) (db: 6ms/1q) /json/get_old_messages
which might help explain where the time is going on prod for some of
our slower queries.
(imported from commit b8fe83b175914b6796922a65a1c5537f4e7a9429)