Previously, we were fetching Message.objects.select_related() from the
database even when we actually ended up fetching the message dicts from
memcached and thus never used them. Especially in the cached case, this
resulted in a lot of overhead from the Django ORM assembling Message
objects full of data that was never used. This commit changes the model
code to fetch full Message objects from the database only for those
messages that are not found in the memcached caches.
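Roughly, the new pattern looks like this (a sketch only; the cache key
format, the to_dict() serializer, and the import path are assumptions
rather than the actual code):
    from django.core.cache import cache
    from zephyr.models import Message  # import path is an assumption
    def message_dicts(message_ids):
        # Try memcached first; only the misses pay for full ORM objects.
        found = {}
        missed = []
        for message_id in message_ids:
            message_dict = cache.get("message_dict:%d" % (message_id,))
            if message_dict is not None:
                found[message_id] = message_dict
            else:
                missed.append(message_id)
        if missed:
            for message in Message.objects.select_related().filter(id__in=missed):
                message_dict = message.to_dict()  # hypothetical serializer
                found[message.id] = message_dict
                cache.set("message_dict:%d" % (message.id,), message_dict)
        return [found[message_id] for message_id in message_ids]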
Here are the timings for get_old_messages before this patch was applied:
(cached)
127ms (db: 42ms/2q) /json/get_old_messages (starnine@mit.edu via website)
385ms (db: 105ms/1q) /json/get_old_messages (starnine@mit.edu via website)
(uncached)
315ms (mem: 6ms/41) (db: 90ms/22q) /json/get_old_messages (starnine@mit.edu via website)
507ms (db: 94ms/14q) /json/get_old_messages (starnine@mit.edu via website)
Here are the timings for get_old_messages after this patch was applied:
(cached)
80ms (db: 9ms/2q) /json/get_old_messages (starnine@mit.edu via website)
133ms (db: 4ms/1q) /json/get_old_messages (starnine@mit.edu via website)
(uncached)
230ms (mem: 9ms/41) (db: 48ms/23q) /json/get_old_messages (starnine@mit.edu via website)
385ms (db: 55ms/15q) /json/get_old_messages (starnine@mit.edu via website)
(imported from commit c4748513392a906393314aa7cd41d98a69865411)
The .data() method tries to coerce the value of the attribute into a
JavaScript type, which is not what we want when the stream name looks
like a number or some other JavaScript value.
(imported from commit a5f639d2ef98435cec6beacf3837fc185474a955)
On page load, the scroll_finished function was being called while
scroll_start_message was -1. This caused us to mark as read all the
messages we had loaded, up through the messages initially visible. This
was particularly problematic because message_range iterates over all
message ids between its two arguments.
(imported from commit d93209d466797939cc9dbdbe76d25a5b20195bd2)
Previously we were doing quadratic work in the number of streams
because we had to iterate over all <li> elements every time we added
a new one.
(imported from commit 60cb97f77d161e9d8c3072157fa9c57c58f7af52)
Since we pick a new color every time we add a new subscription and
recomputing the available colors was linear in the number of
subscriptions, we were doing quadratic work on page load.
(imported from commit 647ff3cb82f405755711da47701f005e7bc0023e)
We were previously doing this on every message. Because
update_recent_subjects is linear in the number of streams in the
sidebar, this became very slow when we enabled the streams sidebar
for the MIT realm.
(imported from commit 95cd71d83bbcc08cc6c5c79ca567b5d6b9b17173)
We were previously calling sort_narrow_list after each stream was
added. Because it is linear in the current length of the sidebar
list, we were doing quadratic work on page load. When we enabled the
streams sidebar on the MIT realm, this became problematic because of
the number of subscriptions Zephyr users have.
(imported from commit d60ddc638f0a81fbce08eecd6671e9ea6ca38515)
Messages are now explicitly condensed by our JS, which means that if
we run into some bug where our JS doesn't run, you still see the whole
message (rather than getting a clipped message).
(As of this commit, this can happen when you are, e.g., on the
Settings page and someone sends you a message.)
(imported from commit f3bec97800ea1852c80203e73552ee545fcc7e8a)
This fixes a bug where if you were narrowed to a search and received
a new message that belonged in that search, the message would appear
to have an empty subject and content.
(imported from commit fe1dbf584d3659d57c5b70c7eb45cb22bbc9732f)
Previously, we were having this problem where:
* You narrow to something
* That causes message_list.js:process_collapsing to run on all of the
elements in the view, which changes some of their sizes
* That causes the pane to scroll and either push the content up or
down, depending (since stuff on top of where you were is now a
different size)
* That triggers keep_pointer_in_view, which moves your pointer
Moving process_collapsing into narrow.activate doesn't obviously
fix any of this, but it does seem to mitigate the issue a bit.
In particular, we (a) process it less frequently, and (b) process it
immediately after we show the narrowed view table, which seems to
reduce the raciness of the overall experience.
This does, however, introduce a regression:
* If you receive a long message when you're on
#settings, e.g., and then go back to Home,
the message does not properly get a [More] appended
to it.
(imported from commit b1440d656cc7b71eca8af736f2f7b3aa7e0cca14)
This can be useful for debugging what sort of narrow is happening in
addition to the URI decoding bug we're currently experiencing.
(imported from commit 0cb55fec4ac1afa986c747eb79236b4300c9e636)
This shouldn't have any effect in normal realms, but for realms like
mit.edu that have large numbers of inactive streams, it will sort all
the streams that have had a recent message to the top (aka those that
aren't effectively inactive).
(imported from commit 027ce258d04b6fd58705e49f769dec7e0639bb38)
We HTML-escape the subject in Postgres to avoid a server round-trip.
Unlike the rendered_content, which is already escaped and cached on
zephyr_message, we normally escape subjects client-side. Escaping in
Django would require fetching the messages that match the query,
escaping the subjects, and then making a second query to Postgres to
insert the markup. We could instead fetch the messages with subjects
marked up using non-HTML (some unique string) that is later converted
into the correct markup either in Django or client-side, but then the
escaping problem would just be with some random string instead of
HTML. Since the function is pretty simple, doing the escaping in
Postgres itself is the least painful option.
(imported from commit 004931d8e496697c18650aee97b1a74c55a04cb2)
In addition to switching the trigger that updates
zephyr_message.search_tsvector over to our new text search
configuration, the trigger now builds the tsvector on rendered_content
instead of content and fires only on updates to the subject or
rendered_content columns.
This migration is expected to take a long time. The
checkpoint_segments parameter in postgresql.conf should be
temporarily raised (probably to 32) while it is running.
(imported from commit 4535438bb33ce1db2a74ecbe91efc52afdb568f1)
Text search was not that great, partially because Postgres wasn't
using an ispell dictionary (Postgres term) before. We now pull in
Hunspell and use its dictionary and affix rules.
It is OK to run with this new configuration before updating our full
text column and index, which will be coming in the next few commits.
Manual steps for deploy:
1) On both postgres0 and postgres1 (both before moving on to step 2),
install the hunspell-en-us package
2) On staging, run migration 0022
3) On both postgres0 and postgres1, copy the appropriate postgresql.conf
file over
4) On both postgres0 and postgres1, run `pg_ctlcluster 9.1 main reload`
(imported from commit 706bf0f6ecc46c712cea10b73c34fd9d1dfd4767)
There's still a lot to do here. For example, the external code
should probably go through the new Filter object directly instead of
indirectly through the narrow module.
(imported from commit 22dcd31cdebd51453f1658af52a4432b2fe7a4cb)
In the case where we're getting old messages for a narrowed view, the
anchor message id might not actually be in the result set so there's
no reason to fetch an extra message.
(imported from commit e610d1f2cb95be3ff9fce6dc95e40c560bc5bf84)
In particular, I added absolute positioning and hidden overflow,
which ensures that if an element has a persistent min-width
(like a file input field apparently does), it doesn't affect its
parent.
(imported from commit 72e7a5bee2775fb6f229899ba849292eee76aa4a)
In repeated trials, the initial data fetch used to take about 1100ms.
In practice, it was often taking >2000ms, probably due to caching
effects. This commit cuts the time down to about 300ms in repeated
trials.
Note that the semantics are changed slightly in that we may no longer
get exactly 25000 messages. However, holes in the message_id
sequence are currently very rare or non-existent so this shouldn't be
a problem and we don't care about the exact number of messages
anyway.
I believe the problem was that the query planner was unable to
effectively use the LIMIT clause to figure out that only a small
subset of zephyr_message was going to be needed. Thus, it planned
for operating on the entire table and decided it could not use a more
efficient plan because work_mem, although large, would not be large
enough to execute the query over all of zephyr_message.
The original query was:
SELECT "zephyr_message"."id", "zephyr_message"."sender_id", "zephyr_message"."recipient_id", "zephyr_message"."subject", "zephyr_message"."content", "zephyr_message"."rendered_content", "zephyr_message"."rendered_content_version", "zephyr_message"."pub_date", "zephyr_message"."sending_client_id", "zephyr_userprofile"."id", "zephyr_userprofile"."password", "zephyr_userprofile"."last_login", "zephyr_userprofile"."email", "zephyr_userprofile"."is_staff", "zephyr_userprofile"."is_active", "zephyr_userprofile"."date_joined", "zephyr_userprofile"."full_name", "zephyr_userprofile"."short_name", "zephyr_userprofile"."pointer", "zephyr_userprofile"."last_pointer_updater", "zephyr_userprofile"."realm_id", "zephyr_userprofile"."api_key", "zephyr_userprofile"."enable_desktop_notifications", "zephyr_userprofile"."enter_sends", "zephyr_userprofile"."tutorial_status", "zephyr_realm"."id", "zephyr_realm"."domain", "zephyr_realm"."restricted_to_domain", "zephyr_recipient"."id", "zephyr_recipient"."type_id", "zephyr_recipient"."type", "zephyr_client"."id", "zephyr_client"."name" FROM "zephyr_message" INNER JOIN "zephyr_userprofile" ON ( "zephyr_message"."sender_id" = "zephyr_userprofile"."id" ) INNER JOIN "zephyr_realm" ON ( "zephyr_userprofile"."realm_id" = "zephyr_realm"."id" ) INNER JOIN "zephyr_recipient" ON ( "zephyr_message"."recipient_id" = "zephyr_recipient"."id" ) INNER JOIN "zephyr_client" ON ( "zephyr_message"."sending_client_id" = "zephyr_client"."id" ) ORDER BY "zephyr_message"."id" DESC LIMIT 25000;
with query plan:
Limit (cost=0.00..27120.95 rows=25000 width=362) (actual time=0.051..1121.282 rows=25000 loops=1)
-> Nested Loop (cost=0.00..5330872.99 rows=4913981 width=362) (actual time=0.048..1081.014 rows=25000 loops=1)
-> Nested Loop (cost=0.00..3932643.31 rows=4913981 width=344) (actual time=0.042..926.398 rows=25000 loops=1)
-> Nested Loop (cost=0.00..2550275.29 rows=4913981 width=334) (actual time=0.035..752.524 rows=25000 loops=1)
Join Filter: (zephyr_message.sending_client_id = zephyr_client.id)
-> Nested Loop (cost=0.00..1739467.29 rows=4913981 width=320) (actual time=0.024..217.348 rows=25000 loops=1)
-> Index Scan Backward using zephyr_message_pkey on zephyr_message (cost=0.00..362510.09 rows=4913981 width=156) (actual time=0.014..42.097 rows=25000 loops=1)
-> Index Scan using zephyr_userprofile_pkey on zephyr_userprofile (cost=0.00..0.27 rows=1 width=164) (actual time=0.003..0.004 rows=1 loops=25000)
Index Cond: (id = zephyr_message.sender_id)
-> Materialize (cost=0.00..1.17 rows=11 width=14) (actual time=0.001..0.010 rows=11 loops=25000)
-> Seq Scan on zephyr_client (cost=0.00..1.11 rows=11 width=14) (actual time=0.002..0.010 rows=11 loops=1)
-> Index Scan using zephyr_recipient_pkey on zephyr_recipient (cost=0.00..0.27 rows=1 width=10) (actual time=0.002..0.003 rows=1 loops=25000)
Index Cond: (id = zephyr_message.recipient_id)
-> Index Scan using zephyr_realm_pkey on zephyr_realm (cost=0.00..0.27 rows=1 width=18) (actual time=0.002..0.003 rows=1 loops=25000)
Index Cond: (id = zephyr_userprofile.realm_id)
Total runtime: 1141.408 ms
In the new code, we do two queries:
SELECT "zephyr_message"."id" FROM "zephyr_message" ORDER BY "zephyr_message"."id" DESC LIMIT 1
followed by:
SELECT "zephyr_message"."id", "zephyr_message"."sender_id", "zephyr_message"."recipient_id", "zephyr_message"."subject", "zephyr_message"."content", "zephyr_message"."rendered_content", "zephyr_message"."rendered_content_version", "zephyr_message"."pub_date", "zephyr_message"."sending_client_id", "zephyr_userprofile"."id", "zephyr_userprofile"."password", "zephyr_userprofile"."last_login", "zephyr_userprofile"."email", "zephyr_userprofile"."is_staff", "zephyr_userprofile"."is_active", "zephyr_userprofile"."date_joined", "zephyr_userprofile"."full_name", "zephyr_userprofile"."short_name", "zephyr_userprofile"."pointer", "zephyr_userprofile"."last_pointer_updater", "zephyr_userprofile"."realm_id", "zephyr_userprofile"."api_key", "zephyr_userprofile"."enable_desktop_notifications", "zephyr_userprofile"."enter_sends", "zephyr_userprofile"."tutorial_status", "zephyr_realm"."id", "zephyr_realm"."domain", "zephyr_realm"."restricted_to_domain", "zephyr_recipient"."id", "zephyr_recipient"."type_id", "zephyr_recipient"."type", "zephyr_client"."id", "zephyr_client"."name" FROM "zephyr_message" INNER JOIN "zephyr_userprofile" ON ( "zephyr_message"."sender_id" = "zephyr_userprofile"."id" ) INNER JOIN "zephyr_realm" ON ( "zephyr_userprofile"."realm_id" = "zephyr_realm"."id" ) INNER JOIN "zephyr_recipient" ON ( "zephyr_message"."recipient_id" = "zephyr_recipient"."id" ) INNER JOIN "zephyr_client" ON ( "zephyr_message"."sending_client_id" = "zephyr_client"."id" ) WHERE "zephyr_message"."id" > 4941883
with the message id filled in as the result of the first query. The
new query differs from the original only in that its ORDER BY and
LIMIT clauses are replaced by a WHERE clause. The second query has
query plan:
Hash Join (cost=709.30..28048.18 rows=20544 width=365) (actual time=41.678..279.261 rows=25041 loops=1)
Hash Cond: (zephyr_message.recipient_id = zephyr_recipient.id)
-> Hash Join (cost=102.98..27056.66 rows=20544 width=355) (actual time=3.686..190.730 rows=25041 loops=1)
Hash Cond: (zephyr_message.sending_client_id = zephyr_client.id)
-> Hash Join (cost=101.73..26772.94 rows=20544 width=341) (actual time=3.649..143.695 rows=25041 loops=1)
Hash Cond: (zephyr_userprofile.realm_id = zephyr_realm.id)
-> Hash Join (cost=99.99..26488.71 rows=20544 width=323) (actual time=3.578..96.746 rows=25041 loops=1)
Hash Cond: (zephyr_message.sender_id = zephyr_userprofile.id)
-> Index Scan using zephyr_message_pkey on zephyr_message (cost=0.00..26106.24 rows=20544 width=159) (actual time=0.017..41.980 rows=25041 loops=1)
Index Cond: (id > 4941883)
-> Hash (cost=83.33..83.33 rows=1333 width=164) (actual time=3.548..3.548 rows=1333 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 275kB
-> Seq Scan on zephyr_userprofile (cost=0.00..83.33 rows=1333 width=164) (actual time=0.006..1.646 rows=1333 loops=1)
-> Hash (cost=1.33..1.33 rows=33 width=18) (actual time=0.064..0.064 rows=33 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 2kB
-> Seq Scan on zephyr_realm (cost=0.00..1.33 rows=33 width=18) (actual time=0.003..0.033 rows=33 loops=1)
-> Hash (cost=1.11..1.11 rows=11 width=14) (actual time=0.027..0.027 rows=11 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 1kB
-> Seq Scan on zephyr_client (cost=0.00..1.11 rows=11 width=14) (actual time=0.003..0.013 rows=11 loops=1)
-> Hash (cost=335.03..335.03 rows=21703 width=10) (actual time=37.974..37.974 rows=21761 loops=1)
Buckets: 4096 Batches: 1 Memory Usage: 893kB
-> Seq Scan on zephyr_recipient (cost=0.00..335.03 rows=21703 width=10) (actual time=0.004..18.443 rows=21761 loops=1)
Total runtime: 299.300 ms
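Expressed in the Django ORM, the new approach is roughly the following
(a sketch; the import path and the way the cutoff is derived from the
first query's result are assumptions inferred from the queries above):
    from zephyr.models import Message  # import path is an assumption
    # First query: a cheap lookup of the newest message id.
    max_id = Message.objects.values_list("id", flat=True).order_by("-id")[0]
    # Second query: fetch everything above a cutoff instead of ORDER BY/LIMIT.
    cutoff = max_id - 25000  # assumes the cutoff is derived this way
    messages = Message.objects.select_related().filter(id__gt=cutoff)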
(imported from commit b2a70cccc47be7970df407c6be00eccd2e8be82a)
When you create a stream that you'd previously created (then unsubscribed from),
it was possible to end up in the subscribers list twice. Once came from loading
the subscribers list from the backend, and once came from a bit of mark_subscribed
logic that only gets called if you've subscribed to that stream at least once before
in the current session.
resolves trac #1196
(imported from commit e47ff139a9c25b1b8689ea6795dfad96ae8d2591)
If the pasted content includes text, we don't upload included files and instead
allow the default behavior to take place. This deals with a quirky behavior of
pastes from MS Word, which in addition to the formatted string content also
includes a thumbnail of it. Images still paste as usual.
(imported from commit 60c4f8dd90ac2e8e38940fb302cc9d1ebeecfdf3)
This allows users on signup-eligible domains to sign up for Humbug using
Google Apps.
As part of this, we wrap the openid done view in our own code in order to
handle the "Unknown user" error. Therein, we create a PreregistrationUser
and then shunt the user through the rest of the confirmation process,
pre-filling in their name.
(imported from commit 066d9a1021384a6da2662352e62a701451bd6f44)
Changes include:
* New markup for the button in compose.html
* A hidden file input field in compose.html
* Added reference to the file input field in filedrop
initialization in compose.js
* A feature test and a click event binding for
the "Attach files" button in ui.js
* New paperclip icon reference in fonts.css
* New general hidden display classes in zephyr.css
* New composition pane button classes in zephyr.css
Fixes to the "Attach files" button commit e673bda...
Changes include:
* Fixed the feature test for (new XMLHttpRequest).upload so
it works in Firefox.
* Renamed .button to .message-control-button
* Removed stray newlines
(imported from commit c1f0834b74fd7120ec27db64ec380ffb3fa34633)
Having a message ID range significantly improves the query
performance because the number of messages Postgres has to consider
is much smaller.
(imported from commit 9b007457712f1c1502d526abea1b6fd742bd911d)
The fact that we were dumping this cache and not refilling it seems to
be one of the causes of Tornado restarts being a lot slower on prod
than on local systems.
(imported from commit a32a759f4dfb591706ede1cce2d38f5c3704193c)
Previously, our check for whether we needed to call load_old_messages
a second time on page load to get up to the present caused us to
basically always do such a call.
(imported from commit b599041e8c0853b4c8c9ab2def6679142302523e)
On my laptop, this saves about 80 milliseconds per 1000 messages
requested via get_old_messages queries. Since we only have one
memcached process and it does not run with special priority, this
might have significant impact on load during server restarts.
(imported from commit 06ad13f32f4a6d87a0664c96297ef9843f410ac5)
The internal format of 'message' had changed, so prior to this commit,
the tutorial was receiving (a) internally inconsistent, and (b)
not-what-it-expected versions of the message.
(imported from commit 233b934e6b600bd59125d133fdf7443fd8f6bbf8)
It's subtle, but the slice was in the wrong place and wasn't
actually truncating the stream name at all, so the client and
server disagreed about where the tutorial messages should go.
(It might be the case that we should accept the tutorial stream
name from the client directly, rather than computing it in two
places.)
(imported from commit 8273223f182e8ad36eaea1cbf75e1426fcfdfbab)
If the system was waiting for you to reply and you replied 'exit', the
tutorial would stop -- but our thing that was waiting for you to reply
would continue waiting. It would eventually time out and send you the
heartbroken "I didn't hear from you so I stopped waiting" message.
Chances are, you were unsubscribed so you didn't see it, but we
should still just not send it.
(imported from commit 694e442bc29b32efd59f08b4b8b5f573768aea21)
Previously it was centered with respect to its enclosing div, which
looked slightly off.
(imported from commit 3878f162d3eb50ce85cae7054102095069aa60c8)
Pretty hackish for now, since this is presumably going to all
be redone with Font Awesome icons before too long.
(imported from commit 497d6cf18d7a8d6014a20c08d66d88c324478e55)
Timing out within the Twitter portion of the render causes the message
to still go through (without a preview). If we don't time out here, the
entire Markdown render times out, which rejects the message in its
entirety -- a far worse outcome.
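A hedged sketch of the idea (fetch_tweet stands in for whatever
actually calls the Twitter API, and the timeout value is illustrative):
    from concurrent.futures import ThreadPoolExecutor, TimeoutError
    # A small shared pool so a hung Twitter call doesn't tie up the render.
    twitter_pool = ThreadPoolExecutor(max_workers=2)
    def fetch_tweet(tweet_id):
        ...  # hypothetical: call the Twitter API and return the tweet data
    def fetch_tweet_or_none(tweet_id, timeout_seconds=3):
        future = twitter_pool.submit(fetch_tweet, tweet_id)
        try:
            return future.result(timeout=timeout_seconds)
        except TimeoutError:
            # Degrade to "no preview"; the overall render, and therefore
            # the message, still goes through.
            return None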
(imported from commit f510a56f48afa46da8ec6277496fa03374cdb042)
This was apparently broken by the final revision of our fix to the
autoscrolling+narrow bugs, because it attempted to use jQuery's
animation queues to restrict which animations were stopped, and this
doesn't seem to work.
(imported from commit cf97f9f56dc5a16d1aa0322b5e6ec432a76d3be2)
See PEP 328[1] for details. This feature was introduced in Python 2.5 and
will become mandatory in Python 3.
[1]: http://www.python.org/dev/peps/pep-0328
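For example, with absolute imports enabled, a bare import always refers
to the top-level module rather than a same-named sibling file in the
package:
    from __future__ import absolute_import
    # This is now guaranteed to be the standard library's json module,
    # even if the package contains its own json.py.
    import json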
(imported from commit 7444eeba8a08d5f91b94c7921848f2274979bd76)
* Don't assume clipboardData.items exists, since it doesn't on Safari.
* Make sure there are no files if using a clipboard drop; Safari includes a blank text/uri-list data entry.
* Fix image pasting on Firefox.
(imported from commit ea0d56fe73ca45cf2e4d437df23a4023bb649445)
Previously, we were calling util.same_stream_and_subject on a pair of
messages, one of which was a private message, which is not valid. We
should have instead been calling util.same_recipient, which checks the
message type as well.
(imported from commit bc5715807036bff1fd4f214dafad00e33678e91d)
Previously we were using message.display_recipient everywhere, which
is actually pretty confusing.
(imported from commit a58471172e28c039af8e290362e54b6660543924)
This is more consistent with how we compare subjects etc., and can be
used for comparing the subjects of a potential future message that
doesn't have a recipient id yet.
(imported from commit 93251c62dc74b3f12c6140b12fc8d6c756d35f37)
* renamed the 'icon-star' style to 'icon-vector-star' to keep backwards compatibility for icon-* classes
* changed relevant styles in zephyr.css; added FontAwesome assets
* changed relevant CSS classes in base.html, left-sidebar.html, ui.js, message.handlebars
* added new fonts.css to start consolidating all font-based assets
* added fonts.css to PIPELINE_CSS in settings.py under 'portico' and 'app'
* modified the stars test suite to reflect new star icon class name.
(imported from commit 3116fcfd4b5fb4edecd457da554fea616bb7081b)
Don't show an error if we can't handle the drop contents, since it may
just be empty rather than an unsupported-browser issue.
(imported from commit 986495b4a94f4afacf75ffb35ea507d86c369b2f)
Amazingly, this saves about 250ms on every get_old_messages query in
my testing on postgres.humbughq.com (previously, we were scanning all
rows in the zephyr_usermessage table rather than using an index).
(imported from commit 566a5ef0bbf3c2198fa9e0b63d34e38ac9c57d18)
Previously it was centered with respect to its enclosing div, which
looked slightly off.
(imported from commit a56ca3e9f20e9b01236b58be7a279d28b97e74bc)
Some functions invoked by the make_script framework weren't returning
their Deferreds. I noticed this because the hello stream wasn't getting
picked correctly, since loading your real subs hadn't completed yet.
(imported from commit fac3fa36b77585bd5c03bf8fbaec052fe397a481)
Using [] doesn't cause incorrect behavior, but it's a mismatch with
how stream_info is initially declared and gives you a confusing
representation at the console.
(imported from commit c03d9e6a29ff990659f41ee478f631a019a5ac25)
Previously we added some names to your subs to use them as examples
during the tutorial. We no longer do that, but the tutorial could pick
a name from that list to recommend that you say hi on, even if you
aren't subbed.
Don't do that; instead, try to pick, in order of preference:
* your company name
* a probably-good stream name like social
* a stream that is hopefully not an alert stream like nagios
* eventually give up and pick anything
(imported from commit ec20c7722ea95b025dec62bcf47e33c62d1a8029)
Also handle the case where subscribing fails.
This race could cause you to not see initial traffic from the tutorial bot.
(imported from commit 395a2968555e20a4dbc106dfa9d5790e9f102a3e)
This was causing the spacing to be extra-spacious.
We only need the extra space at the bottom of a
--- Subscribed to stream x ---
message.
Also, add this space (and style things more nicely) with CSS --
<center> has been deprecated since HTML 4.01.
(imported from commit b5500bdf67bdcca5f4e5b2d3bbd76846b3961254)
This is basically just the logical extension of the previous commit
for the case where the last thing we did was subscribe or unsubscribe.
This even magically updates when you subscribe or unsubscribe from
another window :).
(imported from commit 2399329d11bf66aa0b614a21d2b3cf4035452279)
This is required to get historical messages that might be within the
message ID range of your home view.
I think we could avoid calling load_old_messages on every narrow by
tracking when the user last subscribed to each stream, and if the user
subscribed before the first message in the current home view message
window (aka the messages used for the fast-path narrowing), don't call
load_old_messages. This would happen almost every time. But it would
require a schema change to do this.
We also remove the load_more_messages call from hashchange.initialize.
It is no longer required now that we're calling load_old_messages on
each narrow anyway.
(imported from commit 1c78c183e61392429592ae89d566315be7be8999)
Rather than hardcoding e.g. "message__recipient", this works by using
(prefix + "recipient"), where prefix is either "message__" or "".
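A minimal illustration of the trick (the function and argument names
here are made up for the example):
    # Build the ORM filter keyword dynamically so the same code can query
    # either Message directly (prefix "") or UserMessage through its
    # foreign key (prefix "message__").
    def filter_by_recipient(query, prefix, recipient_id):
        return query.filter(**{prefix + "recipient": recipient_id})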
(imported from commit 3a27d6499bc869d6dd389b074cb7d7cf286760aa)
This should fix the problems we've been having with out-of-order
message deliveries, and is also an important prerequisite for showing
historical messages.
(imported from commit 77a18a526bf8ec4f1f70b776ac8b7e189d00bcf4)
Otherwise these logs will end up all getting split up when we switch
to the new deployment model.
(imported from commit 0514c296470be7113cab6c2f48e8dd33f1b9353d)
This is a V1 of this feature. For now, the only way to expand is by
narrowing to the stream -- future revisions may add a manual toggle if
it is found to be useful.
Showing per-subject unread counts will also be coming in a future
revision.
(imported from commit fb5df0d27e928fa3b0f32b9ff2c1c508202cf7e5)
This commit will incorrectly list previously-online users as active, a
shortcoming that is addressed in the next commit.
(imported from commit b018767df686f88c0ca939c067c573e4d7cea357)
Boto usually handles this for us, but can't do autodetection like it
normally would because the file path we tell Boto isn't the original name
of the file.
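A hedged sketch of what setting the header explicitly looks like with
boto (the bucket name, key path, and content type are illustrative):
    import boto
    def upload_to_s3(data, path, content_type):
        conn = boto.connect_s3()  # credentials come from boto's usual config
        bucket = conn.get_bucket("humbug-user-uploads")  # bucket name assumed
        key = bucket.new_key(path)
        # The key name isn't the original filename, so state the type explicitly.
        key.set_contents_from_string(data, headers={"Content-Type": content_type})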
(imported from commit 1ad4b04baf39be8887c86f7238438580651874ff)
Otherwise it applies to all password-type <input>s, which is not necessarily
what we want.
(imported from commit da2bb86961f4ff1dcc48e89e51abac6dbea79548)
We now have the bar color to indicate (for most users) whether the password is
valid, so revert to the default validation behavior and don't validate before
the first blur.
(imported from commit 5c2f6e05a8796033942a2af62f244b61459ff1bb)
And scroll there on any error (previously, we would scroll only if we
ended up submitting the form).
(imported from commit 63597c4da78ac92cd5c2314d6d174d178b1caaf3)
It seems to have no effect and does not appear anywhere else in our repository
or in jquery.validate.min.js.
(imported from commit c4d2f730f3b680e15af17cefee34f6930e64ade0)
Otherwise, if you get an error, those e-mails are still around the
next time you try to invite someone.
(imported from commit b521a74f4d6c0d67271f804221f519d1aa7551ff)
This avoids tens of seconds of delay when you invite several people at
once through the web UI.
(imported from commit 75acdbdb04caf62bbb08affc7796330246d8a00e)
This fixes user-visible browser errors caused by trying to use the id
of messages in an empty message list.
One error could be triggered by trying to go to the end of your feed
with the End key during a reload.
Another could be triggered by trying to narrow to a stream or subject
using hotkeys while in an empty narrow.
(imported from commit a0e5456fd3b475aecac6eddd7104772baaf3aeb8)
This also changes the API for GET /json/subscriptions/property to
only retrieve the property for a particular stream instead of
returning all streams and their properties. We weren't using this
functionality anywhere and the change makes the API more consistent.
(imported from commit 2799aec2550fd0558e2282beb19734d60801bdb8)
I noticed that on Chrome, calling narrow.deactivate() actually ended
up calling itself recursively due to the hashchange code not correctly
handling the fact that in Chrome if you set
window.location.hash = '#';
and then read out the value, you get '' back out.
(imported from commit 9b5047fbe0e2ac1846e5325d066c72306634c523)
What was happening is that if you un-narrowed immediately after
receiving a message (e.g. because you just sent it), the autoscroll
animation from the zfilt table would still be running after you return
to the home view, resulting in the viewport being scrolled to an
apparently random point in the home view (even though the pointer was
still in the right place).
This cancels the autoscroll animations whenever you do one of:
(1) hashchange (e.g. to go to the settings page)
(2) select a message (covers narrowing/unnarrowing as well as keyboard hotkeys)
(3) mousewheel scroll
since those are basically the cases where we set the viewport
scrolltop directly.
Arguably this should instead be something where we somehow detect
which scroll events are triggered by what, and cancel for any scroll
event not from the animation or re-rendering, but that seems hard.
(imported from commit f776021303404c87b36241c733b3d1bcb083163b)
The previous code for adding users to default streams wouldn't do so
if the user didn't have a PreregistrationUser row.
(imported from commit 25f1383f6771319542d07660b29d891368889212)
Now that our plugin is in the Jenkins marketplace thing,
we don't need to have the user laboriously download it
from us and upload it themselves.
(imported from commit 25e9926f7f2314db8f3ea6c00c40514b6fd546c3)
For our primary measures of user engagement, messages sent by bots can
confuse the picture (e.g. a realm could be dead, but not appear to be,
because they didn't bother uninstalling their GitHub and Jenkins
hooks). So it's best to leave those out of our main stats.
(imported from commit 4d0f0e6442093daab164d0ed016fff1d1aa906c7)
When testing locally this bar sort of lies, because the actual bottleneck
is Django→S3.
In prod, our connection to S3 will supposedly be really fast, so this won't
matter.
(imported from commit c9f4b4882cbfdf3bbb8180f1500f35d8481c1f39)
This allows users to drag and drop content onto the compose box, storing
their data in Amazon S3.
New dependencies:
- python-boto
(imported from commit 339874e483db5c36312c9ceae56db29da6ca0d99)
This creates a new management command, subscribe_new_users, which should be
run as a daemon process. When new users are created, an event is passed to
RabbitMQ including the following data:
* Email
* Full name
* IP address of the person who confirmed registration
* Time of registration confirmation
MailChimp strongly encourages the collection of the last two to enable
responses to abuse requests, and providing more data lowers the chance that
we could get banned from their service if complaints do occur.
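For illustration only, the event body might look roughly like this (the
field names are assumptions, not taken from the actual code):
    import datetime
    # Hedged sketch of the RabbitMQ event body for a new signup.
    event = {
        "email": "newuser@example.com",
        "full_name": "New User",
        "ip_address": "203.0.113.1",  # who confirmed the registration
        "signup_time": datetime.datetime.utcnow().isoformat(),
    }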
To use this commit, you need to install the "postmonkey" module from
PyPI.
(imported from commit 20c628c3fa8bb985aaead85a80ad3b38bf94b9dc)
Apparently it no longer coalesces adjacent blank lines in a code block (which
seems like an improvement). The new test case doesn't have adjacent blank
lines and will work on old and new versions alike (tested on staging).
(imported from commit e49902be041cf1e7d6fbe489685b966cf4eae108)
We accidentally lost this when we did the User/UserProfile merge (this
commit also deletes the old code to add the auth_user index in
do-destroy-rebuild-database).
What follows is mostly just notes for future reference, but when
deploying this change to staging, we should consider running the
following instead of using the migration directly:
CREATE UNIQUE INDEX CONCURRENTLY zephyr_userprofile_email_uniq ON zephyr_userprofile(email);
ALTER TABLE zephyr_userprofile ADD CONSTRAINT zephyr_userprofile_email_uniq UNIQUE USING INDEX zephyr_userprofile_email_uniq;
CREATE INDEX CONCURRENTLY zephyr_userprofile_email ON zephyr_userprofile(email);
But I think it might be the case that it's fine to just run it
directly, since the ALTER TABLE part seems to hang if there's an open
transaction working on a UserProfile object anyway.
(imported from commit 1bf34ce242de51e97c91c8bab86b6b273e17fb43)
This is preparatory for removing the StreamColor model, so we also set
things up so that anything changing the StreamColor model changes the
Subscription model too.
The manual task is to run the copy_colors.py management command after
deployment to each of staging and prod.
(imported from commit 1be7523ca59f5266eb2c4dc2009e31209ed49635)
This allows blueslip to catch exceptions from the event handlers on
these elements in addition to the other benefits that not using
inline handlers provides.
(imported from commit 2bdcb2496c6c08fa7228a20ce6164b527cf64e41)
The close handler will be called on cancel anyway, so we don't need
to delete in the click handler.
(imported from commit 0fcf4b0d1408312a0889f2b69e01207c9c3835fa)
Previously, narrowing to a stream name that only contained digits
would throw an exception.
(imported from commit dc76877427078d70e3d5625622c665be3302c976)
Otherwise you could encounter errors if you POST to a method
with this decorator applied.
(imported from commit bcb31f336ea2a1eeee6b9e3e9dfeed1d205ae26a)
I generally don't like this sort of state variable, but I don't see a
better solution. The codepath is that when you start out on the
subscriptions page and then click one of the left sidebar links to
narrow to something:
(1) hashchanged() would call ui.change_tab
(2) ui.change_tab triggers a gear change event
(3) The ui.js gear-changed event handler updates the hash
The result was that the hash ended up at "#". Since there's no easy way to
pass arguments through to the event handler, we just use a global
variable inside hash_change.js to track whether we're currently
handling a hashchange event.
(imported from commit 7bb905a223b5539240fc36de7896ee8074ebc62e)
We previously had 2 mechanisms for narrowing used by the left sidebar
-- the top few links used the hashchange mechanism, while the streams
links used a custom click handler. Both were buggy -- the hashchange
one hadn't been updated to just select the first unread message,
whereas the click handler didn't change tabs.
Fixes #1141.
(imported from commit 8a8af974e78cc5c33937ac0078f04a9b5452b94a)
This appears to have been caused by our code for preventing the
viewport from being recentered if you move the pointer away from the
edge of the viewport from a position near the edge, which was being
run even when it was not triggered by a scroll event.
(imported from commit 0a4b3dcca75a6e5dbf1beb77a5249bd6a9c61341)
The old directional hotkey calculation system was fragile, and because
of this, didn't scroll when you used the Home/End keys.
(imported from commit dca4786de13a4ed2864600dadbf4b1a5ba848074)
...rather than embedding them into index.html.
This is only acceptable for dev, but the next commit adds an alternative
mechanism for prod.
There isn't actually a manual deployment step here. However, this commit won't
work on staging / prod without the next one (since we don't serve
zephyr/static/templates in prod).
(imported from commit dce7ddfe89e07afc3a96699bb972fd124335aa05)
Not needed for any specific reason, but we will need the .runtime.js file
eventually, and we should use a version of the library that matches the
Handlebars compiler.
(imported from commit 5600bc8d44b681999e2e5bbf04b890e2bb8477a1)
Beanstalk integration uses webhooks that use HTTP basic auth to authenticate
the sending user.
(imported from commit bd65f5b2d052a3c1eb04da64d055a3640a384892)
I think all that one needs to do to deploy this commit is, on developer
laptops, run `generate-fixtures --force`.
(imported from commit 34916341435fef0875b5a2c7f53c2f5606cd16cd)
When this is deployed to staging, we need to run
./manage.py logout_all_users --realm=humbughq.com
When this is deployed to prod, we need to run
./manage.py logout_all_users
(imported from commit d6c6ea4b1c347f3d9122742db23c7b67767a7349)
This is intended to be used for logging out users during our deployment of
the UserProfile merge, but it could be useful for other things too.
(imported from commit bfe896d854f997f7a4d06e5bc0f19ec5b1aa5e69)
Previously, we weren't clearing the users out of memcached (we just
killed them in the database), so in fact users were not logged out when
we deactivated them for up to an hour (when the memcached caches would
expire).
(imported from commit 0f0a2f70e003c184106c73b22b876f57c1ef3371)
The associated function was moved into zephyr.lib, but the file
location was never updated.
(imported from commit 24c3348533324b0af7c52d6a121eef8b00615275)
And keep the fields updated, by copying on UserProfile creation and
updating the UserProfile object whenever we're updating the User
object, and add management commands to (1) initially ensure that they
match and (2) check that they still match (aka that the updating code
is working).
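A hedged sketch of the syncing half (the actual code may well hook this
into the save path rather than a signal, and the userprofile relation
name is an assumption):
    from django.contrib.auth.models import User
    from django.db.models.signals import post_save
    from django.dispatch import receiver
    SHARED_FIELDS = ("email", "password", "last_login", "is_staff",
                     "is_active", "date_joined")
    @receiver(post_save, sender=User)
    def sync_user_to_userprofile(sender, instance, **kwargs):
        # Assumes a one-to-one relation named "userprofile" on User.
        profile = instance.userprofile
        for field in SHARED_FIELDS:
            setattr(profile, field, getattr(instance, field))
        profile.save()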
The copy_user_to_userprofile migration needs to be run after this is
deployed to prod.
(imported from commit 0a598d2e10b1a7a2f5c67dd5140ea4bb8e1ec0b8)