The optimizations are:
* Sort over the list of subscriptions instead of the DOM li elements.
This requires storing the li elements for each sub on the sub object.
* Do a bulk insert of the li elements instead of doing them one by one.
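Roughly, the shape of the change looks like the following sketch (the container id, field names, and function name are illustrative, not taken from the actual code):

```js
function rebuild_stream_list(subscriptions) {
    // Sort the plain data objects rather than reading and reordering DOM nodes.
    subscriptions.sort(function (a, b) {
        return a.name.localeCompare(b.name);
    });

    // Build each <li> once, cache it on the sub object, and attach everything
    // in a single append via a document fragment.
    var fragment = document.createDocumentFragment();
    subscriptions.forEach(function (sub) {
        if (!sub.li) {
            sub.li = document.createElement("li");
            sub.li.textContent = sub.name;
        }
        fragment.appendChild(sub.li);
    });
    document.querySelector("#stream_filters").appendChild(fragment);
}
```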
(imported from commit 1a987799930fc677e25f0bc2dcf66f83a4ac3163)
We now fire three events:
* subscription_add_done - fired when subs.js has finished handling a
subscription_add event (all structures are set up, etc.)
* subscription_remove_done - fired when subs.js has finished handling a
subscription_remove event
* sub_obj_created - fired when subs.js has created a sub object. This
happens both when a new subscription is added and at page startup for
all existing subscriptions
These events are fired whenever sub objects are created, even when
not tied to a subscription event.
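This lets other modules hook into subscription handling without reaching into subs.js. A rough sketch of how such events are typically wired up with jQuery (the payloads and the element the events are triggered on are assumptions, not taken from the actual code):

```js
// Listen for the new events somewhere outside subs.js.
$(document).on('sub_obj_created', function (event, sub) {
    // The sub object exists, but the subscription may not be fully set up yet.
    console.log("sub object created for", sub.name);
});

$(document).on('subscription_add_done', function (event, sub) {
    // All of subs.js's structures for this subscription are ready here.
});

// Inside subs.js, after building a sub object:
//     $(document).trigger('sub_obj_created', sub);
// ...and after fully handling an add/remove:
//     $(document).trigger('subscription_add_done', sub);
//     $(document).trigger('subscription_remove_done', sub);
```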
(imported from commit a4863451f37e7fdbad480696b388ea788b01d6b9)
* Start a compose when we do a file upload
* Restore the "Formatting" and "Feedback" links.
* Dismiss composebox error messages when we defocus composebox
Realistically, the "correct" way to do this is not to have to
explicitly manage the composebox's state, as we do now -- it should
just be 100% visible and ready to send any time you click 'send'; it
shouldn't need to have first been composebox.start()ed.
(imported from commit 7f1725c229ed968a9b5500b25d600306173182a0)
What changed:
* Vector icons swapped in for the left sidebar buttons and filters
* Lighter font weight in the stream filters list
* Round color swatches in the stream filters list, with an inner shadow
* Tighter line height in the individual messages in the message pane
* Fixed button widths in the left sidebar (so the buttons are equal in width)
(imported from commit 337dc4a3d8e29945cfc8cfb9524ac76a7b038ad8)
Recently the typeahead for streams in the compose box was modified so
that streams only match queries when the query is a prefix of the
stream name. When that change was made, the old highlighting behavior
was mistakenly left in place. This commit fixes the highlighting.
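A minimal sketch of keeping the matcher and the highlighter in sync (function names and markup are illustrative, and real code would also need to HTML-escape the stream name):

```js
function stream_matches_query(stream_name, query) {
    // Prefix-only match, mirroring the recent typeahead change.
    return stream_name.toLowerCase().indexOf(query.toLowerCase()) === 0;
}

function highlight_stream(stream_name, query) {
    // Highlight only the matched prefix, not every occurrence of the query.
    var prefix = stream_name.slice(0, query.length);
    var rest = stream_name.slice(query.length);
    return "<strong>" + prefix + "</strong>" + rest;
}
```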
(imported from commit b7ec33daba46978df58eb91306686a4f1a57c7fa)
* Properly resize compose area when we cancel out of it.
* Re-enable clicking on 'reply' in popover.
(The issue with the latter is that clicking on "Reply" started
a reply and then bubbled up and triggered our code that canceled
a reply because you clicked out of the composebox.)
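One way to implement the fix for the bubbling problem, sketched with an assumed selector and an assumed reply entry point:

```js
$('.popover .reply_button').on('click', function (e) {
    respond_to_message();   // hypothetical: start the reply
    // Keep the click from bubbling up to the handler that treats clicks
    // outside the composebox as "cancel this reply".
    e.stopPropagation();
});
```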
(imported from commit 25d0ea58b72d2ee246217baf3eb9cac58fc858f5)
Really, the "correct" way to do this is to undo "scrolltheworld", and
then just have a compose div that always lives underneath the message
list div. (This will also allow us to deal much more reasonably with
the whole "Is the composebox in focus" thing.)
In the interest of prototyping something more rapidly, though, we
adopt the somewhat more hackish approach, with the understanding that
much of it will probably be simplified later.
(imported from commit e2754be155c522b6dac28e7b84c62bd2030217c8)
We previously kept the lists in the DOM for all streams and updated
them all when new messages arrived. This was very expensive for
large numbers of streams, so we now just build the subject lists on
demand.
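A sketch of the on-demand approach (the data structure and function names are assumptions): the subject list for a stream is rebuilt only when that stream's row is expanded, rather than being kept current for every stream on every new message.

```js
function build_subject_list(stream_name) {
    var subjects = recent_subjects[stream_name] || [];  // assumed: stream -> subjects
    var ul = $('<ul class="expanded_subjects">');
    subjects.forEach(function (subject) {
        ul.append($('<li>').text(subject));
    });
    return ul;
}

// Called from the sidebar expand handler, not from the new-message code path.
```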
(imported from commit 937ad4322ce2014200aeae8645f79875f6af576e)
This commit also fixes a bug where "starred messages" wouldn't get
bolded when you narrowed to starred messages. However, it also
introduces a regression where subjects aren't highlighted correctly
when loading directly into a narrow; this will be fixed shortly.
(imported from commit 411575d92762e41d04c1baf126c0ab1dfb4225a5)
This will matter shortly as hashchange.initialize can call
narrow.activate(), which fires an event handler.
Really, I have no idea why we have these initialize() methods at all
instead of just doing initialization on document.ready.
(imported from commit 3a6a80e1426b03439b95cae3f142a4b1c43125e9)
We memoize add_message_metadata by checking whether the message is
already in the all_msg_list. Therefore, we need to add messages to that
message list before we add them to the narrowed_msg_list.
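A sketch of the memoization and why the ordering matters (method names like all_msg_list.get are assumptions about the actual code):

```js
function add_message_metadata(message) {
    var cached = all_msg_list.get(message.id);
    if (cached !== undefined) {
        // Already processed once; reuse the same object instead of redoing work.
        return cached;
    }
    // ...compute flags, escape content, register senders, etc. ...
    return message;
}

// Because of this check, messages must go into all_msg_list before they are
// added to narrowed_msg_list, so the second add sees the memoized object.
```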
(imported from commit 4346179376ef6f982162c02c6152a0d294bfb2c0)
The String.localeCompare function is really slow, at least partially
because it creates a locale-aware collator object on each call. So now,
when we can, we create and cache a locale-aware collator object once.
However, most browsers don't yet support creating collator objects, so
we fall back to a non-locale-aware comparison. This is not ideal, but
for now we are mostly working with English-speaking customers.
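A sketch of the caching plus fallback, assuming the standard Intl.Collator API (which is what provides a reusable locale-aware collator where available):

```js
var cached_compare;
if (typeof Intl !== "undefined" && Intl.Collator !== undefined) {
    // Create the locale-aware collator once and reuse its compare function.
    cached_compare = new Intl.Collator().compare;
} else {
    // Non-locale-aware fallback for browsers without collator support.
    cached_compare = function (a, b) {
        if (a < b) { return -1; }
        if (a > b) { return 1; }
        return 0;
    };
}

function compare_names(a, b) {
    return cached_compare(a.toLowerCase(), b.toLowerCase());
}
```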
(imported from commit 51aa02e3b9fe4a0ef0cb084874fe26e91c57f65e)
Additionally, print out a blueslip error instead of dying
if a stream id is accessed when there is no such stream.
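A sketch of the defensive lookup (blueslip is the client-side error logger; the lookup table and function names here are illustrative):

```js
function get_stream_id(stream_name) {
    var sub = subscribed_streams[stream_name];   // assumed: name -> sub object
    if (sub === undefined) {
        blueslip.error("Tried to get id for unknown stream: " + stream_name);
        return undefined;
    }
    return sub.stream_id;
}
```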
(imported from commit 0d6466ca79312a4fb9a235f313303ac5246afb35)
This decouples us from Chrome-specific notifications, which gives us
cross-platform support in at least modern browsers.
We log this action so it's replayable in our message logs.
This implements the model change indicated by the previous schema commit.
(imported from commit b21213cdde54f43670bbb0bf1f607147fc732b38)
We test whether the user's browser supports sound, then determine which
sort of sound format it can play.
Now, whenever we show a desktop notification, we also play a sound.
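A sketch of the feature detection, using the HTML5 audio element's canPlayType (the file paths are placeholders):

```js
var audio = document.createElement("audio");
var sound_file;
if (audio.canPlayType) {
    // canPlayType returns "", "maybe", or "probably"; any non-empty answer
    // means the format is worth trying.
    if (audio.canPlayType('audio/ogg; codecs="vorbis"')) {
        sound_file = "/static/audio/notification.ogg";
    } else if (audio.canPlayType("audio/mpeg")) {
        sound_file = "/static/audio/notification.mp3";
    }
}

function play_notification_sound() {
    if (sound_file !== undefined) {
        new Audio(sound_file).play();
    }
}
```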
(imported from commit dae41e70a6e4f6ed60ffedaac546d77baee52675)
Since they can't be parsed, probably the best thing to do is to send
the user to the home tab; we could also show an error message, but
then we'd need a way to clear that error message -- better to just have
this work.
(imported from commit 67c0475ff06eb0431621eef60b9c50287a158232)
Previously, we were fetching Message.objects.select_related() from the
database, even if we actually ended up fetching the message dicts from
memcached and thus not actually using them. Especially in the cached
case, this resulted in a lot of overhead where the Django ORM put
together Message objects with lots of data in them that were never
used. This commit switches the model to only fetch the full message
objects from the database for those messages which are not found in
the memcached caches.
Here are the timings for get_old_messages before this patch was applied:
(cached)
127ms (db: 42ms/2q) /json/get_old_messages (starnine@mit.edu via website)
385ms (db: 105ms/1q) /json/get_old_messages (starnine@mit.edu via website)
(uncached)
315ms (mem: 6ms/41) (db: 90ms/22q) /json/get_old_messages (starnine@mit.edu via website)
507ms (db: 94ms/14q) /json/get_old_messages (starnine@mit.edu via website)
Here are the timings for get_old_messages after this patch was applied:
(cached)
80ms (db: 9ms/2q) /json/get_old_messages (starnine@mit.edu via website)
133ms (db: 4ms/1q) /json/get_old_messages (starnine@mit.edu via website)
(uncached)
230ms (mem: 9ms/41) (db: 48ms/23q) /json/get_old_messages (starnine@mit.edu via website)
385ms (db: 55ms/15q) /json/get_old_messages (starnine@mit.edu via website)
(imported from commit c4748513392a906393314aa7cd41d98a69865411)
The .data() method tries to coerce the value of the attribute into a
JavaScript type, which is not what we want when the stream name looks
like a number or some other JavaScript type.
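A small illustration of the coercion and the workaround (the attribute name is just an example):

```js
var elt = $('<li data-stream-name="1234">');

elt.data('stream-name');        // => 1234 (a Number; jQuery coerced the value)
elt.attr('data-stream-name');   // => "1234" (the raw attribute string)

// So when the value must stay a string -- like a stream name that happens to
// look like a number -- read it with .attr() rather than .data().
```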
(imported from commit a5f639d2ef98435cec6beacf3837fc185474a955)
On page load, the scroll_finished function was being called while
scroll_start_message was still -1. This caused us to mark as read all
of the messages we loaded, up through the messages initially visible.
This was particularly problematic because message_range iterates over
all message ids between its two arguments.
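The fix amounts to a guard along these lines (variable names follow the description above but are assumptions about the actual code):

```js
var scroll_start_message = -1;   // assumed module-level state; -1 until we scroll

function scroll_finished() {
    if (scroll_start_message === -1) {
        // Nothing has been scrolled past yet (e.g. initial page load), so
        // don't hand message_range a huge id range to mark as read.
        return;
    }
    // ...existing mark-as-read logic using message_range()...
}
```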
(imported from commit d93209d466797939cc9dbdbe76d25a5b20195bd2)
Previously we were doing quadratic work in the number of streams
because we had to iterate over all <li> elements every time we added
a new one.
(imported from commit 60cb97f77d161e9d8c3072157fa9c57c58f7af52)
Since we pick a new color every time we add a new subscription and
recomputing the available colors was linear in the number of
subscriptions, we were doing quadratic work on page load.
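A sketch of keeping the pool of unused colors up to date incrementally instead of recomputing it from every subscription on each pick (the palette and function names are hypothetical):

```js
var stream_color_palette = ["#76ce90", "#fae589", "#a6c7e5"];   // assumed palette
var available_colors = stream_color_palette.slice();

function pick_color() {
    if (available_colors.length === 0) {
        available_colors = stream_color_palette.slice();   // recycle when exhausted
    }
    return available_colors.pop();   // O(1) per new subscription
}

function mark_color_used(color) {
    // Called once per existing subscription at page load.
    var i = available_colors.indexOf(color);
    if (i !== -1) {
        available_colors.splice(i, 1);
    }
}
```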
(imported from commit 647ff3cb82f405755711da47701f005e7bc0023e)
We were previously doing this on every message. Because
update_recent_subjects is linear in the number of streams in the
sidebar, this became very slow when we enabled the streams sidebar
for the MIT realm.
(imported from commit 95cd71d83bbcc08cc6c5c79ca567b5d6b9b17173)
We were previously calling sort_narrow_list after each stream was
added. Because it is linear in the current length of the sidebar
list, we were doing quadratic work on page load. When we enabled the
streams sidebar on the MIT realm, this became problematic because of
the number of subscriptions Zephyr users have.
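The change amounts to batching, roughly like this (assuming the existing sort_narrow_list and a per-stream add helper; the names here are illustrative):

```js
function add_all_narrow_filters(streams) {
    streams.forEach(function (stream) {
        add_narrow_filter(stream);   // append the <li> only; no sorting here
    });
    sort_narrow_list();              // a single O(n log n) pass at the end
}
```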
(imported from commit d60ddc638f0a81fbce08eecd6671e9ea6ca38515)
Messages are now explicitly condensed by our JS, which means that if
we run into some bug where our JS doesn't run, you still see the whole
message (rather than getting a clipped message).
(As of this commit, this can happen when you are, e.g., on the
Settings page and someone sends you a message.)
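In other words, condensing is now additive progressive enhancement: the full message is in the DOM, and the JS clips it and adds a [More] link only when it gets a chance to run. A rough sketch (class names and the height threshold are illustrative):

```js
var MAX_UNCONDENSED_HEIGHT = 200;   // hypothetical threshold in pixels

function condense_if_needed(row) {
    var content = row.find(".message_content");
    if (content.height() > MAX_UNCONDENSED_HEIGHT) {
        content.addClass("condensed");
        row.append($("<a>").addClass("message_expander").text("[More]"));
    }
}
```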
(imported from commit f3bec97800ea1852c80203e73552ee545fcc7e8a)
This fixes a bug where if you were narrowed to a search and received
a new message that belonged in that search, the message would appear
to have an empty subject and content.
(imported from commit fe1dbf584d3659d57c5b70c7eb45cb22bbc9732f)
Previously, we were having this problem where:
* You narrow to something
* That causes message_list.js:process_collapsing to run on all of the
elements in the view, which changes some of their sizes
* That causes the pane to scroll and either push the content up or
down, depending (since stuff on top of where you were is now a
different size)
* That triggers keep_pointer_in_view, which moves your pointer
Moving process_collapsing into narrow.activate doesn't obviously
fix any of this, but it does seem to mitigate the issue a bit.
In particular, we (a) process it less frequently, and (b) process it
immediately after we show the narrowed view table, which seems to
reduce the raciness of the overall experience.
This does, however, introduce a regression:
* If you receive a long message while you're on, e.g., #settings, and
  then go back to Home, the message does not properly get a [More]
  appended to it.
(imported from commit b1440d656cc7b71eca8af736f2f7b3aa7e0cca14)
This can be useful for debugging what sort of narrow is happening in
addition to the URI decoding bug we're currently experiencing.
(imported from commit 0cb55fec4ac1afa986c747eb79236b4300c9e636)
This shouldn't have any effect in normal realms, but for realms like
mit.edu that have large numbers of inactive streams, it will sort all
the streams that have had a recent message at the top (aka those that
aren't effectively inactive).
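A comparator along these lines captures the described ordering (the recency helper is an assumption):

```js
function compare_streams(a, b) {
    var a_active = has_recent_message(a);
    var b_active = has_recent_message(b);
    if (a_active !== b_active) {
        return a_active ? -1 : 1;   // streams with recent traffic sort first
    }
    return a.name.localeCompare(b.name);
}
```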
(imported from commit 027ce258d04b6fd58705e49f769dec7e0639bb38)
We HTML-escape the subject in Postgres to avoid a server round-trip.
Unlike the rendered_content, which is already escaped and cached on
zephyr_message, we normally escape subjects client-side. Escaping in
Django would require fetching the messages that match the query,
escaping the subjects, and then making a second query to Postgres to
insert the markup. We could instead fetch the messages with subjects
marked up using non-HTML (some unique string) that is later converted
into the correct markup either in Django or client-side, but then the
escaping problem would just be with some random string instead of
HTML. Since the function is pretty simple, doing the escaping in
Postgres itself is the least painful option.
(imported from commit 004931d8e496697c18650aee97b1a74c55a04cb2)
In addition to changing the trigger that updates
zephyr_message.search_tsvector to use our new text search
configuration, this migration also makes the trigger build the
tsvector on rendered_content instead of content and fire only on
updates to the subject or rendered_content columns.
This migration is expected to take a long time. The
checkpoint_segments parameter in postgresql.conf should be
temporarily raised (probably to 32) while it is running.
(imported from commit 4535438bb33ce1db2a74ecbe91efc52afdb568f1)
Text search was not that great partially because Postgres wasn't
using an ispell dictionary (a Postgres term) before. We now pull in
Hunspell and use its dictionary and affix rules.
It is OK to run with this new configuration before updating our full
text column and index, which will be coming in the next few commits.
Manual steps for deploy:
1) On both postgres0 and postgres1 (both before moving on to step 2),
install the hunspell-en-us package
2) On staging, run migration 0022
3) On both postgres0 and postgres1, copy the appropriate postgresql.conf
file over
4) On both postgres0 and postgres1, run `pg_ctlcluster 9.1 main reload`
(imported from commit 706bf0f6ecc46c712cea10b73c34fd9d1dfd4767)