The .data() method tries to coerce the attribute's value into a
JavaScript type, which is not what we want when the stream name looks
like a number or some other JavaScript value.
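For illustration only (the element and attribute here are made up, not
our actual markup), this is the jQuery behavior in question and the
.attr() workaround:

    var elem = $('<li data-name="123"></li>');
    elem.data("name");       // => 123, coerced to a Number by jQuery
    elem.attr("data-name");  // => "123", the raw string we actually want
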
(imported from commit a5f639d2ef98435cec6beacf3837fc185474a955)
On page load, the scroll_finished function was being called while
scroll_start_message was still -1. This caused us to mark as read all
of the messages we had loaded, up through the messages initially
visible. This was particularly problematic because message_range
iterates over all message ids between its two arguments.
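A minimal sketch of the guard this implies, keeping the names
scroll_start_message and message_range from this commit and inventing
the remaining helpers:

    function scroll_finished() {
        if (scroll_start_message === -1) {
            // No real scroll has happened yet; don't mark anything as read.
            return;
        }
        // message_range covers every message id between its two arguments
        // (assumed here to return an array of ids), so a bogus -1 start
        // would sweep up everything we had loaded.
        var ids = message_range(scroll_start_message, last_visible_message_id());
        ids.forEach(mark_as_read);  // last_visible_message_id / mark_as_read are hypothetical
    }
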
(imported from commit d93209d466797939cc9dbdbe76d25a5b20195bd2)
Previously we were doing quadratic work in the number of streams
because we had to iterate over all <li> elements every time we added
a new one.
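As a rough sketch of the non-quadratic approach (names and markup here
are invented, not our actual code), keep a lookup from stream name to
its <li> so adding a stream never scans the existing rows:

    var stream_rows = {};  // stream name -> its <li> element

    function add_stream_row(name) {
        if (stream_rows.hasOwnProperty(name)) {
            return stream_rows[name];        // O(1) lookup, no scan of every <li>
        }
        var li = $("<li>").attr("data-name", name).text(name);
        $("#stream_filters").append(li);     // hypothetical sidebar container
        stream_rows[name] = li;
        return li;
    }
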
(imported from commit 60cb97f77d161e9d8c3072157fa9c57c58f7af52)
Since we pick a new color every time we add a new subscription and
recomputing the available colors was linear in the number of
subscriptions, we were doing quadratic work on page load.
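For illustration (the palette and names are invented), the idea is to
keep the pool of unused colors up to date incrementally rather than
recomputing it from the full subscription list for each new
subscription:

    var unused_colors = ["#76ce90", "#fae589", "#a6c7e5", "#e79ab5"];  // example palette

    function pick_color() {
        if (unused_colors.length !== 0) {
            return unused_colors.shift();  // O(1) per subscription instead of O(n)
        }
        return "#c2c2c2";                  // arbitrary fallback once the palette runs out
    }
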
(imported from commit 647ff3cb82f405755711da47701f005e7bc0023e)
We were previously doing this on every message. Because
update_recent_subjects is linear in the number of streams in the
sidebar, this became very slow when we enabled the streams sidebar
for the MIT realm.
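Conceptually (the entry point below is hypothetical), the change is to
batch the sidebar work instead of doing it per message:

    function add_messages(messages) {
        messages.forEach(function (message) {
            add_to_message_list(message);  // hypothetical per-message work
        });
        // update_recent_subjects is linear in the number of streams in the
        // sidebar, so run it once per batch rather than once per message.
        update_recent_subjects();
    }
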
(imported from commit 95cd71d83bbcc08cc6c5c79ca567b5d6b9b17173)
We were previously calling sort_narrow_list after each stream was
added. Because it is linear in the current length of the sidebar
list, we were doing quadratic work on page load. When we enabled the
streams sidebar on the MIT realm, this became problematic because of
the number of subscriptions Zephyr users have.
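A minimal sketch of the fix this implies (everything except
sort_narrow_list is an invented name): append all the streams first,
then sort the sidebar list once.

    function populate_stream_sidebar(stream_names) {
        stream_names.forEach(function (name) {
            add_stream_row(name);  // hypothetical append-only helper
        });
        sort_narrow_list();        // one O(n log n) sort instead of n linear passes
    }
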
(imported from commit d60ddc638f0a81fbce08eecd6671e9ea6ca38515)
Messages are now explicitly condensed by our JS, which means that if
we run into some bug where our JS doesn't run, you still see the whole
message (rather than getting a clipped message).
(As of this commit, this can happen when you are, e.g., on the
Settings page and someone sends you a message.)
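Conceptually (this is not the literal implementation), condensing is
now opt-in on the client, so a JS failure leaves the message fully
expanded:

    function maybe_condense(row) {
        var content = row.find(".message_content");  // hypothetical selector
        if (content.height() > 100) {                // illustrative threshold
            content.addClass("condensed");           // CSS clips .condensed content
            row.append($("<a>").addClass("expand").text("[More]"));
        }
        // If this code never runs, the message simply stays uncondensed.
    }
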
(imported from commit f3bec97800ea1852c80203e73552ee545fcc7e8a)
This fixes a bug where if you were narrowed to a search and received
a new message that belonged in that search, the message would appear
to have an empty subject and content.
(imported from commit fe1dbf584d3659d57c5b70c7eb45cb22bbc9732f)
Previously, we were having this problem where:
* You narrow to something
* That causes message_list.js:process_collapsing to run on all of the
elements in the view, which changes some of their sizes
* That causes the pane to scroll and either push the content up or
down, depending (since stuff on top of where you were is now a
different size)
* That triggers keep_pointer_in_view, which moves your pointer
Moving process_collapsing into narrow.activate doesn't obviously
fix any of this, but it does seem to mitigate the issue a bit.
In particular, we (a) process it less frequently, and (b) process it
immediately after we show the narrowed view table, which seems to
reduce the raciness of the overall experience.
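A sketch of the ordering this gives us, keeping the process_collapsing
and narrow.activate names from this description and inventing the rest:

    exports.activate = function (operators) {
        build_narrowed_table(operators);  // hypothetical setup
        show_narrowed_table();            // hypothetical: make the table visible
        // Run the collapse pass once, immediately after the table is shown,
        // rather than letting it fire later and race with the scroll
        // handlers and keep_pointer_in_view.
        $("tr.message_row:visible").each(function () {
            process_collapsing(this);     // how it is invoked here is assumed
        });
    };
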
This does, however, introduce a regression:
* If you receive a long message when you're on #settings, e.g., and
then go back to Home, the message does not properly get a [More]
appended to it.
(imported from commit b1440d656cc7b71eca8af736f2f7b3aa7e0cca14)
This can be useful for debugging what sort of narrow is happening, in
addition to the URI decoding bug we're currently experiencing.
(imported from commit 0cb55fec4ac1afa986c747eb79236b4300c9e636)
This post-commit hook depends on pysvn. After a transaction is
completed, a Humbug is sent to a configurable stream with the modified
repo, the actor, and the commit message.
(imported from commit 75cab82d5fe993ea7c4c05be07a7b61e770aff81)
This shouldn't have any effect in normal realms, but for realms like
mit.edu that have large numbers of inactive streams, it will sort all
the streams that have had a recent message (i.e., those that aren't
effectively inactive) to the top.
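A small sketch of the ordering described above (the field names are
illustrative, not our actual data model):

    function compare_streams(a, b) {
        // Streams with recent traffic sort ahead of effectively inactive
        // ones; within each group we fall back to alphabetical order.
        if (a.has_recent_message !== b.has_recent_message) {
            return a.has_recent_message ? -1 : 1;
        }
        return a.name.localeCompare(b.name);
    }

    stream_list.sort(compare_streams);  // stream_list is hypothetical
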
(imported from commit 027ce258d04b6fd58705e49f769dec7e0639bb38)
Previously we used the value of gafyd_name in forms, which meant that
if it was undefined we'd end up with the string literal "None" being
passed to the registration form by the confirmation redirector.
(imported from commit c8fbb749bb793c8e927e86603ce196bf810f3f6a)
We HTML-escape the subject in Postgres to avoid a server round-trip.
Unlike the rendered_content, which is already escaped and cached on
zephyr_message, we normally escape subjects client-side. Escaping in
Django would require fetching the messages that match the query,
escaping the subjects, and then making a second query to Postgres to
insert the markup. We could instead fetch the messages with subjects
marked up using non-HTML (some unique string) that is later converted
into the correct markup either in Django or client-side, but then the
escaping problem would just be with some random string instead of
HTML. Since the function is pretty simple, doing the escaping in
Postgres itself is the least painful option.
(imported from commit 004931d8e496697c18650aee97b1a74c55a04cb2)
In addition to switching the trigger that updates
zephyr_message.search_tsvector over to our new text search
configuration, the trigger now builds the tsvector from
rendered_content instead of content, and it fires only on updates to
the subject or rendered_content columns.
This migration is expected to take a long time. The
checkpoint_segments parameter in postgresql.conf should be
temporarily raised (probably to 32) while it is running.
(imported from commit 4535438bb33ce1db2a74ecbe91efc52afdb568f1)
Text search was not that great partially because Postgres wasn't
using an ispell dictionary (Postgres term) before. We now pull in
Hunspell and use its dictionary and affix rules.
It is OK to run with this new configuration before the full text
column and index updates that are coming in the next few commits.
Manual steps for deploy:
1) On both postgres0 and postgres1 (both before moving on to step 2),
install the hunspell-en-us package
2) On staging, run migration 0022
3) On both postgres0 and postgres1, copy the appropriate postgresql.conf
file over
4) On both postgres0 and postgres1, run `pg_ctlcluster 9.1 main reload`
(imported from commit 706bf0f6ecc46c712cea10b73c34fd9d1dfd4767)