Commit Graph

4628 Commits

Author SHA1 Message Date
Jessica McKellar ce218dec48 realm_stats: only count active (not deactivated) users.
(imported from commit e35838d5bb6260efb33290ab63fdbd10a3d9d879)
2013-05-01 21:59:28 -04:00
Jessica McKellar 5fb65712af realm_stats: only include streams with subscribers (i.e., not tutorial streams).
(imported from commit 870ce51df191611569bdd7c2509cc679748ea201)
2013-05-01 21:59:28 -04:00
Zev Benjamin e1c8d2f50f Return match_subject and match_content from messages_in_narrow()
This fixes a bug where if you were narrowed to a search and received
a new message that belonged in that search, the message would appear
to have an empty subject and content.

(imported from commit fe1dbf584d3659d57c5b70c7eb45cb22bbc9732f)
2013-05-01 21:52:04 -04:00
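
A hedged sketch of the shape of this fix: when the client asks which of a
batch of new message ids fall inside a search narrow, the server now returns
highlighted match_subject and match_content alongside each id, so the client
has something to render. Everything here other than those two field names
(the view name, request parameters, and helpers) is an assumption:

    import json
    from django.http import HttpResponse

    def json_messages_in_narrow(request, user_profile):
        msg_ids = json.loads(request.POST["msg_ids"])
        narrow = json.loads(request.POST["narrow"])

        # build_narrow_query() and highlight() are hypothetical stand-ins
        # for the real search machinery.
        matched = {}
        for msg in build_narrow_query(user_profile, narrow, msg_ids):
            matched[msg.id] = {
                # Previously only the ids came back, so a freshly received
                # message in a search narrow rendered with an empty subject
                # and content.
                "match_subject": highlight(msg.subject, narrow),
                "match_content": highlight(msg.rendered_content, narrow),
            }
        return HttpResponse(json.dumps({"messages": matched}))
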
Jessica McKellar cc321636b1 Add a management command to bankrupt users.
(imported from commit 58fbd08fc31a69c9ee7fb73b9302d44eb87db1fa)
2013-05-01 21:16:40 -04:00
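
A minimal sketch of such a management command, assuming that "bankrupting" a
user means marking all of their existing messages as read; the model fields
used to record that are assumptions as well:

    from django.core.management.base import BaseCommand
    from zephyr.models import UserProfile, UserMessage

    class Command(BaseCommand):
        help = "Bankrupt the given users: mark all of their messages as read."

        def handle(self, *emails, **options):
            for email in emails:
                user_profile = UserProfile.objects.get(email=email)
                # The "read" field name is an assumption for this sketch; the
                # real schema may track read state differently.
                UserMessage.objects.filter(user_profile=user_profile).update(read=True)
                self.stdout.write("Declared bankruptcy for %s\n" % (email,))
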
Steve Howell 7dd64eb157 Handle GitHub sending an empty string for the stream.
(imported from commit 2fd85db1828be44ef63920d5df347b5f85acb573)
2013-05-01 17:39:54 -04:00
Leo Franchi d0b8a2fd21 Mark messages as read when using the End key
(imported from commit b2495cb27b1362d037e786db7f108540f2ce655b)
2013-05-01 17:26:19 -04:00
Leo Franchi de44f08772 Fix typo and send proper dbtime to statsd
(imported from commit 87e982de71e005ed110200b2f9afeec488dcfc51)
2013-05-01 17:24:38 -04:00
Leo Franchi 5a5ed28ab0 Create aggregate all-active-users data
(imported from commit 4009a4eb15a3efb1c05e1e80151db7d1074f0617)
2013-05-01 17:24:38 -04:00
Steve Howell be1e18f864 Support a branches whitelist for GitHub
(imported from commit 066bb8ee028778cb39b43afc9737fd2117c91928)
2013-05-01 17:19:53 -04:00
Steve Howell b645c67994 Use the stream from the GitHub webhook call (if supplied).
(imported from commit 4f57c4dec8ab5e833583a2b5912a92e8a2bd34c0)
2013-05-01 17:19:53 -04:00
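
Together with the "Handle GitHub sending an empty string for the stream"
change above, these GitHub webhook commits amount to routing logic along the
following lines. This is a sketch only; the view name, parameter names, and
payload handling are assumptions, and actually sending the message is elided:

    import json
    from django.http import HttpResponse
    from django.views.decorators.csrf import csrf_exempt

    @csrf_exempt
    def api_github_landing(request):
        payload = json.loads(request.POST["payload"])

        # Use the stream supplied in the webhook call, if any; an empty
        # string (which a hook configuration can send) falls back to the
        # default rather than posting to a stream named "".
        stream = request.POST.get("stream") or "commits"

        # Optional branches whitelist: ignore pushes to unlisted branches.
        whitelist = request.POST.get("branches", "")
        branch = payload.get("ref", "").rsplit("/", 1)[-1]
        if whitelist and branch not in [b.strip() for b in whitelist.split(",")]:
            return HttpResponse("")

        content = "%s pushed to %s on %s" % (
            payload.get("pusher", {}).get("name", "someone"),
            branch,
            payload.get("repository", {}).get("name", "a repository"))
        return HttpResponse(json.dumps({"stream": stream, "content": content}))
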
Waseem Daher 435098d001 Process message condensing in narrow.activate rather than hashchange.
Previously, we had this problem:
* You narrow to something
* That causes message_list.js:process_collapsing to run on all of the
  elements in the view, which changes some of their sizes
* That causes the pane to scroll and push the content up or down
  (since stuff above where you were is now a different size)
* That triggers keep_pointer_in_view, which moves your pointer

Moving process_collapsing into narrow.activate doesn't obviously
fix any of this, but it does seem to mitigate the issue a bit.

In particular, we (a) process it less frequently, and (b) process it
immediately after we show the narrowed view table, which seems to
reduce the raciness of the overall experience.

This does, however, introduce a regression:
* If you receive a long message while you're on, e.g.,
  #settings, and then go back to Home, the message does
  not properly get a [More] appended to it.

(imported from commit b1440d656cc7b71eca8af736f2f7b3aa7e0cca14)
2013-05-01 16:56:03 -04:00
Tim Abbott 9f0fc7c031 blueslip: Include window.location.href in error reports.
This can be useful for debugging what sort of narrow is happening in
addition to the URI decoding bug we're currently experiencing.

(imported from commit 0cb55fec4ac1afa986c747eb79236b4300c9e636)
2013-05-01 16:10:35 -04:00
Luke Faraone 4828c90c64 Implement Humbug SVN integration.
This post-commit hook depends on pysvn. After a transaction is completed,
a Humbug is sent to a configurable stream containing the modified repository,
the actor, and the commit message.

(imported from commit 75cab82d5fe993ea7c4c05be07a7b61e770aff81)
2013-05-01 11:37:07 -07:00
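
A rough sketch of what a pysvn-based post-commit hook like this can look
like. The stream name, credentials, and the exact Humbug API call are
assumptions; only the pysvn dependency and the repository/actor/commit-message
content come from the commit message:

    #!/usr/bin/env python
    # Subversion post-commit hook: argv[1] is the repository path,
    # argv[2] is the revision that was just committed.
    import sys
    import pysvn
    import humbug  # Humbug API bindings of this era; call shape assumed

    repo_path, revision = sys.argv[1], int(sys.argv[2])
    stream = "commits"  # configurable in the real integration

    client = pysvn.Client()
    rev = pysvn.Revision(pysvn.opt_revision_kind.number, revision)
    log_entry = client.log("file://" + repo_path, revision_start=rev,
                           revision_end=rev, discover_changed_paths=True)[0]

    content = "%s committed revision %d:\n\n%s" % (
        log_entry["author"], revision, log_entry["message"])

    api = humbug.Client(email="svn-bot@example.com", api_key="API_KEY")
    api.send_message({
        "type": "stream",
        "to": stream,
        "subject": repo_path.rstrip("/").split("/")[-1],
        "content": content,
    })
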
Leo Franchi 5ef7c4e6db Add a management command for active user stats
(imported from commit a4227858b422c48e272700880e0c21889c7ce566)
2013-05-01 11:17:18 -04:00
Tim Abbott 57c5ea365a Enable the left sidebar for the MIT realm.
(imported from commit d69eba7fd7a95dd88892706c3d36485c71831864)
2013-04-30 18:38:13 -04:00
Tim Abbott 203fc55a7c Sort the narrow list better when there are more than 40 streams.
This shouldn't have any effect in normal realms, but for realms like
mit.edu that have large numbers of inactive streams, it will sort all
the streams that have had a recent message (i.e., those that aren't
effectively inactive) to the top.

(imported from commit 027ce258d04b6fd58705e49f769dec7e0639bb38)
2013-04-30 18:38:13 -04:00
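
The ordering idea, sketched in Python for brevity (the real change is in the
web client); the recency window and the alphabetical tie-break are assumptions:

    def sort_narrow_list(stream_names, last_message_time, now,
                         recency_window=7 * 24 * 3600):
        # Streams with a recent message sort to the top; effectively
        # inactive streams follow, each group alphabetically.
        def is_active(name):
            last = last_message_time.get(name)
            return last is not None and (now - last) < recency_window

        active = sorted([s for s in stream_names if is_active(s)], key=str.lower)
        inactive = sorted([s for s in stream_names if not is_active(s)], key=str.lower)
        return active + inactive
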
Steve Howell 30f825ebfe Handle a bad api_key for the JIRA integration properly.
(imported from commit e6063431e81434faa5f32ac8f91a08f86bd46997)
2013-04-30 18:26:34 -04:00
Steve Howell 0d84d0858a Update docs for the JIRA integration (configurable streams).
(imported from commit cd07ac89f31bc3095f1262d02881a7c78a572c87)
2013-04-30 18:14:11 -04:00
Steve Howell fbad47ec28 Make api_key a URL parameter for the JIRA webhook
(imported from commit 24624a9fcd7e6fdc15d23c2874a04e1465c3f3cf)
2013-04-30 18:14:11 -04:00
Luke Faraone 4491f7b1d8 Don't show a name of "None" if not using Google Apps authentication.
Previously we used the value of gafyd_name in forms, which meant that if
it was undefined we'd end up with the string literal "None" being passed to
the registration form by the confirmation redirector.

(imported from commit c8fbb749bb793c8e927e86603ce196bf810f3f6a)
2013-04-30 14:15:52 -07:00
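
The gist of the fix, as a hedged sketch (the attribute and the surrounding
view code are assumptions): pass an empty string through instead of letting
None get stringified.

    # Before: str(None) shows up as the literal text "None" in the form field.
    # full_name = prereg_user.gafyd_name
    # After: fall back to an empty value when Google Apps auth isn't in use.
    full_name = prereg_user.gafyd_name if prereg_user.gafyd_name is not None else ""
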
Tim Abbott f36d51edeb Update Zephyr Mirroring liveness check for new REST API.
(imported from commit d968fde21bd90510ea7bb7f85ecb9b97b41689f7)
2013-04-30 14:28:38 -04:00
Steve Howell 4450bbcbf8 @csrf_exempt decorator for api_jira_webhook
(imported from commit 6c4ef97b2312b6721bd50605efdf51f7affb514c)
2013-04-30 13:24:47 -04:00
Steve Howell 23f0d0bcb7 Support a stream parameter for the JIRA webhook to override the default stream
(imported from commit b3be7b837f326968f9742c25ba04bdaaf6476b75)
2013-04-30 13:24:47 -04:00
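
Putting the JIRA-related changes in this log together, a hedged sketch of the
webhook view; the api_key-as-URL-parameter, the stream override, the bad-key
handling, and @csrf_exempt come from the commits above, while everything else
(names, payload fields, the default stream) is an assumption:

    import json
    from django.http import HttpResponse, HttpResponseBadRequest
    from django.views.decorators.csrf import csrf_exempt
    from zephyr.models import UserProfile

    @csrf_exempt  # JIRA can't supply a CSRF token, so the view must be exempt
    def api_jira_webhook(request, api_key):
        # The api_key arrives as a URL parameter; reject unknown keys with a
        # clean error instead of an unhandled exception.
        try:
            user_profile = UserProfile.objects.get(api_key=api_key)
        except UserProfile.DoesNotExist:
            return HttpResponseBadRequest("Failed to find user with API key: %s" % (api_key,))

        # Optional ?stream= parameter overrides the default stream.
        stream = request.GET.get("stream", "jira")

        payload = json.loads(request.body)
        issue = payload.get("issue", {}).get("key", "unknown issue")
        content = "JIRA update for %s" % (issue,)
        # Actually sending the message on behalf of user_profile is elided.
        return HttpResponse(json.dumps({"stream": stream, "content": content}))
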
Tim Abbott e9ea54b9ec Improve the zephyr mirror instructions.
(imported from commit 2570672c4184e5c2b6049e370ebc46ec2b59f0b5)
2013-04-30 11:54:16 -04:00
Tim Abbott 5b51705451 Fix logging of requesting user with REST API.
Fixes #1155.

(imported from commit b5becb7418ce9577a6bbaa20dcb68a02f1928b9f)
2013-04-30 11:54:16 -04:00
Zev Benjamin c08a86aeb9 Use server-highlighted subject and content when narrowed to a search
(imported from commit 0579193da040db77f9c7937d3714cb9ffeaf7ed8)
2013-04-30 11:40:27 -04:00
Zev Benjamin fca8f84c14 [schema] Return highlighted subject and content from get_old_messages() when doing a search
We HTML-escape the subject in Postgres to avoid a server round-trip.
Unlike the rendered_content, which is already escaped and cached on
zephyr_message, we normally escape subjects client-side.  Escaping in
Django would require fetching the messages that match the query,
escaping the subjects, and then making a second query to Postgres to
insert the markup.  We could instead fetch the messages with subjects
marked up using non-HTML (some unique string) that is later converted
into the correct markup either in Django or client-side, but then the
escaping problem would just be with some random string instead of
HTML.  Since the function is pretty simple, doing the escaping in
Postgres itself is the least painful option.

(imported from commit 004931d8e496697c18650aee97b1a74c55a04cb2)
2013-04-30 11:40:27 -04:00
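
A sketch of what escaping the subject inside Postgres can look like, attached
to the Django queryset as raw SQL; the replace() chain and the extra() alias
are assumptions, since the actual SQL used is not shown in this log:

    # Compute an HTML-escaped subject in the same query that fetches the
    # matching messages, so no second round-trip is needed to add markup.
    ESCAPED_SUBJECT_SQL = (
        "replace(replace(replace(replace(zephyr_message.subject, "
        "'&', '&amp;'), '<', '&lt;'), '>', '&gt;'), '\"', '&quot;')"
    )

    def with_escaped_subject(queryset):
        # The server can then run its highlighting (e.g. ts_headline with the
        # search configuration) over this already-escaped text.
        return queryset.extra(select={"escaped_subject": ESCAPED_SUBJECT_SQL})
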
Tim Abbott 1986b65c6a Enable historical messages for customer33.invalid.
(imported from commit ed95813f20ba29b425be4d90d6a54beb22ec81ad)
2013-04-30 11:26:20 -04:00
Tim Abbott e3bb1bc8ec bugdown: Fix tweet ID extraction from twitter urls.
(imported from commit 88b9882527a5317bf30bcc5f0d1255e819ea149c)
2013-04-30 10:43:17 -04:00
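
For reference, a hedged sketch of the kind of tweet-id extraction bugdown
needs here; the exact URL forms this fix covers are not spelled out in the
log, so the pattern below is an assumption:

    import re

    TWEET_URL_RE = re.compile(
        r'^https?://(?:www\.)?twitter\.com/[^/]+/status(?:es)?/(\d+)')

    def get_tweet_id(url):
        # Return the numeric tweet id, or None if this isn't a tweet URL.
        match = TWEET_URL_RE.match(url)
        return match.group(1) if match else None
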
Leo Franchi 52f6c720d9 Add new stats server to logging
(imported from commit b3647ab039c902d09a92082c3e98b5b066e6a5c8)
2013-04-29 16:44:41 -04:00
Zev Benjamin 1c761f7ae3 [schema] [manual] Rebuild search_tsvector and change its update trigger
In addition to using our new text search configuration, the trigger
that updates zephyr_message.search_tsvector now builds the tsvector on
rendered_content instead of content and fires only on updates of the
subject or rendered_content columns.

This migration is expected to take a long time.  The
checkpoint_segments parameter in postgresql.conf should be
temporarily raised (probably to 32) while it is running.

(imported from commit 4535438bb33ce1db2a74ecbe91efc52afdb568f1)
2013-04-29 13:58:20 -04:00
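
A hedged sketch of the trigger change described here, as the SQL a migration
might run. The switch to rendered_content and the subject/rendered_content
column list come from the commit message; the trigger name and the search
configuration name are assumptions:

    from django.db import connection

    UPDATE_TRIGGER_SQL = """
    DROP TRIGGER IF EXISTS zephyr_message_update_search_tsvector ON zephyr_message;
    CREATE TRIGGER zephyr_message_update_search_tsvector
        BEFORE INSERT OR UPDATE OF subject, rendered_content ON zephyr_message
        FOR EACH ROW EXECUTE PROCEDURE
        tsvector_update_trigger(search_tsvector, 'humbug.english_us_search',
                                subject, rendered_content);
    """

    REBUILD_SQL = """
    UPDATE zephyr_message SET search_tsvector =
        to_tsvector('humbug.english_us_search',
                    coalesce(subject, '') || ' ' || coalesce(rendered_content, ''));
    """

    def forwards():
        # Expected to take a long time; as noted above, temporarily raise
        # checkpoint_segments in postgresql.conf (probably to 32) first.
        cursor = connection.cursor()
        cursor.execute(UPDATE_TRIGGER_SQL)
        cursor.execute(REBUILD_SQL)
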
Zev Benjamin 8f17f99de2 Construct ts_queries using the new Humbug search configuration
(imported from commit 813ae86e9ea5f8af3ec2abd7d506cd707e699cdf)
2013-04-27 20:06:26 -04:00
Zev Benjamin 2aadf6fc6e [schema] [manual] Create a Postgres text search configuration for use with Humbug
Text search was not that great, partially because Postgres wasn't
using an ispell dictionary (a Postgres term) before.  We now pull in
Hunspell and use its dictionary and affix rules.

It is OK to run with this new configuration before updating our full-text
column and index, which happens in the next few commits.

Manual steps for deploy:
1) On both postgres0 and postgres1 (both before moving on to step 2),
   install the hunspell-en-us package
2) On staging, run migration 0022
3) On both postgres0 and postgres1, copy the appropriate postgresql.conf
   file over
4) On both postgres0 and postgres1, run `pg_ctlcluster 9.1 main reload`

(imported from commit 706bf0f6ecc46c712cea10b73c34fd9d1dfd4767)
2013-04-27 20:06:26 -04:00
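
A sketch of what the corresponding migration plausibly creates, with the DDL
inlined; the configuration and dictionary names are assumptions, while the
Hunspell dictionary/affix idea and the hunspell-en-us package come from the
commit message and the deploy steps above:

    from django.db import connection

    # Assumes the hunspell-en-us dictionary and affix files have been made
    # available where Postgres looks for text search data (step 1 above).
    CREATE_SEARCH_CONFIG_SQL = """
    CREATE SCHEMA humbug;
    CREATE TEXT SEARCH DICTIONARY humbug.english_us_hunspell (
        TEMPLATE = ispell,
        DictFile = en_us,
        AffFile = en_us,
        StopWords = english
    );
    CREATE TEXT SEARCH CONFIGURATION humbug.english_us_search
        (COPY = pg_catalog.english);
    ALTER TEXT SEARCH CONFIGURATION humbug.english_us_search
        ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                          word, hword, hword_part
        WITH humbug.english_us_hunspell, english_stem;
    """

    def forwards():
        cursor = connection.cursor()
        cursor.execute(CREATE_SEARCH_CONFIG_SQL)
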
Leo Franchi 3d18b2eb2f Change statsd IP to new server
(imported from commit ae2ff964fd20ec89cc25bc0bcb58d94947ce462b)
2013-04-26 17:47:00 -04:00
Leo Franchi 5c0cfc44e7 Add iptables rule for statsd
(imported from commit 5311be29fd63151fb9d5a5c0f80ed34f8e8b76f5)
2013-04-26 17:47:00 -04:00
Zev Benjamin 8b3e800bf5 Use the filter on MessageLists when adding messages
(imported from commit c9070e5413352aef612b4763c35a4c72f4ecb852)
2013-04-26 17:45:25 -04:00
Zev Benjamin 8474675076 Attach Filters to MessageLists
This should allow us to stop special-casing the narrowed message list
as much.

(imported from commit 1eb7216fbd8aa16b796c239a189d6ce0008344f9)
2013-04-26 17:45:25 -04:00
Zev Benjamin 4f4e982ed1 Don't do narrows with search operators on the client
(imported from commit 98e6a06f5c10a384e6295b3281d23a061ecdecab)
2013-04-26 17:45:25 -04:00
Zev Benjamin aeea631bd2 Add JSON query for checking which of a set of messages are in a narrow
(imported from commit b1320cf0e1404d6b0f3dbf3a5b32b29287c698d7)
2013-04-26 17:45:22 -04:00
Zev Benjamin 3b8b8a7d4e Fix 'in' search operator
(imported from commit b42c9ef8d00c92ea117f124b933791478098c5b7)
2013-04-26 17:39:08 -04:00
Zev Benjamin c5c0a2ab45 Canonicalize Filter operators early
(imported from commit c251fab94970c127708c1b40b25e57d4afeb7ce9)
2013-04-26 17:39:08 -04:00
Zev Benjamin 0c15db4aff First steps towards making narrow.js more object-oriented
There's still a lot to do here.  For example, the external code
should probably go through the new Filter object directly instead of
indirectly through the narrow module.

(imported from commit 22dcd31cdebd51453f1658af52a4432b2fe7a4cb)
2013-04-26 17:39:08 -04:00
Zev Benjamin 6cdc3f67df Only fetch an extra message in get_old_messages if a narrow isn't specified
In the case where we're getting old messages for a narrowed view, the
anchor message id might not actually be in the result set, so there's
no reason to fetch an extra message.

(imported from commit e610d1f2cb95be3ff9fce6dc95e40c560bc5bf84)
2013-04-26 17:39:08 -04:00
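
A small sketch of the counting logic this implies (variable names assumed):
the extra row was only ever there to account for the anchor message itself,
which is only guaranteed to be in the result set when no narrow is applied.

    def rows_to_fetch(num_before, num_after, narrowed):
        # With a narrow, the anchor may not match the narrow at all, so
        # there is no reason to reserve a slot for it.
        extra = 0 if narrowed else 1
        return num_before + num_after + extra
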
Allen Rabinovich 6970f2deee Corrected the removal code for temporary copy div
(imported from commit 9e3963a3ce4e7039464ab3c8de5939361d7802e5)
2013-04-25 15:11:56 -07:00
Jeff Arnold 88348a8dc7 Prevent an XSS vulnerability by not treating message text as HTML
Addresses #1205

(imported from commit 229c65ae48d509bf3b71ed7cbfcc1fbeb60d14c5)
2013-04-25 17:33:04 -04:00
Allen Rabinovich 417299bdfd Modified the "notvisible" CSS class to account for edge cases.
In particular, I added absolute positioning and hidden overflow,
which ensure that if an element has a persistent min-width
(as a file input field apparently does), it doesn't affect its
parent.

(imported from commit 72e7a5bee2775fb6f229899ba849292eee76aa4a)
2013-04-25 14:30:56 -07:00
Tim Abbott 8760c1c7be get_display_recipient: Use an in-memory cache as well as memcached.
(imported from commit 825cb77b29cd635add252e4b9497feeb7ed7e177)
2013-04-25 17:02:20 -04:00
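
A sketch of the two-level caching this describes: a process-local dict in
front of memcached. The key format and the recompute helper are assumptions:

    from django.core.cache import cache  # memcached-backed in this deployment

    display_recipient_cache = {}  # per-process, in-memory layer

    def get_display_recipient(recipient):
        key = "display_recipient:%d" % (recipient.id,)
        if key in display_recipient_cache:
            return display_recipient_cache[key]
        value = cache.get(key)
        if value is None:
            # compute_display_recipient() is a hypothetical stand-in for the
            # real lookup (stream name, or the list of PM recipients).
            value = compute_display_recipient(recipient)
            cache.set(key, value)
        display_recipient_cache[key] = value
        return value
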
Tim Abbott 7c001822f2 Use bulk requests for updating memcached in get_old_messages.
Otherwise we end up doing 1000 requests to memcached, which can be
quite expensive.

(imported from commit be247f63b5fb88c6f4a45326261b66ea67fe1028)
2013-04-25 14:43:37 -04:00
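
A sketch of the bulk pattern, using Django's cache.set_many() instead of one
cache.set() per message; the key format and serialization helper are
assumptions:

    from django.core.cache import cache

    def cache_save_messages(messages):
        # One bulk round-trip to memcached instead of ~1000 individual sets.
        # to_dict() is a hypothetical stand-in for however messages are
        # serialized before being stored.
        items = dict(("message_dict:%d" % (m.id,), to_dict(m)) for m in messages)
        cache.set_many(items, timeout=3600 * 24)
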
Zev Benjamin 3e1ec5d7c9 Increase size of initial message cache fetch and exclude tabbott/extra's messages
(imported from commit 59544aa3adfb05f50ca69e56f37f57944dfa0b81)
2013-04-25 13:33:51 -04:00
Zev Benjamin a1634b12d3 Increase efficiency of initial message cache query
In repeated trials, the initial data fetch used to take about 1100ms.
In practice, it was often taking >2000ms, probably due to caching
effects.  This commit cuts the time down to about 300ms in repeated
trials.

Note that the semantics are changed slightly in that we may no longer
get exactly 25000 messages.  However, holes in the message_id
sequence are currently very rare or non-existent, so this shouldn't be
a problem, and we don't care about the exact number of messages
anyway.

I believe the problem was that the query planner was unable to
effectively use the LIMIT clause to figure out that only a small
subset of zephyr_message was going to be needed.  Thus, it planned
to operate on the entire table and decided it could not use a more
efficient plan because work_mem, although large, would not be large
enough to execute the query over all of zephyr_message.

The original query was:

SELECT "zephyr_message"."id", "zephyr_message"."sender_id", "zephyr_message"."recipient_id", "zephyr_message"."subject", "zephyr_message"."content", "zephyr_message"."rendered_content", "zephyr_message"."rendered_content_version", "zephyr_message"."pub_date", "zephyr_message"."sending_client_id", "zephyr_userprofile"."id", "zephyr_userprofile"."password", "zephyr_userprofile"."last_login", "zephyr_userprofile"."email", "zephyr_userprofile"."is_staff", "zephyr_userprofile"."is_active", "zephyr_userprofile"."date_joined", "zephyr_userprofile"."full_name", "zephyr_userprofile"."short_name", "zephyr_userprofile"."pointer", "zephyr_userprofile"."last_pointer_updater", "zephyr_userprofile"."realm_id", "zephyr_userprofile"."api_key", "zephyr_userprofile"."enable_desktop_notifications", "zephyr_userprofile"."enter_sends", "zephyr_userprofile"."tutorial_status", "zephyr_realm"."id", "zephyr_realm"."domain", "zephyr_realm"."restricted_to_domain", "zephyr_recipient"."id", "zephyr_recipient"."type_id", "zephyr_recipient"."type", "zephyr_client"."id", "zephyr_client"."name" FROM "zephyr_message" INNER JOIN "zephyr_userprofile" ON ( "zephyr_message"."sender_id" = "zephyr_userprofile"."id" ) INNER JOIN "zephyr_realm" ON ( "zephyr_userprofile"."realm_id" = "zephyr_realm"."id" ) INNER JOIN "zephyr_recipient" ON ( "zephyr_message"."recipient_id" = "zephyr_recipient"."id" ) INNER JOIN "zephyr_client" ON ( "zephyr_message"."sending_client_id" = "zephyr_client"."id" ) ORDER BY "zephyr_message"."id" DESC LIMIT 25000;

with query plan:
 Limit  (cost=0.00..27120.95 rows=25000 width=362) (actual time=0.051..1121.282 rows=25000 loops=1)
   ->  Nested Loop  (cost=0.00..5330872.99 rows=4913981 width=362) (actual time=0.048..1081.014 rows=25000 loops=1)
         ->  Nested Loop  (cost=0.00..3932643.31 rows=4913981 width=344) (actual time=0.042..926.398 rows=25000 loops=1)
               ->  Nested Loop  (cost=0.00..2550275.29 rows=4913981 width=334) (actual time=0.035..752.524 rows=25000 loops=1)
                     Join Filter: (zephyr_message.sending_client_id = zephyr_client.id)
                     ->  Nested Loop  (cost=0.00..1739467.29 rows=4913981 width=320) (actual time=0.024..217.348 rows=25000 loops=1)
                           ->  Index Scan Backward using zephyr_message_pkey on zephyr_message  (cost=0.00..362510.09 rows=4913981 width=156) (actual time=0.014..42.097 rows=25000 loops=1)
                           ->  Index Scan using zephyr_userprofile_pkey on zephyr_userprofile  (cost=0.00..0.27 rows=1 width=164) (actual time=0.003..0.004 rows=1 loops=25000)
                                 Index Cond: (id = zephyr_message.sender_id)
                     ->  Materialize  (cost=0.00..1.17 rows=11 width=14) (actual time=0.001..0.010 rows=11 loops=25000)
                           ->  Seq Scan on zephyr_client  (cost=0.00..1.11 rows=11 width=14) (actual time=0.002..0.010 rows=11 loops=1)
               ->  Index Scan using zephyr_recipient_pkey on zephyr_recipient  (cost=0.00..0.27 rows=1 width=10) (actual time=0.002..0.003 rows=1 loops=25000)
                     Index Cond: (id = zephyr_message.recipient_id)
         ->  Index Scan using zephyr_realm_pkey on zephyr_realm  (cost=0.00..0.27 rows=1 width=18) (actual time=0.002..0.003 rows=1 loops=25000)
               Index Cond: (id = zephyr_userprofile.realm_id)
 Total runtime: 1141.408 ms

In the new code, we do two queries:

SELECT "zephyr_message"."id" FROM "zephyr_message" ORDER BY "zephyr_message"."id" DESC LIMIT 1

followed by:

SELECT "zephyr_message"."id", "zephyr_message"."sender_id", "zephyr_message"."recipient_id", "zephyr_message"."subject", "zephyr_message"."content", "zephyr_message"."rendered_content", "zephyr_message"."rendered_content_version", "zephyr_message"."pub_date", "zephyr_message"."sending_client_id", "zephyr_userprofile"."id", "zephyr_userprofile"."password", "zephyr_userprofile"."last_login", "zephyr_userprofile"."email", "zephyr_userprofile"."is_staff", "zephyr_userprofile"."is_active", "zephyr_userprofile"."date_joined", "zephyr_userprofile"."full_name", "zephyr_userprofile"."short_name", "zephyr_userprofile"."pointer", "zephyr_userprofile"."last_pointer_updater", "zephyr_userprofile"."realm_id", "zephyr_userprofile"."api_key", "zephyr_userprofile"."enable_desktop_notifications", "zephyr_userprofile"."enter_sends", "zephyr_userprofile"."tutorial_status", "zephyr_realm"."id", "zephyr_realm"."domain", "zephyr_realm"."restricted_to_domain", "zephyr_recipient"."id", "zephyr_recipient"."type_id", "zephyr_recipient"."type", "zephyr_client"."id", "zephyr_client"."name" FROM "zephyr_message" INNER JOIN "zephyr_userprofile" ON ( "zephyr_message"."sender_id" = "zephyr_userprofile"."id" ) INNER JOIN "zephyr_realm" ON ( "zephyr_userprofile"."realm_id" = "zephyr_realm"."id" ) INNER JOIN "zephyr_recipient" ON ( "zephyr_message"."recipient_id" = "zephyr_recipient"."id" ) INNER JOIN "zephyr_client" ON ( "zephyr_message"."sending_client_id" = "zephyr_client"."id" ) WHERE "zephyr_message"."id" > 4941883

with the message id filled in as the result of the first query.  The
new query differs from the original only in that its ORDER BY and
LIMIT clauses are replaced by a WHERE clause.  The second query has
query plan:

 Hash Join  (cost=709.30..28048.18 rows=20544 width=365) (actual time=41.678..279.261 rows=25041 loops=1)
   Hash Cond: (zephyr_message.recipient_id = zephyr_recipient.id)
   ->  Hash Join  (cost=102.98..27056.66 rows=20544 width=355) (actual time=3.686..190.730 rows=25041 loops=1)
         Hash Cond: (zephyr_message.sending_client_id = zephyr_client.id)
         ->  Hash Join  (cost=101.73..26772.94 rows=20544 width=341) (actual time=3.649..143.695 rows=25041 loops=1)
               Hash Cond: (zephyr_userprofile.realm_id = zephyr_realm.id)
               ->  Hash Join  (cost=99.99..26488.71 rows=20544 width=323) (actual time=3.578..96.746 rows=25041 loops=1)
                     Hash Cond: (zephyr_message.sender_id = zephyr_userprofile.id)
                     ->  Index Scan using zephyr_message_pkey on zephyr_message  (cost=0.00..26106.24 rows=20544 width=159) (actual time=0.017..41.980 rows=25041 loops=1)
                           Index Cond: (id > 4941883)
                     ->  Hash  (cost=83.33..83.33 rows=1333 width=164) (actual time=3.548..3.548 rows=1333 loops=1)
                           Buckets: 1024  Batches: 1  Memory Usage: 275kB
                           ->  Seq Scan on zephyr_userprofile  (cost=0.00..83.33 rows=1333 width=164) (actual time=0.006..1.646 rows=1333 loops=1)
               ->  Hash  (cost=1.33..1.33 rows=33 width=18) (actual time=0.064..0.064 rows=33 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 2kB
                     ->  Seq Scan on zephyr_realm  (cost=0.00..1.33 rows=33 width=18) (actual time=0.003..0.033 rows=33 loops=1)
         ->  Hash  (cost=1.11..1.11 rows=11 width=14) (actual time=0.027..0.027 rows=11 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 1kB
               ->  Seq Scan on zephyr_client  (cost=0.00..1.11 rows=11 width=14) (actual time=0.003..0.013 rows=11 loops=1)
   ->  Hash  (cost=335.03..335.03 rows=21703 width=10) (actual time=37.974..37.974 rows=21761 loops=1)
         Buckets: 4096  Batches: 1  Memory Usage: 893kB
         ->  Seq Scan on zephyr_recipient  (cost=0.00..335.03 rows=21703 width=10) (actual time=0.004..18.443 rows=21761 loops=1)
 Total runtime: 299.300 ms

(imported from commit b2a70cccc47be7970df407c6be00eccd2e8be82a)
2013-04-25 13:25:15 -04:00
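
In Django ORM terms, the rewrite described above looks roughly like the
following. The model name matches the zephyr_message table in the queries;
the cache-size constant and the offset from the newest id are assumptions
(the log only shows that the WHERE value is derived from the first query):

    from zephyr.models import Message

    MESSAGE_CACHE_SIZE = 25000

    def initial_message_cache_queryset():
        # First query: a cheap index scan for the newest message id.
        max_id = Message.objects.values_list("id", flat=True).order_by("-id")[0]

        # Second query: the same joins as before, but selected by an id range
        # (WHERE id > ...) instead of ORDER BY ... LIMIT, which the planner
        # handles far better.  Holes in the id sequence mean this can return
        # slightly fewer than MESSAGE_CACHE_SIZE rows, which is acceptable.
        return Message.objects.select_related().filter(
            id__gt=max_id - MESSAGE_CACHE_SIZE)
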