We were trying to default the user's first name when using Google auth,
but it was getting lost when rendering the form.
(imported from commit 710e0c2ce591488920458dca74209c75e7031abd)
This change will redirect armooo@dropbox.com from stage to prod. It also
removes the prod to stage redirect for all users. This will be rolled
out in two commits to prevent a redirect loop.
(imported from commit c290b630e746f757429b8bbdadbe7768367a5e33)
Resolves an AmbiguousTimeError.
Approved by Leo.
This reverts commit ebfaeb97ffda22b618be7a9206877f9d2ec53404.
(imported from commit 42b29c6c57eb954952a740bc89611031cef1834a)
We were serving 401s on /user_uploads when the user wasn't authenticated (due to
it being a REST endpoint). This was causing a login popup to display instead of
just a broken image preview.
(imported from commit 62640f5bd59eb3b86ab5aae5923ccfa742459805)
Missed message emails were including the context messages in the number
of messages you were mentioned in.
(imported from commit 1749c5d272d2e17d6e28456ace932f80715103a3)
* Fixes a few bugs with missed message address for PMs and huddles.
* Uses missed message address for all missed message reply-to headers on
the zulip.com realm.
(imported from commit 61dd09386e1bbdf9a5096e2400984d31e73a5b74)
The one-time-use address is a unique token which maps to state stored
in redis. We store the user_id, recipient_id, and subject. When an email
is received at this address, it is sent to the stored recipient by the
stored user. Anyone with this address can send a single message as this
user.
(imported from commit 4219417bdc30c033a6cf7a0c7c0939f7d0308144)
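For illustration, a minimal sketch of that flow, assuming hypothetical
helper names, key layout, TTL, and email domain (the real implementation
differs in the details):

    import json
    import os

    import redis

    redis_client = redis.StrictRedis()
    ONE_TIME_TOKEN_TTL = 60 * 60 * 24  # assumed expiry; the real value may differ

    def generate_one_time_address(user_id, recipient_id, subject):
        # The token is an unguessable random string; anyone who learns the
        # resulting address can send a single message as this user.
        token = os.urandom(16).hex()
        redis_client.setex("missed_message:" + token, ONE_TIME_TOKEN_TTL,
                           json.dumps({"user_id": user_id,
                                       "recipient_id": recipient_id,
                                       "subject": subject}))
        return "mm+%s@example.com" % (token,)

    def redeem_one_time_address(token):
        # Look up the stored state and delete it so the address is single-use.
        key = "missed_message:" + token
        data = redis_client.get(key)
        if data is None:
            return None
        redis_client.delete(key)
        return json.loads(data)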
Send a different missed message email for each recipient. This allows us
to set a different reply-to address for each one. PMs and huddles use
the existing logic; replies will be sent to all parties via email.
Missed @-mention emails will have the reply-to address set to the
stream's email address.
(imported from commit bfb7cf7c1382adbf3720caa74cbb927c10dea267)
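A sketch of the per-recipient reply-to selection, with is_stream_message()
and stream_email_address() standing in for whatever the real code uses:

    def reply_to_for_missed_message(message, one_time_address):
        # Hypothetical helper: pick the reply-to header for one recipient's
        # missed message email.
        if message.is_stream_message():
            # Missed @-mention: replies go to the stream's email address.
            return stream_email_address(message)
        # PMs and huddles keep the existing behavior: replies go through the
        # missed message address and are sent to all parties via email.
        return one_time_address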
We were expecting Github to send us the string "true" when the exclude_* options
were set. However, we were actually getting "1" when an option was set and the
empty string when unset. So we were always setting the options to False.
(imported from commit 067ba60b0b0404aebc6eda9487b1201fc2764243)
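In other words, the flags have to be compared against what Github actually
sends; a hedged sketch (the helper name is hypothetical):

    def github_flag_is_set(value):
        # Github sends "1" when an exclude_* checkbox is set and "" when it
        # is not; it never sends the string "true".
        return value == "1"

    # Broken:  exclude_issues = (params.get("exclude_issues") == "true")  # always False
    # Fixed:   exclude_issues = github_flag_is_set(params.get("exclude_issues", ""))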
One common place that this happens (for us) is on a local
Dropbox .dev.corp.dropbox.com instance, which can't be reached
by the Zulip servers.
This commit also:
* Fixes the test suite
* Properly previews /photos/ links
(imported from commit b4788b6236e7a9d390e1efc4673be34d9ba5e091)
After I reverted the change to the bot settings page, this
broke a test. This commit fixes that.
(imported from commit 394b29fea4f75096f7cb8d819145a9adc386276b)
This can be used by mirroring scripts to only forward messages to users
who do not have Zulip accounts.
(imported from commit 200d6bcaaf39238bfb01480a9e906d567d4d9e11)
Truthfully, the actual way to do this is going to be a bit
more involved and also involves changing Realm.NOTIFICATION_STREAM_NAME,
probably on a realm-by-realm basis.
(imported from commit b6a05849d215e07ee6716d116ff5e2c819d5b4be)
Known issues:
* No support for whitelabeling in the email
* No whitelabeling for any externally-visible branding
(imported from commit 9eab7b0744e56a87007b8621a8bb18bbb1080256)
When you are @-mentioned in a stream, we will now send you up to the
last five messages which were sent in the past five minutes on the same
topic and stream.
(imported from commit 6df6c1cf868722a7bf76e54710e38741a7ac8f31)
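Roughly, the context query looks like the sketch below, assuming the
Message model's recipient, subject, and pub_date fields; the helper name
and exact cutoff handling are illustrative only:

    from datetime import timedelta

    from django.utils import timezone

    from zerver.models import Message  # assumed import path

    def get_mention_context_messages(mention, num_messages=5, window_minutes=5):
        # Up to the last five messages from the past five minutes on the
        # same stream (recipient) and topic (subject), oldest first.
        cutoff = timezone.now() - timedelta(minutes=window_minutes)
        recent = Message.objects.filter(
            recipient=mention.recipient,
            subject=mention.subject,
            pub_date__gt=cutoff,
            id__lte=mention.id,
        ).order_by("-id")[:num_messages]
        return list(recent)[::-1]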
Activation emails were using Django's sites framework, which always has
the domain set to zulip.com.
(imported from commit b81eae96e1a75b64dd93970760b869f3271ce88c)
The default today is to not send issues traffic except for realms on a
whitelist. This is despite the fact that we have an exclude_issues boolean
on Github's Zulip-integration page: if we changed the default, every realm
currently relying on it would have to go make this change on every repo.
That would require some work, in terms of communicating with them about the
change and checking the integration settings for all realms to see which are
correctly setting exclude_issues. Unfortunately this probably isn't high
priority today, but let's try to get this whitelist change out to prod ASAP.
(imported from commit 256fe32bb6aaf7de18ff02d8d7e204a13bc02b7a)
Display a red warning box to direct users to staging for the zulip.com
(Dropbox) realm.
(imported from commit 01ad4209d9247406bc82f5dedaf21371101a1d84)
Apply this commit after hours!
To apply this commit, first run the migration and then run the following as the
zulip user on staging:
$ echo 'VACUUM zerver_message' | python manage.py dbshell
The above VACUUM is needed to clean out the existing fast update pending list.
It might take a long time and block new message inserts!
See discussion near Zulip message 18377486 for why we're turning off the fast
update mechanism for zephyr_message_search_tsvector.
The high level overview is:
As a consequence of the high work_mem setting on our postgres server, the
fastupdate pending list for zephyr_message_search_tsvector can grow very large.
This leads to the occasional INSERT or UPDATE taking inordinately long (many
minutes) as the pending list is flushed, blocking other inserts.
One other possible solution for preventing the list from growing too large is to
set the autovacuum storage parameters on the table such that the autovacuum
process will run after a reasonable number of INSERTs or UPDATEs. However, the
table is mostly INSERT-only. Therefore, only the autovacuum_analyze_*
parameters will actually do anything to affect when the autovacuumer will run,
but when it does, it will do a VACUUM ANALYZE instead of a plain VACUUM. We
don't particularly need the table to be re-analyzed that often.
Turning off fast update will eventually cause the index to become less
efficient, but we can always rebuild it later if we notice it starting to get
too slow.
(imported from commit f280c193c3bc0a3f312960510c5a7dcf97f30c3d)
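For reference, a hedged sketch of the core of the change (the real
migration and index name handling may differ):

    from django.db import connection

    def disable_fastupdate():
        # Turn off the GIN fastupdate mechanism so INSERTs/UPDATEs no longer
        # accumulate a large pending list that gets flushed all at once.
        with connection.cursor() as cursor:
            cursor.execute(
                "ALTER INDEX zephyr_message_search_tsvector SET (fastupdate = OFF)")

    # Then clean out the existing pending list once, by hand (see above):
    #   echo 'VACUUM zerver_message' | python manage.py dbshell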
URLs with a realm of "unk" will be queried against the new bucket to
determine the relevant realm of the uploading user.
(imported from commit 5d39801951face3cc33c46a61246ba434862a808)
Otherwise the user_profile.backend attribute doesn't get set. I didn't notice
this previously because on first register authenticate() gets called, and then
the UserProfile object gets cached. This means that subsequent logins work just
fine as long as the UserProfile object is in memcached.
(imported from commit 834d95c46aa07724ea84802f09b7249de99b5ca8)
CUSTOMER16 wants their employee realm to:
* only use JWT logins
* have name changes be disabled (they want users' full names to be
their CUSTOMER16 user name).
* not show the suggestion that users download the desktop app
(imported from commit cb5f72c993ddc26132ce50165bb68c3000276de0)
We currently expect the use of HMAC SHA-256, although there shouldn't be
anything preventing us from using other algorithms.
(imported from commit 354510a0b7e9e273d062a1ab5b2b03d4a749d6a3)
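A minimal verification sketch using PyJWT; the shared secret source and
the "user" claim name are assumptions:

    import jwt  # PyJWT

    def get_login_payload(token, shared_secret):
        # We expect the token to be signed with HMAC SHA-256 ("HS256");
        # other algorithms would also work, but this is what we accept today.
        return jwt.decode(token, shared_secret, algorithms=["HS256"])

    # payload = get_login_payload(request.POST["token"], realm_shared_secret)
    # username = payload["user"]  # hypothetical claim name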
When the date changes between an existing group and a new group, the
existing date separator needs to be updated. This is done by rerendering
the existing group.
(imported from commit a3775815e33872b0ec07704dc7ccf5fd2671fa21)
Now that we are not directly using message in the message list view,
rename the uses of message that are actually message_containers.
(imported from commit 5c355703a8934a74864f5de6ecb1e2fd851e5d41)
The messages being passed to the handlebars templates were global
messages to which we were adding per-list details (whether to show the
name bar, etc.). This causes rendering bugs when you try to rerender a
message, because a different list may have changed it. This commit moves
the global message data to a msg attribute on the message_container,
which will contain the per-list attributes.
(imported from commit 26b1f0d2c72d6288a6d3e7ed5f8692426f2a97ad)
This way if two browsers are disagreeing about your active status, the
active one wins. The active browser continues to update your timestamp,
and the idle browser's changes are discarded until the timestamp on your
active status expires.
(imported from commit dc29e013d045c4b72793097f611ba6802c58e57a)
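The reconciliation rule, as a hedged sketch (constant and function names
are illustrative):

    ACTIVE = "active"
    IDLE = "idle"
    ACTIVE_TIMEOUT_SECS = 60  # assumed expiry window; the real value differs

    def merge_presence(stored_status, stored_timestamp, new_status, new_timestamp):
        # An idle report from one browser is discarded while another
        # browser's active status is still fresh; an active report always
        # refreshes the timestamp, so the active browser wins.
        if (stored_status == ACTIVE and new_status == IDLE
                and new_timestamp - stored_timestamp < ACTIVE_TIMEOUT_SECS):
            return stored_status, stored_timestamp
        return new_status, new_timestamp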
The goal is to have a more data-centric piece that can be unit tested.
We also try to minimize the number of one-off jQuery DOM updates and
rerender handlebars fragments instead. This will prevent the
message_group and DOM from drifting apart and becoming impossible to
rerender correctly.
(imported from commit 03f09803f2bc0c3b8187f76f2cfe90be9f7512a3)
Previously, you'd have to be offline to receive missed message
notifications, or maybe idle for an hour. However, I'm pretty sure the
latter code didn't actually work, so we scrap that and just notify you
via email or push as soon as you're idle.
Closes trac #2350
(imported from commit 899966e0514db575b9640a96865639201824b579)
Before this change, we were incorrectly trying to do local
filtering on negated has searches.
(imported from commit d1a6f1feef6b3cc1c984eb91a73cd16c4e66874e)
We still don't show this in the frontend, aside from our usual "Not
delivered" message that we also show when you send to a non-existent
user.
Addresses #2349
(imported from commit 2f348b15a4d539987ddbcccbbf40e2be87c1f92d)
We show a user as "on mobile" if:
* They are only active on mobile
* They are inactive on all devices and can receive push notifications
(imported from commit 0510b9371727cd19c72f6990df7112921c36ad48)
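As a sketch of that rule (the per-device data structure here is a
simplified stand-in for the real presence info):

    def show_as_on_mobile(devices, can_receive_push_notifications):
        # devices: list of (is_mobile, is_active) pairs, one per client.
        active = [is_mobile for (is_mobile, is_active) in devices if is_active]
        if active:
            # Only show "on mobile" if every active device is a mobile one.
            return all(active)
        # Inactive on all devices, but reachable via push notifications.
        return can_receive_push_notifications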
This doesn't affect code when not in testing. It shaves 7 seconds off of casper
test time on my machine.
(imported from commit 7e27fa781bcf16f36d9c8f058427ba57c41068bd)
Normally, casper delays checking the waitFor condition for 100 milliseconds and
further does not act on that check for another 100 milliseconds. This is just
silly.
(imported from commit ad046ceda81abda5c609ce25ef0d4fb27d3da716)
send_message -> then_send_message
send_many -> then_send_many
wait_and_send -> then_wait_and_send
Hopefully this makes it clearer that they should not be called inside of steps.
(imported from commit 4fcc971817b25056100311ba55303da2c5527f0f)
Casper was calling casper.then(then) instead of calling the callback directly.
This meant that the callback was being added as a step, which worked, but was
not consistent with the rest of the casper model.
(imported from commit b3bf916f7c56dd3d4e7be3569ebdf9d3045cd085)
This will make it slightly easier to consume the data from our clients.
Ref:
RFC 6585 §4
(imported from commit 6d323dc25db78a6d84a163add950f039e03e73d3)
In a test run with a hand-constructed query, this sped up the query time from
280ms to 50ms.
(imported from commit 8cbe199ca50a487491d13d6d6ef940ea668c1038)
We can't just check that the realms are the same because ist.mit.edu is an open
realm and uses @mit.edu email addresses.
(imported from commit 7dbaa81cea6e4f82563dfc0cfe67a61fe9378911)
See #2357. We now support `~~~ .py ` with that trailing space.
Note that the test coverage is Python-side only due to
bugdown_matches_marked being set to false, since we don't yet
support language syntax on the client side.
(imported from commit ccd5fcb0eee01478d349161400103480678d7486)
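The fence-header parsing now tolerates the trailing space (and the
leading dot); a hedged sketch of the pattern, not the actual bugdown
regex:

    import re

    FENCE_RE = re.compile(r"""
        ^
        (?P<fence>[~`]{3,})
        [ ]*
        \.?                      # optional leading dot before the language
        (?P<lang>[a-zA-Z0-9_+-]*)
        [ ]*                     # trailing spaces are allowed and ignored
        $
        """, re.VERBOSE)

    assert FENCE_RE.match("~~~ .py ").group("lang") == "py"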
Previously, if you searched for "in:home search:foo", we
weren't making "in:home" a public operator, so the back end
wouldn't know to exclude muted messages, but the front end
also wouldn't exclude muted messages, because it assumed
that queries with "search:" in them were fully narrowed by
the back end.
Prior commits made it so that the back end is now capable
of doing "in:home" narrowing, so to get the properly narrowed
results, we simply needed to make in:home a public operator in this
commit. We also made in:all public for convenience, although it's
essentially a no-op.
(imported from commit e4a8b10813b50163c431b1721bd316b676be1b83)
Adds APIs to edit a bot's default_to_stream, default_events_register_stream,
and default_all_public_streams.
(imported from commit c848a94b7932311143dad770c901d6688c936b6d)
Support setting default_to_stream, default_events_register_stream, and
default_all_public_streams in the bot creation API.
(imported from commit bef484dd8be9f8aacd65a959594075aea8bdf271)
Allow bot owners to set which streams their bots will receive events for
without needing to change a configuration file.
(imported from commit 2b69e519dbc12ffbdba072031a7f7196c9e50e33)
This allows bot owners to configure which streams messages are delivered
to without needing to change webhook URLs or configuration files.
(imported from commit 32a0c26657c145b001cd8cb3ce0a0364d48902ce)
This commit finishes up support for has:* searches by adding
the front-end pieces, specifically the logic ensuring that "has"
operators are not applied locally. It also implements basic
descriptions for search suggestions and canonicalization
of operands from plural to singular.
(imported from commit a3285bc33d06d76b5a2b403ebcdd911b4cc03980)
Typing "stream:foo -topic:b" leads to "stream:foo -topic:bar" properly
as a suggestion now.
(imported from commit bb0acf52744f7b13977a3db5d3c130d1402b09b7)
Github flags pushes as either `forced` or not. However, it always marks new
branches as forced pushes, which we don't necessarily agree with. This commit
checks for the `created` flag as well.
This resolves Trac #2346
(imported from commit 960bd3ad707a4d1ad431e21dcd79389e8d4b297b)
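The check amounts to something like the following, using the `forced` and
`created` booleans from Github's push payload (the helper name is ours):

    def is_force_push(payload):
        # Github marks new branches as forced pushes, so only treat a push
        # as forced when it is not also creating the branch.
        return bool(payload.get("forced")) and not payload.get("created")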
The match_subject and match_content template vars are notorious
for causing bugs due to the way handlebars forces the strange
../../.. syntax on us, so now we have some test coverage.
(imported from commit c6b151b964ae8b6fb199d9cdbe533a87c6b58947)
Testing directly against NarrowBuilder is convenient, as it
requires very minimal data setup to get a basic sanity check
of the SQL that gets generated.
(imported from commit 5f3bb0364713bd2e4228a9b9d4d16bde297b4e16)
Before saving a Message object, call update_calculated_fields()
to set the has_attachment/has_image/has_link fields.
Note that the pre_save hook we added here does not get called
if you call bulk_create, hence the explicit call to
update_calculated_fields() in do_send_messages().
(imported from commit 1d60ae5908ef186aa5ff1e39277dbb2b765e60d4)
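A hedged sketch of the hook and the bulk_create caveat (the real receiver
wiring and import paths may differ):

    from django.db.models.signals import pre_save
    from django.dispatch import receiver

    from zerver.models import Message  # assumed import path

    @receiver(pre_save, sender=Message)
    def message_pre_save(sender, instance, **kwargs):
        # Keep has_attachment/has_image/has_link in sync on a normal save().
        instance.update_calculated_fields()

    # bulk_create() does not fire pre_save, so do_send_messages() must call
    # update_calculated_fields() on each message explicitly before the bulk
    # insert.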
A stream is vacant when it has no subscribers and occupied when it has at least
one subscriber.
We have a slightly odd model where stream creation is conflated with
subscription creation. Streams are created by attempting to subscribe to a
stream that doesn't exist. We also hide streams with no subscribers from users
to make it seem like they've gone away. However, we can't actually remove those
streams because we want to preserve history.
This commit moves us towards a separation of these two concepts. By sending
events for stream creation, occupation, vacancy, and deletion, we allow clients
to directly observe the global state of streams rather than indirectly observing
subscription information. A more complete solution would involve adding a view
for explicitly creating streams without subscribing to them.
This commit does not handle the intricacies of invite-only streams. We
currently simply do not send these events for invite-only streams.
(imported from commit 5430e5a5eecefafcdba4f5d4f9aa665556fcc559)
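A sketch of the occupancy transition logic; the event op names and shape
here are illustrative, not the exact wire format:

    def stream_occupation_event(stream, subscribers_before, subscribers_after):
        # Emit an event on the 0 -> 1 and 1 -> 0 subscriber transitions so
        # clients can track stream occupancy directly.
        if stream.invite_only:
            # We currently do not send these events for invite-only streams.
            return None
        if subscribers_before == 0 and subscribers_after > 0:
            return dict(type="stream", op="occupy", streams=[stream.name])
        if subscribers_before > 0 and subscribers_after == 0:
            return dict(type="stream", op="vacate", streams=[stream.name])
        return None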