Prior to adding reply-to-missed-message-email functionality, this adds
automated tests for the simpler case of incoming stream messages. The
tests live in a new file, test_email_mirror.py.
Also removed the "if not body" code from process_stream_message, which
can never run because an upstream ZulipEmailForwardError is raised first.
get_realm is better in two key ways:
* It uses memcached to fetch the data and thus is faster.
* It does a case-insensitive query and thus is safer.
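A minimal sketch of the kind of lookup described here, assuming a Django
Realm model with a domain field and Django's cache framework backed by
memcached (the key scheme and function name are illustrative, not the
actual implementation):

    from django.core.cache import cache
    from zerver.models import Realm  # assumed: a model with a `domain` field

    def get_realm_sketch(domain):
        # Normalize once so lookups are case-insensitive and cache-friendly.
        key = 'realm:%s' % (domain.lower(),)
        realm = cache.get(key)            # memcached round trip, no DB hit
        if realm is None:
            realm = Realm.objects.filter(domain__iexact=domain).first()
            cache.set(key, realm)
        return realm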
Previously we only did this when new human users were created via the
login process, which meant the management command to create a user did
not add the user to default streams (for example) and any future code
that might want to register a new Zulip user (such as the LDAP
integration) would need to import views/__init__.py in order to
properly set this up.
The do_send_missedmessage_events_reply_in_zulip function in the email
mirror didn't support an EMAIL_GATEWAY_PATTERN that wasn't of the form
%s@example.com (which resulted in replies to missed message emails failing
to be parsed).
This also requires updating the required version of oauthlib; previously an
appropriate version was being installed only because it was a dependency of
the wrong twitter library.
This only affects development environments and/or hand-built
installations relying on the contents of requirements.txt.
To fix existing environments, the incorrect API needs to be explicitly
removed with `pip uninstall twitter`.
Fixes #86.
This reverts commit 39f2908a32c0276b1d87ecedc876c71dd35a9b2f.
We're not including the preview_fail.png image in the release.
(imported from commit 2de1451de2f9b1727fc3a7e64c380b71c0f2caa8)
This also removes the convenient way to run statsd in the Dev VM,
because we don't anticipate anyone doing that. It's just 2 lines of
config anyway:
STATSD_HOST = 'localhost'
STATSD_PREFIX = 'user'
(imported from commit 5b09422ee0e956bc7f336dd1e575634380b8bfa2)
Django commit 596564e80808 stores the user id in the session as a
string, which broke our code that extracts the user id and compares
it to the id of a UserProfile object.
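To illustrate the breakage (a sketch, not the exact code in question):
Django stores the authenticated user's id under the '_auth_user_id'
session key, and once that value became a string, a naive == against an
integer primary key always fails. Casting before comparing sidesteps the
type mismatch:

    def session_matches_user(session, user_profile):
        stored_id = session.get('_auth_user_id')  # Django's auth SESSION_KEY
        if stored_id is None:
            return False
        # Cast defensively so both str and int session values compare equal.
        return int(stored_id) == user_profile.id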
(imported from commit 99defd7fea96553550fa19e0b2f3e91a1baac123)
We can add it back later but for now we can just stick with localhost
since that's what most people will want.
(imported from commit c5fe524282219dc62a0670f569c0cb6af04be339)
Fixes:

  File "/srv/zulip/zerver/lib/actions.py", line 605, in recipient_for_emails
    if not (normalized_emails & admin_realm_admin_emails or normalized_emails & settings.CROSS_REALM_BOT_EMAILS):
TypeError: unsupported operand type(s) for &: 'set' and 'list'
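A sketch of the kind of fix involved (the function name and return value
are illustrative): `&` is only defined between two sets, so the
list-valued setting has to be coerced before intersecting.

    from django.conf import settings

    def is_cross_realm_ok(normalized_emails, admin_realm_admin_emails):
        # Coerce the list-valued setting to a set; intersecting a set with a
        # list is what raised the TypeError above.
        cross_realm_bot_emails = set(settings.CROSS_REALM_BOT_EMAILS)
        return bool(normalized_emails & admin_realm_admin_emails
                    or normalized_emails & cross_realm_bot_emails)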
(imported from commit f39a95dad7b3207e9188fc03926cd116061ef3f3)
Include a new field on Realm to control whether e-mail invitations are
required separately from whether the e-mail domain must match.
Allow control of these fields from the admin panel.
Update the logic on the registration page to use these fields.
(imported from commit edc7f0a4c43b57361d9349e258ad4f217b426f88)
Also increase the number of messages sent as context from 5 to 10 and
look up to 15 minutes into the past for context.
(imported from commit bfaed9bcff1ee2047fc3b7a63acf93cd2d47cc7d)
Now we have 2 different Zulip apps out there, and they are signed with
two certs: Zulip and Dropbox. The Dropbox-signed apps are going to need
to be sent APNS notifications from the appropriate APNS connection.
(imported from commit 6db50c5811847db4f08e5c997c7bbb4b46cfc462)
Missed message emails were including the context messages in the number
of messages you were mentioned in.
(imported from commit 1749c5d272d2e17d6e28456ace932f80715103a3)
* Fixes a few bugs with missed message address for PMs and huddles.
* Uses missed message address for all missed message reply-to headers on
the zulip.com realm.
(imported from commit 61dd09386e1bbdf9a5096e2400984d31e73a5b74)
The one-time-use addresses are unique tokens which map to state stored
in redis. We store the user_id, recipient_id, and subject. When an email
is received at such an address it is sent to the stored recipient by the
stored user. Anyone with the address can send a single message as that
user.
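A minimal sketch of the token-to-state mapping using the redis-py client
(key names, the expiry, and the example address domain are illustrative,
not the actual implementation):

    import secrets
    import redis

    r = redis.Redis()  # connection details are illustrative

    def create_one_time_use_address(user_id, recipient_id, subject):
        token = secrets.token_hex(16)
        key = 'missed_message:%s' % (token,)
        r.hset(key, mapping={'user_id': user_id,
                             'recipient_id': recipient_id,
                             'subject': subject})
        r.expire(key, 60 * 60 * 24 * 5)  # assumed: tokens need not live forever
        return 'mm%s@example.com' % (token,)

    def redeem_one_time_use_address(token):
        key = 'missed_message:%s' % (token,)
        data = r.hgetall(key)
        r.delete(key)  # single use: the token stops working after one message
        return data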
(imported from commit 4219417bdc30c033a6cf7a0c7c0939f7d0308144)
Send a different missed message email for each recipient. This allows us
to set a different reply-to address for each one. PMs and huddles use
the existing logic; replies will be sent to all parties via email.
Missed @-mention emails will have the reply-to address set to the
stream's email address.
(imported from commit bfb7cf7c1382adbf3720caa74cbb927c10dea267)
One common place that this happens (for us) is on a local
Dropbox .dev.corp.dropbox.com instance, which can't be reached
by the Zulip servers.
This commit also:
* Fixes the test suite
* Properly previews /photos/ links
(imported from commit b4788b6236e7a9d390e1efc4673be34d9ba5e091)
Truthfully, the actual way to do this is going to be a bit
more involved and also involves changing Realm.NOTIFICATION_STREAM_NAME,
probably on a realm-by-realm basis.
(imported from commit b6a05849d215e07ee6716d116ff5e2c819d5b4be)
When you are @-mentioned in a stream, we will now send you up to the
last five messages which were sent in the past 5 minutes on the same
topic and stream.
(imported from commit 6df6c1cf868722a7bf76e54710e38741a7ac8f31)
URLs with a realm of "unk" will be queried against the new bucket to
determine the relevant realm of the uploading user.
(imported from commit 5d39801951face3cc33c46a61246ba434862a808)
This way if two browsers are disagreeing about your active status, the
active one wins. The active browser continues to update your timestamp,
and the idle browser's changes are discarded until the timestamp on your
active status expires.
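Illustratively (the status names, second-granularity timestamps, and the
expiry window are assumptions, not the actual presence code), the merge
rule looks something like:

    import time

    ACTIVE_EXPIRY_SECS = 140  # assumed window before an "active" report goes stale

    def merge_presence(current_status, current_ts, new_status, new_ts):
        if new_status == 'active':
            # The active browser always wins and keeps refreshing the timestamp.
            return new_status, max(current_ts, new_ts)
        if current_status == 'active' and time.time() - current_ts < ACTIVE_EXPIRY_SECS:
            # Discard the idle browser's report until the active timestamp expires.
            return current_status, current_ts
        return new_status, new_ts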
(imported from commit dc29e013d045c4b72793097f611ba6802c58e57a)
We still don't show this in the frontend, aside from our usual "Not
delivered" message that we also show when you send to a non-existent
user.
Addresses #2349
(imported from commit 2f348b15a4d539987ddbcccbbf40e2be87c1f92d)
In a test run with a hand-constructed query, this sped up the query time from
280ms to 50ms.
(imported from commit 8cbe199ca50a487491d13d6d6ef940ea668c1038)
See #2357. We now support `~~~ .py ` with that trailing space.
Note that the test coverage is Python-side only due to
bugdown_matches_marked being set to false, since we don't yet
support language syntax on the client side.
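An illustrative regex (not bugdown's actual pattern) showing the relaxed
fence header parsing, including the trailing space:

    import re

    FENCE_RE = re.compile(r'''
        ^~{3,}                      # opening fence
        [ ]*                        # optional spaces
        \.?                         # optional leading dot, as in "~~~ .py"
        (?P<lang>[a-zA-Z0-9_+-]*)   # language name
        [ ]*$                       # trailing spaces are now allowed
        ''', re.VERBOSE)

    match = FENCE_RE.match('~~~ .py ')
    assert match is not None and match.group('lang') == 'py'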
(imported from commit ccd5fcb0eee01478d349161400103480678d7486)
Adds APIs to edit a bot's default_to_stream, default_events_register_stream
and default_all_public_streams.
(imported from commit c848a94b7932311143dad770c901d6688c936b6d)
Support setting default_to_stream, default_events_register_stream, and
default_all_public_streams in the bot creation API.
(imported from commit bef484dd8be9f8aacd65a959594075aea8bdf271)
This allows bot owners to configure which streams messages are delivered
to without needing to change webhook URLs or configuration files.
(imported from commit 32a0c26657c145b001cd8cb3ce0a0364d48902ce)
Before saving a Message object, call update_calculated_fields()
to set the has_attachment/has_image/has_link fields.
Note that the pre_save hook we added here does not get called
if you call bulk_create, hence the explicit call to
update_calculated_fields() in do_send_messages().
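Sketch of the wiring described above (the receiver function itself is
illustrative; Message and update_calculated_fields are the names from
this commit):

    from django.db.models.signals import pre_save
    from django.dispatch import receiver
    from zerver.models import Message

    @receiver(pre_save, sender=Message)
    def pre_save_message(sender, instance, **kwargs):
        instance.update_calculated_fields()

    # bulk_create() bypasses pre_save signals, which is why do_send_messages()
    # still calls message.update_calculated_fields() on each message before
    # Message.objects.bulk_create(messages).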
(imported from commit 1d60ae5908ef186aa5ff1e39277dbb2b765e60d4)
A stream is vacant when it has no subscribers and occupied when it has at least
one subscriber.
We have a slightly odd model where stream creation is conflated with
subscription creation. Streams are created by attempting to subscribe to a
stream that doesn't exist. We also hide streams with no subscribers from users
to make it seem like they've gone away. However, we can't actually remove those
streams because we want to preserve history.
This commit moves us towards a separation of these two concepts. By sending
events for stream creation, occupation, vacancy, and deletion, we allow clients
to directly observe the global state of streams rather than indirectly observing
subscription information. A more complete solution would involve adding a view
for explicitly creating streams without subscribing to them.
This commit does not handle the intricacies of invite-only streams. We
currently simply do not send these events for invite-only streams.
(imported from commit 5430e5a5eecefafcdba4f5d4f9aa665556fcc559)
Previously, the email mirror queue worker used the API bindings to send
messages to Zulip, as if it were any other API client.
This is inefficient since we're running the worker inside the Django
context on a machine with database access; we can instead just use the
internal message-sending functions we use elsewhere. This also resolves
potential issues with SSL certificates, etc. that might occur when we
were previously making an HTTPS connection.
(imported from commit 6de8015829bec440f1af0199a2138828e86ed2a4)
Previously, digest emails provided links to Zulip that didn't correctly
encode "/" if it occurred in a stream name or topic. By explicitly
specifying `safe=""`, we can request that urllib.quote escape such
slashes.
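For example:

    from urllib import quote  # Python 2, as in this codebase; urllib.parse.quote in Python 3

    quote("social/lunch topic")           # -> 'social/lunch%20topic' ("/" left alone)
    quote("social/lunch topic", safe="")  # -> 'social%2Flunch%20topic'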
Closes trac #2294.
(imported from commit 2e6334672969d4cf4032d2ea5dc80091af96d672)
We now allow the list of recipients to be sent as a
comma-delimited string with optional JSON encoding.
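A minimal sketch of the parsing involved (the function name is
illustrative, not the exact server code): try JSON first, fall back to
splitting on commas.

    import json

    def parse_recipients(raw):
        try:
            data = json.loads(raw)
        except ValueError:
            data = raw
        if isinstance(data, str):
            data = data.split(',')
        return [email.strip() for email in data if email.strip()]

    parse_recipients('alice@example.com, bob@example.com')
    parse_recipients('["alice@example.com", "bob@example.com"]')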
(imported from commit e928b037bbd258348eb5b2ecca486d0bb77f593e)
We will now match an alert word even if it is used at the boundary of
bolding, backtick escaping, or caret quoting.
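An illustrative regex sketch (not the actual bugdown pattern) of the
looser boundary, treating markdown punctuation such as '*' and '`' as
valid word boundaries:

    import re

    BOUNDARY_CHARS = r"""['"\*`_>^]"""

    def alert_word_re(word):
        start = r'(?:^|\s|' + BOUNDARY_CHARS + r')'
        end = r'(?=$|\s|[.,;!?]|' + BOUNDARY_CHARS + r')'
        return re.compile(start + '(' + re.escape(word) + ')' + end)

    assert alert_word_re('prod').search('please check **prod** now')
    assert alert_word_re('prod').search('see `prod` for logs')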
Closes trac #2186.
(imported from commit 984bc63eb621772c95a01ca5c5bfeb190767f71f)
Before we deploy this commit, we must migrate the data from the staging redis
server to the new, dedicated redis server. The steps for doing so are the
following:
* Remove the zulip::redis puppet class from staging's zulip.conf
* ssh once from staging to redis-staging.zulip.net so that the host key is known
* Create a tunnel from redis0.zulip.net to staging.zulip.net
* zulip@redis0:~$ ssh -N -L 127.0.0.1:6380:127.0.0.1:6379 -o ServerAliveInterval=30 -o ServerAliveCountMax=3 staging.zulip.net
* Set the redis instance on redis0.zulip.net to replicate the one on staging.zulip.net
* redis 127.0.0.1:6379> slaveof 127.0.0.1 6380
* Stop the app on staging
* Stop redis-server on staging
* Promote the redis server on redis0.zulip.net to a master
* redis 127.0.0.1:6379> slaveof no one
* Do a puppet apply at this commit on staging (this will bring up the tunnel to redis0)
* Deploy this commit to staging (start the app on staging)
* Kill the tunnel from redis0.zulip.net to staging.zulip.net
* Uninstall redis-server on staging
The steps for migrating prod will be the same modulo s/staging/prod0/.
(imported from commit 546d258883ac299d65e896710edd0974b6bd60f8)
Apparently the "inline" treeprocessor is what runs the inline
patterns. Also re-enable the rewriting-to-https support.
(imported from commit 2fde2c1f15217a784f26b16db25ee745f424f2f0)
Have the server send down the stream's id for removal
events, and have the client use that id to look up the
stream in its internal data structures. This sets the
stage for eventually just sending the stream id (and not
the stream name) down to clients, once all our clients
are ready to use the stream id.
(imported from commit 922516c98fb79ffad8ae7da0396646663ca54fd0)
Here, we don't want to check the uploading user's realm when determining
message privacy, because that'll prevent non-Zulip users from having
email-mirror-uploaded images. Instead, we just pass along the target
realm for the message explicitly to upload_message_image().
(imported from commit 6891261552135b1f41ff9da55ffe963ee5000556)
Our overall guideline is that type names for events are singular, and the list of
events of that type is plural. 'subscriptions' was not following this guideline
and (potentially as a result) had a bug where it was impossible for clients to
explicitly subscribe to subscription change events properly.
(imported from commit 7b3162141fd673746e0489199966c29ea32ee876)
For EventsRegisterTest tests that exercise updates to streams and
subscriptions, we now validate that the events generated by
the actions under test conform to predicted schemas.
We define the schemas with help from the validators code
that is also sometimes used to validate incoming request
parameters for our views.
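A toy example of this style of schema check (the helpers below are
simplified stand-ins for the validators code, not its exact API):

    def check_string(var_name, val):
        if not isinstance(val, str):
            return '%s is not a string' % (var_name,)
        return None

    def check_dict(required_keys):
        def validator(var_name, val):
            if not isinstance(val, dict):
                return '%s is not a dict' % (var_name,)
            for key, sub_validator in required_keys:
                if key not in val:
                    return 'key %r is missing from %s' % (key, var_name)
                error = sub_validator('%s[%r]' % (var_name, key), val[key])
                if error:
                    return error
            return None
        return validator

    stream_update_schema = check_dict([
        ('type', check_string),
        ('op', check_string),
        ('name', check_string),
    ])
    assert stream_update_schema('event',
        {'type': 'stream', 'op': 'update', 'name': 'denmark'}) is None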
(imported from commit b4222b920a588e15cccee4a2349c074ca9697448)
This change also makes it so that the test_rename_stream()
test exercises the code path. We need to subscribe the user
to the stream in order to generate events.
(imported from commit 77f965efbf5a766eb8de23486e303fa135b2e638)
We now show the module name (e.g. "tests" or "test_hooks") in the
test output. This change also eliminates the intermediate use
of slashes in the test_name var, which was passed to
bounce_key_prefix_for_testing().
(imported from commit 58e73301037a0b07d7e437514c247f7cb559420e)
Instead of having home() set page_params.realm_name directly from
the user_profile object, have fetch_initial_state_data() set it.
This is more consistent with how we treat other data, and it protects
us against a race condition where realm name updates arrive during
the DB fetching.
(imported from commit 545e3bd73f150438126e3f941e9bebc7aa1d0614)
In particular, make the stream history inaccessible and free up the
name to be re-used.
(imported from commit 6063b7a484ed0ba0279a17d2b3e9a92b5ef1f762)
The file test_runner.py has our subclass of DjangoTestSuiteRunner
and various methods that help it work.
(imported from commit 8eca39a7ed3f8312c986224a810d4951559e7a8b)
The function update_user_profile_caches now operates on a list
of user_profiles, so callers like flush_realm() can benefit from
having a single cache_set_many() call. This slightly complicates
the call from flush_user_profile().
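A rough sketch of the batching (the key scheme and Django's
cache.set_many are stand-ins for the real cache helpers): build every
key/value pair first, then write them in one round trip instead of one
cache set per user.

    from django.core.cache import cache

    def user_profile_by_id_cache_key(user_id):      # illustrative key scheme
        return 'user_profile_by_id:%s' % (user_id,)

    def user_profile_by_email_cache_key(email):     # illustrative key scheme
        return 'user_profile_by_email:%s' % (email,)

    def update_user_profile_caches(user_profiles):
        items = {}
        for user_profile in user_profiles:
            items[user_profile_by_id_cache_key(user_profile.id)] = user_profile
            items[user_profile_by_email_cache_key(user_profile.email)] = user_profile
        cache.set_many(items)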
(imported from commit e064871d849b873c6ca388f00d4f7afaba1bf222)
For the realm-wide caches of active user dicts and alert words, just
make a single call to cache_delete() when you are deactivating a
realm. Before this change, we were doing O(N) cache_deletes as
part of the code path through flush_user_profile(). Now we just
call update_user_profile_caches() directly to clear the user_profile
caches.
This change also sets us up to turn flush_realm() into a post-save hook.
(imported from commit 699b4ea226ae15fc8c402cb4bc64ff6bdc041fc2)
This is a slight behavior change, as we now flush user_profile
caches for bots as well as humans.
(imported from commit 24c72c44d851ee4c66a67a4728cd6c548faeedcd)
This function updates all the user_profile-related caches
that are keyed on a per-user basis.
(This had some test coverage already.)
(imported from commit 37979400514a7b46a6dcb7e36665b0fee2f3c525)
Stream name and description updates were being sent to all of the
active users on a realm. They are now only sent to users who would have
information about that stream.
(imported from commit 2621ee8029f7356bf44ec493d7b5361bd546a8f5)
This is a lot simpler and eliminates a possible failure mode in the
data transfer path.
(imported from commit 19308d2715bbd12dc9385234f1d9156f91bdfae0)
To mutate the state for removing subscribers, the previous
code was essentially adding in event['subscriptions'] to
state['unsubscribed'], but that was a naive approach, since
the event object only has the name of the subscriber, and not
the full subscription info.
We instead effectively copy records over from state['subscriptions']
to state['unsubscribed'], and we also do surgery on the subscribers,
which made it necessary to add the user_profile parameter to apply_events().
With the code apparently working now, I was able to remove the
match_except() test helper and use a more thorough matcher in
the test on do_remove_subscription().
Part of fixing the "remove" case was cleaning up the "add" case,
since they aren't quite symmetric opposites of each other, although
under this refactoring they now share the new name() helper.
(imported from commit 0deab67d0c7b08b3ad962493efae3762a835fd29)
Because full_name and is_admin changes go through many similar,
generic codepaths, it is almost more work at this point to keep
is_admin out of page_params as it is to just put it in. So
I put it in. This should pave the path for showing admins in
the GUI.
This commit actually started with my adding a test
that calling apply_events() with the notification you get
when calling do_change_is_admin() updates
state['realm_users'] to be similar to what you would get
out of fetch_initial_state_data(). We didn't have test coverage
there before. Making that test pass forced my hand to
either add is_admin to page_params or to special-case
apply_events() not to update page_params with is_admin. I
chose the former approach.
(imported from commit 1e49d59c66540014284529c29d5007224be6a0c6)
A description was added to streams and is now displayed on the
subscriptions page. It cannot be set in the UI yet.
(imported from commit 81d08b65eee42dba87cd99dd5bd30106c4eb6c6a)
While we're at it, let's comment up the function so I know what this is
doing next time :)
(imported from commit e745be75fcd6dbce9997e1d73464619fc8b73996)