This includes making the default stream description setting into a
dict. That is an API change; we'll discuss it in the changelog but it
seems small enough to be OK.
With some small tweaks by tabbott to remove unnecessary backwards
compatibility code for the settings.
Fixes #2427.
This change adds support for displaying inline Open Graph previews for
links posted into Zulip.
It is designed to interact correctly with message editing.
This adds the new settings.INLINE_URL_EMBED_PREVIEW setting to control
whether this feature is enabled.
By default, this setting is currently disabled, so that we can burn it
in for a bit before it impacts users more broadly.
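For reference, enabling it amounts to flipping the setting in the server's
Django settings module (a minimal sketch; where exactly the setting lives
depends on the deployment):

    # Minimal sketch: the setting name comes from this change; the file it
    # lives in depends on how the server is configured.
    INLINE_URL_EMBED_PREVIEW = True  # the default shipped with this change is False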
Eventually, we may want to make this manageable via a (set of?)
per-realm settings. E.g. I can imagine a realm wanting to be able to
enable/disable it for certain URLs.
This commit adds support for removing reactions via DELETE requests to
the /reactions endpoint with parameters emoji_name and message_id.
The reaction is deleted from the database and a reaction event is sent
out with 'op' set to 'remove'.
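A rough usage sketch with python-requests, assuming the endpoint is mounted
under the usual API prefix and authenticated with the standard email/API-key
basic auth (both assumptions, for illustration only):

    import requests

    # Illustrative only: base URL, credentials, and URL prefix are placeholders.
    resp = requests.delete(
        "https://zulip.example.com/api/v1/reactions",
        params={"message_id": 3, "emoji_name": "doge"},
        auth=("bot@example.com", "BOT_API_KEY"),
    )
    resp.raise_for_status()
    # On success the reaction row is deleted and a reaction event with
    # op='remove' is sent to the relevant clients.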
Tests are added to check:
1. Removing a reaction that does not exist fails
2. When removing a reaction, the event payload and users are correct
This commit adds the following:
1. A reaction model that consists of a user, a message and an emoji that
are unique together (a user cannot react to a particular message more
than once with the same emoji); a sketch of such a model appears below
2. A reaction event that looks like:
{
    'type': 'reaction',
    'op': 'add',
    'message_id': 3,
    'emoji_name': 'doge',
    'user': {
        'user_id': 1,
        'email': 'hamlet@zulip.com',
        'full_name': 'King Hamlet'
    }
}
3. A new API endpoint, /reactions, that accepts POST requests to add a
reaction to a message
4. A migration to add the new model to the database
5. Tests that check that
(a) Invalid requests cannot be made
(b) The reaction event body contains all the info
(c) The reaction event is sent to the appropriate users
(d) Reacting more than once fails
It is still missing important features like removing emoji and
fetching them alongside messages.
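A minimal sketch of what such a model might look like (the field names and
Meta details are assumptions based on the description above, not necessarily
the final implementation):

    from django.db import models

    class Reaction(models.Model):
        # One row per (user, message, emoji) triple; the unique_together
        # constraint is what rejects a second identical reaction.
        user_profile = models.ForeignKey("UserProfile")
        message = models.ForeignKey("Message")
        emoji_name = models.CharField(max_length=30)

        class Meta(object):
            unique_together = ("user_profile", "message", "emoji_name")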
Refactor list_to_streams and create_streams_if_needed to take a list
of dictionaries, instead of a list of stream names. This is
preparation for being able to pass additional arguments into the
stream creation process.
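Concretely, callers now pass dicts rather than bare names, roughly like this
(schematic; any keys beyond 'name' are whatever the later stream-creation
changes add):

    # Before this refactor: create_streams_if_needed(realm, ["devel", "social"])
    # After: each stream is a dict, so additional attributes can ride along later.
    streams_raw = [
        {"name": "devel"},
        {"name": "social"},
    ]
    create_streams_if_needed(realm, streams_raw)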
An important note: This removes a set of validation code from the
start of add_subscriptions_backend; doing so is correct because
list_to_streams has that same validation code already.
[with some tweaks by tabbott for clarity]
Previously, if a new message arrived between when a user was subscribed
to the default streams and when the user's initial messages were
queried, we would try to create two UserMessage rows for the same
Message, resulting in an IntegrityError crash. We fix this and add a
test for that race condition.
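One way to make the backfill robust against that race (illustrative only,
not necessarily the exact fix that landed): skip any message that already
has a UserMessage row for the user.

    # Illustrative sketch of guarding the initial-history backfill against the
    # race; model and field names follow the description above but may differ.
    already_seen = set(
        UserMessage.objects.filter(
            user_profile=user_profile, message_id__in=message_ids
        ).values_list("message_id", flat=True)
    )
    UserMessage.objects.bulk_create(
        [UserMessage(user_profile=user_profile, message_id=message_id)
         for message_id in message_ids
         if message_id not in already_seen]
    )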
send_event() expects a list of user ids (ints) except for the special case
of messages. This commit:
1. Fixes this in the call to send_event() in do_send_typing_notification()
2. Renames the variables in do_send_typing_notification() to better reflect
their content (for example, recipient_ids instead of recipients).
3. Renames the id field in the dicts sent in the typing event body (sender,
recipients) to user_id (see the event sketch after this list).
4. Adds assertions to the tests to verify that the tornado event user ids
are the same as the recipients in the event body.
5. Adds assertions to the tests to verify that the tornado event user
ids and the recipient user ids (in the event body) are the same as the
expected user ids (obtained from the emails using
get_user_profile_by_email)
6. Changes all assertTrues to assertEquals in the tests
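For reference, the typing event body now looks roughly like this (ids and
emails are illustrative, and any other fields on the event are omitted):

    # Sketch of the typing event payload after the rename; values are made up.
    event = {
        'type': 'typing',
        'op': 'start',
        'sender': {'user_id': 10, 'email': 'hamlet@zulip.com'},
        'recipients': [
            {'user_id': 11, 'email': 'othello@zulip.com'},
        ],
    }
    # send_event(event, user_ids) now receives plain ints, e.g. [10, 11],
    # matching the user_id values in the body.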
This fixes #2151.
If a stream is public, we now send notifications to all realm users
if the name or description of the stream changes. For private
streams, the behavior remains the same.
We do this by introducing a method called
can_access_stream_user_ids().
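The idea behind that helper, as a simplified sketch (the nested helper
functions named here are hypothetical, and the real version also has to
worry about details like deactivated users):

    def can_access_stream_user_ids(stream):
        # Simplified sketch: a public stream is accessible to every active user
        # in the realm, while a private (invite-only) stream is limited to its
        # subscribers. The two helpers below are illustrative stand-ins.
        if not stream.invite_only:
            return set(active_user_ids_in_realm(stream.realm))
        return set(subscriber_ids(stream))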
(showell helped with this fix)
Fixes #2195.
Previously, we set restrict_to_domain and invite_required differently
depending on whether we were setting up a community or a corporate
realm. Setting restrict_to_domain requires validation on the domain of the
user's email, which is messy in the web realm creation flow, since we
validate the user's email before knowing whether the user intends to set up
a corporate or community realm. The simplest solution is to have the realm
creation flow impose as few restrictions as possible (community defaults),
and then worry about restrict_to_domain etc. after the user is already in.
We set the test suite to explicitly use the old defaults, since several of
the tests depend on them.
This commit adds a database migration.
We now send dictionaries for cross-realm bots. This led to the
following changes:
* Create get_cross_realm_dicts() in actions.py (sketched below).
* Rename the page_params field to cross_realm_bots.
* Fix some back end tests.
* Add cross_realm_dict to people.js.
* Call people.add for cross-realm bots (if they are not already part of the realm).
* Remove hack to add in feedback@zulip.com on the client side.
* Add people.is_cross_realm_email() and use it in compose.js.
* Remove util.string_in_list_case_insensitive().
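A sketch of the shape of get_cross_realm_dicts() (the lookup via a configured
list of cross-realm bot emails is an assumption; only the idea of returning
small per-bot dicts comes from the change described above):

    def get_cross_realm_dicts():
        # Illustrative sketch: return one small dict per cross-realm bot, in a
        # shape the client-side people.js code can consume. CROSS_REALM_BOT_EMAILS
        # is an assumed configuration list, and the exact fields may differ.
        bots = UserProfile.objects.filter(email__in=settings.CROSS_REALM_BOT_EMAILS)
        return [{'email': bot.email,
                 'user_id': bot.id,
                 'full_name': bot.full_name}
                for bot in bots]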
In preparation for a change to do_create_realm where we will use the
database default for restricted_to_domain rather than computing it within
do_create_realm, after which do_create_realm will no longer know
whether we are creating an open realm or not.
Does a database migration to rename Realm.subdomain to
Realm.string_id, and makes Realm.subdomain a property. Eventually,
Realm.string_id will replace Realm.domain as the handle by which we
retrieve Realm objects.
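After the rename, the compatibility shim looks roughly like this (a sketch;
the field definition details are assumptions):

    class Realm(models.Model):
        # ... other fields ...
        string_id = models.CharField(max_length=40, unique=True)

        @property
        def subdomain(self):
            # Existing callers that read realm.subdomain keep working; the
            # data now lives in string_id.
            return self.string_id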
We now simply exclude all cross-realm bots from the set of emails
under consideration, and then if the remaining emails are all in
the same realm, we're good.
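Schematically, the check now amounts to something like this (a simplified
sketch; the helper names are illustrative, not the real ones):

    def validate_recipient_realms(emails, sender_realm):
        # Sketch only: drop the cross-realm bots, then require that every
        # remaining recipient lives in the sender's realm. The two helpers
        # below are hypothetical stand-ins for the real lookups.
        normal_emails = [email for email in emails
                         if not is_cross_realm_bot_email(email)]
        recipient_realms = {get_realm_for_email(email) for email in normal_emails}
        if recipient_realms - {sender_realm}:
            raise ValueError("all recipients must be in the sender's realm")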
This fix changes two behaviors:
* You can no longer send a PM to an ordinary user in another realm
by piggybacking a cross-realm bot onto the message. (This was
basically a bug, but it would never manifest under current
configurations.)
* You will be able to send PMs to multiple cross-realm bots at once.
(This was an arbitrary restriction. We don't really care about this
scenario much yet, and it fell out of the new implementation.)