Since this is currently only useful to interpret presence data, we
send this only if presence is requested.
I'm not sure that server_timestamp is the right name for this field,
but ultimately it should match the main presence API format.
The restart event could not be handled properly because of
its special behavior. To handle this event in the most natural
way, we recursively call `do_events_register` when a restart
event is received, based on a custom error created for this
event.
Testing: The second call to get_user_events, made due to the
recursive call to `do_events_register`, is expected not to
contain the restart event. The new tests added in
test_event_system.py are based on this behavior of
get_user_events.
Fixes: #15541.
django.utils.translation.ugettext is a deprecated alias of
django.utils.translation.gettext as of Django 3.0, and will be removed
in Django 4.0.
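The mechanical replacement looks like this:

    # Before (deprecated alias, removed in Django 4.0):
    from django.utils.translation import ugettext as _

    # After:
    from django.utils.translation import gettext as _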
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Organization admins can use this setting to restrict the maximum
rating of GIFs that will be retrieved from GIPHY. There is also
an option to disable GIPHY entirely.
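As a rough sketch of the shape such a setting could take (the
names and values here are hypothetical, not the actual
implementation):

    # Hypothetical illustration: a rating scale with a
    # "disabled" sentinel; actual names/values may differ.
    GIPHY_RATING_OPTIONS = {
        "disabled": 0,  # turn the GIPHY integration off entirely
        "g": 1,
        "pg": 2,
        "pg-13": 3,
        "r": 4,
    }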
* This introduces a new event type `realm_linkifiers` and
a new key for the initial data fetch of the same name.
Newer clients will be expected to use these.
* Backwards compatibility is ensured by changing neither
the current event nor the /register key. The data which
these hold is the same as before, but internally, it is
generated by processing the `realm_linkifiers` data.
We send both the old and the new event types to clients
whenever the linkifiers are changed.
Older clients will simply ignore the new event type, and
vice versa.
* The `realm/filters:GET` endpoint (which returns tuples)
is currently used by none of the official Zulip clients.
This commit replaces it with `realm/linkifiers:GET`, which
returns data in the new dictionary format (sketched below).
TODO: Update the `get_realm_filters` method in the API
bindings, to hit this new URL instead of the old one.
* This also updates the webapp frontend to use the newer
events and keys.
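For illustration, the payload shape changes roughly like this
(treat the exact keys in this sketch as an assumption):

    # Old `realm/filters` style (tuples):
    [("#(?P<id>[0-9]+)", "https://github.com/zulip/zulip/issues/%(id)s", 1)]

    # New `realm_linkifiers` style (dictionaries):
    [
        {
            "pattern": "#(?P<id>[0-9]+)",
            "url_format": "https://github.com/zulip/zulip/issues/%(id)s",
            "id": 1,
        },
    ]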
Commit 4a3ad0d introduced some extra stream-level parameters
to the `realm` object. This commit extends that to add a
max_message_length parameter as well, at the same server level.
Previously, you had to request the `stream` event type in order to get
the stream-level parameters; this was a bad design in part because the
`subscription` event type has similar data and is preferred by most
clients.
So we move these to the `realm` object. We also add the maximum topic
length, as an adjacent parameter.
While changing this, we also rename these parameters to better
match the names of similar API parameters.
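For example, a client can now read these limits directly from
the registration response (the field names and placement in this
sketch are assumptions):

    max_topic_length = register_response["max_topic_length"]
    max_stream_name_length = register_response["max_stream_name_length"]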
This commit adds backend code for passing can_invite_others_to_realm
field to clients using the fetch_initial_state_data in the page_params
object.
Though this field is not used by the webapp as of now, it will
be used in the next commit to fix a bug where the invite users
option was incorrectly shown in the settings overlay.
We send the whole data set as part of the event, rather than
doing an add/remove operation, for a couple of reasons (an
example event is sketched after this list):
* This would make the client logic simpler.
* The playground data is small enough for us to not worry
about performance.
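The event is therefore roughly of this shape (apart from the
`id` field discussed below, treat the field names in this sketch
as assumptions):

    event = {
        "type": "realm_playgrounds",
        "realm_playgrounds": [
            {
                "id": 1,
                "name": "Python playground",
                "pygments_language": "Python",
                "url_prefix": "https://python.example.com/?code=",
            },
        ],
    }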
Tweaked both `fetch_initial_state_data` and `apply_events` to
handle the new playground event.
Tests added to validate the event matches the expected schema.
Documented realm_playgrounds sections inside /events and
/register to support our openapi validation system in test_events.
Tweaked other tests like test_event_system.py and test_home.py
to account for the new event being generated.
Lastly, documented the changes to the API endpoints in
api/changelog.md and bumped API_FEATURE_LEVEL.
Tweaked by tabbott to add an `id` field in RealmPlayground
objects sent to clients, which is essential for sending the API
request to remove one.
Realm administrators already get creation and deletion events for all
streams, including private streams. So these should be reflected in
the initial state data.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Adds backend code for the mute users feature.
This is just infrastructure work (database
interactions, helpers, tests, events, API docs,
etc.) and does not involve any behavioral/semantic
aspects of muted users.
Adds POST and DELETE endpoints, keeping the
URL scheme mostly consistent around `users/me`.
TODOs:
1. Add tests for exporting `zulip_muteduser` database table.
2. Add dedicated methods to python-zulip-api to be used
in place of the current `client.call_endpoint` implementation
(see the sketch below).
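For reference, the current usage is along these lines (the URL
fragment in this sketch is an assumption):

    # Mute a user; python-zulip-api has no dedicated method yet.
    result = client.call_endpoint(
        url=f"users/me/muted_users/{muted_user_id}",
        method="POST",
    )

    # Unmute the user again.
    result = client.call_endpoint(
        url=f"users/me/muted_users/{muted_user_id}",
        method="DELETE",
    )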
We use the GIPHY web SDK to create a popover containing GIFs in
a grid format. Simply clicking on a GIF inserts it into the
compose box.
We add the GIPHY logo to the compose box action icons; clicking
it opens the GIPHY picker popover containing GIFs, with
"Powered by GIPHY" attribution.
I noticed this because the test_events.py tests had the extremely
weird pattern of calling the actual change function, and then testing
the `notify` function's state changes (which should always be noops),
rather than actually testing the state change function.
Fixing the test made it clear that the actual logic in events.py
simply did not handle deleting custom_profile_field_value elements
from user objects when a custom_profile_field object was deleted.
So we fix that bit of logic as well.
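The corrected logic is along these lines (a sketch, not the
literal diff; the state/field names here are assumptions):

    if event["type"] == "custom_profile_fields":
        state["custom_profile_fields"] = event["fields"]
        valid_ids = {field["id"] for field in state["custom_profile_fields"]}
        for user in state["raw_users"].values():
            # Drop values belonging to fields that no longer exist.
            user["profile_data"] = {
                field_id: value
                for field_id, value in user["profile_data"].items()
                if int(field_id) in valid_ids
            }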
It appears this bug was unique -- at least we don't have any other
notify_* functions being used directly in test_events.py, and the
handful of state_change_expected=False entries are all events for data
not present in page_params.
This adds the schema checker, openapi schema, and also a test
for the realm/deactivated event.
With several block comments by tabbott explaining the logic behind our
behavior here.
Part of #17568.
We discovered recently that some ops for events were just not
implemented in events.py (specifically, realm/deactivated).
Since our goal is for events.py to be complete, we add this bit of
hardening to ensure that it stays that way.
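The hardening amounts to failing loudly on unhandled ops instead
of silently ignoring them; in sketch form (names here are
illustrative, not the literal code):

    def apply_realm_event(state: dict, event: dict) -> None:
        if event["op"] == "update":
            state["realm_" + event["property"]] = event["value"]
        elif event["op"] == "deactivated":
            # Now handled explicitly; previously it fell through silently.
            state["realm_deactivated"] = True  # illustrative field name
        else:
            raise AssertionError(f"Unexpected realm op: {event['op']}")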
When we were getting an apply_event call for
a subscription/add event, we were trying not to
mutate the event itself, but this clumsy code
was still mutating the actual event:
    # Avoid letting 'subscribers' entries end up in the list
    for i, sub in enumerate(event['subscriptions']):
        event['subscriptions'][i] = \
            copy.deepcopy(event['subscriptions'][i])
        del event['subscriptions'][i]['subscribers']
This is only a theoretical bug.
The only person who receives a subscription/add
event is the current user.
And it wouldn't have affected the current user,
since the apply_event was correctly updating the
state, and we wouldn't actually deliver the event
to the client (because the whole point of apply_event
is to prevent us from having to piggyback the
super-recent events on to our payload or put
them into the event queue and possibly race).
The new code just cleanly makes a copy of each
sub, if necessary, as we add them to state["subscriptions"].
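In sketch form (assuming an `include_subscribers` flag):

    import copy

    for sub in event["subscriptions"]:
        if not include_subscribers:
            # Copy before stripping, so the event itself is never mutated.
            sub = copy.deepcopy(sub)
            del sub["subscribers"]
        state["subscriptions"].append(sub)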
And I updated the event schemas to reflect that
subscribers is always present in the subscription/add
event.
Long term we should probably avoid sending subscribers
on this event when the clients don't set something
like include_subscribers. That's a fairly complicated
fix that involves passing in flags to ClientDescriptor.
Alternatively, we could just say that our policy is
that we never send subscribers there, but we instead
use peer_add events. See issue #17089 for more
details.
It's always cleaner to work in id space. It probably
would have required a perfect storm to have broken
the existing code, but using ids is obviously more
robust in theory, and just as simple.
We now require keywords, so that there is no
pitfall for mixing up boolean parameters.
Positional parameters are basically evil
when you have a bunch of bools.
I also make user_profile the first argument.
Finally, the code is more diff-friendly.
I eliminate the defaults, since the existing code
was already specifying values for most things.
I move all the booleans to the bottom for both
parameters and arguments.
I require explicit keywords for everything but
user_profile (which is now first).
And, finally, I format the code in a more
diff-friendly manner.
We now require explicit keywords for all arguments
to fetch_initial_state_data except user_profile.
We provide reasonable defaults to keep the test
code concise.
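The resulting calling convention is roughly (the parameter list
in this sketch is abridged and illustrative):

    from typing import Any, Dict, Iterable, Optional

    def fetch_initial_state_data(
        user_profile: "UserProfile",  # Zulip's user model
        *,  # everything after this is keyword-only
        event_types: Optional[Iterable[str]] = None,
        client_gravatar: bool = False,
        include_subscribers: bool = True,
    ) -> Dict[str, Any]:
        ...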
In 1bcb8d8ee8 I made
it so the webapp doesn't include "streams" in its
state from `fetch_initial_state_data`, but I didn't
address all the places in apply_event.
We now can send an implied matrix of user/stream tuples
for peer_add and peer_remove events.
The client code basically does this:
    for stream_id in event['stream_ids']:
        for user_id in event['user_ids']:
            update_sub(stream_id, user_id)
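A single event now carries the full matrix, e.g. (the `type`/`op`
values in this sketch are assumptions):

    event = {
        "type": "subscription",
        "op": "peer_add",
        "stream_ids": [101, 102],
        "user_ids": [7, 8, 9],
    }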
We used to send individual events, which gets really
expensive when you are creating new streams. For the
copy-to-stream case, we should see events go from U to 1,
where U is the number of users added.
Note that we don't yet fully optimize the potential
of this schema. For adding a new user with lots
of default streams, we still send S peer_add events.
And if you subscribe a bunch of users to a bunch of
private streams, we only go from U * S to S; we can't
optimize it down to one event easily.
We used to send occupy/vacate events when
either the first person entered a stream
or the last person exited.
It appears that our two main apps have never
looked at these events. Instead, it's
generally the case that clients handle
events related to stream creation/deactivation
and subscribe/unsubscribe.
Note that we removed the apply_events code
related to these events. This doesn't affect
the webapp, because the webapp doesn't care
about the "streams" field in do_events_register.
There is a theoretical situation where a
third party client could be the victim of
a race where the "streams" data includes
a stream where the last subscriber has left.
I suspect in most of those situations it
will be harmless, or possibly even helpful
to the extent that they'll learn about
streams that are in a "quasi" state where
they're activated but not occupied.
We could try to patch apply_event to
detect when subscriptions get added
or removed. Or we could just make the
"streams" piece of do_events_register
not care about occupy/vacate semantics.
I favor the latter, since it might
actually be what users want, and it will
also simplify the code and improve
performance.
The query to get "occupied" streams has been expensive
in the past. I'm not sure how much any recent attempts
to optimize that query have mitigated the issue, but
since we clearly aren't sending this data, there is no
reason to compute it.