Since this is currently only useful to interpret presence data, we
send this only if presence is requested.
I'm not sure that server_timestamp is the right name for this field,
but ultimately it should match the main presence API format.
This helps mobile and terminal clients understand whether a server
restart changed API feature levels or not, which in turn determines
whether they will need to resynchronize their data.
Also add tests and documentation for this previously undocumented
event type.
Fixes: #18205.
This extends the /json/typing endpoint to also accept
stream_id and topic. With this change, the requests
sent to /json/typing should include the following:
* `to`: a list set to
  - the recipients for a PM
  - the stream_id for a stream message
* `topic`, in case of a stream message
along with `op` (start or stop).
On receiving a request with stream_id and topic, we send
typing events to all users subscribed to that stream whose
clients have stream_typing_notifications set to True.
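As a rough illustration, a stream typing request might be built like this
(a hedged sketch; the parameter encoding is an assumption based on the
description above, and the example targets the equivalent /api/v1/typing
route with API-key auth):

    import json
    import requests

    def send_stream_typing(base_url, email, api_key, stream_id, topic, op):
        # `op` is "start" or "stop"; for a stream message, `to` carries the
        # stream_id, while for a PM it would carry the recipient user IDs.
        payload = {
            "op": op,
            "to": json.dumps([stream_id]),
            "topic": topic,
        }
        return requests.post(
            f"{base_url}/api/v1/typing",
            data=payload,
            auth=(email, api_key),
        )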
This commit adds an API endpoint in `zproject/urls.py` to edit/update
a realm linkifier. The helper function that updates the
database is added in `zerver/lib/actions.py`.
The API is documented in `zulip.yaml` as well, clearly
stating that it updates one linkifier at a time.
Tests are added for both the API endpoint and the helper
function that updates the realm linkifier.
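A hedged sketch of how a client might call the new endpoint via the generic
`client.call_endpoint` helper from python-zulip-api; the URL pattern and
parameter names are assumptions based on the description above, not
confirmed bindings:

    import zulip

    client = zulip.Client(config_file="~/.zuliprc")
    # Update the (hypothetical) linkifier with ID 42.
    result = client.call_endpoint(
        url="realm/filters/42",
        method="PATCH",
        request={
            "pattern": "#(?P<id>[0-9]+)",
            "url_format_string": "https://github.com/zulip/zulip/issues/%(id)s",
        },
    )
    print(result)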
Fixes #10830.
Organization admins can use this setting to restrict the maximum
rating of GIFs that will be retrieved from GIPHY. There is also
an option to disable GIPHY entirely.
* This introduces a new event type `realm_linkifiers` and
a new key for the initial data fetch of the same name.
Newer clients will be expected to use these.
* Backwards compatibility is ensured by changing neither
the current event nor the /register key. The data which
these hold is the same as before, but internally, it is
generated by processing the `realm_linkifiers` data.
We send both the old and the new event types to clients
whenever the linkifiers are changed.
Older clients will simply ignore the new event type, and
vice versa.
* The `realm/filters:GET` endpoint (which returns tuples)
is currently used by none of the official Zulip clients.
This commit replaces it with `realm/linkifiers:GET` which
returns data in the new dictionary format.
TODO: Update the `get_realm_filters` method in the API
bindings, to hit this new URL instead of the old one.
* This also updates the webapp frontend to use the newer
events and keys.
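For reference, a hedged sketch of the difference between the legacy tuple
format and the new dictionary format (the exact field names are assumptions
based on the description above):

    # Legacy `realm_filters` payload: a list of tuples.
    realm_filters = [
        ("#(?P<id>[0-9]+)", "https://github.com/zulip/zulip/issues/%(id)s", 1),
    ]

    # New `realm_linkifiers` payload: a list of dictionaries.
    realm_linkifiers = [
        {
            "pattern": "#(?P<id>[0-9]+)",
            "url_format": "https://github.com/zulip/zulip/issues/%(id)s",
            "id": 1,
        },
    ]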
When a user is muted, in the same request,
we mark any existing unreads from that user
as read.
This is done for all types of messages
(PM/huddle/stream) and regardless of whether
the user was mentioned in them.
This will not break the unread count logic
of the web frontend, because that algorithm
decides which messages to mark as read based
only on the pointer location and the whitespace
at the bottom, not on what messages have already
been marked as read.
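A hedged sketch of the kind of bulk update this implies (the query follows
Zulip's UserMessage flags conventions, but it is illustrative rather than
the actual implementation):

    from django.db.models import F

    from zerver.models import UserMessage

    def mark_messages_from_muted_user_as_read(user_profile, muted_user):
        # Flip the `read` flag on every message the muting user has from
        # the muted user, regardless of message type or mentions.
        UserMessage.objects.filter(
            user_profile=user_profile,
            message__sender=muted_user,
        ).update(flags=F("flags").bitor(UserMessage.flags.read))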
Previously, when unmuting a user, we used to make
two database fetches - one to verify that the user
has been muted before, and one while actually
unmuting the user.
This reduces that to one, by passing around the
`MutedUser` object fetched in the first round.
Since the new function returns `Optional[MutedUser]`,
we need to use a hack for events tests, because
mypy does not yet use the type inferred from
`assert foo is not None` in nested functions like lambdas.
See python/mypy@8780d45507.
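A minimal illustration of that mypy limitation (the function below is
purely illustrative):

    from typing import Callable, Optional

    def build_getter(muted_user_id: Optional[int]) -> Callable[[], int]:
        assert muted_user_id is not None
        # mypy still treats `muted_user_id` as Optional[int] inside the
        # lambda, so it reports an error here even though the assert above
        # has already ruled out None.
        return lambda: muted_user_id + 1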
The moderator role was not included in the tests for create_stream_policy
and invite_to_stream_policy. The tests are do_set_realm_property_test
in test_events.py and do_test_realm_update_api in test_realm.py.
This should have been added for create_stream_policy and
invite_to_stream_policy in 5b32dcd, but was missed by mistake.
We add the moderators and full members options to invite_to_realm_policy
by using COMMON_POLICY_TYPES, and use the can_invite_others_to_realm
helper added in the previous commit. This commit only does the backend
work; the frontend work will be done in a separate commit.
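A hedged sketch of the shape of that helper; the constant values and
attribute names are illustrative assumptions following the
COMMON_POLICY_TYPES convention, not the actual implementation:

    POLICY_MEMBERS_ONLY = 1
    POLICY_ADMINS_ONLY = 2
    POLICY_FULL_MEMBERS_ONLY = 3
    POLICY_MODERATORS_ONLY = 4

    def can_invite_others_to_realm(user_profile) -> bool:
        policy = user_profile.realm.invite_to_realm_policy
        if policy == POLICY_ADMINS_ONLY:
            return user_profile.is_realm_admin
        if policy == POLICY_MODERATORS_ONLY:
            return user_profile.is_realm_admin or user_profile.is_moderator
        if policy == POLICY_FULL_MEMBERS_ONLY:
            # "Full member" excludes guests and accounts newer than the
            # realm's waiting period (details elided in this sketch).
            return not user_profile.is_guest and user_profile.is_full_member
        return not user_profile.is_guest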
This commit replaces invite_by_admins_policy, which was a bool field,
with a new enum field, invite_to_realm_policy.
Though the final goal is to add the moderators and full members options
using COMMON_POLICY_TYPES, this will be done in a separate
commit to make this easy to review.
We send the whole data set as part of the event rather than
doing an add/remove operation, for a couple of reasons:
* This would make the client logic simpler.
* The playground data is small enough for us to not worry
about performance.
Tweaked both `fetch_initial_state_data` and `apply_events` to
handle the new playground event.
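Since the event carries the full data set, the apply_events side can be a
wholesale replacement; a hedged sketch:

    def apply_realm_playgrounds_event(state, event):
        # The event contains the full list, so no add/remove delta logic
        # is needed; just overwrite the existing state.
        state["realm_playgrounds"] = event["realm_playgrounds"]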
Tests added to validate the event matches the expected schema.
Documented realm_playgrounds sections inside /events and
/register to support our openapi validation system in test_events.
Tweaked other tests like test_event_system.py and test_home.py
to account for the new event being generated.
Lastly, documented the changes to the API endpoints in
api/changelog.md and bumped API_FEATURE_LEVEL.
Tweaked by tabbott to add an `id` field in RealmPlayground objects
sent to clients, which is essential to sending the API request to
remove one.
Adds backend code for the mute users feature.
This is just infrastructure work (database
interactions, helpers, tests, events, API docs
etc) and does not involve any behavioral/semantic
aspects of muted users.
Adds POST and DELETE endpoints, to keep the
URL scheme mostly consistent in terms of `users/me`.
TODOs:
1. Add tests for exporting `zulip_muteduser` database table.
2. Add dedicated methods to python-zulip-api to be used
in place of the current `client.call_endpoint` implementation.
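A hedged illustration of TODO 2 above, using the generic
`client.call_endpoint` helper until dedicated bindings exist; the exact URL
is an assumption based on the `users/me` scheme mentioned above:

    import zulip

    client = zulip.Client(config_file="~/.zuliprc")

    # Mute the (hypothetical) user with ID 10.
    client.call_endpoint(url="users/me/muted_users/10", method="POST")

    # Unmute the same user.
    client.call_endpoint(url="users/me/muted_users/10", method="DELETE")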
I noticed this because the test_events.py tests had the extremely
weird pattern of calling the actual change function, and then testing
the `notify` function's state changes (which should always be noops),
rather than actually testing the state change function.
Fixing the test made it clear that the actual logic in events.py
simply did not handle deleting custom_profile_field_value elements
from user objects when a custom_profile_field object was deleted.
So we fix that bit of logic as well.
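A hedged sketch of that bit of logic (the state keys below are assumptions
about the shape of the register payload, not the actual code):

    def remove_custom_profile_field_values(state, deleted_field_id):
        # When a custom_profile_field is deleted, drop the corresponding
        # value from every cached user object in the state.
        for user_dict in state["raw_users"].values():
            user_dict.get("profile_data", {}).pop(str(deleted_field_id), None)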
It appears this bug was unique -- at least we don't have any other
notify_* functions being used directly in test_events.py, and the
handful of state_change_expected=False entries are all events for data
not present in page_params.
* The `op` (operation) field, added in f6fb88549f, was never intended for
the `custom_profile_fields` event. This commit removes `op`, as it isn't
used anywhere in the code.
* As part of the cleanup, this also eliminates the schema check warnings
for the `custom_profile_fields` event, mentioned in #17568.
This adds the schema checker, openapi schema, and also a test for
the realm/deactivated event.
With several block comments by tabbott explaining the logic behind our
behavior here.
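A hedged sketch of roughly what such a schema checker looks like; treat the
helper names and exact fields as assumptions:

    from zerver.lib.data_types import Equals, event_dict_type, make_checker

    realm_deactivated_event = event_dict_type(
        required_keys=[
            ("type", Equals("realm")),
            ("op", Equals("deactivated")),
            ("realm_id", int),
        ]
    )
    check_realm_deactivated = make_checker(realm_deactivated_event)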
Part of #17568.
A TextField is used to allow users to set long stream + topic narrow
names in the URLs.
We currently restrict users to setting only "all_messages" and
"recent_topics" as narrows.
This commit achieves 3 things:
* Removes recent topics as the default view which loads when
hash is empty.
* Loads default_view when hash is empty.
* Loads default_view on pressing the escape key when the key press is
not handled by other present UI elements.
NOTE: After this commit loading zulip with an empty hash will
automatically set hash to default_view. Ideally, we'd just display
the default view without a hash, but that involves extra complexity.
One exception is when the user is trying to load an overlay directly,
i.e. Zulip is loaded with an overlay hash. In this case,
we render recent topics in the background irrespective of default_view.
We consider this last detail to be a bug not important enough to block
adding this setting.
When we were getting an apply_event call for
a subscription/add event, we were trying not to
mutate the event itself, but this clumsy code
was still mutating the actual event:
    # Avoid letting 'subscribers' entries end up in the list
    for i, sub in enumerate(event['subscriptions']):
        event['subscriptions'][i] = \
            copy.deepcopy(event['subscriptions'][i])
        del event['subscriptions'][i]['subscribers']
This is only a theoretical bug.
The only person who receives a subscription/add
event is the current user.
And it wouldn't have affected the current user,
since the apply_event was correctly updating the
state, and we wouldn't actually deliver the event
to the client (because the whole point of apply_event
is to prevent us from having to piggyback the
super-recent events on to our payload or put
them into the event queue and possibly race).
The new code just cleanly makes a copy of each
sub, if necessary, as we add them to state["subscriptions"].
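A hedged sketch of that cleaner pattern (illustrative, not the actual
apply_event code):

    import copy

    def add_event_subs_to_state(state, event, include_subscribers):
        for sub in event["subscriptions"]:
            sub = copy.deepcopy(sub)  # never mutate the incoming event
            if not include_subscribers:
                sub.pop("subscribers", None)
            state["subscriptions"].append(sub)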
And I updated the event schemas to reflect that
subscribers is always present in the subscription/add
event.
Long term we should probably avoid sending subscribers
on this event when the clients don't set something
like include_subscribers. That's a fairly complicated
fix that involves passing in flags to ClientDescriptor.
Alternatively, we could just say that our policy is
that we never send subscribers there, but we instead
use peer_add events. See issue #17089 for more
details.
I eliminate the defaults, since the existing code
was already specifying values for most things.
I move all the booleans to the bottom for both
parameters and arguments.
I require explicit keywords for everything but
user_profile (which is now first).
And, finally, I format the code in a more
diff-friendly manner.
We often send only one field (away or status_text)
to be updated.
So we have to make our schema support optional
keys.
As a result of the more flexible schema, we no
longer need to exempt the node fixtures from
our schema checks.
We now require explicit keywords for all arguments
to fetch_initial_state_data except user_profile.
We provide reasonable defaults to keep the test
code concise.
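Concretely, the production signature now has the keyword-only form sketched
below (parameter names other than user_profile are illustrative; the
test-side wrapper is what supplies the defaults):

    def fetch_initial_state_data(
        user_profile,
        *,  # every later argument must be passed by keyword
        event_types,
        queue_id,
        client_gravatar,
        include_subscribers,
    ):
        ...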
In 1bcb8d8ee8 I made
it so the webapp doesn't include "streams" in its
state from `fetch_initial_state_data`, but I didn't
address all the places in apply_event.
While working on shifting toward native browser time zone APIs
(#16451), it was found that all but very recent Chrome and Node
versions reject certain legacy timezone aliases like US/Pacific
(https://crbug.com/364374).
For now, we only canonicalize the timezone property returned in user
objects and not the timezone setting itself.
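A hedged sketch of the canonicalization step; the mapping below shows the
idea with one well-known alias rather than the actual table:

    LEGACY_TIMEZONE_ALIASES = {
        "US/Pacific": "America/Los_Angeles",
    }

    def canonicalize_timezone(key: str) -> str:
        # Return the canonical tzdata name for legacy aliases that newer
        # Chrome and Node versions reject; pass everything else through.
        return LEGACY_TIMEZONE_ALIASES.get(key, key)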
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We now bulk up peer_add/peer_remove events by user if the
same user has subscribed to multiple streams (and just
that single user).
This mostly optimizes the new-user codepath, but the
algorithm is a bit more general in nature.
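A hedged sketch of the bulked event shape this produces (the field names
follow Zulip's peer_add events, but treat them as assumptions here):

    # One event now covers a single user subscribing to several streams,
    # instead of one event per (user, stream) pair.
    peer_add_event = {
        "type": "subscription",
        "op": "peer_add",
        "user_ids": [11],
        "stream_ids": [7, 8, 9],
    }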
Because of the very large `oneOf` clause of the formats of events
possible in Zulip's `GET /events` system, we had issues with
`test-backend` failures for missing documentation for a new event
format producing something like 1000 lines of output, which was very
much unhelpful.
Fix this by limiting the output to use only the oneOf variants that are
broadly similar to the actual payload received.
Fixes #16023.
The comment still pointed to the 'vacate' event flow, but
we removed the vacate event in a9356508ca.
This commit fixes the comment to reflect the actual
purpose of the lines below, i.e. to test the remove
event flow.
We were including 'realm_user' in event_types along with 'subscription',
but we don't send an event of type 'realm_user' when subscribing to a new
stream. This was added in 1c332f5d6a.
This commit removes 'realm_user' from event_types.
We used to send occupy/vacate events when
either the first person entered a stream
or the last person exited.
It appears that our two main apps have never
looked at these events. Instead, it's
generally the case that clients handle
events related to stream creation/deactivation
and subscribe/unsubscribe.
Note that we removed the apply_events code
related to these events. This doesn't affect
the webapp, because the webapp doesn't care
about the "streams" field in do_events_register.
There is a theoretical situation where a
third party client could be the victim of
a race where the "streams" data includes
a stream where the last subscriber has left.
I suspect in most of those situations it
will be harmless, or possibly even helpful
to the extent that they'll learn about
streams that are in a "quasi" state where
they're activated but not occupied.
We could try to patch apply_event to
detect when subscriptions get added
or removed. Or we could just make the
"streams" piece of do_events_register
not care about occupy/vacate semantics.
I favor the latter, since it might
actually be what users want, and it will
also simplify the code and improve
performance.
I think it's important that the callers understand
that bulk_add_subscriptions assumes all streams
are being created within a single realm, so I make
it an explicit parameter.
This may be overkill--I would also be happy if we
just included the assertions from this commit.
We add a new wildcard_mention_policy setting to handle wildcard
mentions in large streams, with a wide range of policies available to
organizations.
We set the default to the safe option for preventing accidental spam:
only stream administrators being able to use wildcard mentions in
large streams.
We now no longer define any schemas in test_events--all
of them are in event_schema, which helps our tooling
cross-check schemas for openapi and node tests.
It happens that whether you add a reaction or remove
a reaction, we send the exact same fields, just using
a different op code.
This sort of symmetry is actually kind of rare, as
usually "add" events have more fields, and "remove" events
might just send an id of something to remove.
Our openapi schema treats these as two separate events,
so we are more consistent with it, and it helps our
schema-checking tooling for node fixtures, too.
Note that we now have to exempt the two events from
our openapi checks, due to the is_mirror_dummy field
in the deprecated user block. We can decide how to
handle this later--one possibility is to just add it
as an optional field on the event_schema side.
Note that we use value_type for value instead of
bool, since properties can be non-bool things
like color, which we just don't test now. We
should test them.
We more than compensate for this by checking
the actual value of the `value` field in
check_subscription_update.
There is a legacy format where we send
singular "message_id" instead of plural
"message_ids".
Then there are different fields for "private"
and "stream" message types.
Note that we make the schema for profile_data
slightly more realistic, but it doesn't actually get
exercised by our current tests (apart from
making sure it's a dict), since we don't have
profile data for our test realm.
We also don't have the optional fields for bots,
since our tests don't exercise that, nor
delivery_email.
So we exempt realm_user_add_event from openapi
checks for now.
When we try to match the openapi specs better, we
will probably want to add a few tests to test_events.
Obviously getting good coverage for adding users
would be nice for all these scenarios:
* delivery_email matters
* bots
* realm has profile fields
This also forces us to create TupleType.
We exempt this from the openapi check,
since we haven't figured out how to model
tuples in openapi with the same precision
as event_schema (and it may be impossible).
Long term we just want to stop dealing in
tuples, of course.
We also make our schema in event_schema reflect this,
which in turn makes us match the already accurate
openapi spec, so we no longer need to exempt four
types of events from our sanity checks.
The `typing: stop` event did not have any tests in test_events,
hence its documentation wasn't added. So add tests and relevant
documentation for the typing stop event. Also edit the documentation
of `typing: start` to include the fact that servers should use
their own timeout in case the `stop` event isn't received.
Fixes #16122.
The original commit was broken here:
b553507412
The intention was to run the same loop for all
settings, but instead, we did a funny loop of
just resetting schema_checker, and then we only
actually tested the last value of the loop.
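A hedged before/after sketch of that bug (the names are illustrative, not
the actual test code):

    # Broken shape: the checker is rebound on every pass, and only the
    # final iteration's value is actually validated after the loop.
    def verify_settings_broken(values, make_checker, set_setting):
        for value in values:
            schema_checker = make_checker(value)
            set_setting(value)
        schema_checker.validate()

    # Fixed shape: validate inside the loop, once per value.
    def verify_settings_fixed(values, make_checker, set_setting):
        for value in values:
            schema_checker = make_checker(value)
            set_setting(value)
            schema_checker.validate()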
In f8bcf39014, we fixed buggy
marshalling of Streams and similar data structures where we were
including the Stream object rather than its ID in dictionaries passed
to ujson, and ujson happily wrote that large object dump into the
RealmAuditLog.extra_data field.
This commit includes a migration to fix those corrupted RealmAuditLog
entries, and because the migration loop is the same, also fixes the
format of similar RealmAuditLog entries to be in a more natural format
that doesn't weirdly nest and duplicate the "property" field.
Fixes #16066.