Further commits will hook some `send_event` calls to `on_commit`.
With those changes, these will never be executed in tests, because
transactions never get committed with `TestCase`, of which
`ZulipTestCase` is a subclass.
We want to make sure that these events are actually sent for testing
purposes, hence this change.
There's no need to actually capture the callbacks, because the
events are already thoroughly tested.
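A minimal sketch of the testing approach (hypothetical; the real hook
point in `ZulipTestCase` may differ) is to patch `transaction.on_commit`
so callbacks run immediately, since the test transaction is rolled back
and would otherwise never fire them:

    from unittest import mock

    from django.db import transaction
    from django.test import TestCase

    class ZulipTestCase(TestCase):
        def run(self, result=None):
            # Run on_commit callbacks eagerly; the wrapping test
            # transaction never commits, so they would never execute.
            with mock.patch.object(
                transaction, "on_commit", side_effect=lambda func: func()
            ):
                return super().run(result)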
Since this is currently only useful to interpret presence data, we
send this only if presence is requested.
I'm not sure that server_timestamp is the right name for this field,
but ultimately it should match the main presence API format.
This helps mobile and terminal clients understand whether a server
restart changed API feature levels or not, which in turn determines
whether they will need to resynchronize their data.
Also add tests and documentation for this previously undocumented
event type.
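For illustration, a restart event would then carry fields along these
lines (field names assumed from the description; values made up, and
other existing restart fields elided):

    {
        "type": "restart",
        "zulip_version": "4.0-dev+git",
        "zulip_feature_level": 59,
    }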
Fixes: #18205.
This extends the /json/typing endpoint to also accept
stream_id and topic. With this change, the requests
sent to /json/typing should have these:
* `to`: a list set to
  - recipients for a PM
  - stream_id for a stream message
* `topic`, in case of a stream message
along with `op` (start or stop).
On receiving a request with stream_id and topic, we send
typing events to clients with stream_typing_notifications set
to True for all users subscribed to that stream.
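As a sketch, the two request shapes look roughly like this (values
illustrative):

    # PM typing notification:
    pm_request = {
        "op": "start",     # or "stop"
        "to": [11, 12],    # recipient user IDs
    }

    # Stream typing notification:
    stream_request = {
        "op": "start",
        "to": [42],        # [stream_id]
        "topic": "lunch plans",
    }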
This commit adds an API to `zproject/urls.py` to edit/update
the realm linkifier. Its helper function to update the
database is added in `zerver/lib/actions.py`.
`zulip.yaml` is documented accordingly as well, clearly
stating that this API updates one linkifier at a time.
The tests are added for the API and helper function which
updates the realm linkifier.
Fixes #10830.
Organization admins can use this setting to restrict the maximum
rating of GIFs that will be retrieved from GIPHY. There is also
an option to disable GIPHY entirely.
* This introduces a new event type `realm_linkifiers` and
a new key for the initial data fetch of the same name.
Newer clients will be expected to use these.
* Backwards compatibility is ensured by changing neither
the current event nor the /register key. The data which
these hold is the same as before, but internally, it is
generated by processing the `realm_linkifiers` data.
We send both the old and the new event types to clients
whenever the linkifiers are changed.
Older clients will simply ignore the new event type, and
vice versa.
* The `realm/filters:GET` endpoint (which returns tuples)
is currently used by none of the official Zulip clients.
This commit replaces it with `realm/linkifiers:GET` which
returns data in the new dictionary format.
TODO: Update the `get_realm_filters` method in the API
bindings, to hit this new URL instead of the old one.
* This also updates the webapp frontend to use the newer
events and keys.
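For illustration, here is the same linkifier in the old tuple format
and the new dictionary format (key names assumed; they mirror the
tuple's fields):

    # realm/filters:GET (old tuple format):
    ("#(?P<id>[0-9]+)", "https://github.com/zulip/zulip/issues/%(id)s", 1)

    # realm/linkifiers:GET (new dictionary format):
    {
        "pattern": "#(?P<id>[0-9]+)",
        "url_format": "https://github.com/zulip/zulip/issues/%(id)s",
        "id": 1,
    }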
When a user is muted, in the same request,
we mark any existing unreads from that user
as read.
This is done for all types of messages
(PM/huddle/stream) and regardless of whether
the user was mentioned in them.
This will not break the unread count logic
of the web frontend, because that algorithm
decides which messages to mark as read based
only on the pointer location and the whitespace
at the bottom, not on what messages have already
been marked as read.
Previously, when unmuting a user, we used to make
two database fetches - one to verify that the user
has been muted before, and one while actually
unmuting the user.
This reduces that to one, by passing around the
`MutedUser` object fetched in the first round.
Since the new function returns `Optional[MutedUser]`,
we need to use a hack for events tests, because
mypy does not yet use the type inferred from
`assert foo is not None` in nested functions like lambdas.
See python/mypy@8780d45507.
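A self-contained illustration of that mypy limitation (nothing
Zulip-specific):

    from typing import Optional

    def fetch() -> Optional[int]:
        return 42

    val = fetch()
    assert val is not None
    print(val + 1)  # OK: the narrowing applies in this scope.

    # Rejected: inside the lambda, mypy still treats val as
    # Optional[int], because the narrowed type does not propagate
    # into nested functions.
    callback = lambda: val + 1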
The moderator role was not included in the tests for create_stream_policy
and invite_to_stream_policy. The tests are do_set_realm_property_test
in test_events.py and do_test_realm_update_api in test_realm.py.
This should have been added in 5b32dcd for both create_stream_policy
and invite_to_stream_policy, but was missed by mistake.
We add moderators and full members options to invite_to_realm_policy
by using COMMON_POLICY_TYPES, and use the can_invite_others_to_realm
helper added in the previous commit. This commit only does the backend
work; the frontend work will be done in a separate commit.
This commit replaces invite_by_admins_only, which was a bool field,
with a new enum field invite_to_realm_policy.
The final goal is to add moderators and full members options using
COMMON_POLICY_TYPES, but that will be done in a separate commit to
make this easy to review.
We send the whole data set as a part of the event rather than
doing an add/remove operation for a couple of reasons:
* This would make the client logic simpler.
* The playground data is small enough for us to not worry
about performance.
Tweaked both `fetch_initial_state_data` and `apply_events` to
handle the new playground event.
Tests added to validate the event matches the expected schema.
Documented realm_playgrounds sections inside /events and
/register to support our openapi validation system in test_events.
Tweaked other tests like test_event_system.py and test_home.py
to account for the new event being generated.
Lastly, documented the changes to the API endpoints in
api/changelog.md and bumped API_FEATURE_LEVEL.
Tweaked by tabbott to add an `id` field in RealmPlayground objects
sent to clients, which is essential to sending the API request to
remove one.
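The apply_events handling is therefore trivial; roughly (a sketch, not
the exact code):

    from typing import Any, Dict

    def apply_event(state: Dict[str, Any], event: Dict[str, Any]) -> None:
        if event["type"] == "realm_playgrounds":
            # The event carries the whole data set, so we replace the
            # existing state instead of doing add/remove bookkeeping.
            state["realm_playgrounds"] = event["realm_playgrounds"]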
Adds backend code for the mute users feature.
This is just infrastructure work (database
interactions, helpers, tests, events, API docs
etc) and does not involve any behavioral/semantic
aspects of muted users.
Adds POST and DELETE endpoints, to keep the
URL scheme mostly consistent in terms of `users/me`.
TODOs:
1. Add tests for exporting `zulip_muteduser` database table.
2. Add dedicated methods to python-zulip-api to be used
in place of the current `client.call_endpoint` implementation.
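In the meantime, the endpoints can be exercised roughly like this (the
URL pattern is an assumption based on the POST/DELETE scheme described
above):

    import zulip

    client = zulip.Client(config_file="~/zuliprc")
    muted_user_id = 10  # illustrative

    # Mute:
    client.call_endpoint(f"users/me/muted_users/{muted_user_id}", method="POST")
    # Unmute:
    client.call_endpoint(f"users/me/muted_users/{muted_user_id}", method="DELETE")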
I noticed this because the test_events.py tests had the extremely
weird pattern of calling the actual change function, and then testing
the `notify` function's state changes (which should always be noops),
rather than actually testing the state change function.
Fixing the test made it clear that the actual logic in events.py
simply did not handle deleting custom_profile_field_value elements
from user objects when a custom_profile_field object was deleted.
So we fix that bit of logic as well.
It appears this bug was unique -- at least we don't have any other
notify_* functions being used directly in test_events.py, and the
handful of state_change_expected=False entries are all events for data
not present in page_params.
* `op` (operation) field, added in f6fb88549f, was never intended for
the `custom_profile_fields` event. This commit removes the `op`, as it
doesn't have any use in the code.
* As a part of cleanup, this also eliminates the schema check warnings
for the `custom_profile_fields` event, mentioned in #17568.
This adds the schema checker, OpenAPI schema, and a test for the
realm/deactivated event.
With several block comments by tabbott explaining the logic behind our
behavior here.
Part of #17568.
TextField is used to allow users to set long stream + topic narrow
names in the URLs.
We currently restrict users to only set "all_messages" and
"recent_topics" as narrows.
This commit achieves 3 things:
* Removes recent topics as the default view which loads when
hash is empty.
* Loads default_view when hash is empty.
* Loads default_view on pressing escape key when it is unhandled by
other present UI elements.
NOTE: After this commit loading zulip with an empty hash will
automatically set hash to default_view. Ideally, we'd just display
the default view without a hash, but that involves extra complexity.
One exception is when the user is trying to load an overlay directly,
i.e. zulip is loaded with an overlay hash. In this case,
we render recent topics in the background, irrespective of default_view.
We consider this last detail to be a bug not important enough to block
adding this setting.
When we were getting an apply_event call for
a subscription/add event, we were trying not to
mutate the event itself, but this clumsy code
was still mutating the actual event:
    # Avoid letting 'subscribers' entries end up in the list
    for i, sub in enumerate(event['subscriptions']):
        event['subscriptions'][i] = \
            copy.deepcopy(event['subscriptions'][i])
        del event['subscriptions'][i]['subscribers']
This is only a theoretical bug.
The only person who receives a subscription/add
event is the current user.
And it wouldn't have affected the current user,
since the apply_event was correctly updating the
state, and we wouldn't actually deliver the event
to the client (because the whole point of apply_event
is to prevent us from having to piggyback the
super-recent events on to our payload or put
them into the event queue and possibly race).
The new code just cleanly makes a copy of each
sub, if necessary, as we add them to state["subscriptions"].
And I updated the event schemas to reflect that
subscribers is always present in subscription/add
event.
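Roughly, the new approach is (simplified sketch; include_subscribers
stands in for however the flag reaches this code):

    import copy

    for sub in event["subscriptions"]:
        if not include_subscribers:
            # Copy before mutating, so the event itself stays intact.
            sub = copy.deepcopy(sub)
            del sub["subscribers"]
        state["subscriptions"].append(sub)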
Long term we should probably avoid sending subscribers
on this event when the clients don't set something
like include_subscribers. That's a fairly complicated
fix that involves passing in flags to ClientDescriptor.
Alternatively, we could just say that our policy is
that we never send subscribers there, but we instead
use peer_add events. See issue #17089 for more
details.
I eliminate the defaults, since the existing code
was already specifying values for most things.
I move all the booleans to the bottom for both
parameters and arguments.
I require explicit keywords for everything but
user_profile (which is now first).
And, finally, I format the code in a more
diff-friendly manner.
We often send only one field (away or status_text)
to be updated.
So we have to make our schema support optional
keys.
As a result of the more flexible schema, we no
longer need to exempt the node fixtures from
our schema checks.
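In the style of our event_schema checkers, the result looks roughly
like this (a sketch; the exact checker may differ):

    check_user_status = check_events_dict(
        required_keys=[
            ("type", equals("user_status")),
            ("user_id", check_int),
        ],
        optional_keys=[
            ("away", check_bool),
            ("status_text", check_string),
        ],
    )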
We now require explicit keywords for all arguments
to fetch_initial_state_data except user_profile.
We provide reasonable defaults to keep the test
code concise.
In 1bcb8d8ee8 I made
it so the webapp doesn't include "streams" in its
state from `fetch_initial_state_data`, but I didn't
address all the places in apply_event.
While working on shifting toward native browser time zone APIs
(#16451), it was found that all but very recent Chrome and Node
versions reject certain legacy timezone aliases like US/Pacific
(https://crbug.com/364374).
For now, we only canonicalize the timezone property returned in user
objects and not the timezone setting itself.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We now bulk up peer_add/peer_remove events by user if the
same user has subscribed to multiple streams (and just
that single user).
This mostly optimizes the new-user codepath, but the
algorithm is a bit more general in nature.
Because of the very large `oneOf` clause of the formats of events
possible in Zulip's `GET /events` system, we had issues where
`test-backend` failures for missing documentation of a new event
format produced on the order of 1000 lines of output, which was very
unhelpful.
Fix this by limiting the output to use only the oneOf variants that
are broadly similar to the actual payload received.
Fixes #16023.
The comment still pointed to 'vacate' event flow, but
we have removed the vacate event in a9356508ca.
This commit fixes the comment to describe the correct purpose of
the lines below, i.e. to test the remove
event flow.
We were including 'realm_user' in event_types along with 'subscription',
but we don't send an event of type 'realm_user' when subscribing to a new
stream. This was added in 1c332f5d6a.
This commit removes 'realm_user' from event_types.
We used to send occupy/vacate events when
either the first person entered a stream
or the last person exited.
It appears that our two main apps have never
looked at these events. Instead, it's
generally the case that clients handle
events related to stream creation/deactivation
and subscribe/unsubscribe.
Note that we removed the apply_events code
related to these events. This doesn't affect
the webapp, because the webapp doesn't care
about the "streams" field in do_events_register.
There is a theoretical situation where a
third party client could be the victim of
a race where the "streams" data includes
a stream where the last subscriber has left.
I suspect in most of those situations it
will be harmless, or possibly even helpful
to the extent that they'll learn about
streams that are in a "quasi" state where
they're activated but not occupied.
We could try to patch apply_event to
detect when subscriptions get added
or removed. Or we could just make the
"streams" piece of do_events_register
not care about occupy/vacate semantics.
I favor the latter, since it might
actually be what users want, and it will
also simplify the code and improve
performance.
I think it's important that the callers understand
that bulk_add_subscriptions assumes all streams
are being created within a single realm, so I make
it an explicit parameter.
This may be overkill--I would also be happy if we
just included the assertions from this commit.
We add a new wildcard_mention_policy setting to handle wildcard
mentions in large streams, with a wide range of policies available to
organizations.
We set the default to the safe option for preventing accidental spam:
only stream administrators being able to use wildcard mentions in
large streams.
We now no longer define any schemas in test_events--all
of them are in event_schema, which helps our tooling
cross-check schemas for openapi and node tests.
It happens that whether you add a reaction or remove
a reaction, we send the exact same fields, just using
a different op code.
This sort of symmetry is actually kind of rare, as
usually "add" events have more fields, and "remove" events
might just send an id of something to remove.
Our openapi schema treats these as two separate events,
so we are more consistent with it, and it helps our
schema-checking tooling for node fixtures, too.
Note that we now have to exempt the two events from
our openapi checks, due to the is_mirror_dummy field
in the deprecated user block. We can decide how to
handle this later--one possibility is to just add it
as an optional field on the event_schema side.
Note that we use value_type for value instead of
bool, since properties can be non-bool things
like color, which we just don't test now. We
should test them.
We more than compensate for this by checking
the actual value of the `value` field in
check_subscription_update.
There is a legacy format where we send
singular "message_id" instead of plural
"message_ids".
Then there are different fields for "private"
and "stream" message types.
Note that we make the schema for profile_data
slightly more realistic, but it doesn't actually get
exercised by our current tests (apart from
making sure it's a dict), since we don't have
profile data for our test realm.
We also don't have the optional fields for bots,
since our tests don't exercise that, nor
delivery_email.
So we exempt realm_user_add_event from openapi
checks for now.
When we try to match the openapi specs better, we
will probably want to add a few tests to test_events.
Obviously getting good coverage for adding users
would be nice for all these scenarios:
* delivery_email matters
* bots
* realm has profile fields
This also forces us to create TupleType.
We exempt this from the openapi check,
since we haven't figured out how to model
tuples in openapi with the same precision
as event_schema (and it may be impossible).
Long term we just want to stop dealing in
tuples, of course.
We also make our schema in event_schema reflect this,
which in turn makes us match the already accurate
openapi spec, so we no longer need to exempt four
types of events from our sanity checks.
The `typing: stop` event did not have any tests in test_events,
hence its documentation wasn't added. So add tests and relevant
documentation for the typing stop event. Also edit the documentation
of `typing: start` to include the fact that servers should use
their own timeout in case the `stop` event isn't received.
Fixes #16122.
The original commit was broken here:
b553507412
The intention was to run the same loop for all
settings, but instead, we did a funny loop of
just resetting schema_checker, and then we only
actually tested the last value of the loop.
In f8bcf39014, we fixed buggy
marshalling of Streams and similar data structures where we were
including the Stream object rather than its ID in dictionaries passed
to ujson, and ujson happily wrote that large object dump into the
RealmAuditLog.extra_data field.
This commit includes a migration to fix those corrupted RealmAuditLog
entries, and because the migration loop is the same, also fixes the
format of similar RealmAuditLog entries to be in a more natural format
that doesn't weirdly nest and duplicate the "property" field.
Fixes #16066.
Use a `ujson.loads(ujson.dumps(...))` wrapper on events sent for
OpenAPI testing, so that all tuples are converted into arrays, as
tuples aren't valid in JSON.
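Concretely:

    import ujson

    events = ujson.loads(ujson.dumps(events))  # tuples come back as lists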
Edit the function `validate_against_openapi_schema` and add some
helper functions to allow for validation of documented events.
Also add OpenAPI response validation in `verify_action` as it is
called in a large number of `/events` tests.
We also have the caller pass in the property name for an
additional sanity check.
Note that we don't yet handle the possibility of extra_data;
that will be a subsequent commit.
Also, the stream_id fields aren't in Realm.property_types,
so we specify their types in the checker.
This is a pretty big commit, but I really wanted it
to be atomic.
All realm_user/update events look the same from
the top:
    _check_realm_user_update = check_events_dict(
        required_keys=[
            ("type", equals("realm_user")),
            ("op", equals("update")),
            ("person", _check_realm_user_person),
        ]
    )
And then we have a bunch of fields for person that
are optional, and we usually only send user_id plus
one other field, with the exception of avatar-related
events:
    _check_realm_user_person = check_dict_only(
        required_keys=[
            # vertical formatting
            ("user_id", check_int),
        ],
        optional_keys=[
            ("avatar_source", check_string),
            ("avatar_url", check_none_or(check_string)),
            ("avatar_url_medium", check_none_or(check_string)),
            ("avatar_version", check_int),
            ("bot_owner_id", check_int),
            ("custom_profile_field", _check_custom_profile_field),
            ("delivery_email", check_string),
            ("full_name", check_string),
            ("role", check_int_in(UserProfile.ROLE_TYPES)),
            ("email", check_string),
            ("timezone", check_string),
        ],
    )
I would start the code review by just skimming the changes
to event_schema.py, to get the big picture of the complexity
here. Basically the schema is just the combined superset of
all the individual schemas that we remove from test_events.
Then I would read test_events.py.
The simplest diffs are basically of this form:
    - schema_checker = check_events_dict([
    -     ('type', equals('realm_user')),
    -     ('op', equals('update')),
    -     ('person', check_dict_only([
    -         ('role', check_int_in(UserProfile.ROLE_TYPES)),
    -         ('user_id', check_int),
    -     ])),
    - ])
      # ...
    - schema_checker('events[0]', events[0])
    + check_realm_user_update('events[0]', events[0], {'role'})
Instead of a custom schema checker, we use the "superset"
schema checker, but then we pass in the set of fields that we
expect to be there. Note that 'user_id' is always there.
So most of the heavy lifting happens in this new function
in event_schema.py:
    def check_realm_user_update(
        var_name: str, event: Dict[str, Any], optional_fields: Set[str],
    ) -> None:
        _check_realm_user_update(var_name, event)
        keys = set(event["person"].keys()) - {"user_id"}
        assert optional_fields == keys
But we still do some more custom checks in test_events.py.
custom profile fields: check keys of custom_profile_field
    def test_custom_profile_field_data_events(self) -> None:
    +    self.assertEqual(
    +        events[0]['person']['custom_profile_field'].keys(),
    +        {"id", "value", "rendered_value"}
    +    )
    +    check_realm_user_update('events[0]', events[0], {"custom_profile_field"})
    +    self.assertEqual(
    +        events[0]['person']['custom_profile_field'].keys(),
    +        {"id", "value"}
    +    )
avatar fields: check more specific types, since the superset
schema has check_none_or(check_string)
    def test_change_avatar_fields(self) -> None:
    +    check_realm_user_update('events[0]', events[0], avatar_fields)
    +    assert isinstance(events[0]['person']['avatar_url'], str)
    +    assert isinstance(events[0]['person']['avatar_url_medium'], str)

    +    check_realm_user_update('events[0]', events[0], avatar_fields)
    +    self.assertEqual(events[0]['person']['avatar_url'], None)
    +    self.assertEqual(events[0]['person']['avatar_url_medium'], None)
Also note that avatar_fields is a set of four fields that
are set in event_schema.
full name: no extra work!
    def test_change_full_name(self) -> None:
    -    schema_checker('events[0]', events[0])
    +    check_realm_user_update('events[0]', events[0], {'full_name'})
test_change_user_delivery_email_email_address_visibilty_admins:
no extra work for delivery_email
check avatar fields more directly
roles (several examples) -- actually check the specific role
    def test_change_realm_authentication_methods(self) -> None:
    -    schema_checker('events[0]', events[0])
    +    check_realm_user_update('events[0]', events[0], {'role'})
    +    self.assertEqual(events[0]['person']['role'], role)
bot_owner_id: no extra work!
    -    change_bot_owner_checker_user('events[1]', events[1])
    +    check_realm_user_update('events[1]', events[1], {"bot_owner_id"})

    -    change_bot_owner_checker_user('events[1]', events[1])
    +    check_realm_user_update('events[1]', events[1], {"bot_owner_id"})

    -    change_bot_owner_checker_user('events[1]', events[1])
    +    check_realm_user_update('events[1]', events[1], {"bot_owner_id"})
timezone: no extra work!
    -    timezone_schema_checker('events[1]', events[1])
    +    check_realm_user_update('events[1]', events[1], {"email", "timezone"})
Obviously, this file will soon grow--this
was the easiest way to start without introducing
noise into other commits.
It will soon be structurally similar
to frontend_tests/node_tests/lib/events.js--I
have some ideas there. But this should also
help for things like API docs.
We add the ability to supply optional_keys,
and we don't mutate the list of required
keys that gets passed into us.
We also enforce that there is a "type"
field.
(We will use optional_keys soon.)
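A simplified sketch of the resulting helper (assuming a
Validator = Callable[[str, object], None] style of validator):

    from typing import Callable, Sequence, Tuple

    Validator = Callable[[str, object], None]

    def check_events_dict(
        required_keys: Sequence[Tuple[str, Validator]],
        optional_keys: Sequence[Tuple[str, Validator]] = [],
    ) -> Validator:
        # Every event schema must declare a "type" field.
        assert "type" in {key for key, _ in required_keys}
        # Build a new list rather than mutating the caller's when
        # appending the event queue id.
        return check_dict_only([*required_keys, ("id", check_int)], optional_keys)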
This commit verifies error logging while testing data export in
test_notify_realm_export_on_failure using assertLogs, so that the
logs do not spam the test output.
A few major themes here:
- We remove short_name from UserProfile
and add the appropriate migration.
- We remove short_name from various
cache-related lists of fields.
- We allow import tools to continue to
write short_name to their export files,
and then we simply ignore the field
at import time.
- We change functions like do_create_user,
create_user_profile, etc.
- We keep short_name in the /json/bots
API. (It actually gets turned into
an email.)
- We don't modify our LDAP code much
here.
Log RealmAuditLog in do_set_realm_property and do_remove_realm_domain.
Tests for the changes are written in test_events, because doing so
avoids duplicating code for test_change_realm_property.
According to @showell:
> All the slow decorators can die. That was a failed experiment of
> mine from 2014 days. I have been meaning to kill them for a couple years
> now. I wrote this with the best of intentions, but I believe it's
> now just cruft. We never made a "fast" mode, for one. And we kept
> writing more and more slow tests, haha.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
I also fix the code formatting so it's more
considerate of folks that have smaller monitors
or do side-by-side editing. And it's more
diff friendly as well.
I want to avoid creating the same schema checkers
for multiple tests, and it's easiest to create the
schema checkers at module scope (and we possibly
even move them to another file eventually).
This will allow us to more easily instrument our
code to find duplicate schemas.
The goal here is to make test_events.py be
primarily focused on testing specific actions
and then validating:
- schemas
- apply_events logic
Then the new module here is a bit of a
kitchen sink of old tests, although it's
primarily focused on the actual mechanics
of the event system:
- logging
- register
- queue_ids
- fetching initial state
- client descriptors
The classes toward the bottom arguably
should go into more feature-specific
test modules, but the main goal now is
to purify test_events.py. (We may eventually
want to rename test_events.py to something
more like test_action_events.py, but the
current name has some doc references and
tribal knowledge around it.)
There is still some miscellaneous cleanup that
has to happen for things like analytics queries
and dead code in node tests, but this should
remove the main use of pointers in the backend.
(We will also still need to drop the DB field.)
This commit is the first of a few commits which aim to change all the
bugdown references to markdown. This commit renames the files,
the file path mentions, and changes the imports.
Variables and other references to bugdown will be renamed in
subsequent commits.
Change variable `name` to `date_sent`, as `name` actually stores
the date sent. Also change the data types of `name` and `create_time`
to integer, as they actually have empty decimal values.
This avoids some code duplication as well
as adding some missing fields.
We also use check_dict_only to prevent
folks from adding new fields to the
relevant events without updating these
tests. (A bigger sweep comes later.)
As the code comment indicates, we just
use a strict check here rather than
pretending that the test exercises a
more complicated schema for the config
data, which is dynamic in nature.
Cleaning up config_data is outside the
scope of this PR; my main goal is to
eliminate check_dict calls (usually in favor
of check_dict_only).
Because of other validation on these values, I don't believe any of
these does anything different, but these changes improve readability
and likely make GitHub's code scanners happy.
We now have our muted topics use tuples internally,
which allows us to tighten up the annotation
for get_topic_mutes, as well as our schema
checking.
We want to deprecate sub_validator=None
for check_list, so we also introduce
check_tuple here. Now we also want to deprecate
check_tuple, but it's at least isolated now.
We will use this for data structures that are tuples,
but which are sent as lists over the wire. Fortunately,
we don't have too many of those.
The plan is to convert tuples to dictionaries,
but backward compatibility may be tricky in some
places.
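A rough sketch of check_tuple under those constraints (it accepts
lists too, since tuples arrive as lists over the wire; the Validator
alias is a simplification):

    from typing import Callable, Sequence

    from django.core.exceptions import ValidationError

    Validator = Callable[[str, object], None]

    def check_tuple(sub_validators: Sequence[Validator]) -> Validator:
        def f(var_name: str, val: object) -> None:
            if not isinstance(val, (list, tuple)):
                raise ValidationError(f"{var_name} is not a tuple")
            if len(val) != len(sub_validators):
                raise ValidationError(
                    f"{var_name} should have exactly {len(sub_validators)} items"
                )
            # Validate each position against its own sub-validator.
            for i, (sub_validator, item) in enumerate(zip(sub_validators, val)):
                sub_validator(f"{var_name}[{i}]", item)
        return f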
In implementing the automatic theme detection feature, we make the
following changes in the backend, frontend, and documentation.
This replaces the previous night_mode boolean with an enum, with the
default value being to use the prefers-color-scheme feature of the
operating system to determine which theme to use.
Fixes: #14451.
Co-authored-by: @kPerikou <44238834+kPerikou@users.noreply.github.com>
Old: a validator returns None on success and returns an error string
on error.
New: a validator returns the validated value on success and raises
ValidationError on error.
This allows mypy to catch mismatches between the annotated type of a
REQ parameter and the type that the validator actually validates.
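For example, check_int under each convention (illustrative; names are
made up for the comparison):

    from typing import Optional

    from django.core.exceptions import ValidationError

    # Old convention: return an error string, or None on success.
    def check_int_old(var_name: str, val: object) -> Optional[str]:
        if not isinstance(val, int):
            return f"{var_name} is not an integer"
        return None

    # New convention: return the validated value, or raise. Returning
    # the value gives mypy a real type to check against the REQ
    # parameter's annotation.
    def check_int_new(var_name: str, val: object) -> int:
        if not isinstance(val, int):
            raise ValidationError(f"{var_name} is not an integer")
        return val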
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Two things were broken here:
* we were using name(s) instead of id(s)
* we were always sending lists that only
had one element
Now we just send "stream_id" instead of "subscriptions".
If anything, we should start sending a list of users
instead of a list of streams. For example, see
the code below:
    if peer_user_ids:
        for new_user_id in new_user_ids:
            event = dict(type="subscription", op="peer_add",
                         stream_id=stream.id,
                         user_id=new_user_id)
            send_event(realm, event, peer_user_ids)
Note that this only affects the webapp, as mobile/ZT
don't use this.
This adds a new client_capability that clients such as the mobile apps
can use to avoid unreasonable network bandwidth consumed sending
avatar URLs in organizations with 10,000s of users.
Clients don't strictly need this data, as they can always use the
/avatar/{user_id} endpoint to fetch the avatar if desired.
This will be more efficient especially for realms with
10,000+ users because the avatar URLs would increase the
payload size significantly and cost us more bandwidth.
Fixes #15287.
This commit adds backend support for setting message_retention_days
while creating streams and updating it for an existing stream. We only
allow organization owners to set/update it for a stream.
The 'message_retention_days' field for a stream existed previously as
well, but there was no way to set it while creating a stream or to
update it for an existing stream using any endpoint.
Google has removed the Google Hangouts brand, thus we are removing
it as a video chat provider option.
This commit removes the Google Hangouts integration and adds a
migration that sets all realms that were using Hangouts as their
video chat provider to the default, Jitsi.
With changes by tabbott to improve the overall video call documentation.
Fixes: #15298.
This commit removes is_old_stream property from the stream objects
returned by the API. This property was unnecessary and is essentially
equivalent to 'stream_weekly_traffic != null'.
We compute sub.is_old_stream in stream_data.update_calculated_fields
in frontend code and it is used to check whether we have a non-null
stream_weekly_traffic or not.
Fixes #15181.
Fixes #15285
This event will be used more now for guest users when moving a
topic between streams (see #15277). So, instead of deleting
messages in the topic as part of separate events, which is very
slow and a bad UX, we now handle the messages to delete in bulk,
which is a much better UX.
This is designed to have no user-facing change unless the client
declares bulk_message_deletion in its client_capabilities.
Clients that do so will receive a single bulk event for bulk deletions
of messages within a single conversation (topic or PM thread).
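Sketch of the difference in event shape (all other fields elided):

    # Without the capability: one event per message.
    {"type": "delete_message", "message_id": 101}

    # With bulk_message_deletion: one event per conversation.
    {"type": "delete_message", "message_ids": [101, 102, 103]}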
Backend implementation of #15285.
Fixes #2665.
Regenerated by tabbott with `lint --fix` after a rebase and change in
parameters.
Note from tabbott: In a few cases, this converts technical debt in the
form of unsorted imports into different technical debt in the form of
our largest files having very long, ugly import sequences at the
start. I expect this change will increase pressure for us to split
those files, which isn't a bad thing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Automatically generated by the following script, based on the output
of lint with flake8-comma:
    import re
    import sys

    last_filename = None
    last_row = None
    lines = []

    for msg in sys.stdin:
        m = re.match(
            r"\x1b\[35mflake8 \|\x1b\[0m \x1b\[1;31m(.+):(\d+):(\d+): (\w+)", msg
        )
        if m:
            filename, row_str, col_str, err = m.groups()
            row, col = int(row_str), int(col_str)

            if filename == last_filename:
                assert last_row != row
            else:
                if last_filename is not None:
                    with open(last_filename, "w") as f:
                        f.writelines(lines)
                with open(filename) as f:
                    lines = f.readlines()
                last_filename = filename
            last_row = row

            line = lines[row - 1]
            if err in ["C812", "C815"]:
                lines[row - 1] = line[: col - 1] + "," + line[col - 1 :]
            elif err in ["C819"]:
                assert line[col - 2] == ","
                lines[row - 1] = line[: col - 2] + line[col - 1 :].lstrip(" ")

    if last_filename is not None:
        with open(last_filename, "w") as f:
            f.writelines(lines)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Generated by pyupgrade --py36-plus --keep-percent-format, but with the
NamedTuple changes reverted (see commit
ba7906a3c6, #15132).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
An option to disable breadcrumb messages was added in both the
message edit form and the topic edit stream popover.
Users can now select, via checkboxes in the UI, which streams
receive the notification of a topic's stream edit.
This reimplements our Zoom video call integration to use an OAuth
application. In addition to providing a cleaner setup experience,
especially on zulipchat.com where the server administrators can have
done the app registration already, it also fixes the limitation of the
previous integration that it could only have one call active at a time
when set up with typical Zoom API keys.
Fixes #11672.
Co-authored-by: Marco Burstein <marco@marco.how>
Co-authored-by: Tim Abbott <tabbott@zulipchat.com>
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
This commit changes the person dict in event sent by do_change_user_role
to send role instead of is_admin or is_guest.
This makes things much more straightforward for our upcoming primary
owners feature.
mock is just a backport of the standard library’s unittest.mock now.
The SAMLAuthBackendTest change is needed because
MagicMock.call_args.args wasn’t introduced until Python
3.8 (https://bugs.python.org/issue21269).
The PROVISION_VERSION bump is skipped because mock is still an
indirect dev requirement via moto.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit merges do_change_is_admin and do_change_is_guest to a
single function do_change_user_role which will be used for changing
role of users.
do_change_is_api_super_user is added as a separate function for
changing is_api_super_user field of UserProfile.
I'm not sure exactly what series of history got us here, but we were
fetching the mobile_user_ids data for all users in the organization,
regardless of whether they were recently active (and thus relevant for
the main presence data set). And doing so in a sloppy fashion
(sending every user ID over the wire, rather than just having the
database join on Realm).
Fixing this saves a factor of 4-5 on the total runtime of a presence
request on organizations with 10Ks of users like chat.zulip.org; more
like 25% in an organization with 150. Since large organizations are
very heavily weighted in the overall cost of presence, this is a huge
win.
Fixes part of #13734.
The `email` field for identifying the user being modified in these
events was not used by either the webapp or other official Zulip
clients. Instead, it was legacy data from before we switched years
ago to sending user_id fields as the correct way to uniquely identify
a user.
When a user changes their avatar image, the user's avatar in popovers
wasn't being correctly updated, because of browser caching of the
avatar image. We add a version to the request for the image, in the
same format we use elsewhere, so the browser knows whether to use the
cached image or to make a new request to the server.
Edited by Tim to preserve/fix sort orders in some tests, and update
zulip_feature_level.
Fixes: #14290
We remove the `owner` field from `page_params/realm_bots`
and bot-related events.
In the recent commit 155f6da8ba
we added `owner_id`, which we now use everywhere we need
bot owners for.
We also bump the `API_FEATURE_LEVEL` to 5 here. We
had already documented this in the prior commit to
add `owner_id`.
Note that we don't have to worry about mobile/ZT clients
here--we only deal with bot data in the webapp.
For the below payloads we want `owner_id` instead
of `owner`, which we should deprecate. (The
`owner` field is actually an email, which is
not a stable key.)
page_params.realm_bots
realm_bot/add
realm_bot/update
IMPORTANT NOTE: Some of the data served in
these payloads is cached with the key
`bot_dicts_in_realm_cache_key`.
For page_params, we get the new field
via `get_owned_bot_dicts`.
For realm_bot/add, we modified
`created_bot_event`.
For realm_bot/update, we modified
`do_change_bot_owner`.
On the JS side, we no longer
look up the bot's owner directly in
`server_events_dispatch` when we get
a realm_bot/update event. Instead, we
delegate that job to `bot_data.js`.
I modified the tests accordingly.
It's a preliminary step to enable message_retention_setting in the org
settings UI, which is a feature available only on non-limited plans. So
we require a page_params property that tells us whether the Zulip realm
is on a limited plan.
Prior to this change, there were reports of 500s in
production due to `export.extra_data` being a
Nonetype. This was reproducible using the s3
backend in development when a row was created in
the `RealmAuditLog` table, but the export failed in
the `DeferredWorker`. This left an entry lying
around that was never updated with an `extra_data`
field.
To fix this, we catch any exceptions in the
`DeferredWorker`, and then update `extra_data` to
encode the failure. We also fix the fact that we
never updated the export UI table with pending exports.
These changes also negated the use for the somewhat
hacky `clear_success_banner` logic.