response: Remove "result: partially_completed" for success responses.

In feature levels 153 and 154, a new value of "partially_completed"
for `result` in a success response (HTTP status code 200) was added
for two endpoints that process messages in batches:
/api/delete-topic and /api/mark-all-as-read.

Prior to these changes, `result` was either "success" or "error" for
all responses, which was a useful API invariant to have for clients.

So, here we remove "partially_completed" as a potential value for
"result" in a response. Instead, for the two endpoints noted above,
we return a boolean field "complete" that indicates whether the
request deleted/marked as read all of the targeted messages
(complete: true) or only some of them (complete: false).

The "code" field for an error string that was also returned as part
of a partially completed response is removed in these changes as
well.
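For concreteness, a sketch of the two response shapes for a request
that times out partway through; the field values are taken from the
updated tests and OpenAPI examples below, and the variable names are
only illustrative:

```python
# Illustrative only: HTTP 200 response bodies for a request that timed
# out after processing some, but not all, of the targeted messages.

# Before (feature levels 153/154 through 210):
partial_response_before = {
    "result": "partially_completed",
    "msg": "",
    "code": "REQUEST_TIMEOUT",
}

# After (feature level 211): `result` is always "success" on HTTP 200,
# and `complete` reports whether every targeted message was processed.
partial_response_after = {
    "result": "success",
    "msg": "",
    "complete": False,
}
```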

The web app does not currently use the /api/mark-all-as-read
endpoint, but it does use the /api/delete-topic endpoint, so these
changes update that code to check the `complete` boolean instead of
the string value of `result`.
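To illustrate the client-side pattern (not code from this commit), a
minimal sketch of retrying a topic deletion over the HTTP API until
the server reports completion; the server URL, credentials, and the
use of `requests` with API-key basic auth are assumptions, and the cap
of 10 attempts mirrors the web app's `failures >= 9` guard:

```python
# A minimal sketch, not part of this commit: repeat the delete-topic
# request while the server reports "complete": false.
import requests


def delete_topic_completely(
    site: str, email: str, api_key: str, stream_id: int, topic_name: str
) -> bool:
    for _ in range(10):  # roughly mirrors the web app's retry cap
        response = requests.post(
            f"{site}/api/v1/streams/{stream_id}/delete_topic",
            auth=(email, api_key),  # Zulip's email + API-key basic auth
            data={"topic_name": topic_name},
        )
        response.raise_for_status()
        # Treat a missing "complete" field (older servers) as done.
        if response.json().get("complete", True):
            return True  # all targeted messages were deleted
    return False  # still incomplete after 10 batched attempts
```

Against a server older than feature level 211 the `complete` field is
absent, so this sketch simply trusts the first success response.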
Lauryn Menard 2023-09-07 17:45:10 +02:00 committed by Tim Abbott
parent 8d29ad7325
commit 31daef7f79
10 changed files with 131 additions and 93 deletions


@@ -20,6 +20,17 @@ format used by the Zulip server that they are interacting with.
 
 ## Changes in Zulip 8.0
 
+**Feature level 211**
+
+* [`POST /streams/{stream_id}/delete_topic`](/api/delete-topic),
+  [`POST /mark_all_as_read`](/api/mark-all-as-read):
+  Added a `complete` boolean field in the success response to indicate
+  whether all or only some of the targeted messages were processed.
+  This replaces the use of `"result": "partially_completed"` (introduced
+  in feature levels 154 and 153), so that these endpoints now send a
+  `result` string of either `"success"` or `"error"`, like the rest of
+  the Zulip API.
+
 **Feature level 210**
 
 * [`POST /register`](/api/register-queue), [`PATCH /settings`](/api/update-settings),

@@ -538,17 +549,20 @@ No changes; feature level used for Zulip 6.0 release.
 
 **Feature level 154**
 
 * [`POST /streams/{stream_id}/delete_topic`](/api/delete-topic):
-  When the process of deleting messages times out, a success response
-  with "partially_completed" result will now be returned by the server,
-  analogically to the `/mark_all_as_read` endpoint.
+  When the process of deleting messages times out, but successfully
+  deletes some messages in the topic (see feature level 147 for when
+  this endpoint started deleting messages in batches), a success
+  response with `"result": "partially_completed"` will now be returned
+  by the server, analogically to the `POST /mark_all_as_read` endpoint
+  (see feature level 153 entry below).
 
 **Feature level 153**
 
 * [`POST /mark_all_as_read`](/api/mark-all-as-read): Messages are now
   marked as read in batches, so that progress will be made even if the
   request times out because of an extremely large number of unread
-  messages to process. Upon timeout, a success response with a
-  "partially_completed" result will be returned by the server.
+  messages to process. Upon timeout, a success response with
+  `"result": "partially_completed"` will be returned by the server.
 
 **Feature level 152**


@@ -33,7 +33,7 @@ DESKTOP_WARNING_VERSION = "5.9.3"
 # Changes should be accompanied by documentation explaining what the
 # new level means in api_docs/changelog.md, as well as "**Changes**"
 # entries in the endpoint's documentation in `zulip.yaml`.
-API_FEATURE_LEVEL = 210
+API_FEATURE_LEVEL = 211
 
 # Bump the minor PROVISION_VERSION to indicate that folks should provision
 # only when going from an old version of the code to a newer version. Bump


@@ -1156,7 +1156,7 @@ export function delete_topic(stream_id, topic_name, failures = 0) {
             topic_name,
         },
         success(data) {
-            if (data.result === "partially_completed") {
+            if (data.complete === false) {
                 if (failures >= 9) {
                     // Don't keep retrying indefinitely to avoid DoSing the server.
                     return;


@@ -105,10 +105,6 @@ def json_success(request: HttpRequest, data: Mapping[str, Any] = {}) -> MutableJ
     return json_response(data=data)
 
 
-def json_partial_success(request: HttpRequest, data: Mapping[str, Any] = {}) -> MutableJsonResponse:
-    return json_response(res_type="partially_completed", data=data, status=200)
-
-
 def json_response_from_error(exception: JsonableError) -> MutableJsonResponse:
     """
     This should only be needed in middleware; in app code, just raise.


@@ -332,10 +332,11 @@ def get_openapi_parameters(
 
 
 def get_openapi_return_values(endpoint: str, method: str) -> Dict[str, Any]:
     operation = openapi_spec.openapi()["paths"][endpoint][method.lower()]
     schema = operation["responses"]["200"]["content"]["application/json"]["schema"]
-    # In cases where we have used oneOf, the schemas only differ in examples
-    # So we can choose any.
-    if "oneOf" in schema:
-        schema = schema["oneOf"][0]
+    # We do not currently have documented endpoints that have multiple schemas
+    # ("oneOf", "anyOf", "allOf") for success ("200") responses. If this changes,
+    # then the assertion below will need to be removed, and this function updated
+    # so that endpoint responses will be rendered as expected.
+    assert "properties" in schema
     return schema["properties"]

@@ -390,7 +391,7 @@ def validate_against_openapi_schema(
         return True
     # Check if the response matches its code
     if status_code.startswith("2") and (
-        content.get("result", "success").lower() not in ["success", "partially_completed"]
+        content.get("result", "success").lower() != "success"
     ):  # nocoverage
         raise SchemaError("Response is not 200 but is validating against 200 schema")
     # Code is not declared but appears in various 400 responses. If


@@ -4829,38 +4829,50 @@ paths:
       description: |
         Marks all of the current user's unread messages as read.
 
-        **Changes**: Before Zulip 6.0 (feature level 153), this
-        request did a single atomic operation, which could time out
-        with 10,000s of unread messages to mark as read.
-
-        It now marks messages as read in batches, starting with the
-        newest messages, so that progress will be made even if the
-        request times out.
-
-        If the server's processing is interrupted by a timeout, it
-        will return an HTTP 200 success response with result
-        "partially_completed". A correct client should repeat the
-        request when handling such a response.
+        Because this endpoint marks messages as read in batches, it is possible
+        for the request to time out after only marking some messages as read.
+        When this happens, the `complete` boolean field in the success response
+        will be `false`. Clients should repeat the request when handling such a
+        response. If all messages were marked as read, then the success response
+        will return `"complete": true`.
+
+        **Changes**: Before Zulip 8.0 (feature level 211), if the server's
+        processing was interrupted by a timeout, but some messages were marked
+        as read, then it would return `"result": "partially_completed"`, along
+        with a `code` field for an error string, in the success response to
+        indicate that there was a timeout and that the client should repeat the
+        request.
+
+        Before Zulip 6.0 (feature level 153), this request did a single atomic
+        operation, which could time out with 10,000s of unread messages to mark
+        as read. As of this feature level, messages are marked as read in
+        batches, starting with the newest messages, so that progress is made
+        even if the request times out. And, instead of returning an error when
+        the request times out and some messages have been marked as read, a
+        success response with `"result": "partially_completed"` is returned.
       responses:
         "200":
-          description: Success or partial success.
+          description: Success.
           content:
             application/json:
               schema:
-                oneOf:
-                  - $ref: "#/components/schemas/JsonSuccess"
-                  - allOf:
-                      - $ref: "#/components/schemas/PartiallyCompleted"
-                      - example:
-                          {
                            "code": "REQUEST_TIMEOUT",
                            "msg": "",
                            "result": "partially_completed",
-                          }
-                        description: |
-                          If the request exceeds its processing time limit after having
-                          successfully marked some messages as read, response code 200
-                          with result "partially_completed" and code "REQUEST_TIMEOUT" will be returned like this:
+                allOf:
+                  - $ref: "#/components/schemas/JsonSuccessBase"
+                  - additionalProperties: false
+                    required:
+                      - complete
+                    properties:
+                      result: {}
+                      msg: {}
+                      ignored_parameters_unsupported: {}
+                      complete:
+                        type: boolean
+                        description: |
+                          Whether all unread messages were marked as read.
+
+                          Will be `false` if the request successfully marked
+                          some, but not all, messages as read.
+                    example: {"msg": "", "result": "success", "complete": true}
   /mark_stream_as_read:
     post:
       operationId: mark-stream-as-read
@@ -16247,19 +16259,34 @@ paths:
       description: |
         Delete all messages in a topic.
 
-        Topics are a field on messages (not an independent
-        data structure), so deleting all the messages in the topic
-        deletes the topic from Zulip.
-
-        **Changes**: As of Zulip 6.0 (feature level 154), in case
-        of timeout, a success response with "partially_completed"
-        result will now be returned.
-
-        Before Zulip 6.0 (feature level 147), this request did a
-        single atomic operation, which could time out for very large
-        topics. It now deletes messages in batches, starting with
-        the newest messages, so that progress will be made even if
-        the request times out.
+        Topics are a field on messages (not an independent data structure), so
+        deleting all the messages in the topic deletes the topic from Zulip.
+
+        Because this endpoint deletes messages in batches, it is possible for
+        the request to time out after only deleting some messages in the topic.
+        When this happens, the `complete` boolean field in the success response
+        will be `false`. Clients should repeat the request when handling such a
+        response. If all messages in the topic were deleted, then the success
+        response will return `"complete": true`.
+
+        **Changes**: Before Zulip 8.0 (feature level 211), if the server's
+        processing was interrupted by a timeout, but some messages in the topic
+        were deleted, then it would return `"result": "partially_completed"`,
+        along with a `code` field for an error string, in the success response
+        to indicate that there was a timeout and that the client should repeat
+        the request.
+
+        As of Zulip 6.0 (feature level 154), instead of returning an error
+        response when a request times out after successfully deleting some of
+        the messages in the topic, a success response is returned with
+        `"result": "partially_completed"` to indicate that some messages were
+        deleted.
+
+        Before Zulip 6.0 (feature level 147), this request did a single atomic
+        operation, which could time out for very large topics. As of this
+        feature level, messages are deleted in batches, starting with the newest
+        messages, so that progress is made even if the request times out and
+        returns an error.
       x-requires-administrator: true
       parameters:
         - $ref: "#/components/parameters/StreamIdInPath"
@@ -16273,24 +16300,27 @@ paths:
             required: true
       responses:
         "200":
-          description: Success or partial success.
+          description: Success.
           content:
             application/json:
               schema:
-                oneOf:
-                  - $ref: "#/components/schemas/JsonSuccess"
-                  - allOf:
-                      - $ref: "#/components/schemas/PartiallyCompleted"
-                      - example:
-                          {
                            "code": "REQUEST_TIMEOUT",
                            "msg": "",
                            "result": "partially_completed",
-                          }
-                        description: |
-                          If the request exceeds its processing time limit after having
-                          successfully marked some messages as read, response code 200
-                          with result "partially_completed" and code "REQUEST_TIMEOUT" will be returned like this:
+                allOf:
+                  - $ref: "#/components/schemas/JsonSuccessBase"
+                  - additionalProperties: false
+                    required:
+                      - complete
+                    properties:
+                      result: {}
+                      msg: {}
+                      ignored_parameters_unsupported: {}
+                      complete:
+                        type: boolean
+                        description: |
+                          Whether all messages in the topic were deleted.
+
+                          Will be `false` if the request successfully deleted
+                          some, but not all, messages in the topic.
+                    example: {"msg": "", "result": "success", "complete": true}
         "400":
           description: Bad request.
           content:


@@ -66,7 +66,8 @@ class FirstUnreadAnchorTests(ZulipTestCase):
         # Mark all existing messages as read
         with timeout_mock("zerver.views.message_flags"):
             result = self.client_post("/json/mark_all_as_read")
-        self.assert_json_success(result)
+        result_dict = self.assert_json_success(result)
+        self.assertTrue(result_dict["complete"])
 
         # Send a new message (this will be unread)
         new_message_id = self.send_stream_message(self.example_user("othello"), "Verona", "test")

@@ -126,7 +127,8 @@ class FirstUnreadAnchorTests(ZulipTestCase):
         with timeout_mock("zerver.views.message_flags"):
             result = self.client_post("/json/mark_all_as_read")
-        self.assert_json_success(result)
+        result_dict = self.assert_json_success(result)
+        self.assertTrue(result_dict["complete"])
 
         new_message_id = self.send_stream_message(self.example_user("othello"), "Verona", "test")

@@ -674,7 +676,8 @@ class MarkAllAsReadEndpointTest(ZulipTestCase):
         self.assertNotEqual(unread_count, 0)
         with timeout_mock("zerver.views.message_flags"):
             result = self.client_post("/json/mark_all_as_read", {})
-        self.assert_json_success(result)
+        result_dict = self.assert_json_success(result)
+        self.assertTrue(result_dict["complete"])
 
         new_unread_count = (
             UserMessage.objects.filter(user_profile=hamlet)

@@ -687,12 +690,8 @@ class MarkAllAsReadEndpointTest(ZulipTestCase):
         self.login("hamlet")
         with mock.patch("zerver.views.message_flags.timeout", side_effect=TimeoutExpiredError):
             result = self.client_post("/json/mark_all_as_read", {})
-        self.assertEqual(result.status_code, 200)
-
-        result_dict = orjson.loads(result.content)
-        self.assertEqual(
-            result_dict, {"result": "partially_completed", "msg": "", "code": "REQUEST_TIMEOUT"}
-        )
+        result_dict = self.assert_json_success(result)
+        self.assertFalse(result_dict["complete"])
 
 
 class GetUnreadMsgsTest(ZulipTestCase):


@@ -1,6 +1,5 @@
 from unittest import mock
 
-import orjson
 from django.utils.timezone import now as timezone_now
 
 from zerver.actions.streams import do_change_stream_permission

@@ -264,7 +263,8 @@ class TopicDeleteTest(ZulipTestCase):
                 "topic_name": topic_name,
             },
         )
-        self.assert_json_success(result)
+        result_dict = self.assert_json_success(result)
+        self.assertTrue(result_dict["complete"])
         self.assertTrue(Message.objects.filter(id=last_msg_id).exists())
 
         # Try to delete all messages in the topic again. There are no messages accessible

@@ -275,7 +275,8 @@ class TopicDeleteTest(ZulipTestCase):
                 "topic_name": topic_name,
             },
         )
-        self.assert_json_success(result)
+        result_dict = self.assert_json_success(result)
+        self.assertTrue(result_dict["complete"])
         self.assertTrue(Message.objects.filter(id=last_msg_id).exists())
 
         # Make the stream's history public to subscribers

@@ -294,7 +295,8 @@ class TopicDeleteTest(ZulipTestCase):
                 "topic_name": topic_name,
             },
         )
-        self.assert_json_success(result)
+        result_dict = self.assert_json_success(result)
+        self.assertTrue(result_dict["complete"])
         self.assertFalse(Message.objects.filter(id=last_msg_id).exists())
         self.assertTrue(Message.objects.filter(id=initial_last_msg_id).exists())

@@ -306,7 +308,8 @@ class TopicDeleteTest(ZulipTestCase):
                 "topic_name": topic_name,
             },
         )
-        self.assert_json_success(result)
+        result_dict = self.assert_json_success(result)
+        self.assertTrue(result_dict["complete"])
         self.assertFalse(Message.objects.filter(id=last_msg_id).exists())
         self.assertTrue(Message.objects.filter(id=initial_last_msg_id).exists())

@@ -329,9 +332,5 @@ class TopicDeleteTest(ZulipTestCase):
                 "topic_name": topic_name,
             },
         )
-        self.assertEqual(result.status_code, 200)
-
-        result_dict = orjson.loads(result.content)
-        self.assertEqual(
-            result_dict, {"result": "partially_completed", "msg": "", "code": "REQUEST_TIMEOUT"}
-        )
+        result_dict = self.assert_json_success(result)
+        self.assertFalse(result_dict["complete"])


@@ -8,7 +8,7 @@ from zerver.actions.message_flags import (
     do_mark_stream_messages_as_read,
     do_update_message_flags,
 )
-from zerver.lib.exceptions import ErrorCode, JsonableError
+from zerver.lib.exceptions import JsonableError
 from zerver.lib.narrow import (
     OptionalNarrowListT,
     fetch_messages,

@@ -16,7 +16,7 @@ from zerver.lib.narrow import (
     parse_anchor_value,
 )
 from zerver.lib.request import REQ, RequestNotes, has_request_variables
-from zerver.lib.response import json_partial_success, json_success
+from zerver.lib.response import json_success
 from zerver.lib.streams import access_stream_by_id
 from zerver.lib.timeout import TimeoutExpiredError, timeout
 from zerver.lib.topic import user_message_exists_for_topic

@@ -123,13 +123,13 @@ def mark_all_as_read(request: HttpRequest, user_profile: UserProfile) -> HttpRes
     try:
         count = timeout(50, lambda: do_mark_all_as_read(user_profile))
     except TimeoutExpiredError:
-        return json_partial_success(request, data={"code": ErrorCode.REQUEST_TIMEOUT.name})
+        return json_success(request, data={"complete": False})
 
     log_data_str = f"[{count} updated]"
     assert request_notes.log_data is not None
     request_notes.log_data["extra"] = log_data_str
 
-    return json_success(request)
+    return json_success(request, data={"complete": True})
 
 
 @has_request_variables


@@ -48,14 +48,13 @@ from zerver.decorator import (
 )
 from zerver.lib.default_streams import get_default_stream_ids_for_realm
 from zerver.lib.exceptions import (
-    ErrorCode,
     JsonableError,
     OrganizationOwnerRequiredError,
     ResourceNotFoundError,
 )
 from zerver.lib.mention import MentionBackend, silent_mention_syntax_for_user
 from zerver.lib.request import REQ, has_request_variables
-from zerver.lib.response import json_partial_success, json_success
+from zerver.lib.response import json_success
 from zerver.lib.retention import STREAM_MESSAGE_BATCH_SIZE as RETENTION_STREAM_MESSAGE_BATCH_SIZE
 from zerver.lib.retention import parse_message_retention_days
 from zerver.lib.stream_traffic import get_streams_traffic

@@ -965,9 +964,9 @@ def delete_in_topic(
     try:
         timeout(50, delete_in_batches)
     except TimeoutExpiredError:
-        return json_partial_success(request, data={"code": ErrorCode.REQUEST_TIMEOUT.name})
+        return json_success(request, data={"complete": False})
 
-    return json_success(request)
+    return json_success(request, data={"complete": True})
 
 
 @require_post