The possibility for it being null was likely an oversight -- it should
have been removed after the early migrations to backfill the field
when it was added.
We've confirmed there are no existing violations of this invariant in
Zulip Cloud.
This is a natural follow-up to
93e8740218 - invitations sent by users
deactivated before the commit still need to be revoked, via a
migration.
The logic for finding the Confirmations to deactivate is based on
get_valid_invite_confirmations_generated_by_user in actions.py.
Co-authored-by: Steve Howell <showell@zulip.com>
Co-authored-by: Tim Abbott <tabbott@zulip.com>
This commit adds the backend functionality to
mark messages as unread through update_message_flags
with `unread` flag and `remove` operation.
We also manage incoming events in the webapp.
Tweaked by tabbott to simplify the implementation and add an API
feature level update to the documentation.
This commit was originally drafted by showell, and showell
also finalized the changes. Many thanks to Suyash for
the main work here, which was to get all the tests and
documentation work moving forward.
This commit creates a new TypedDict RealmPlaygroundDict for realm
playground objects. Now the list of playgrounds in the events sent
to clients and the "added_playground" field of RealmAuditLog entry
use RealmPlaygroundDict instead of Dict.
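As a rough sketch, such a TypedDict looks something like the
following; the field names here are illustrative assumptions about
the realm playground object, not copied from the commit:

```
from typing import TypedDict

class RealmPlaygroundDict(TypedDict):
    # Field names are assumptions for illustration.
    id: int
    name: str
    pygments_language: str
    url_prefix: str
```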
This commit modifies the notify_realm_playgrounds function to accept
realm_playgrounds as an argument from the caller instead of computing
it in the function. This avoids duplicate queries, since the realm
playgrounds list will also be required in its callers in further
commits.
Clearing the sessions inside the transaction makes Zulip vulnerable to
a narrow window where the deleted session has not yet been committed,
but has been removed from the memcached cache. During this window, a
request with the session-id which has just been deleted can
successfully re-fill the memcached cache, as the in-database delete is
not yet committed, and thus not yet visible. After the delete
transaction commits, the cache will be left with a cached session,
which allows further site access until it expires (after
SESSION_COOKIE_AGE seconds), is ejected from the cache due to memory
pressure, or the server is upgraded.
Move the session deletion outside of the transaction.
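A minimal sketch of the pattern, with illustrative helper names (the
real code paths differ); an alternative would be
django.db.transaction.on_commit:

```
from django.db import transaction

def deactivate_user(user_profile):
    with transaction.atomic():
        # Database updates that deactivate the user.
        user_profile.is_active = False
        user_profile.save(update_fields=["is_active"])
        # Deleting sessions *here* could let a concurrent request
        # re-fill memcached from the not-yet-committed database state.

    # Delete (and uncache) the sessions only after the transaction
    # has committed.
    delete_user_sessions(user_profile)  # illustrative helper
```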
Because the testsuite runs inside of a transaction, it is impossible
to test this in CI; the testsuite uses the non-caching
`django.contrib.sessions.backends.db` backend, regardless. The test
added in this commit thus does not fail before this commit; it is
merely a baseline expectation that the session should be deleted
somehow, and does not exercise the assert added in the previous
commit.
Commit ab8aae6d0c (#12161) incorrectly
assumed that ‘new’ is a string. In the case of change == "links",
it’s a dict.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
These error messages weren't marked for translation.
DEACTIVATED_ACCOUNT_ERROR and PASSWORD_TOO_WEAK_ERROR are used in
several places and imported, so we can't move them to be in-line errors
and we keep them at top-level, marked with gettext_lazy.
Using mark_safe on errors whose content is taken from user input is
clearly a bad idea. With that said, this code was not exploitable in
the current state, given that username is a value you have to POST to
/login/, and the endpoint is CSRF-protected.
We also remove the use of mark_safe from the errors without user input
in them, which are just plaintext and thus don't need it.
Adds documentation for admins to manage users via the user profile
modal for these actions:
- Deactivating a user
- Changing a user's role
- Changing a user's name
Creates two new tab sections because we still want to document
the ability to do these actions through the users section in
the organizational settings modal.
Also cleans up some text in the help center article for changing
a user's role.
Fixes #21318.
Fixes #21415.
Adds content on user group permissions / management to the general
help center article for user groups (`/help/user-groups`) and
removes the then redundant `/help/restrict-user-group-management`
article.
Redirects links in help center and api documentation from deleted
article to the new configure user group settings section of
`/help/user-groups`.
Fixes #21383.
This commit adds a cron job which runs every hour to add users to the
full members system group once they become full members.
This should ensure that full member status is available no more than
an hour after configuration suggests it should be.
There can be cases when system group data is not present while
importing, like when importing from other products, so this
commit adds code to create system user groups and add users to
them according to their role.
This commit adds users to the appropriate system user group
based on their role. We also update the user groups when
changing the role of a user.
We also add a migration to add existing users to the appropriate
user groups.
This commit adds update_users_in_full_members_system_group, which
is currently used to update the full members group on changing the
role of a user. This function will be modified in the next commit so
that it can be used to update the full members group on changing the
waiting_period_threshold setting of the realm.
We pass a list of user ids instead of user profile objects to
remove_members_from_user_group. We still need to call user_id_to_users
in the view function instead of directly passing the ids to
remove_members_from_user_group, to make sure all the ids are valid.
We pass a list of user ids instead of user profile objects to
bulk_add_members_to_user_group. We still need to call user_id_to_users
in the view function instead of directly passing the ids to
bulk_add_members_to_user_group, to make sure all the ids are valid.
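A sketch of the calling pattern being described; the view name and
exact signatures here are illustrative:

```
def add_members_to_group_backend(request, user_profile, user_group, member_ids):
    # Validate the ids up front; user_id_to_users rejects ids that do
    # not correspond to accessible users in the realm.
    member_users = user_id_to_users(member_ids, user_profile.realm)
    # The group-membership helper now takes plain user ids.
    bulk_add_members_to_user_group(user_group, [user.id for user in member_users])
```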
The previous behavior was to log only the uuid if it was provided by
the remote server, but that's insufficient: the user may actually have
no devices registered with uuids, in which case we (at the bouncer) end
up sending notifications to id-based registrations. Not having that id
logged makes it impossible to figure out what's going on.
Fixes #18017.
In previous commits, the change to the bouncer API was introduced to
support this and then a series of migrations added .uuid to
UserProfiles.
Now the code for self-hosted servers that makes requests
to the bouncer is changed to make use of it.
This is in a separate commit to make deployment easier. It ensures that
this is only marked non-null after the backfill migration (backfilling
.uuid for all old UserProfiles) runs - which was added in the previous
commit.
This is the first step toward making the full switch to having
self-hosted servers use user uuids, per issue #18017. The old id
format is of course still supported, for backwards compatibility.
This commit is separate in order to allow deploying *just* the bouncer
API change to production first.
Previously, this URL just returned the `raw_content` field. It seems
cleanest to just make it a single-message variant of GET /messages,
deprecating the old format.
This will make it convenient to add a handful of organizations to the
beta of this feature during its first few weeks to try to catch bugs,
before we open it to everyone in Zulip Cloud.
The correct return type of get_realm_domains should
be List[Dict[str, Union[bool, str]]] instead of
List[Dict[str, str]], because allowed_subdomains is
a bool field, not a str.
A user can subscribe to a stream and sometimes (depending
on stream permissions) see messages from the stream
that were sent before they subscribed, and that user
won't have a UserMessage row for that message.
In order to do things like star a message, we need
to create UserMessage records on the fly.
In the past we wisely constrained this logic to the
specific use cases. But I think we can generalize
the logic now. For example, we are now building a
feature to mark messages as unread, and it motivates
the same need to auto-create UserMessage rows.
So now we handle this in a more generalized fashion.
Refactors and cleans up the shared `MessagesBase` schema in
the OpenAPI so that it accurately reflects the general base
for message objects for endpoints that use it as a reference.
A follow-up to adding `edit_history` as a property of message
objects. And a preparatory commit for `GET /messages/{msg_id}`
to return not only the raw content of the message but also
the message object.
When removing notifications, we skip the access check on whether the
user can still read them -- they should not have a notification for
those messages either way, whether because they currently cannot read
the message or because they have already done so.
Translators found it confusing, since it's not at all obvious that the
word "quote" should not be translated.
I'm not altogether happy with the code formatting for this.
While we're changing this, also standardize on the "```` quote" style
of quote blocks to ensure code/quote blocks in stream descriptions are
unlikely to conflict with this syntax.
In steady-state, requests to FCM take about a second; however, in
cases where the remote FCM server is unstable, the request has been
observed to block for more than a minute.
As noted in the previous commit, pushes must complete within 30s;
fail fast, and let the retries and exponential backoff handle errors.
The worst-case total time taken with timeouts and errors for an FCM
notification is 19.5s. Unfortunately, `aioapns` does not appear to
have any timeouts, and thus this commit cannot guarantee a total of
fewer than 30s.
This reverts bc15085098 (which provided
no justification for its change) and moves further, down to 2 retries
from the default of 5.
10 retries, with exponential backoff, is equivalent to sleeping 2^11
seconds, or just about 34 minutes (though the code uses a jitter which
may make this up to 51 minutes). This is an unreasonable amount of
time to spend in this codepath -- as only one worker is used, and it
is single-threaded, this could effectively block all missed message
notifications for half an hour or longer.
This is also necessary because messages sent through the push bouncer
are sent synchronously; the sending server uses a 30-second timeout,
set in PushBouncerSession. Having retries which linger longer than
this can cause duplicate messages; the sending server will time out
and re-queue the message in RabbitMQ, while the push bouncer's request
will continue, and may succeed.
Limit to 2 retries (APNS currently uses 3), which results in an
expected max of 4 seconds of sleep, potentially up to 6. If this
fails, there exists another retry loop above it, at the RabbitMQ layer
(either locally, or via the remote server's queue), which will result
in up to 3 additional retries -- all told, the request will be made to
FCM up to 12 times.
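To make the arithmetic above concrete, here is a small illustrative
calculation of the worst-case total sleep for base-2 exponential
backoff, ignoring jitter:

```
def worst_case_backoff_sleep(retries: int) -> int:
    # Sleeps of 2, 4, ..., 2**retries seconds sum to 2**(retries + 1) - 2.
    return sum(2**attempt for attempt in range(1, retries + 1))

print(worst_case_backoff_sleep(10))  # 2046 seconds, about 34 minutes
print(worst_case_backoff_sleep(2))   # 6 seconds
```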
This model is by design intended to exist in a 1:1 relationship with
Realms, and we attempt to ensure that with application code, but we
should have a unique constraint too, since a database with duplicate
such entries would be corrupted.
We do this via the standard Django OneToOneField.
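A minimal sketch of what that looks like, with a hypothetical model
name standing in for the real one:

```
from django.db import models

from zerver.models import Realm

class ExampleRealmExtension(models.Model):  # hypothetical model name
    # OneToOneField enforces the 1:1 relationship with a
    # database-level unique constraint on realm_id.
    realm = models.OneToOneField(Realm, on_delete=models.CASCADE)
```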
It was there to work around https://bugs.python.org/issue17519. This
workaround with del seems like a partial improvement.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We want TypedDicts that have actual teeth.
In order to make type checks meaningful, we want
to avoid Any, object, or crazy Union types when
we aggregate each type of message, so we replaced
a generic function with three concrete functions.
Provide stream privacy and description in stream notification events
when a stream is created.
In the function "send_messages_for_new_subscribers", for when a stream
is created, include the policy name and description of the stream.
Fixes #21004.
Removes `LEGACY_PREV_TOPIC` which is no longer needed due to the
message edit history migration.
Also remove additions to the linter exclude list that were added
earlier in this commit series.
Since we've changed the database to contain these new fields, we just
need to stop dropping them in the API code.
This also changes the public API to match the database format again
by removing `prev_subject` from the edit history API.
Adds an API changelog feature update for the renamed `prev_subject`
field (to `prev_topic`) and new fields (`topic` and `stream`)
in the message `edit_history`.
Also, documents said `edit_history` in the `MessagesBase` schema
in the api documentation, which is used by the `/get-messages`,
`/get-events` and `/zulip-outgoing-webhooks` endpoints.
Fixes #21076.
Co-authored-by: Lauryn Menard <lauryn.menard@gmail.com>
Now that we have code to support reading prev_topic, this is no longer
necessary.
We'll want to deploy this change to production before running the
migration to remove prev_subject from edit history entries, so that
prev_subject can be fully purged from the database.
We modify the message_edit_history marshalling code so that this
commit does not change the API, since we haven't backfilled the data
yet.
FormattedEditHistoryEvent, introduced in the previous commit, doesn't
directly inherit fields from EditHistoryEvent, so no changes are
required there.
We fix the mutation of caller and other bad patterns, as well as
adding explicit typing to make the code readable.
We also update the OpenAPI documentation for previously
undocumented `prev_stream` field in the `/get-message-history`
endpoint for API validation testing.
Co-authored-by: Lauryn Menard <lauryn.menard@gmail.com>
These types will help make iteration on this code easier.
Note that `user_id` can be null due to the fact that
edit history entries before March 2017 did not log
the user that made the edit, which was years after
supporting topic edits (discovered in test deployment
of migration on chat.zulip.org).
Co-authored-by: Lauryn Menard <lauryn.menard@gmail.com>
Moves the encodeHashComponent and decodeHashComponent functions out of
hash_util and into internal_url which belongs to shared. This is to
accommodate sharing of this code with mobile or any other codebases that
do not wish to duplicate logic.
TestMaybeSendToRegistration needs tweaking here, because it wasn't
setting the subdomain for the dummy request, so
maybe_send_to_registration was actually running with realm=None, which
is not right for these tests.
Also, test_sso_only_when_preregistration_user_exists was creating
PreregistrationUser without setting the realm, which was also incorrect.
create_preregistration_user is a footgun, because it takes the realm
from the request. The calling code is supposed to validate that
registration for the realm is allowed first, but can sometimes do that
on a "realm" taken from something other than the request - and later on
calls create_preregistration_user, thus leading to prereg user creation
on an unvalidated request.realm.
It's safer, and makes more sense, for this function to take the intended
realm as argument, instead of taking the entire request. It follows that
the same should be done for prepare_activation_url.
In these tests, the code ends up with a logged in session when it's
undesired - later on these tests make requests to a different subdomain
- where a logged in session is not supposed to exist. This leads to an
unintended, strange situation where request.user is a user from the old
subdomain but the request itself is to a *different* subdomain. This
throws off get_realm_from_request, which will return the realm from
request.user.realm - which is not what these tests want and can lead to
these tests failing when some of the production code being tested
switches to using get_realm_from_request instead of
get_realm(get_subdomain).
The codepaths for joining an organization via a multi-use invitation
(accounts_home_from_multiuse_invite and maybe_send_to_registration)
weren't validating whether
the organization the invite was generated for matches the organization
the user attempts to join - potentially allowing an attacker with access
to organization A to generate a multi-use invite and use it to join
organization B within the same deployment, that they shouldn't have
access to.
The database value for expiry_date is None for an invite
that will never expire, and the clients send -1 as the value
in the API, similar to the message retention setting.
Also, when passing invite_expire_in_days as an argument
to various functions, invite_expire_in_days is passed as
-1 for the "Never expires" option, since invite_expire_in_days
is an optional argument in some functions and thus we cannot
pass a "None" value.
A migration should not import zerver.lib.streams at all, but this
solves the immediate problem with check-database-compatibility.py.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Removes `client` parameter from backend tests using the
`POST /messages` endpoint when the test can use the default
`User-Agent` as the client, which is set to `ZulipMobile` for API
requests and a browser user agent string for web app requests.
Various tests use the `PATCH /stream/{stream_id}` endpoint in
`test_subs.py`. Because the stream id is in the URL path, it
does not also need to be passed as a query parameter.
Removes instances of `stream_name` being passed as a query
parameter to tests.
Removes the `topic_name` parameter in `test_message_flags.py`
where it is being passed to a test for marking a stream as
read, because it is an ignored parameter for that endpoint.
This was only used for upgrading from Zulip < 1.9.0, which is no
longer possible because Zulip < 2.1.0 had no common supported
platforms with current main.
If we ever want this optimization for a future migration, it would be
better implemented using Django merge migrations.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
After failing to notice a place where we wanted to hide timezone
information, we decided to add timezones to some of the test
users, so that we can better consider the effects of timezones
when manually testing.
Testing:
* ran populate_db and confirmed users had timezones in the UI
* updated test_populate_db.py
Adds `realm_web_public_access_enabled` as a realm-specific server
setting potentially returned by the `/get-server-settings` endpoint
so that clients that support browsing of web-public stream content
without an account can generate a login page that supports that
type of access.
Various backend tests use the `PATCH /messages/{msg_id}` endpoint.
For that endpoint, the message ID is encoded in the URL path and
ignored if provided as a parameter in the query.
Verified that the tests were providing the same message ID to both
the path and the query, and then removed the ignored query parameter.
This commit refactors the get_user_by_email function
to use access_user_by_email, which is similar to
the already existing access_user_by_id, and thus uses
the get_user_data function added recently.
We also remove the unnecessary check for email, as
email will always be passed to this endpoint.
Preparatory commit for #10970.
This commit adds get_user_data which is called by
get_members_backend to compute the client_gravatar
value and then return the data of a single user or
all accessible users.
This function will also be used by get_user_by_email
in further commits.
When pulling batches out of the ScheduledEmail list in a single
transaction, an unexpected failure to send an email will result in the
whole batch getting retried. This will result in infinite email
sending loops.
Pull a single row off at a time and send it. We continue without
retries to the next email on EmailNotDeliveredException, but will
retry infinitely on other exceptions.
Fixes: #20943.
Putting all of the logic in a `finally` block is equivalent to a bare
`except` block, which silently consumes all exceptions.
Move only the most-necessary parts into the except; this lets
`BadImageError` exceptions from `zerver/lib/upload.py` escape,
allowing the generic "Image file upload failed" error to be replaced
with a more specific message.
It also allows unexpected exceptions, as the previous commit resolved,
to escape and 500. This lets them be detected and resolved, rather
than give users a silently bad experience.
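For illustration, this is standard Python semantics rather than Zulip
code: returning from a `finally` block discards any in-flight
exception, which is why the pattern behaves like a bare `except`:

```
def swallow() -> str:
    try:
        raise ValueError("boom")
    finally:
        # The return here discards the in-flight ValueError, so the
        # caller never sees it.
        return "ok"

print(swallow())  # prints "ok"; the exception was silently dropped
```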
5dab6e9d31 began honoring the list of disposals for every frame.
Unfortunately, passing a list of disposals for a non-animated image
raises an exception:
```
File "zerver/lib/upload.py", line 212, in resize_emoji
image_data = resize_gif(im, size)
File "zerver/lib/upload.py", line 165, in resize_gif
frames[0].save(
File "[...]/PIL/Image.py", line 2212, in save
save_handler(self, fp, filename)
File "[...]/PIL/GifImagePlugin.py", line 605, in _save
_write_single_frame(im, fp, palette)
File "[...]/PIL/GifImagePlugin.py", line 506, in _write_single_frame
_write_local_header(fp, im, (0, 0), flags)
File "[...]/PIL/GifImagePlugin.py", line 647, in _write_local_header
disposal = int(im.encoderinfo.get("disposal", 0))
TypeError: int() argument must be a string, a bytes-like object or a
number, not 'list'
```
`check_add_realm_emoji` calls this as:
```
try:
    is_animated = upload_emoji_image(image_file, emoji_file_name, author)
    emoji_uploaded_successfully = True
finally:
    if not emoji_uploaded_successfully:
        realm_emoji.delete()
        return None
    # ...
```
This is equivalent to dropping _all_ exceptions silently. As such,
Zulip has silently rejected all non-animated images larger than 64x64
since 5dab6e9d31.
Adjust to only pass a single disposal if there are no additional
frames. Add a test for non-animated images, which requires also
fixing the incidental bug that all GIF images were being recorded as
animated, regardless of whether they had more than one frame.
The deferred_work events can't be processed in the test suite
environment, since it tries to make requests to the host <realm.uri>
which is "zulip.testserver" and obviously not going to work. Since the
test suite base data set has no animated emoji, there's nothing to do,
and we can skip this step.
This code only runs when applying the migration to an already
provisioned test db - because during an initial db set up there are no
realms yet, so no events get pushed by the migration.
The previous random strategy for picking 5 emails would result in
collisions in 1 out of 10000.35 tests, leading to
psycopg2.errors.UniqueViolation.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
For aliases that will no longer be listed, see the third column of
grep '^L ' zulip-py3-venv/lib/python3.*/site-packages/pytz/zoneinfo/tzdata.zi
Time zones previously set to an alias will be canonicalized on demand.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
A recent Postgres upstream release appears to have broken PGroonga.
While we wait for https://github.com/pgroonga/pgroonga/issues/203 to
be resolved, disable PGroonga in our automated tests so that Zulip
CI passes.
Sometimes we may get data to import, due to export bugs, malformed data
etc., which doesn't have the invariant of RealmEmoji.author always being
set. The import code should fix that, by choosing a reasonable default
and setting it.
There doesn’t seem to be a reason to override this, and the upstream
method it was based on has diverged since this was written.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Although our NonClosingPool prevents the SQLAlchemy connection from
closing the underlying Django connection, we still want to properly
dispose of the associated SQLAlchemy structures.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes these warnings with SQLALCHEMY_WARN_20=1:
RemovedIn20Warning: Using non-integer/slice indices on Row is
deprecated and will be removed in version 2.0; please use
row._mapping[<key>], or the mappings() accessor on the Result
object. (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
RemovedIn20Warning: Using the 'in' operator to test for string or
column keys, or integer indexes, in a :class:`.Row` object is
deprecated and will be removed in a future release. Use the
`Row._fields` or `Row._mapping` attribute, i.e. 'key in
row._fields' (Background on SQLAlchemy 2.0 at:
https://sqlalche.me/e/b8d9)
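A self-contained sketch of the access-pattern change the warning asks
for; the table and column here are illustrative:

```
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, select

engine = create_engine("sqlite://")
metadata = MetaData()
messages = Table("messages", metadata, Column("message_id", Integer, primary_key=True))
metadata.create_all(engine)

with engine.begin() as connection:
    connection.execute(messages.insert().values(message_id=1))
    row = connection.execute(select(messages)).first()

    # Deprecated: row["message_id"] and "message_id" in row
    message_id = row._mapping["message_id"]
    assert "message_id" in row._mapping
```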
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes this warning with SQLALCHEMY_WARN_20=1:
RemovedIn20Warning: The legacy calling style of select() is deprecated
and will be removed in SQLAlchemy 2.0. Please use the new calling
style described at select(). (Background on SQLAlchemy 2.0 at:
https://sqlalche.me/e/b8d9)
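For reference, a minimal sketch of the calling-style change; the
table and columns are illustrative:

```
from sqlalchemy import Column, Integer, MetaData, Table, Text, select

messages = Table(
    "messages",
    MetaData(),
    Column("id", Integer, primary_key=True),
    Column("content", Text),
)

# Legacy 1.x style (deprecated): select([messages.c.id, messages.c.content])
# 2.0-compatible style:
stmt = select(messages.c.id, messages.c.content)
```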
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes “SADeprecationWarning: The Select.column() method is deprecated
and will be removed in a future release. Please use
Select.add_columns() (deprecated since: 1.4)”.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes “SADeprecationWarning: Implicit coercion of SELECT and textual
SELECT constructs into FROM clauses is deprecated; please call
.subquery() on any Core select or ORM Query object in order to produce
a subquery object.”
Signed-off-by: Anders Kaseorg <anders@zulip.com>
match_querystring is irrelevant in these cases. Fixes this warning:
/srv/zulip-py3-venv/lib/python3.7/site-packages/responses/__init__.py:340:
DeprecationWarning: Argument 'match_querystring' is deprecated. Use
'responses.matchers.query_param_matcher' or
'responses.matchers.query_string_matcher'
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The tool needs to run this function, since it uses django's send_email
directly instead of going through our zerver.lib.send_email.send_email
codepath.
The message ID is encoded in the URL, not the PATCH parameters, so
this argument was ignored. I verified that it appears to have always
matched the value present in the URL.
The new logic better matches reasonable user expectations, that if you
move all the messages, that's a whole-topic move, regardless of which
propagation mode you selected.
When moving only part of a topic, it's useful to display that
information to users in these notifications so that it's clear what's
happening.
The most important consequence is actually just increasing confidence
that when you see that the whole topic was moved, that's accurate.
Substantially modified by tabbott.
Fixes #20575.
To avoid an uncaught IntegrityError causing a 500 HTTP response in a
race between two processes trying to mute a topic, we catch the
IntegrityError and raise the 400-status error we'd have gotten if the
second request had arrived a bit later.
Fixes #21011.
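A sketch of the pattern; the helper name, signature, and error string
are illustrative rather than copied from the code:

```
from django.db import IntegrityError, transaction

from zerver.lib.exceptions import JsonableError

def mute_topic_handling_races(user_profile, stream_id, topic_name):
    try:
        with transaction.atomic():
            # May violate the unique constraint if a concurrent request
            # already muted this topic.
            add_topic_mute(user_profile, stream_id, topic_name)
    except IntegrityError:
        # Report the same 400 error the request would have gotten if it
        # had arrived after the other one finished.
        raise JsonableError("Topic already muted")
```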
The S3 backend implementation of upload_emoji_image was accessing
emoji_file.name - which is redundant because emoji_file_name already
gets passed in and can be used, and an object of type IO[bytes] may not
have the .name attribute. Spotted by @Fingel.
Fixes #20132.
EMAIL_HOST_USER without EMAIL_HOST_PASSWORD is not going to be a valid
configuration, and may result from a mistake in setting it correctly in
the secrets file, ending up as a non-obvious cause of failure to send
email. Logging an error will be useful for detecting it. Further
conditions can be added to the function in the future.
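A sketch of such a check; the function name and message are
illustrative:

```
import logging

from django.conf import settings

def log_email_config_errors() -> None:
    # EMAIL_HOST_USER without EMAIL_HOST_PASSWORD is almost certainly a
    # misconfiguration; log it rather than failing silently at send time.
    if settings.EMAIL_HOST_USER and not settings.EMAIL_HOST_PASSWORD:
        logging.error(
            "An SMTP username was set (EMAIL_HOST_USER), but the "
            "corresponding password is unset (EMAIL_HOST_PASSWORD)."
        )
```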
Calling `get_apns_context` opens (and caches) a connection to
the APNs servers. Since `apns_enabled` is called from Django
codepaths, this means that the Django processes hold unnecessary
connections open to the APNs servers.
Switch `apns_enabled` to checking what `get_apns_context` checks when
we're just returning True/False.
aioapns 2.1 removed the loop parameter from the aioapns.APNs
constructor, because Python 3.10 removed the loop parameter from the
asyncio.Lock constructor.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The new release adds the commit
20ac22b96d,
which allows us to get rid of the entire ugly override that was needed
to do this commit's job in our code. What we do here in this commit:
* Use django-scim2 0.17.1
* Revert the relevant parts of f5a65846a8
* Adjust the expected error message in test_exception_details_not_revealed_to_client
since the message thrown by django-scim2 in this release is slightly
different.
We do not have to add anything to set EXPOSE_SCIM_EXCEPTIONS, since
django-scim2 uses False as the default, which is what we want - and we
have the aforementioned test verifying that indeed information doesn't
get revealed to the SCIM client.
I incorrectly removed this when simplifying
dbddbee5a115b9352862cb13d4c66820865c30b6; while that commit did not
require the hunk re-added here, the later commit
3be622ffa7 added a call that did require it.
Adds request as a parameter to json_success as a refactor towards
making `ignored_parameters_unsupported` functionality available
for all API endpoints.
Also, removes any data parameters that are an empty dict or
a dict with the generic success response values.
As a preparatory step to refactoring json_success to accept
request as a parameter, change `do_report_error`, which is
called from the events queue for "error_reports", to return
None instead of json_success.
Adds an assertion error to `ErrorReporter` queue processor
and removes `JsonableError` from `do_report_error`.
It is likely that `do_report_error` was moved from a view in a
previous refactor, but was not updated to no longer return an
HttpResponse.
As a preparatory step to refactoring json_success to accept
request as a parameter, update helper function `compose_views`
in `views.streams.py` to return the response data and call
json_success from view functions that utilize `compose_views`.
Also, updates related test in `zerver.tests.test_subs.py`.
As a preparatory step to refactoring json_success to accept
request as a parameter, change the interface of helper functions:
`handle_deferred_message` in `views.message_send.py` and
`mute_topic` and `unmute_topic` in `views.muting.py`, so
that they return None or data for json_success.
Instead call json_success in the caller function, which already
has the HttpRequest as a parameter.
Adds a check for `additionalProperties: true` when there are no
properties listed in the schema.
This currently only happens in one place, but will be helpful for
deduplicating text between the `register-queue` and `get-events`
endpoints.
Wordle has recently become a thing and it uses green, yellow and white (or
black in dark mode) large square unicode characters to let people share their
gameplay. Zulip converts the white and black large square unicode characters to
emojis, but not the green and yellow ones. This causes the Wordle grid to be
misaligned when shared on Zulip.
This commit adds green and yellow large square emojis to our emoji list to fix
the problem.
Previously, we only checked the mandatory_topics setting before
sending a message in the frontend, and there was no restriction in
the backend. This commit adds the check in the backend as well,
making sure messages without a topic cannot be sent through the API
if the mandatory_topics setting is set to True.
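A sketch of the backend check being added; the exact call site and
error string are assumptions:

```
from zerver.lib.exceptions import JsonableError

def check_topic_is_provided(realm, topic_name: str) -> None:
    # With mandatory_topics enabled, reject messages that would land in
    # the empty/default topic instead of silently accepting them.
    if realm.mandatory_topics and topic_name in ("", "(no topic)"):
        raise JsonableError("Topics are required in this organization")
```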
Previously, users found it annoying that the automated "Resolve topic"
notifications triggered an unread for everyone in the stream; this
discouraged some users from using the feature on older threads for
fear of being annoying. We change this to a better default, of only
users who participated in the topic (via either messages or reactions)
being eligible for the new message being unread.
We will likely want to create global and stream-level notifications
settings to control this behavior as a follow-up -- some users, like
me, might prefer the simpler "Always unread" behavior in some streams.
Note that the automated notifications that a topic was resolved will
still result in the topic being moved to the top of the left sidebar.
This would be somewhat difficult to change, since the left sidebar
algorithm just looks at the highest message ID in the topic.
Fixes #19709.
Tests added by Aman Agrawal (amanagr@zulip.com).
In English, compound adjectives should essentially always be
hyphenated. This makes them easier to parse, especially for users who
might not recognize that the words “web public” go together as a
phrase.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
There are three user settings that are integer enums:
`color_scheme`, `demote_inactive_streams` and
`desktop_icon_count_displays`.
Unlike the other user settings, these were using the `content`
keyword instead of the `schema` keyword in their definitions,
which caused them not to be rendered correctly in the api
documentation.
Changes the keyword to `schema` and fixes the indentation
for these three user settings in the two endpoints using
them.
Removes various instances of quotation marks that are not needed,
specifically looking at instances of arrays, e.g. `- "`, in the
OpenAPI documentation.
The target realm was not being passed to create_attachment in
upload_message_file implementations. This was a bug in the edge-case of
cross-realm messages - in particular, causing a bug in the email
gateway:
When an email with an attachment is sent, the message is mirrored to
Zulip with Email Gateway Bot as the message sender and uploader of the
attachment. Due to the realm not being passed to create_attachment, the
Attachment would get created with .realm being the system bot realm,
making the attachment inaccessible under some conditions due to failing
the following condition check (that's expected to pass, provided that
the .realm is set correctly):
```
if (
attachment.is_realm_public
and attachment.realm == user_profile.realm
and user_profile.can_access_public_streams()
):
# Any user in the realm can access realm-public files
return True
```
Fixes the rendering of enums to show strings with quotation marks,
while integers will continue to be rendered without quotation marks.
This allows for an empty string to be passed as an enum value and be
rendered as such in the documentation. Null will be rendered without
quotation marks, like integer values.
Makes `edit_timestamp` and `user_id` required fields for all
`update_message` events.
Adds `rendering_only` as another required field to signal if
events are only updating the rendered content of the message,
which is currently the case for adding inline url previews.
Updates `test_event.py` so that `do_update_message` and
`do_update_embedded_data` refer to the same testing schema
for `update_message` events, and therefore reflect the same
required fields for the `update_message` event.
The OpenAPI definition for `update_message` events is also
updated to reflect the required field and descriptions of
various properties are updated for the addition of the
`rendering_only` property.
Moves details about the rate limit error object and handling to
the OpenAPI documentation description for that common error.
Previously, this information was on the general rest error
handling documentation page without clear connection to the
specific rate limit error.
Fixes a typo in the changelog (feature 36) for that same error
and also fixes a misplaced colon in the description of the error
for missing request parameters.
Adds detailed definition of objects in the `subscription_data` parameter
array for the `/update-subscription-settings` endpoint.
Fixes #20825. Follow-up to #20409.
Formats and moves whether a field of an object in a request
parameter is required or optional to be in the same location
and have the same formatting as the general api parameter
documentation.
Also formats any examples within the object detailed
description to be the same as the general api parameter
documentation.
Follow up to #20409.
Adds a line break before the descriptive text for return
values and events in the api documentation in order to
help with readability of descriptions with multiple
paragraphs of descriptive text.
Adjustments made to the CSS of list items in unordered
lists to visually group the first paragraph of text
to any following paragraphs or unordered lists.
As a consequence:
• Bump minimum supported Python version to 3.7.
• Move Vagrant environment to Debian 10, which has Python 3.7.
• Move CI frontend tests to Debian 10.
• Move production build test to Debian 10.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
regarding -
POST https://yourZulipDomain.zulipchat.com/api/v1/users/me/subscriptions
The definition of the "subscription" parameter didn't include full
information about the parameter. It only said that an array of objects
is passed as a parameter, and relied on the description of the
parameter to explain what the object contained. I edited the
definition to contain the full information about the object.
Fixes #20824.
The change to curl_param_value_generators.py warrants a brief
explanation. Stream permission changes now generate a notification
message. Our curl example test for removing a reaction comes after
the two tests for updating the stream permission changes, thus the
hardcoded message ID in that test needs to be incremented by 2 to
account for the two notification messages that now come before it.
This is a part of #20289.