This commit verifies the error logs printed while testing the
log_and_report function using assertLogs, without printing them in the
test output, and hence avoids spam.
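A minimal sketch of the assertLogs pattern (the logger name and message
below are illustrative, not the real ones):

```python
import logging
import unittest

class LogAndReportTest(unittest.TestCase):
    def test_error_is_logged_quietly(self) -> None:
        # assertLogs captures the records instead of letting them hit the console.
        with self.assertLogs("zerver.example", level="ERROR") as captured:
            logging.getLogger("zerver.example").error("something went wrong")
        self.assertIn("something went wrong", captured.output[0])
```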
This commit re-adds the integration for canarytokens.org, now separate
from the primary Thinkst integration.
Signed-off-by: David Wood <david@davidtw.co>
This commit fixes the Thinkst Canary integration which - based on the
schema in upstream documentation - incorrectly assumed that some fields
would always be sent, which meant that the integration would fail. In
addition, this commit adjusts support for canarytokens to only support
the canarytoken schema with Thinkst Canaries (not Thinkst's
canarytokens.org).
Signed-off-by: David Wood <david@davidtw.co>
Prettier would do this anyway, but it’s separated out for a more
reviewable diff. Generated by ESLint.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
A few major themes here:
- We remove short_name from UserProfile
and add the appropriate migration.
- We remove short_name from various
cache-related lists of fields.
- We allow import tools to continue to
write short_name to their export files,
and then we simply ignore the field
at import time.
- We change functions like do_create_user,
create_user_profile, etc.
- We keep short_name in the /json/bots
API. (It actually gets turned into
an email.)
- We don't modify our LDAP code much
here.
When you post to /json/users, we no longer
require or look at the short_name parameter,
since we don't use it in any meaningful way.
An upcoming commit will eliminate it from the
database.
This fixes up some complex helpers that may
have had some value before f-strings came along,
but they mostly obscured the logic for
people reading the tests.
We still keep really simple helpers for the
common cases, but there are no optional
parameters for them.
One goal of this fix is to remove the
short_name concept, and we just explicitly
set senders everywhere we need them.
We also now have each test just explicitly set
its reaction_type.
For cases where we have custom message ids
or senders, we just inline the simple call
to api_post.
We generally want to avoid having two sibling test
suites depend on each other, unless there's a real
compelling reason to share code. (And if there is
code to share, we can usually promote it to either
test_helpers or ZulipTestCase, as I did here.)
This commit is also prep for the next commit, where
I try to simplify all of the helpers in EmojiReactionBase.
Especially now that we have f-strings, it is usually
better to just call api_post explicitly than to
obscure the mechanism with thin wrappers around
api_post. Our url schemes are pretty stable, so it's
unlikely that the helpers are actually gonna prevent
future busywork.
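For example, an inlined test now tends to look something like this (a
sketch that assumes ZulipTestCase helpers such as example_user,
send_stream_message, api_post, and assert_json_success; the exact names
vary per test):

```python
def test_react_as_other_sender(self) -> None:
    sender = self.example_user("cordelia")
    message_id = self.send_stream_message(sender, "Verona", "hello")
    reaction_info = {"emoji_name": "smile"}
    # No thin wrapper needed -- the f-string makes the URL obvious.
    result = self.api_post(sender, f"/api/v1/messages/{message_id}/reactions", reaction_info)
    self.assert_json_success(result)
```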
It's not clear to me why these passed mypy
before, given this:
def assert_realm_values(f: Callable[[Realm], Any], ...
But this is clearly more accurate.
This issue isn't something a system administrator needs to take action
on -- it's a likely minor logic bug around organization
administrators moving topics between streams.
As a result, it shouldn't send error emails to administrators.
This is a hacky fix to avoid spoiler content leaking in emails. The
general idea here is to tell people to open Zulip to view the actual
message in full.
We create a mini-markdown parser here that strips away the fence content
that has the 'spoiler' tag for the text emails.
Our handling of html emails is much better in comparison where we can
use lxml to parse the spoiler blocks.
We include tests for the new implementation to avoid churning the
codebase too much so this can be easily reverted when we are able to
re-enable the feature.
The tests had a bunch of different ways to create
users; now we are consistent. (This is a bit of
a prep step, too, to allow us to easily clean
Hamlet's existing words before each test.)
We could certainly do better with the handling here, but using the raw
string that the user gave us is okayish for now.
Proper formatting of timestamps requires handling locales and timezones
of the receiver as well which is a larger project.
We now do something sensible for spoilers in notifications. A message
like:
```spoiler Luke's father is
Vader. Don't tell anyone else.
```
would be rendered as:
Luke's father is (...)
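Not the real implementation -- just a self-contained sketch of the
substitution being described, assuming spoiler fences of the form shown
above:

```python
import re

# Match ```spoiler <header> ... ``` blocks and keep only the header.
SPOILER_FENCE = re.compile(r"```spoiler\s*(?P<header>[^\n]*)\n.*?\n```", re.DOTALL)

def hide_spoilers_in_text(content: str) -> str:
    return SPOILER_FENCE.sub(lambda m: f"{m.group('header')} (...)".strip(), content)
```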
OpenAPISpec is our main class for accessing OpenAPI objects, and it
creates the OpenAPI objects only once, thus saving time. Since the
openapi-core request validator object is going to be accessed heavily
during testing, add it to the class as well, so that tests run faster.
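A rough sketch of the caching idea (the class shape is illustrative, and
it assumes the openapi-core create_spec/RequestValidator API of that
era):

```python
from typing import Optional

import yaml
from openapi_core import create_spec
from openapi_core.validation.request.validators import RequestValidator

class OpenAPISpec:
    def __init__(self, openapi_path: str) -> None:
        self.openapi_path = openapi_path
        self._request_validator: Optional[RequestValidator] = None

    def request_validator(self) -> RequestValidator:
        # Build the (expensive) validator only once and reuse it across tests.
        if self._request_validator is None:
            with open(self.openapi_path) as f:
                spec = create_spec(yaml.safe_load(f))
            self._request_validator = RequestValidator(spec)
        return self._request_validator
```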
We extract OptionalContent and RequiredContent since some endpoints
require content and others don't.
In an ideal world, we'd have a better way to express these small
variants.
openapi-core's request validator has a bug due to which the data type
of path parameters is not checked. Hence `/users/{user_id}` can match
`/users/me`. So move `/users/{user_id}` after all such possible matches
to avoid errors.
See https://github.com/p1c2u/openapi-core/issues/226 for details.
openapi-core's request validator has a bug due to which the data type
of path parameters is not checked. Hence `/messages/{message_id}` can
match `/messages/matches_narrow`. So move `/messages/{message_id}`
after all such possible matches to avoid errors.
See https://github.com/p1c2u/openapi-core/issues/226 for details.
If the push_notification for the UserMessage is already active,
we don't send any push notification to the user. This may
happen due to race conditions.
Added and fixed test cases affected by this.
Previously, we rendered the twitter previews outside of a
spoiler block at the end of the message. The commit series
ending with this commit fixes that by inlining twitter
previews instead of appending them all at the end. As a
consequence of the inlining, we have fixed the issue here.
This commit just adds a test to assert that.
Fixes #15518.
This is similar to our behavior with image previews, and helps
reduce clutter in the final rendered html.
We add the string 'Tweet: ' to our existing tests so those tests
remain the same.
This commit makes our handling of twitter previews consistent with
how we handle our inline images so that tweets render next to the
paragraph that links to the tweet.
We decouple the logic of insertion rules for inline links from
image preview logic. Now, we can use this same logic for other
kinds of link previews as well.
We also remove the post_process_data option.
The sanity check is just overkill at this point,
since the mechanism to find streams is very
direct due to a recent commit.
Before this change we would only export streams
that had actual subscribers, which is usually
harmless, but it was mostly a relic of a one
time migration that we did when we were cleaning
up some dirty data in some of our very early
databases (circa 2016).
Now we work down the table hierarchy in a
more natural way:
- get Streams in Realm
- get Recipients matching above Streams
- get Subscriptions matching above Recipients
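In Django ORM terms (given a realm, and the standard
Stream/Recipient/Subscription models), the traversal above is roughly:

```python
from zerver.models import Recipient, Stream, Subscription

# get Streams in Realm
streams = Stream.objects.filter(realm=realm)
# get Recipients matching above Streams
recipients = Recipient.objects.filter(
    type=Recipient.STREAM,
    type_id__in=streams.values_list("id", flat=True),
)
# get Subscriptions matching above Recipients
subscriptions = Subscription.objects.filter(recipient__in=recipients)
```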
Note that for per-user exports, I kept the
same logic (users -> subscriptions -> recipients ->
streams) we had before.
One subtle detail here is that we make our
final Config blocks--which build the final
version of Recipient/Subscription--now hang
off of realm_config.
Fixes #15146.
This particular commit has been a long time coming. For reference,
!avatar(email) was an undocumented syntax that simply rendered an
inline 50px avatar for a user in a message, essentially allowing
you to create a user pill like:
`!avatar(alice@example.com) Alice: hey!`
---
Reimplementation
If we decide to reimplement this or a similar feature in the future,
we could use something like `<avatar:userid>` syntax which is more
in line with creating links in markdown. Even then, it would not be
a good idea to add this instead of supporting inline images directly.
Since any use cases of such a syntax are in automation, we do not need
to make it user-friendly, and something like the following is a better
implementation that doesn't need a custom syntax:
`![avatar for Alice](/avatar/1234?s=50) Alice: hey!`
---
History
We initially added this syntax back in 2012 and it was 'deprecated'
from the get go. Here's what the original commit had to say about
the new syntax:
> We'll use this internally for the commit bot. We might eventually
> disable it for external users.
We eventually did start using this for our github integrations in 2013
but since then, those integrations have been neglected in favor of
our GitHub webhooks which do not use this syntax.
When we copied `!gravatar` to add the `!avatar` syntax, we also noted
that we want to deprecate the `!gravatar` syntax entirely - in 2013!
Since then, we haven't advertised either of these syntaxes anywhere
in our docs, and the only two places where this syntax remains are
our game bots, which could easily do without it, and the git commit
integration that we have deprecated anyway.
We do not have any evidence of someone asking about this syntax on
chat.zulip.org when developing an integration, and rightfully so --
only the people who work on Zulip (and specifically, markdown) are
likely to stumble upon it and try it out.
This is also the only piece of code due to which we had to look up
emails -> userid mapping in our backend markdown. By removing this,
we entirely remove the backend markdown's dependency on user emails
to render messages.
---
Relevant commits:
- Oct 2012, Initial commit c31462c278
- Nov 2013, Update commit bot 968c393826
- Nov 2013, Add avatar syntax 761c0a0266
- Sep 2017, Avoid email use c3032a7fe8
- Apr 2019, Remove from webhook 674fcfcce1
Log RealmAuditLog in do_set_realm_property and do_remove_realm_domain.
Tests for the changes are written in test_events, because that saves
duplicating code from test_change_realm_property.
Added a new event type, STREAM_CREATED, to AbstractRealmAuditLog.
Since we ultimately create streams in the create_stream_if_needed
function in zerver/lib/streams.py, the RealmAuditLog entry is logged
there.
Passed acting_user when create_stream_if_needed or ensure_stream is
called.
Added tests in test_audit_log.
This commit moves SoftDeactivationMessageTest out of test_messages.py
(which at the moment has a mixed bag of test categories) into a more
logical file, test_soft_deactivation.py.
This commit moves the TestMessageForIdsDisplayRecipientFetching class,
which has tests covering the display_recipient filled in by MessageDict,
to test_message_dict.py.
This commit moves the InternalPrepTest class to test_message_send.py,
because it tests the internal_send_* and internal_prep_* functions,
which are used for internal message sending in Zulip.
This commit moves a few tests related to the proper sending of private
messages from the PrivateMessagesTest class in test_messages.py to a new
class in test_message_send.py.
The commit moves test_create_mirror_user_despite_race, which is not
related to message sending, from the MessagePOSTTest class in
test_message_send.py to test_mirror_users.py.
This commit initially extracts out the MessagePOSTTest class from
test_messages.py.
In future commits other related message sending tests will be moved from
test_messages.py to test_message_send.py.
Starting with extracting out MirroredMessageUsersTests, as it is more
related to mirror users than anything message-specific.
In a future commit, I may extract some tests from MessagePOSTTest as
well, but I'm still deciding on those.
We had been using the !time() syntax for timestamps so far. Since it's
an unreleased feature, we can make changes without affecting many
people.
Fixes #15442.
Prior to this commit, whenever convert was imported from
zerver.lib.markdown, it was aliased as markdown_convert for readability.
This commit renames the convert function to markdown_convert so that it
can be imported directly without aliasing and without compromising
readability.
Without this change, pyflakes reports this exception:
pyflakes | zerver/worker/queue_processors.py:152:9 local variable 'e' is assigned to but never used
pyflakes | zerver/worker/queue_processors.py:155:81 undefined name 'e'
We use the EMAIL_TIMEOUT Django setting to time out after 15s of trying
to send an email. This will nicely lead to retries in the email_senders
queue, due to the retry_send_email_failures decorator.
The smtplib documentation suggests that socket.timeout can be raised as
the result of timing out, though in my attempts I'm getting
smtplib.SMTPServerDisconnected. Either way, it seems appropriate to add
socket.timeout to the exceptions that we catch.
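A minimal sketch of the exception handling being described (not the
actual worker code; the function name is illustrative):

```python
import smtplib
import socket

from django.core.mail import EmailMessage, get_connection

def send_or_let_queue_retry(msg: EmailMessage) -> None:
    try:
        # With EMAIL_TIMEOUT = 15, the SMTP backend gives up after 15s
        # instead of hanging indefinitely.
        get_connection().send_messages([msg])
    except (smtplib.SMTPServerDisconnected, socket.timeout):
        # Re-raise so retry_send_email_failures in the email_senders
        # queue worker can retry later.
        raise
```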
This fixes us triggering a RabbitMQ event to send a push notification
removing an empty set of push notifications, which resulted from not
using the correct data structure to determine which message IDs to look
at.
This was causing a lot of visible exceptions when running
the `test_messages.py` test suite.
For users who are unsubscribed from the new stream but are in
the old stream, we delete the UserMessage.
We send the delete_message event only to guest users, who have
completely lost access to the moved messages; for other users we send
the normal update_message event, which moves the messages to the new
(unsubscribed) stream, which would otherwise look broken to the user
without reloading the webpage.
Our previous OpenAPI schema validator, which we implemented ourselves,
served as useful training wheels for understanding OpenAPI properly, and
was mostly correct. But given that we've finally reached the point
where our OpenAPI file accurately describes the API, it makes sense to
switch to use an official OpenAPI validator. We lose some ability to
do exclude rules for particular elements, but those were primarily
important for us when we had a lot of them.
As part of this change, we need to add `additionalProperties: false`
for all of our dictionaries/objects where we've documented every
parameter; otherwise the OpenAPI schema checker won't know that we
expect every parameter to be documented.
This was hiding an actual type error in test_cache: a mismatch between
the object ID type, which is str, and the default id_fetcher, which
returns int.
Mypy’s insufficient support for default generic arguments basically
means we can’t use them without a lot of overloading, and there are
not enough callers here to justify that.
https://github.com/python/mypy/issues/3737
We avoid this being super messy at the call sites by adding some less
generic wrappers for generic_bulk_cached_fetch.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
According to @showell:
> All the slow decorators can die. That was a failed experiment of
> mine from 2014 days. I have been meaning to kill them for a couple years
> now. I wrote this with the best of intentions, but I believe it's
> now just cruft. We never made a "fast" mode, for one. And we kept
> writing more and more slow tests, haha.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We remove support for the old clients which required an event per
message to clear its notification.
This is justified since it has been around 1.5 years since we started
supporting the bulk operation (and so essentially nobody is using a
mobile app version so old that it doesn't support the batched
approach) and the unbatched approach has a maintenance and reliability
cost.
This commit changes the name of the fixture that references bugdown.
The word 'backend' in backend_markdown is important, to make it clear
that this is the backend markdown. These test fixtures are also used in
the frontend, so highlighting this is useful.
This commit is part of a series of commits aimed at renaming bugdown to
markdown.
This commit removes the bugdown alias and does proper imports from the
markdown module. It also removes the word bugdown and replaces it with
markdown in comments.
This commit is part of a series of commits aimed at renaming bugdown to
markdown.
This changes the notification messages for events that currently just
include the string `"the repository"` to also include the full (`org/repo`)
name of the affected repository. Messages for the following events are
changed:
- `public`
- `star`
- `watch`
- `repository`
- `team_add`
Background: we're using the GitHub integration for org-wide notifications
for the [Bytecode Alliance Zulip](bytecodealliance.zulipchat.com/), and
having all messages just say "the repository" isn't ideal. Even now one
can hover over the link to see the repo's url, but it'd be much nicer if
the message just contained the full name.
I also changed the message for `star` to include a link to the repository,
same as the `watch` notification.
I also fix the code formatting so it's more
considerate of folks that have smaller monitors
or do side-by-side editing. And it's more
diff friendly as well.
I want to avoid creating the same schema checkers
for multiple tests, and it's easiest to create the
schema checkers at module scope (and we may possibly
even move them to another file eventually).
This will allow us to more easily instrument our
code to find duplicate schemas.
The goal here is to make test_events.py be
primarily focused on testing specific actions
and then validating:
- schemas
- apply_events logic
Then the new module here is a bit of a
kitchen sink of old tests, although it's
primarily focused on the actual mechanics
of the event system:
- logging
- register
- queue_ids
- fetching initial state
- client descriptors
The classes toward the bottom arguably
should go into more feature-specific
test modules, but the main goal now is
to purify test_events.py. (We may eventually
want to rename test_events.py to something
more like test_action_events.py, but the
current name has some doc references and
tribal knowledge around it.)
The current description of object and array parameters in
zulip.yaml is wrong and produces incorrect requests when using OpenAPI
tools such as SwaggerIO, etc. Fix it by encoding it correctly and
changing the tests accordingly.
Fixes #14380.
There is still some miscellaneous cleanup that
has to happen for things like analytics queries
and dead code in node tests, but this should
remove the main use of pointers in the backend.
(We will also still need to drop the DB field.)
This test is poorly written, in that it doesn't actually do any
verification of the results, but this at least is the correct
conversion of the data setup for it such that if we did verify its
results, the data we populated was relevant.
When we are just building lots of UserActivity
records to simulate user activity, it's misleading
to use some specific endpoint that people reading
the test may think is pertinent to the actual
test.
The pointer endpoints that I replaced in this
commit are a good example of this principle--
they will no longer exist in an upcoming commit,
but these tests would have kept running and be
even more confusing.
Rename the rest of the function names, classes, and comments containing
bugdown to markdown in test_markdown.py. Also change the occurrences of
the refactored classes and functions in other files.
This commit is part of a series of commits aimed at renaming bugdown to
markdown.
Rename ZEPHYR_MIRROR_BUGDOWN_KEY to ZEPHYR_MIRROR_MARKDOWN_KEY and
DEFAULT_BUGDOWN_KEY to DEFAULT_MARKDOWN_KEY.
This commit is part of a series of commits aimed at renaming bugdown to
markdown.
Preparatory commit before removing the bugdown alias for markdown. This
will prevent same-name variable errors when the name markdown is used
instead of bugdown.
This commit is part of a series of commits aimed at renaming bugdown to
markdown.
This commit changes the class name of BugdownListPreprocessor to
MarkdownListPreprocessor. It also changes the corresponding references
in tests.
This is part of a series of commits which aim at renaming bugdown to
markdown.
Rename the file, and all the references to the file and module, from
test_bugdown.py to test_markdown.py.
This commit is part of a series of commits that rename bugdown to
markdown.
This commit is the first of a few commits which aim to change all the
bugdown references to markdown. This commit renames the files and file
path mentions, and changes the imports.
Variables and other references to bugdown will be renamed in subsequent
commits.
We send user_id of the referrer instead of email in the invites dict.
Sending user_ids is more robust, as those are an immutable reference
to a user, rather than something that can change with time.
Updates to the webapp UI to display the inviters for more convenient
inspection will come in a future commit.
In zulip.yaml, add `deprecated` tags to all parameters/keys with
`Deprecated` in the description. Then add tests to ensure that deprecated
parameters/keys will always have the `deprecated` key. Also, in
the API docs, sort the parameters according to presence of `deprecated`
key, presenting the `deprecated` keys at the end and add a `deprecated`
tag next to them.
Change the variable `name` to `date_sent`, as `name` actually stores
the date sent. Also change the data types of `name` and `create_time`
to integer, as they actually have an empty decimal value.
Due to authentication restrictions, a deployment may need to direct
traffic for mobile applications to an alternate URI to take advantage of
an alternate authentication mechanism. By default the standard realm URI
will be used, but if overridden in the settings file, an alternate URI
can be substituted.
We send a remove mobile push notification to users who are no longer
mentioned after the content of the message was edited.
This also corrects the notification count for the mobile apps in the
case where a user was previously mentioned in a muted stream/topic, the
message was edited, and the user is no longer mentioned.
Hence, this fixes the case where a user has read all their unreads but
the notification badge on the app is still positive.
Fixes #15428.
do_clear_mobile_push_notifications_for_ids can now be used to
clear push notifications for multiple users at once. This method
loops over users, so no performance optimization is gained.
We've been seeing an exception in server_event_dispatch.js in
production where in large organizations, sometimes when a new user
joined, every other browser in the organization would throw an
exception processing some sort of realm_user/update event.
It turns out the cause was that when a user copies their profile from
an existing user account with a user-uploaded avatar, the code path we
reused to set the avatar would send a realm_user/update event about
the avatar change -- for a user that hadn't been fully created and
certainly hadn't had the realm_user/add event sent for them.
We fix this and add tests and comments to prevent it recurring.
(Removed an incorrect docstring while working on this).
The restart event was always handled pretty similarly
to pointer, so I use restart events now for this
test (in preparation for eliminating pointer events).
This eval function performs the inverse of the implicit
stringification that’s implied by this type-incorrect assignment in
do_update_user_custom_profile_data_if_changed:
field_value.value = field['value']
We believe there’s sufficient validation for the data being passed to
this eval that it could only have been exploited by a PostgreSQL
administrator editing the database manually.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This fixes an issue that caused HTML entities inside of inline code
blocks to be converted rather than being displayed literally.
The upstream python-markdown now handles this correctly, so we just use
their implementation with our changes for removing .strip(). As a result
of this migration, we switch the backtick pattern to an inline processor
too.
Fixes #12056.
For the codeblock counterpart of this issue, we should follow the
upstream PR https://github.com/Python-Markdown/markdown/pull/990.
Co-authored-by: Rohitt Vashishtha <aero31aero@gmail.com>
* Reordered the settings that are relevant without stream creation to the top.
* Removed useless/misleading defaults for optional parameters.
* Clarified the description of the announce and authorization_errors_fatal settings.
* Clarified that `invite_only` only applies to stream creation.
(It's annoying to do the same for its friends, because they include
common description content, and OpenAPI doesn't have a way to add
extra content in a place where you included something.)
Fixes #14705.
Now we are consistent about validating color/description.
Ideally we wouldn't need to validate the
`streams_raw` parameters multiple times per
request, but the outer function here changes
the error messages to explicitly reference
the "delete" and "add" request variables.
And for the situation where the user-supplied
parameters are correct, the performance penalty
for checking them twice is extremely negligible.
So it's probably fine for now to just make sure
we use the same validators in all the relevant
places.
There's probably some deeper refactor that we
can do to eliminate the whole `compose_views`
scheme. And it's also not entirely clear to
me that we really need to support the update
endpoint. But that's all out of the scope of
this commit.
Note that I don't actually convert the
checker from check_dict to check_dict_only,
because that would be a user-facing change,
but I think we can sweep a lot of things
like this after the next release.
This avoids some code duplication as well
as adding some missing fields.
We also use check_dict_only to prevent
folks from adding new fields to the
relevant events without updating these
tests. (A bigger sweep comes later.)
As the code comment indicates, we just
use a strict check here rather than
pretending that the test exercises a
more complicated schema for the config
data, which is dynamic in nature.
Cleaning up config_data is outside the
scope of this PR; my main goal is to
eliminate check_dict calls (usually in favor
of check_dict_only).
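To illustrate the distinction (these are simplified stand-ins, not
Zulip's actual validator implementations): a check_dict-style check
tolerates keys it doesn't know about, while check_dict_only rejects
them, which is what catches undocumented new fields in the event tests.

```python
from typing import Any, Dict, Optional

def check_dict(required: Dict[str, type], val: Dict[str, Any]) -> Optional[str]:
    for key, typ in required.items():
        if not isinstance(val.get(key), typ):
            return f"{key} is missing or has the wrong type"
    return None  # extra keys are silently ignored

def check_dict_only(required: Dict[str, type], val: Dict[str, Any]) -> Optional[str]:
    error = check_dict(required, val)
    if error is not None:
        return error
    extra = sorted(set(val) - set(required))
    if extra:
        return f"unexpected keys: {extra}"
    return None
```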
Because of other validation on these values, I don't believe any of
these does anything different, but these changes improve readability
and likely make GitHub's code scanners happy.
The helper should be used instead of constructing the dict manually.
Change get_account_data_dict, on the GitHubAuthBackendTest
class, so it has a third argument, user_avatar_url.
This is preparation for supporting the use of the GitHub avatar
upon user registration (when the user logs in using
GitHub).
Update the REQ check for profile_data in
update_user_backend by tweaking `check_profile_data`
to use `check_dict_only`.
Here is the relevant URL:
path('users/<int:user_id>', rest_dispatch,
{'GET': 'zerver.views.users.get_members_backend',
It would be nice to unify the validator
for these two views, but they are different:
update_user_backend
update_user_custom_profile_data
It's not completely clear to me why update_user_backend
seems to support a superset of the functionality
of `update_user_custom_profile_data`, but it has
this code to allow you to remove custom profile fields:
clean_profile_data = []
for entry in profile_data:
assert isinstance(entry["id"], int)
if entry["value"] is None or not entry["value"]:
field_id = entry["id"]
check_remove_custom_profile_field_value(target, field_id)
else:
clean_profile_data.append({
"id": entry["id"],
"value": entry["value"],
})
Whereas the other view is much simpler:
def update_user_custom_profile_data(
<snip>
) -> HttpResponse:
validate_user_custom_profile_data(user_profile.realm.id, data)
do_update_user_custom_profile_data_if_changed(user_profile, data)
# We need to call this explicitly otherwise constraints are not check
return json_success()
This tightens our checking of user-supplied data
for this endpoint:
path('users/me/profile_data', rest_dispatch,
{'PATCH': 'zerver.views.custom_profile_fields.update_user_custom_profile_data',
...
We now explicitly require the `value` field
to be present in the dicts being passed in
here, as part of `REQ`. There is no reason
that our current clients would be sending
extra fields here, and we would just ignore
them anyway, so we also move to using
check_dict_only.
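Concretely, the request body is a JSON-encoded list of dicts of exactly
this shape (the field ids below are made up), and check_dict_only means
anything beyond id/value is rejected:

```python
profile_data = [
    {"id": 4, "value": "2020-07-01"},  # e.g. a date field
    {"id": 7, "value": [10, 11]},      # e.g. a user-list field (user ids)
]
```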
Here is some relevant webapp code (see settings_account.js):
fields.push({id: field.id, value: user_ids});
update_user_custom_profile_fields(fields, channel.patch);
settings_ui.do_settings_change(method, "/json/users/me/profile_data",
{data: JSON.stringify([field])}, spinner_element);
The webapp code sends fields one at a time
as one-element arrays, which is strange, but
that is out of the scope of this change.
After some discussion, everyone seems to agree that 3.0 is the more
appropriate version number for our next major release. This updates
our documentation to reflect that we'll be using 3.0 as our next major
release.
`/api/v1/fetch_api_key`'s response had a key `email` with the user's
delivery email. But its JSON counterpart `/json/fetch_api_key`, which
has a completely different implementation, did not return `email` in
its success response.
So to avoid confusion, the non-API endpoint `/json/fetch_api_key`'s
response has been made identical to its `/api` counterpart by
adding the `email` key. It is also safe to send, as the calling user
will only see their own email.
We now have our muted topics use tuples internally,
which allows us to tighten up the annotation
for get_topic_mutes, as well as our schema
checking.
We want to deprecate sub_validator=None
for check_list, so we also introduce
check_tuple here. Now we also want to deprecate
check_tuple, but it's at least isolated now.
We will use this for data structures that are tuples,
but which are sent as lists over the wire. Fortunately,
we don't have too many of those.
The plan is to convert tuples to dictionaries,
but backward compatibility may be tricky in some
places.
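A simplified stand-in for the idea behind check_tuple (not the real
validator): the value arrives over the wire as a JSON list, but must
have exactly these positional element types.

```python
from typing import Any, Optional, Sequence

def check_tuple(sub_types: Sequence[type], var_name: str, val: Any) -> Optional[str]:
    if not isinstance(val, list):
        return f"{var_name} is not a list"
    if len(val) != len(sub_types):
        return f"{var_name} should have exactly {len(sub_types)} items"
    for i, (typ, item) in enumerate(zip(sub_types, val)):
        if not isinstance(item, typ):
            return f"{var_name}[{i}] is not a {typ.__name__}"
    return None

# A muted-topic entry is sent as ["<stream name>", "<topic>"]:
assert check_tuple([str, str], "muted_topics[0]", ["Verona", "lunch"]) is None
```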
This commit changes the do_get_user_invites function to not return
multiuse invites to non-admin users. We should only return multiuse
invites to admins, as we only allow admins to create them.
This commit changes the PreregistrationUser.invite_as dict to have the
same set of values as we have for UserProfile.role.
This also adds a data migration to update the already existing
PreregistrationUser and MultiuseInvite objects.
Streams can have lots of subscribers, meaning that the archiving process
will be moving tons of UserMessages per message. For that reason, using
a smaller batch size for stream messages is justified.
Some personal messages need to be added in test_scrub_realm to have
coverage of do_delete_messages_by_sender after these changes.
Currently, we use -1 as the Realm.message_retention_days value to retain
messages forever unless specified at the stream level for a particular
stream; that is, no policy is set at the realm level. But this is
inconsistent with what we use for Stream.message_retention_days, where -1
means
> disable retention policy for this stream unconditionally
which can be confusing from an API standpoint.
So instead of trying some hack to reset the value to NULL, or using some
other value like -2 for RETAIN_MESSAGE_FOREVER and exposing that in the
API, it is much more intuitive to use a string like 'forever' that can be
mapped to RETAIN_MESSAGE_FOREVER at the backend. This is similar to what
we use for stream settings as well.
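A sketch of the API-level mapping being proposed (the parsing helper is
illustrative; RETAIN_MESSAGE_FOREVER keeps the -1 sentinel internally):

```python
RETAIN_MESSAGE_FOREVER = -1

def parse_realm_message_retention_days(value: object) -> int:
    if value == "forever":
        return RETAIN_MESSAGE_FOREVER
    if isinstance(value, int) and value > 0:
        return value
    raise ValueError("message_retention_days must be a positive integer or 'forever'")
```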
To be more consistent with the meaning in the Stream model, and to make
it easier to have a reasonable settings API, we get rid of the None
value for Realm.message_retention_days in favor of the value -1 to
represent the "don't delete messages" default policy.
In 5200598a31, we introduced a new
client capability that can be used to avoid unreasonable network
bandwidth consumed by sending avatar URLs of long-term idle users in
organizations with 10,000s of members.
This commit enables this feature and adds support for it to the web
client.
subdomain=None didn't make much sense as a value, and wasn't actually in
use anywhere, except one test where it was accidental. All tests specify
the subdomain explicitly, so we should change the type to str, and make
it an obligatory kwarg.
Adds the ability to set a SAML attribute which contains a
list of subdomains the user is allowed to access. This allows a Zulip
server with multiple organizations to filter using SAML attributes
which organization each user can access.
Cleaned up and adapted by Mateusz Mandera to better fit our conventions
and needs.
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
Also, the `send_message` example is altered to send a message to the
stream 'social', to avoid getting a "first_message_id": null
in the response for the `get_subscriptions` example, which caused
`validate_against_openapi_schema` to throw an error.