An explanatory note on the changes in zulip.yaml and
curl_param_value_generators is warranted here. In our automated
tests for our curl examples, the test for the API endpoint that
changes the posting permissions of a stream comes before our
existing curl test for adding message reactions.
Since there is an extra notification message due to the change in
posting permissions, the message IDs used in tests that come after
need to be incremented by 1.
This is a part of #20289.
It's slightly annoying to plumb Optional[MentionBackend]
down the stack, but it's a one-time change.
I tried to make the cache code relatively unobtrusive
for the single-message use case.
We should be able to eliminate redundant stream queries
using similar techniques.
I considered caching at the level of rendering the message
itself, but this involves nearly as much plumbing, and
you have to account for the fact that several users on
your realm may have distinct default languages (French,
Spanish, Russian, etc.), so you would not eliminate as
many query hops. Also, if multiple streams were involved,
users would get slightly different messages based on
their prior subscriptions.
The bug here probably didn't come up too much in
practice, but if we were adding a user to multiple
streams when they already had used all N available
colors, all the new streams would be assigned the same
color, since the size of used_colors would stay at N,
thwarting our little modulo-len hackery.
It's not a terrible bug, since users can obviously
customize their stream colors as they see fit.
Usually when we are adding a user to multiple streams,
the users are fairly new, and thus don't have many
existing streams, so I have never heard this bug
reported in the field.
Anyway, assigning the colors in bulk seems to make more
sense, and I added some tests.
For the situations where all the colors have already
been used, I didn't put a ton of thought into exactly
which repeated colors we want to choose; instead, I
just ensure they're different modulo 24. It's possible
that we should just have more than 24 canned colors, or
we should just assign the same default color every time
and let users change it themselves (once they've gone
beyond the 24, to be clear). Or maybe we can just do
something smarter here. I don't have enough time for a
deep dive on this issue.
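As a rough, self-contained sketch of the bulk approach (the palette list, helper name, and fallback choice here are illustrative assumptions, not the actual Zulip code):

    from typing import Dict, List, Set

    STREAM_ASSIGNMENT_COLORS: List[str] = [f"color-{i}" for i in range(24)]  # stand-in palette

    def pick_colors(used_colors: Set[str], new_stream_ids: List[int]) -> Dict[int, str]:
        # Assign colors for all the new streams in one pass, so each choice
        # can see the colors picked earlier in the same batch.
        color_map: Dict[int, str] = {}
        recycled = 0
        for stream_id in new_stream_ids:
            available = [c for c in STREAM_ASSIGNMENT_COLORS if c not in used_colors]
            if available:
                color = available[0]
            else:
                # Every canned color is taken; at least keep repeats distinct
                # modulo 24 instead of giving all new streams the same color.
                color = STREAM_ASSIGNMENT_COLORS[recycled % 24]
                recycled += 1
            used_colors.add(color)
            color_map[stream_id] = color
        return color_map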
This commit sets us up for the next commit, which will
save us a very expensive query.
If you are adding 15k users to a stream, and each user
has about 20 existing streams, then we need to retrieve
300k rows from the database to figure out which stream
colors they already have. We don't need all the extra
fields from Subscription, so now we get just the two
values we need for making a color map.
In the next commit we'll eliminate the other use case
for the big query, and I will explain in greater
depth how splitting out the color-picking code can
be a huge win. It is possible that some product decisions
could make this codepath easier. We could also do some
engineering specific to stream colors, such as caching
which colors users have already used.
This does cost us an extra round trip to the database.
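A sketch of the narrower query, assuming Django's values_list and a color field on Subscription (the field paths are assumptions, not the exact implementation):

    from typing import Dict, List

    from zerver.models import Subscription

    def existing_color_map(user_profile_ids: List[int]) -> Dict[int, List[str]]:
        # Fetch only the two columns needed to build a per-user color map,
        # instead of full Subscription rows with user_profile/recipient data.
        rows = Subscription.objects.filter(
            user_profile_id__in=user_profile_ids,
        ).values_list("user_profile_id", "color")
        color_map: Dict[int, List[str]] = {}
        for user_profile_id, color in rows:
            color_map.setdefault(user_profile_id, []).append(color)
        return color_map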
We now complain if a test author sends a stream message
that does not result in the sender getting a
UserMessage row for the message.
This is basically 100% equivalent to complaining that
the author failed to subscribe the sender to the stream
as part of the test setup, as far as I can tell, so the
AssertionError instructs the author to subscribe the
sender to the stream.
We exempt bots from this check, although it is
plausible we should only exempt the system bots like
the notification bot.
I considered auto-subscribing the sender to the stream,
but that can be a little more expensive than the
current check, and we generally want test setup to be
explicit.
If there is some legitimate way that a subscribed human
sender can't get a UserMessage, then we probably want
an explicit test for that, or we may want to change the
backend to just write a UserMessage row in that
hypothetical situation.
For most tests, including almost all the ones fixed
here, the author just wants their test setup to
realistically reflect normal operation, and often devs
may not realize that Cordelia is not subscribed to
Denmark or not realize that Hamlet is not subscribed to
Scotland.
Some of us don't remember our Shakespeare from high
school, and our stream subscriptions don't even
necessarily reflect which countries the Bard placed his
characters in.
There may also be some legitimate use case where an
author wants to simulate sending a message to an
unsubscribed stream, but for those edge cases, they can
always set allow_unsubscribed_sender to True.
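For illustration, a fixed test typically just subscribes the sender during setup; a minimal sketch using Zulip's test helpers (the stream names and exact placement of the opt-out flag are assumptions):

    from zerver.lib.test_classes import ZulipTestCase

    class ExampleStreamMessageTest(ZulipTestCase):
        def test_send_with_subscribed_sender(self) -> None:
            # Typical fix: subscribe the sender as part of the test setup.
            hamlet = self.example_user("hamlet")
            self.subscribe(hamlet, "Scotland")
            self.send_stream_message(hamlet, "Scotland", content="hello")
            # A test that deliberately exercises an unsubscribed sender would
            # instead opt in via allow_unsubscribed_sender=True.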
Migrates the `/update-subscription-settings` API endpoint to the
`ignored_parameters_unsupported` model, which is also currently used
by `/update-settings` and `/update-realm-user-settings-defaults`.
This change is a step towards preparing for an eventual migration to
have all endpoints return an `ignored_parameters_unsupported` block.
Previously the `/update-subscription-settings` endpoint returned a
copy of the data object sent in the request.
Fixes #15307.
It is confusing to have the plan type constants not be namespaced
by the thing they represent. We already have a namespacing
convention in place for constants, so we should use it for
Realm.plan_type as well.
This commit removes _test_user_settings_for_adding_streams
and its callers for testing public and private streams,
because it uses excessive mocking; we already test the same
thing in _test_user_settings_for_creating_streams without
mocking, so this test doesn't add anything.
This commit adds a can_create_web_public_streams helper
in models.py, which will be used to validate whether a
user is allowed to create a web-public stream.
This commit also adds the checks for Realm.POLICY_OWNERS_ONLY
in check_has_permission_policies.
This commit requires the invite_only argument to be passed
as a keyword argument in _test_user_settings_for_creating_streams.
This will help improve readability, especially when we add the
is_web_public argument in further commits.
Users wanted a feature where they could specify
which users can create public streams and which users can
create private streams.
This splits stream creation code into two parts,
public and private stream creation.
Fixes #17009.
We allow clients to make existing streams web public via the API.
This feature is still disabled via settings in production
environments, because we may have additional policy rules or UI
warnings we wish to add to this sort of conversion.
Users can now create web-public streams via the /subscribe API.
When a web-public stream in the API request does not exist, it
will now be created if the is_web_public parameter is specified.
Before this commit, the parameter would have been ignored.
The new error message is clearer about why the request failed; "User
cannot create stream with this settings." was bad English, and in any
case removing an unnecessary string is always an improvement for
translators.
This commit updates both the stream-level and realm-level message
retention setting to use 'unlimited' instead of 'forever' to set
message retention setting to "retain messages forever".
Sometime in the deep past, the Zulip GET /users/me/subscriptions
endpoint started returning subscribers. We noticed this and made it
optional via the include_subscribers parameter in
1af72a2745; however, we didn't notice
that they were being returned as emails rather than user IDs.
We migrated the core /register code paths to use subscriber IDs years
ago; this change completes that for the endpoints we forgot about.
The documentation allowed this error because we apparently had no
tests for this code path that used the actual API.
Since do_create_realm also creates the general and core team streams,
we rename general to Verona right after the realm is created, mostly
because we don't really want two additional streams and this should
make the change easier to review.
There are puppeteer test changes because we have a new "core team"
stream in tests, as well as a new default notification stream
"Verona". Because of this, tests in message-basics, for example, have
to be changed, since the newly added core team stream affects the
order in which we navigate through the streams using arrow keys.
An extra wait for a selector was added in the subscriptions test to
make the test wait; without it, the test passed occasionally
and failed at other times.
Fixes #6967
This is a prep commit for splitting
create_stream_policy into create_private_stream_policy
and create_public_stream_policy.
This extracts it in a way to make it possible to easily test
different stream policies in the upcoming stream policy split.
This is a prep commit for splitting
create_stream_policy into create_private_stream_policy
and create_public_stream_policy.
This extracts it in a way to make it possible to easily test
different stream policies in the upcoming stream policy split.
test_create_stream_policy_setting (in class StreamAdminTest) and
test_user_settings_for_creating_streams (in class SubscriptionAPITest)
test essentially the same thing.
So, remove one of them.
Removing test_create_stream_policy_setting makes sense,
since class StreamAdminTest tests things only admins can do, whereas
non-admin users can also create streams.
test_invite_to_stream_by_invite_period_threshold (in class StreamAdminTest)
and test_user_settings_for_subscribing_other_users
(in class SubscriptionAPITest) test essentially the same thing.
So, remove one of them.
Removing test_invite_to_stream_by_invite_period_threshold makes sense,
since class StreamAdminTest tests things only admins can do, whereas
non-admin users can also invite other users.
This was used to test can_create_stream property of a guest user.
There are better ways to test it, which are already implemented in
test_can_create_streams.
This will make it easier to systematically use Django's
`captureOnCommitCallbacks` in tests outside of the main
`test_events` file that involve assertions on events.
Cleaning up test_realm_domains.RealmDomainTest.test_list_realm_domains,
test_subs.StreamAdminTest.test_private_stream_live_updates,
test_subs.StreamAdminTest.test_realm_admin_can_update_unsub_private_stream
and test_subs.StreamAdminTest.test_non_admin_cannot_access_unsub_private_stream.
* Remove unnecessary json_validator for string parameters.
* Update the frontend to pass the right parameter.
Bump the API feature level and highlight the fix for the `emojiset`
parameter of the `settings/display` endpoint in the zulip.yaml file.
Fixes part of #18035.
Remove unnecessary json_validator for string parameters. This change
does not modify JavaScript because we don't have a frontend for these
API endpoints yet.
Fixes part of #18035.
get_active_subscriptions_for_stream_id should allow specifying whether
subscriptions of deactivated users should be included in the result.
Active subs of deactivated users are a subtlety that's easy to miss
when writing relevant code, so we make include_deactivated_users a
mandatory kwarg - this will force callers to definitely give thought to
whether such subs should be included or not.
This commit is just a refactoring; we keep the original behavior everywhere.
There are places where subs of deactivated users should probably be
excluded but aren't - we don't fix that here; it'll be addressed in
follow-up commits.
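A minimal sketch of the kind of signature this describes (the filter details are assumptions, not the exact implementation):

    from django.db.models import QuerySet

    from zerver.models import Recipient, Subscription

    def get_active_subscriptions_for_stream_id(
        stream_id: int, *, include_deactivated_users: bool
    ) -> QuerySet:
        # The keyword-only marker (*) forces every caller to state explicitly
        # whether deactivated users' subscriptions should be included.
        query = Subscription.objects.filter(
            recipient__type=Recipient.STREAM,
            recipient__type_id=stream_id,
            active=True,
        )
        if not include_deactivated_users:
            query = query.filter(user_profile__is_active=True)
        return query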
A bug in the implementation of the can_forge_sender permission
(previously is_api_super_user) resulted in users with this permission
being able to send messages appearing as if sent by a system bot,
including to other organizations hosted by the same Zulip installation.
- The send message API had a bug allowing an API super user to
use forging to send messages to other realms' streams, as a
cross-realm bot. We fix this most directly by eliminating the
realm_str parameter - it is not necessary for any valid current use
case. The email gateway doesn't use this API despite the comment in
that block suggesting otherwise.
- The conditionals inside access_stream_for_send_message are changed up
to improve security. They were generally not ordered very well,
allowing the function to successfully return due to very weak
acceptance conditions - skipping the higher importance checks that
should lead to raising an error.
- The query count in test_subs is decreased because
access_stream_for_send_message returns earlier when doing its check
for a cross-realm bot sender - some subscription checking queries are
skipped.
- A linkifier test in test_message_dict needs to be changed. It didn't
make much sense in the first place, because it was creating a message
by a normal user, to a stream outside of the user's realm. That
shouldn't even be allowed.
We refactor check_has_permission_policies to check all user roles for
each value of the policy. This will help handle a case where a guest is
allowed to do something but a moderator isn't.
We need to do user_profile.refresh_from_db() in validation_func because
the realm object from user_profile is used in has_permission and we need
the updated realm instance after changing the policy.
This is a follow-up commit to 9a4c58cb.
Messages sent by muted users are marked as read
as soon as they are sent (or, more accurately,
while the database entries are being created), regardless
of type (stream/huddle/PM).
ede73ee4cd makes it easy to
pass a list to `do_send_messages` containing user-ids for
whom the message should be marked as read.
We add the contents of this list to the set of muter IDs,
and then pass it on to `create_user_messages`.
This benefits from the caching behaviour of `get_muting_users`
and should not cause performance issues long term.
The consequence is that messages sent by muted users will
not contribute to unread counts and notifications.
This commit does not affect the unread messages
(if any) present just before muting, but only handles
subsequent messages. Old unreads will be handled in
further commits.
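A tiny sketch of the union described above (names are illustrative, not the literal implementation):

    from typing import AbstractSet, Set

    def user_ids_to_mark_read(
        mark_as_read: AbstractSet[int], muting_user_ids: AbstractSet[int]
    ) -> Set[int]:
        # Messages from a muted sender should never show up as unread for the
        # users muting them, so fold those ids into the explicit list that
        # do_send_messages already accepts.
        return set(mark_as_read) | set(muting_user_ids)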
We keep the error message the same for all cases when a user is not
allowed to subscribe others, for all values of invite_to_stream_policy.
We raise an error with a different message for guests because that case
is handled by decorators; we aim to change this behavior in the future.
Explaining the details in the error message isn't very important, since
these errors are probably only visible via the API, as we don't even
show the disallowed options in the frontend.
We keep the error message the same for all cases when a user is not
allowed to create streams, for all values of create_stream_policy.
We raise an error with a different message for guests because that case
is handled by decorators; we aim to change this behavior in the future.
Explaining the details in the error message isn't very important, since
these errors are probably only visible via the API, as we don't even
show the disallowed options in the frontend.
The tests for can_create_streams and can_subscribe_other_users share a
lot of code, so we deduplicate it by extracting most of the code into
check_has_permission_policies, which is now called by the two tests
test_can_create_streams and test_can_subscribe_other_users.
This will also help avoid duplicated code when we convert more
policies to use COMMON_POLICY_TYPES.
Adding an additional `!` to the stream name each time a stream is
deactivated, to a maximum of 21 times, effectively limits the number of
times a stream with a given name can be deactivated. This is unlikely
to come up in common usage, but may be confusing when testing.
Change what we prepend to deactivated stream names to something with
more entropy than just `!`s, by instead prepending a substring of the
hash of the stream's ID. Using 128 bits of the hash means that it
would require more than 10^18 renames to have a 1% chance of collision.
Because too-long stream names are also truncated at 60 characters,
having this entropy at the beginning of the name also helps address
potential issues from stream names that differed only in, e.g., the
60th character.
Fixes #17016.
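A rough sketch of the scheme (the prefix format and hash function are assumptions; only the "128 bits of a hash of the stream ID" idea comes from the commit):

    import hashlib

    def deactivated_stream_name(stream_id: int, old_name: str, max_length: int = 60) -> str:
        # 128 bits of the hash -> 32 hex characters, so repeated deactivations
        # of streams with the same name get distinct new names.
        prefix = hashlib.sha512(str(stream_id).encode()).hexdigest()[:32]
        return f"!DEACTIVATED:{prefix}!{old_name}"[:max_length]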
This commit adds a new option, STREAM_POST_POLICY_MODERATORS, to
stream_post_policy, which allows only realm admins and
moderators to post in that stream.
The moderators-only option was actually added in the previous
commit for create_stream_policy, since we use the same
'has_permission' function for both policies, but we add the error
handling code and tests for the moderators-only option in this
commit.
This commit modifies the has_permission function to include the
realm moderator role, thus adding a new moderators-only
option for create_stream_policy.
Though this automatically adds the option for invite_to_stream_policy
as well, we will keep the error-handling code and tests for it
in a separate commit.
This commit changes the list_to_streams function to raise an error
according to the create_stream_policy value when a user cannot create
streams, instead of the same error for all cases.
This commit modifies test_user_settings_for_subscribing_other_users
to check all the possible cases, including the cases where a user
can successfully subscribe other users, along with the already-tested
failure cases. This commit also adds checks for guest users,
which were not present before.
This commit replaces the code which directly changes user.role,
realm.create_stream_policy and realm.waiting_period_threshold
with do_change_user_role and do_set_realm_property functions
in test_can_create_streams. This makes the code similar to the
other tests.
We refactor test_can_create_streams and test_can_subscribe_other_users
in test_subs.py. We want to follow a specific order in such tests:
set each policy value one by one, then check that the role named in
the policy returns true and the role just below it returns false.
This approach is explained in detail below.
The following hierarchy of roles is considered for these tests:
1. Realm admin
2. Full member
3. Member
4. Guest
If the policy is set to admins only, we check that having the admin
role returns true and the role just below it, i.e. full member,
returns false. Similarly, if the policy is set to members only, we
check that a member returns true and the role below it, guest,
returns false. We follow this approach because we can assume that if
a user with a particular role cannot do the required task, then a
user with a role lower in the hierarchy would not be allowed to do
it either.
This commit refactors the two tests mentioned above to follow this
workflow.
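A small self-contained sketch of this "role just below" idea (purely illustrative; these are not the actual Zulip helpers):

    from typing import Dict

    ROLE_HIERARCHY = ["realm admin", "full member", "member", "guest"]

    def expected_results(allowed_role: str) -> Dict[str, bool]:
        # Roles at or above the allowed role may perform the action;
        # everything lower in the hierarchy may not.
        cutoff = ROLE_HIERARCHY.index(allowed_role)
        return {role: i <= cutoff for i, role in enumerate(ROLE_HIERARCHY)}

    # If the policy allows members, a member passes and a guest fails:
    assert expected_results("member") == {
        "realm admin": True,
        "full member": True,
        "member": True,
        "guest": False,
    }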
This commit removes an unnecessary do_change_user_role call
in test_can_subscribe_other_users. It was added in 1aebf3cab,
which replaced functions like do_change_is_admin
and do_change_is_guest with do_change_user_role.
Previously, the two functions do_change_is_admin and do_change_is_guest
were used because there were two flags, is_realm_admin and is_guest,
that determined the role of a user. We then added
a single role field to UserProfile, removed the multiple flags,
and thus also replaced the different functions with a single
do_change_user_role. With the addition of the role field, two
different do_change_* calls are no longer needed, as we only have
a role field instead of different flags, but this was missed in
1aebf3cab and this commit fixes it.
This commit updates the stream creation, subscribing others to
a stream, wildcard mention, and stream post policy settings to allow
realm moderators even if they are new and the respective setting
is set to allow full members only.
This commit renames the is_new_member property in models.py
to is_provisional_member, which returns true for any user
who is not a full member. We will add a condition in a further
commit such that it returns False for a moderator, as we
will initially give moderators all the rights that a full
member has.
Note that at this point, it's not possible to create moderator users;
this just will make it easier to write tests for logic involving them
as we develop the feature.
Adjustments made due to changes in Django 3.0:
(https://docs.djangoproject.com/en/3.0/releases/3.0/)
- test_signup: INTERNAL_RESET_URL_TOKEN was moved to
PasswordResetConfirmView.reset_url_token
- test_message_fetch:
"add_never_cache_headers() and never_cache() now add the private
directive to Cache-Control headers."
- "django.utils.html.escape() now uses html.escape() to escape HTML.
This converts ' to ' instead of the previous equivalent decimal
code '." - this requires adjusting the expected decimal code
in some of the string fixtures in tests.
We eliminate some redundant checks.
We also consistently provide a `subscribers` field
in our stream data with `[]`, even if our users
can't access subscribers. We therefore bump
the API version and tweak the docs. (See further
down for a detailed justification of the change.)
Even though it is sometimes fine to have redundant code
that is defensive in nature, some upcoming changes are going to
move subscriber-related logic out of build_stream_dict_for_sub
for certain codepaths as part of our effort to streamline
the payload for subscribers within page_params.
So we can't rely on the code that I removed here
inside of build_stream_dict_for_sub.
Anyway, it makes more sense to do these checks explicitly
in the validate function.
The code in build_stream_dict_for_sub was almost effectively
a noop, since the validation function was already preventing
us from getting subscriber info. The only difference it
made was sometimes converting `[]` to `None`, and then
subsequently omitting the subscribers field.
Neither ZT nor the webapp makes any distinction between
`[]` or <missing key> for the `subscribers` data in
`page_params`.
The webapp has had this code for a long time (and now
equivalent code elsewhere in this PR):
if (!Object.prototype.hasOwnProperty.call(sub, "subscribers")) {
sub.subscribers = new LazySet([]);
}
The webapp calculates access based on booleans, anyway:
    sub.can_access_subscribers =
        page_params.is_admin || sub.subscribed ||
        (!page_params.is_guest && !sub.invite_only);
And ZT would choke if `subscribers` were missing, except that
it never gets to the relevant code due to other checks:
    def get_other_subscribers_in_stream(<snip>):
        assert stream_id is not None or stream_name is not None
        if stream_id:
            assert self.is_user_subscribed_to_stream(stream_id)
            return [sub
                    for sub in self.stream_dict[stream_id]['subscribers']
                    if sub != self.user_id]
        else:
            return [sub
                    for _, stream in self.stream_dict.items()
                    for sub in stream['subscribers']
                    if stream['name'] == stream_name
                    if sub != self.user_id]
You could make a semantic argument that we should prefer
<missing key> to `[]` when subscribers aren't even available, but
we have precedent from the way that `bulk_get_subscriber_user_ids`
has traditionally populated its result:
    result: Dict[int, List[int]] = {stream["id"]: [] for stream in stream_dicts}
If we changed `stream_dicts` to `target_stream_dicts` we
would facilitate a move toward `None`, but it would just cause
headaches for other server code as well as the frontends
(which, to reiterate, already prefer the empty array
for convenience).
In 709493cd75 (Feb 2017)
I added code to render_markdown that re-fetched the
sender of the message, to detect whether the sender is
a bot.
It's better to just let the ORM fetch this. The
message object should already have the sender.
The diff makes it look like we are saving round trips
to the database, which is true in some cases. For
the main message-send codepath, though, we are only
saving a trip to memcached, since the middleware
will have put our sender's user object into the
cache. The test_message_send test calls internally
to check_send_stream_message, so it was actually
hitting the database in render_markdown (prior to
my change).
Before this change we were clearing the cache on
every SQL usage.
The code to do this was added in February 2017
in 6db4879f9c.
Now we clear the cache just one time, but before
the action/request under test.
Tests that want to count queries with a warm
cache now specify keep_cache_warm=True. Those
tests were particularly flawed before this change.
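As a usage sketch, assuming the queries_captured test helper grows the flag described here (the endpoint and expected count are illustrative):

    from zerver.lib.test_classes import ZulipTestCase
    from zerver.lib.test_helpers import queries_captured

    class WarmCacheQueryCountExample(ZulipTestCase):
        def test_fetch_with_warm_cache(self) -> None:
            # Opt in to counting queries with a warm cache, rather than
            # having the cache cleared before the request under test.
            with queries_captured(keep_cache_warm=True) as queries:
                self.client_get("/json/users/me/subscriptions")
            self.assert_length(queries, 4)  # illustrative count only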
In general, the old code both over-counted and
under-counted queries.
It under-counted SQL usage for requests that were
able to pull some data out of a warm cache before
they did any SQL. Typically this would have bypassed
the initial query to get UserProfile, so you
will see several off-by-one fixes.
The old code over-counted SQL usage to the extent
that it effectively simulated a rather extreme assumption:
that during an action itself, the entries that you put into
the cache get thrown away.
Now, it's still bad if an action keeps hitting the
cache for no reason, but it's not as bad as hitting
the database. There doesn't appear to be any evidence
of us doing something silly like fetching the same
data from the cache in a loop, but there are
opportunities to prevent second or third round
trips to the cache for the same object, if we
can re-structure the code so that the same caller
doesn't have two callees get the same data.
Note that for invites, we have some cache hits
that are due to the nature of how we serialize
data to our queue processor--we generally just
serialize ids, and then re-fetch objects when
we pop them off the queue.
We now bulk up peer_add/peer_remove events by user if the
same user has subscribed to multiple streams (and just
that single user).
This mostly optimizes the new-user codepath, but the
algorithm is a bit more general in nature.
We now can send an implied matrix of user/stream tuples
for peer_add and peer_remove events.
The client code basically does this:
    for stream_id in event['stream_ids']:
        for user_id in event['user_ids']:
            update_sub(stream_id, user_id)
We used to send individual events, which gets real
expensive when you are creating new streams. For
the copy-to-stream case, we should see
events go from U to 1, where U is the number of users
added.
Note that we don't yet fully optimize the potential
of this schema. For adding a new user with lots
of default streams, we still send S peer_add events.
And if you subscribe a bunch of users to a bunch of
private streams, we only go from U * S to S; we can't
optimize it down to one event easily.
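For illustration, a bulked peer_add event might look roughly like this (only the stream_ids/user_ids fields are taken from the description above; the other values are made up):

    event = {
        "type": "subscription",
        "op": "peer_add",
        "stream_ids": [101, 102, 103],
        "user_ids": [9],
    }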
All the fields of a stream's recipient object can
be inferred from the Stream, so we just make a local
object. Django will create a Message object without
checking that the child Recipient object has been
saved. If that behavior changes in some upgrade,
we should see some pretty obvious symptom, including
query counts changing.
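A rough sketch of building such a local object (attribute names follow the description above and should be treated as assumptions):

    from zerver.models import Recipient, Stream

    def local_stream_recipient(stream: Stream) -> Recipient:
        # Build an unsaved, in-memory Recipient from fields the Stream already
        # has, so saving a Message does not require fetching the Recipient row.
        return Recipient(
            id=stream.recipient_id,
            type_id=stream.id,
            type=Recipient.STREAM,
        )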
Tweaked by tabbott to add a longer explanatory comment, and delete a
useless old comment.
This saves us a query for edge cases like when
you try to unsubscribe from a public stream
that you have already unsubscribed from.
But this is mostly to prep for upcoming
optimizations.
That class is an artifact of when Stream
didn't have recipient_id. Now it's simpler
to deal with stream subscriptions.
We also save a query during page load (and
other places where we get subscriber
info).
We already trust ids that are put on our queue
for deferred work. For example, see the code for
"mark_stream_messages_as_read_for_everyone"
We now pass stream_recipient_id when we queue
up work for do_mark_stream_messages_as_read.
This generally saves about 3 queries per
user when we unsubscribe them from a stream.
We get two speedups:
* The query to get existing subscribers only
gets the two fields we need. We no longer
need all the overhead of user_profile
and recipient data being returned in the
query.
* We avoid Django making extra hops to the
database to get user info.
We replace get_peer_user_ids_for_stream_change
with two bulk functions to get peers and/or
subscribers.
Note that we have three codepaths that care about
peers:
    subscribing existing users:
        we need to tell peers about new subscribers
        we need to tell the subscribed users about old subscribers
    unsubscribing existing users:
        we only need to tell peers which users unsubscribed
    subscribing a new user:
        we only need to tell peers about the new user
        (right now we generate send_event
        calls to tell the new user about existing
        subscribers, but this is a waste
        of effort that we will fix soon)
The two bulk functions are these:
    bulk_get_subscriber_peer_info
    bulk_get_peers
They have some overlap in the implementation,
but there are some nuanced differences that are
described in the comments.
Looking up peers/subscribers in bulk leads to some
nice optimizations.
We will save some memcached traffic if you are
subscribing to multiple public streams.
We will save a query in the remove-subscriber
case if you are only dealing with private streams.
We used to send occupy/vacate events when
either the first person entered a stream
or the last person exited.
It appears that our two main apps have never
looked at these events. Instead, it's
generally the case that clients handle
events related to stream creation/deactivation
and subscribe/unsubscribe.
Note that we removed the apply_events code
related to these events. This doesn't affect
the webapp, because the webapp doesn't care
about the "streams" field in do_events_register.
There is a theoretical situation where a
third party client could be the victim of
a race where the "streams" data includes
a stream where the last subscriber has left.
I suspect in most of those situations it
will be harmless, or possibly even helpful
to the extent that they'll learn about
streams that are in a "quasi" state where
they're activated but not occupied.
We could try to patch apply_event to
detect when subscriptions get added
or removed. Or we could just make the
"streams" piece of do_events_register
not care about occupy/vacate semantics.
I favor the latter, since it might
actually be what users want, and it will
also simplify the code and improve
performance.
This is a more thorough test of adding multiple
streams for multiple users, including streams
that users have already subscribed to.
The extra queries here are due to the fact
that we call `principal_to_user_profile` in
a loop in the view. So that's an example
of O(N) overhead. We may be able to bulk-fetch
these users eventually.
This is a pure extraction, except that I remove a
redundant check that `len(principals) > 0`. Whenever
that value is false, then `new_subscriptions` will
only have one possible entry, which is the current
user, and we skip that in the loop.
We no longer do O(N) queries to get existing streams.
This is a somewhat contrived use case--generally, we
are not trying to re-subscribe a user to several
streams. Still, we want to avoid this.
This commit also makes `test_bulk_subscribe_many`
do more work, and the change to the test helped
me discover this bug.
If a user asks to be subscribed to a stream
that they are already subscribed to, then
that stream won't be in new_stream_user_ids,
and we won't need to send an event for it.
This change makes that happen more automatically.
I think it's important that the callers understand
that bulk_add_subscriptions assumes all streams
are being created within a single realm, so I make
it an explicit parameter.
This may be overkill--I would also be happy if we
just included the assertions from this commit.
The query to find and mark all unread UserMessages in the stream as read
can be quite expensive, so we'll move that work to the deferred_work
queue and split it into batches.
Fixes #15770.
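A rough sketch of the batching approach (batch size, helper name, and the exact flag update are assumptions, not the literal implementation):

    from typing import List

    from django.db.models import F

    from zerver.models import UserMessage

    BATCH_SIZE = 1000

    def mark_user_messages_as_read_in_batches(user_message_ids: List[int]) -> None:
        # Runs in the deferred_work queue worker: update the read flag in
        # fixed-size chunks so no single UPDATE becomes too expensive.
        for start in range(0, len(user_message_ids), BATCH_SIZE):
            batch = user_message_ids[start : start + BATCH_SIZE]
            UserMessage.objects.filter(id__in=batch).update(
                flags=F("flags").bitor(UserMessage.flags.read)
            )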
The new Stream administrator role is allowed to manage a stream they
administer, including:
* Setting properties like name, description, privacy and post-policy.
* Removing subscribers
* Deactivating the stream
The access_stream_for_delete_or_update function is modified and is now
used only to get objects from the database; further checks for
administrative rights are done by check_stream_access_for_delete_or_update.
We have also added a new exception class, StreamAdministratorRequired.
This commit modifies the /streams endpoint so that the web-public
streams are included in the default list of streams that users
have access to.
This is part of PR #14638 that aims to allow guest users to
browse and subscribe themselves to web public streams.
Modifies filter_stream_authorization so that web-public streams are
added to the list of authorized streams that a guest user can
subscribe to.
This commit is part of PR #14638 that aims to allow guest users
to browse and subscribe to web-public streams.
In this commit, we grant guest users access to the stream history,
message sending, and common stream data of web-public streams.
This is part of PR #14638 that aims to allow guest users to
browse and subscribe to web-public streams.
This modification allows guest users to have access to web-public
streams' subscribers, even if they aren't subscribed or have never
subscribed to that stream.
This commit is part of PR #14638 that aims to allow guest users to
browse and subscribe to web-public streams.
Now, gather_subscriptions includes web-public streams in the 3 sets
of streams that it returns: subscribed, unsubscribed, and never
subscribed.
This is part of PR #14638 that aims to allow guest users to browse and
subscribe to web-public streams.
Added a new event type, STREAM_CREATED, to AbstractRealmAuditLog.
Since streams are ultimately created in the create_stream_if_needed
function in zerver/lib/streams.py, we log the realm audit entry there.
Passed acting_user when the create_stream_if_needed or ensure_stream
function is called.
Added tests in test_audit_log.
According to @showell:
> All the slow decorators can die. That was a failed experiment of
> mine from 2014 days. I have been meaning to kill them for a couple years
> now. I wrote this with the best of intentions, but I believe it's
> now just cruft. We never made a "fast" mode, for one. And we kept
> writing more and more slow tests, haha.
Signed-off-by: Anders Kaseorg <anders@zulip.com>