These logs were pretty spammy, and there have long been much better
ways to communicate to system administrators that the incoming email
gateway is working, most importantly the section of the emails
themselves that explains how replying works.
Previously, only admins were allowed to move messages between streams,
and admins are allowed to post in any stream irrespective of the
stream post policy, so there was no need to check the stream post
policy.
But as we now allow other members to also move messages, we need
to check whether the user who is moving the message is allowed
to post to the target stream (i.e., the stream to which the messages
are being moved), and thus we allow moving messages only if the
user is allowed to post in the target stream.
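Roughly, the rule being added looks like this (a self-contained sketch;
the names and policy constants are illustrative, not Zulip's actual
identifiers):
---------
from dataclasses import dataclass

# Illustrative policy constants; Zulip's Stream model defines its own values.
POLICY_EVERYONE = 1
POLICY_ADMINS_ONLY = 2
POLICY_RESTRICT_NEW_MEMBERS = 3

@dataclass
class Mover:
    is_admin: bool
    is_new_member: bool

def can_move_messages_to(user: Mover, target_stream_post_policy: int) -> bool:
    # Admins could already move messages and may post to any stream.
    if user.is_admin:
        return True
    # Other members must satisfy the target stream's post policy.
    if target_stream_post_policy == POLICY_ADMINS_ONLY:
        return False
    if target_stream_post_policy == POLICY_RESTRICT_NEW_MEMBERS:
        return not user.is_new_member
    return True
----------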
b7b1ec0aeb made our checks of the response
format stronger, to enforce that the JSON translates to a valid dict.
However, old client code (zulip_botserver) was using "" as equivalent to
response_not_required - so we need to keep backward compatibility to not
break things built on it.
Currently, moving messages between streams is an action limited to
organization administrators. A big part of the motivation for that
restriction was to prevent users from moving messages from a private
stream without shared history as a way to access messages they should
not have access to.
Organization administrators can already just make the stream have
shared history if they want to access its messages, but allowing
non-administrators to move messages between streams would have
introduced a security bug without this change.
This completes the effort to make it possible to use
bulk_access_message in contexts where there are more than a handful of
messages without creating performance issues.
This new function optimizes how we fetch subscriptions
for streams. Basically, it excludes most long-term-idle
users from the query.
With 8k users, of which all but 400 are long-term idle,
this speeds up get_recipient_info from about 150ms
to 50ms.
Overall this change appears to save a factor of 2-3 in the backend
processing time for sending or editing a message in large, public
streams in chat.zulip.org (at 18K users today).
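The shape of the query change is roughly the following sketch against
the Subscription model; the exact exemptions that keep some
long-term-idle users (e.g. those with notifications enabled) are
assumptions here:
---------
from django.db.models import Q, QuerySet

from zerver.models import Subscription

def stream_subscriptions_for_send_message(stream_recipient_id: int) -> QuerySet:
    # Skip most long-term-idle subscribers, but keep those whose
    # notification settings mean their rows are still needed.
    return Subscription.objects.filter(
        recipient_id=stream_recipient_id,
        active=True,
    ).filter(
        Q(user_profile__long_term_idle=False)
        | Q(push_notifications=True)
        | Q(email_notifications=True)
    )
----------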
If the caller has already fetched the Stream or subscription details
for the user, those can be passed to has_message_access to avoid extra
database queries.
When the format of the response received from the outgoing webhook
server is invalid (unparsable JSON, or just a wrong format that doesn't
translate into a dictionary, etc.), a message with the error is sent to
the bot owner. We should include the actual payload to make reasonable
debugging possible.
In notify_bot_owner we have to move the `if response_content` block to
append the payload to the message whenever it was specified as an
argument to the function. It shouldn't be nested inside
`elif status_code` as before.
This makes it parallel with deliver_scheduled_messages, and clarifies
that it is not used for simply sending outgoing emails (e.g. the
`email_senders` queue).
This also renames the supervisor job to match.
The screenshot generating mechanism doesn't work for New Relic and
causes an error because its configuration file doesn't exist. This
commit fixes the configuration and re-generates the screenshots.
Also link to it from the API documentation page,
other help pages, and the confirmation dialog for
muting a user.
With substantial edits by tabbott and alya.
A message containing a wildcard mention, when quoted (which
turns it into a silent mention), or a message with a silent
wildcard mention, notifies users by sending desktop, sound,
and missed-message email notifications. This is clearly a bug,
which is fixed by this commit.
Fixes: #18354.
Note that the documentation cannot fully use our macros, because
Uptime Robot requires an & at the end of the URL, because of how it
passes its payload.
Fixes #13854. Fixes #13939.
We combine the two loops into one, so that we
can check our flags before creating the
UserMessageList object.
And we lift a few calculations out of the loop.
For 8k users, with 95% long-term-idle, this was
about a 10x speedup for me. (~30ms -> 3ms)
Support for the timeouts, and tests for them, was added in
53a8b2ac87 -- though no code could have set them after 31597cf33e.
Add a 10-second default timeout. Observationally, p99 is just about
5s, with everything else previously being destined to meet the
30s worker timeout; 10s provides a sizable buffer between them.
Fixes #17742.
Thumbor and tc-aws have been dragging their feet on Python 3 support
for years, and even the alphas and unofficial forks we’ve been running
don’t seem to be maintained anymore. Depending on these projects is
no longer viable for us.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Leave the Intel build as the prominent default, since it will run on
both platforms. (I would have liked to detect the appropriate
platform, but Apple seems to have put significant effort into making
that impossible for anti-fingerprinting reasons, which is probably an
overall good.)
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Move `get_setup_webhook_message` to
`zerver/lib/webhooks/common.py` so multiple integrations can use it
rather than just those which import `zerver/lib/webhooks/git.py`. Also
add the documentation for this.
Django's default SMTP implementation can raise various exceptions
when trying to send an email. In order to allow Zulip calling code
to catch fewer exceptions to handle any cause of "email not
sent", we translate most of them into EmailNotDeliveredException.
The non-translated exceptions concern the connection with the
SMTP server. They were not merged with the rest in order to
preserve details about their nature.
Tests are implemented in the test_send_email.py module.
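A hedged sketch of the translation layer; the specific smtplib
exception classes caught below are illustrative, and connection-related
errors are deliberately left untouched:
---------
import smtplib

from django.core.mail import EmailMultiAlternatives

class EmailNotDeliveredException(Exception):
    pass

def send_built_email(msg: EmailMultiAlternatives) -> None:
    try:
        msg.send()
    except (
        smtplib.SMTPRecipientsRefused,
        smtplib.SMTPSenderRefused,
        smtplib.SMTPDataError,
    ) as e:
        # Collapse the various "email not sent" causes into one exception
        # so that calling code only needs a single except clause.
        raise EmailNotDeliveredException from e
    # Connection problems (e.g. smtplib.SMTPServerDisconnected or
    # SMTPConnectError) propagate unchanged, preserving their details.
----------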
* Move the extended documentation of code blocks to a separate page.
* Merge "code playgrounds" documentation to be a section of that page.
* Document copy widget on code blocks.
* This commit changes how we refer to "```python" type syntax for code
blocks. Instead of being called a syntax highlighting label, this is
now referred to as a "language tag", since it serves both syntax
highlighting and playgrounds.
* Remap all the links.
* Advertise this new page in various places that previously did not have a link.
As discussed in the comment, this is a critical scalability
optimization for organizations with thousands of users.
With substantial comment updates by tabbott.
Linked the Help Center document in places like
- zulip.yaml (/events, /register/, realm/playgrounds,
/realm/playgrounds/{playground_id})
- /help/format-your-message-using-markdown (Linked to make
users reading about the markdown code block style aware of this
feature)
- /templates/settings/playground_settings_admin.hbs (Linked
as a reference to read more about playgrounds before
configuring one)
Also showcase the feature on /features and /for/open-source.
In Django 3.2 slugify strips trailing dashes and underscores:
0382ecfe02
sanitize_name doesn't, so this difference should be documented like the
others.
The underscore character is already covered by \w, so _ in the regex is
redundant. Also, the docstring is mildly incorrect - the underscore
already is an allowed character in Django's slugify (and always was)
for the aforementioned reason.
Now that we are passing the source realm's id instead of string_id in
the source realm selector, it makes sense to rename the "source_realm"
field to "source_realm_id".
In the source realm selector, when we select a realm from which we want
to import the data, we pass the source realm's string_id. The problem
with this approach is that the string_id can be an empty string. This
commit makes the source_realm pass the realm's id instead of string_id.
Now, the source_realm's value will either be an integer or "" (empty
string) when we don't want to import settings from any realm.
The function get_role_for_new_user was added to get the role from the
invited_as value, as invited_as values were one of (1,2,3,4)
previously, but it was then changed to be the actual role value,
i.e. one of (100, 200, 400, 600), in 1f8f227444.
So, we can safely remove this function now, use the invited_as value
directly, and handle the realm_creation case with an if condition.
Requesting external images is a privacy risk, so route all external
images through Camo.
Tweaked by tabbott for better test coverage, more comments, and to fix
bugs.
As of now, editing a widget doesn't update the rendered content.
It's important to ensure that existing votes or options added later on
don't get deleted when rendered.
This seems more complex than it's worth.
For now, we just prevent edits to widgets.
This commit makes the UI clearer that editing widgets isn't allowed.
See also:
https://github.com/zulip/zulip/issues/14229
https://github.com/zulip/zulip/issues/14799
Fixes #17156.
`ensure_basic_avatar_image` and `ensure_medium_avatar_image` are
essentially the same thing, except for a size parameter.
So, refactor them into a single function.
This doesn't introduce any functional changes.
This ensures it is present for all requests; while that was already
essentially true via process_client being called from every standard
decorator, this allows middleware and other code to rely on this
having been set.
This commit modifies the user objects returned by the 'GET /users',
'GET /users/me', 'GET /users/{user_id}' and 'GET /users/{email}'
endpoints to include a role field.
We also include the role field in the page_params['realm_users'] dict
and in the person object sent in the (type="realm_user", op="add")
event.
This will help determine potential timeout lengths, as well as serve
as a generally-useful log for locations which do not have Smokescreen
enabled.
In service of #17742.
This helps mobile and terminal clients understand whether a server
restart changed API feature levels or not, which in turn determines
whether they will need to resynchronize their data.
Also add tests and documentation for this previously undocumented
event type.
Fixes: #18205.
An event of type restart could not be handled properly, because of
its special behavior. To handle this event in the most natural way,
we recursively call `do_events_register` when a restart event is
received, based on a custom error created for this event.
Testing: The second call to get_user_events, due to the recursive
call to do_events_register, is expected to not contain the restart
event. So the new tests added in test_event_system.py are based on
the above behavior of get_user_events.
Fixes: #15541.
Commit 9afde790c6 introduced a bug
concerning outgoing emails inside the development environment. These
emails are not supposed to use a real connection with a mail
server as the send_messages function is overwritten inside the
EmailLogBackEnd class.
The bug was happening inside the initialize_connection function that
was introduced in the above-mentioned commit. This function is used
to refresh the connection with an SMTP server that might have closed
it. As the socket used to communicate with the server is not
initialized inside the development environment, this function was
wrongly trying to send no-op commands.
The fix just checks that the connection argument of the function is
an EmailLogBackEnd object before trying the no-op command.
Additionally as it is sometimes useful to be able to send outgoing
emails inside the development environment the get_forward_address
function is used to check if a real connection exists between Zulip
and the server. If it is the case, as EmailLogBackEnd is a subclass
of smtp.EmailBackend, the connection will be nicely refreshed.
This commit was tested manually by checking that the console prints
correctly that an email is sent to the user when they sign in inside
the development environment. It was also tested with a mail provider
specified, and the mails were correctly received.
This allows access to be more configurable than just setting one
attribute. This can be configured by setting the setting
AUTH_LDAP_ADVANCED_REALM_ACCESS_CONTROL.
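The configuration is a dict keyed by realm subdomain; the exact schema
below is an assumed illustration (consult the LDAP documentation for
the authoritative format):
---------
# In /etc/zulip/settings.py (illustrative values):
AUTH_LDAP_ADVANCED_REALM_ACCESS_CONTROL = {
    # A user may access the "zulip" realm if they match every
    # attribute/value pair in at least one of the listed dicts.
    "zulip": [
        {"department": "main"},
        {"department": "web", "employeeType": "staff"},
    ],
}
----------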
This change is made to comply with the corresponding views for
the API. The incrementor implementation in zulip_bots won't work
otherwise, if send_message and send_reply return None, since it needs
the message id.
This commit creates a directory to store the mock messages for Nagios;
more will be added.
The JSON files in this directory will be used to configure the
screenshot generating script for the documentation of non-webhook
integrations.
This extends the /json/typing endpoint to also accept
stream_id and topic. With this change, the requests
sent to /json/typing should have these:
* `to`: a list set to
- recipients for a PM
- stream_id for a stream message
* `topic`, in case of stream message
along with `op` (start or stop).
On receiving a request with stream_id and topic, we send
typing events to clients with stream_typing_notifications set
to True for all users subscribed to that stream.
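For example, a stream typing-start request could look like the sketch
below; the site, credentials and ids are placeholders, and the
equivalent `/api/v1/typing` route (which maps to the same view) is used
so that API-key basic auth works:
---------
import requests

site = "https://zulip.example.com"
auth = ("typing-bot@example.com", "BOT_API_KEY")

# Stream typing notification: `to` is a list holding the stream id.
requests.post(
    f"{site}/api/v1/typing",
    auth=auth,
    data={"op": "start", "to": "[7]", "topic": "api design"},
)

# PM typing notification: `to` is the list of recipient user ids.
requests.post(
    f"{site}/api/v1/typing",
    auth=auth,
    data={"op": "stop", "to": "[11, 12]"},
)
----------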
The comment explains in more detail, but this should help avoid cases
where a Zulip server accidentally avoids the nag by having upgraded to
a 2-year old Zulip version from a 3-year-old version 2 months ago.
Add a `--dry-run` flag to the send_custom_email management command
in order to provide a mechanism to verify the emails of the recipients
and the text of the email being sent before actually sending them.
Add tests to:
- Check that no emails are actually sent when we are in the dry-run mode.
- Check if the emails are printed correctly when we are in the dry-run mode.
Fixes #17767.
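A minimal sketch of what the dry-run branch amounts to; the helper name
and arguments are placeholders for whatever the command already
computes:
---------
from typing import List

def handle_dry_run(recipients: List[str], subject: str, body: str) -> None:
    # Print what *would* be sent, without sending anything.
    print("Would send the following email to:")
    for address in recipients:
        print(f"  {address}")
    print(f"Subject: {subject}")
    print(body)
----------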
Previously the outgoing emails were sent over several SMTP
connections through the EmailSendingWorker; establishing a new
connection each time adds notable overhead.
Redefine EmailSendingWorker worker to be a LoopQueueProcessingWorker,
which allows it to handle batches of events. At the same time, persist
the connection across email sending, if possible.
The connection is initialized in the constructor of the worker
in order to keep the same connection throughout the whole process.
The concrete implementation of the consume_batch function is simply
processing each email one at a time until they have all been sent.
In order to reuse the previously implemented decorator to retry
sending failures, a new method that meets the decorator's required
arguments is declared inside the EmailSendingWorker class. This
allows retrying the sending process of a particular email inside
the batch if the caught exception leaves this process retriable.
A second retry mechanism is used inside the initialize_connection
function to redo the opening of the connection until it works or
until three attempts have failed. For this purpose the backoff module
has been added to the dependencies and a test has been added to
ensure that this retry mechanism works well.
The connection is closed when the stop method is called.
Fixes: #17672.
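A sketch of the connection-refresh-with-retry idea; the NOOP/reopen
details are assumptions built on django.core.mail's SMTP backend:
---------
import backoff

from django.core.mail.backends.smtp import EmailBackend

@backoff.on_exception(backoff.expo, Exception, max_tries=3)
def initialize_connection(backend: EmailBackend) -> EmailBackend:
    # Reuse the open SMTP connection if it still answers a NOOP;
    # otherwise close it and open a fresh one. Retried up to 3 times.
    if backend.connection is not None:
        try:
            status, _ = backend.connection.noop()
            if status == 250:
                return backend
        except Exception:
            pass
        backend.close()
    backend.open()
    return backend
----------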
This was introduced in 8321bd3f92 to serve as a sort of drop-in
replacement for zerver.lib.queue.queue_json_publish, but its use has
been subsequently cut out (e.g. `9fcdb6c83ac5`).
Remove its last callsite.
It's better to just raise JsonableError here, as that makes this error
processed in the central place for this kind of thing in do_rest_call:
---------
except JsonableError as e:
    response_message = e.msg
    logging.info("Outhook trigger failed:", stack_info=True)
    fail_with_message(event, response_message)
    response_message = f"The outgoing webhook server attempted to send a message in Zulip, but that request resulted in the following error:\n> {e}"
    notify_bot_owner(event, failure_message=response_message)
    return None
----------
which does all the things that are supposed to happen -
fail_with_message, appropriate logging and notifying the bot owner.
Remove content edit keys, if present, from edit_history_event
when passing it to update_messages_for_topic_edit.
Since a content edit is only applied to the edited message,
it shouldn't be part of the edit history of the rest of the
messages whose topic was edited. This bug was identified by
editing the topic and content of a message at the same time
when more than one message is affected.
model__id syntax implies needing a JOIN on the model table to fetch the
id. That's usually redundant, because the first table in the query
simply has a 'model_id' column, so the id can be fetched directly.
Django is actually smart enough to not do those redundant joins, but we
should still avoid this misguided syntax.
The exceptions are ManyToMany fields and queries doing a backward
relationship lookup. If "streams" is a many-to-many relationship, then
streams_id is invalid - streams__id syntax is needed. If "y" is a
foreign key from X to Y:
class X:
    y = models.ForeignKey(Y)
then object x of class X has the field x.y_id, but y of class Y doesn't
have y.x_id. Thus Y queries need to be done like
Y.objects.filter(x__id__in=some_list)
There exists a logic bug (see #18236) which causes duplicate
usermessage rows to be inserted. Currently, this stops catch-up for
all users.
Catch and record the exception for each affected user, so we at least
make catch-up progress on other users.
This was a mostly harmless bug, since those users cannot have active
clients, but fixing it will improve performance in any Zulip
organization where the vast majority of users are deactivated.
This commit adds an API to `zproject/urls.py` to edit/update
the realm linkifier. Its helper function to update the
database is added in `zerver/lib/actions.py`.
`zulip.yaml` is documented accordingly as well, clearly
stating that this API updates one linkifier at a time.
The tests are added for the API and helper function which
updates the realm linkifier.
Fixes #10830.
The caller is supposed to validate that the stream and user realm
match, but since this is a security-sensitive function, we should have
this defensive code to protect against validation bugs in the caller
leading to this being called incorrectly and returning True.
Fixes #17922.
These two places fetch subscriptions for the sake of getting user ids to
send events to. Clearly deactivated users should be excluded from that.
get_active_subscriptions_for_stream_id should allow specifying whether
subscriptions of deactivated users should be included in the result.
Active subs of deactivated users are a subtlety that's easy to miss
when writing relevant code, so we make include_deactivated_users a
mandatory kwarg - this will force callers to definitely give thought to
whether such subs should be included or not.
This commit is just a refactoring; we keep the original behavior
everywhere - there are places where subs of deactivated users should
probably be excluded but aren't - we don't fix that here, it'll be
addressed in follow-up commits.
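Roughly, the new signature forces the decision at each call site; the
sketch below filters on the user's is_active flag:
---------
from django.db.models import QuerySet

from zerver.models import Recipient, Subscription

def get_active_subscriptions_for_stream_id(
    stream_id: int, *, include_deactivated_users: bool
) -> QuerySet:
    # "active" here means the subscription itself has not been removed.
    query = Subscription.objects.filter(
        recipient__type=Recipient.STREAM,
        recipient__type_id=stream_id,
        active=True,
    )
    if not include_deactivated_users:
        # Exclude subscriptions belonging to deactivated users.
        query = query.filter(user_profile__is_active=True)
    return query
----------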
The bulk deletion codepath was using dicts instead of user ids in the
event, as opposed to the other codepath which was adjusted to pass just
user ids before. We make the bulk codepath consistent with the other
one. Due to the dict-type events happening in 3.*, we move the goal for
deleting the compat code in process_notification to 5.0.
This was dropped in commit 840cf4b885
(#15091), but commit 1432067959
(#17047) mistakenly reintroduced it.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
django.conf.urls.url is actually a deprecated alias of
django.urls.re_path, but we want path instead of re_path.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
django.utils.encoding.smart_text is a deprecated alias of
django.utils.encoding.smart_str as of Django 3.0, and will be removed
in Django 4.0.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
django.utils.http.is_safe_url is a deprecated alias of
django.utils.http.url_has_allowed_host_and_scheme as of Django 3.0,
and will be removed in Django 4.0.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
django.utils.translation.ugettext is a deprecated alias of
django.utils.translation.gettext as of Django 3.0, and will be removed
in Django 4.0.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
I have added a documentation page for the GitHub Actions integration to
`/integrations/doc/github-actions` with a link to the Zulip GitHub
Actions repository.
Tweaked by tabbott to add cross-links with the main GitHub integration.
A bug in the implementation of the can_forge_sender permission
(previously is_api_super_user) resulted in users with this permission
being able to send messages appearing as if sent by system bots,
including to other organizations hosted by the same Zulip installation.
- The send message API had a bug allowing an api super user to
use forging to send messages to other realms' streams, as a
cross-realm bot. We fix this most directly by eliminating the
realm_str parameter - it is not necessary for any valid current use
case. The email gateway doesn't use this API despite the comment in
that block suggesting otherwise.
- The conditionals inside access_stream_for_send_message are changed up
to improve security. They were generally not ordered very well,
allowing the function to successfully return due to very weak
acceptance conditions - skipping the higher importance checks that
should lead to raising an error.
- The query count in test_subs is decreased because
access_stream_for_send_message returns earlier when doing its check
for a cross-realm bot sender - some subscription checking queries are
skipped.
- A linkifier test in test_message_dict needs to be changed. It didn't
make much sense in the first place, because it was creating a message
by a normal user, to a stream outside of the user's realm. That
shouldn't even be allowed.
A bug in the implementation of replies to messages sent by outgoing
webhooks to private streams meant that an outgoing webhook bot could be
used to send messages to private streams that the user was not intended
to be able to send messages to.
Completely skipping stream access check in check_message whenever the
sender is an outgoing webhook bot is insecure, as it might allow someone
with access to the bot's API key to send arbitrary messages to all
streams in the organization. The check is only meant to be bypassed in
send_response_message, where the stream message is only being sent
because someone mentioned the bot in that stream (and thus the bot
posting there is the desired outcome). We get much better control over
what's going on by passing an explicit argument to check_message when
skipping the access check is desirable.
Organization admins can use this setting to restrict the maximum
rating of GIFs that will be retrieved from GIPHY. There is also an
option to disable GIPHY entirely.
Moves documentation about using Zoom as video call provider
to /integrations. This documentation was earlier present
at /help/start-a-call and is moved as asked in issue #17588.
Moves documentation about using Big Blue Button as video call
provider to /integrations. This documentation was earlier
present at /help/start-a-call and is moved as asked in issue #17588.
Moves documentation about using Jitsi as video call provider
to /integrations. This documentation was earlier present
at /help/start-a-call and is moved as asked in issue #17588.
We refactor check_has_permission_policies to check all user roles for
each value of the policy. This will help handle cases where a guest is
allowed to do something but a moderator isn't.
We need to do user_profile.refresh_from_db() in validation_func because
the realm object from user_profile is used in has_permission, and we
need the updated realm instance after changing the policy.
This is a follow-up commit to 9a4c58cb.
* This introduces a new event type `realm_linkifiers` and
a new key for the initial data fetch of the same name.
Newer clients will be expected to use these.
* Backwards compatibility is ensured by changing neither
the current event nor the /register key. The data which
these hold is the same as before, but internally, it is
generated by processing the `realm_linkifiers` data.
We send both the old and the new event types to clients
whenever the linkifiers are changed.
Older clients will simply ignore the new event type, and
vice versa.
* The `realm/filters:GET` endpoint (which returns tuples)
is currently used by none of the official Zulip clients.
This commit replaces it with `realm/linkifiers:GET` which
returns data in the new dictionary format.
TODO: Update the `get_realm_filters` method in the API
bindings, to hit this new URL instead of the old one.
* This also updates the webapp frontend to use the newer
events and keys.
This logic likely never ran due to a combination of bugs.
* Running `maybe_update_markdown_engines` unconditionally meant that
`if md_engine_key in md_engines` was likely always true.
* Introduced in 65838bb: DEFAULT_MARKDOWN_KEY could never be in
md_engines, so should we have ever reached that code path, we'd have
tried to rebuild all markdown engines every time.
And it also wasn't clearly helpful -- because we fetch all linkifiers
for a realm on every request anyway, we don't really save database
queries by doing a bulk fetch on startup, and doing so would likely
result in a material regression to Zulip's overall startup time,
since we would be creating markdown engines for large numbers of
realms in bulk during process startup.
When a user is muted, in the same request,
we mark any existing unreads from that user
as read.
This is done for all types of messages
(PM/huddle/stream) and regardless of whether
the user was mentioned in them.
This will not break the unread count logic
of the web frontend, because that algorithm
decides which messages to mark as read based
only on the pointer location and the whitespace
at the bottom, not on what messages have already
been marked as read.
Messages sent by muted users are marked as read
as soon as they are sent (or, more accurately,
while creating the database entries itself), regardless
of type (stream/huddle/PM).
ede73ee4cd makes it easy to
pass a list to `do_send_messages` containing user-ids for
whom the message should be marked as read.
We add the contents of this list to the set of muter IDs,
and then pass it on to `create_user_messages`.
This benefits from the caching behaviour of `get_muting_users`
and should not cause performance issues long term.
The consequence is that messages sent by muted users will
not contribute to unread counts and notifications.
This commit does not affect the unread messages
(if any) present just before muting, but only handles
subsequent messages. Old unreads will be handled in
further commits.
This commit defines a new function `get_muting_users`
which will return a list of IDs of users who have muted
a given user.
Whenever someone mutes/unmutes a user, the cache will be
flushed, and subsequently when that user sends a message,
the cache will be populated with the list of people who
have muted them (maybe empty).
This data is a good candidate for caching because:
1. The function will later be called from the message send
codepath, and we try to minimize database queries there.
2. The entries will be pretty tiny.
3. The entries won't churn too much. An average user will
send messages much more frequently than get muted/unmuted,
and the first time penalty of hitting the db and populating
the cache should ideally get amortized by avoiding several
DB lookups on subsequent message sends.
The actual code to call this function will be written in
further commits.
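A hedged sketch of how such a cached helper could look; the cache key
name and timeout are assumptions:
---------
from typing import Set

from zerver.lib.cache import cache_with_key
from zerver.models import MutedUser

def get_muting_users_cache_key(muted_user_id: int) -> str:
    return f"muting_users_list:{muted_user_id}"

@cache_with_key(get_muting_users_cache_key, timeout=3600 * 24 * 7)
def get_muting_users(muted_user_id: int) -> Set[int]:
    rows = MutedUser.objects.filter(muted_user_id=muted_user_id).values(
        "user_profile_id"
    )
    return {row["user_profile_id"] for row in rows}
----------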
This makes it so that RealmAuditLog entries are
created when a user mutes/unmutes someone.
We don't really need to store the time, but we
do so anyways, because the `event_time` field
is currently a non-nullable one in the `RealmAuditLog`
model, and making it nullable would risk allowing
not specifying the time in other more important
code which also creates `RealmAuditLog` entries.
This also fixes an incorrect test of successfully
unmuting with the API. Earlier it did not mock
the time in the `views/muting.py` code to return
`mute_time`.
Commit 4a3ad0d introduced some extra stream-level parameters
to the `realm` object. This commit extends that to add a
max_message_length parameter too, at the same server level.
Previously, you had to request the `stream` event type in order to get
the stream-level parameters; this was a bad design in part because the
`subscription` event type has similar data and is preferred by most
clients.
So we move these to the `realm` object. We also add the maximum topic
length, as an adjacent parameter.
While changing this, we also fix these to better match the names of
similar API parameters.
Previously, when unmuting a user, we used to make
two database fetches - one to verify that the user
has been muted before, and one while actually
unmuting the user.
This reduces that to one, by passing around the
`MutedUser` object fetched in the first round.
Since the new function returns `Optional[MutedUser]`,
we need to use a hack for events tests, because
mypy does not yet use the type inferred from
`assert foo is not None` in nested functions like lambdas.
See python/mypy@8780d45507.
Instead of using internal functions for data setup,
we use the API so that these tests are more
end-to-end.
This commit also removes a now unnecessary
`if date_muted is None` check.
We keep the error message the same for all cases when a user is not
allowed to create streams, for all values of create_stream_policy.
We raise an error with a different message for guest cases because
they are handled by decorators. We aim to change this behavior in
the future.
Explaining the details in the error message isn't very important,
since these errors will probably be shown only via the API, given
that we do not show the options themselves in the frontend.
This makes it much more clear that this feature does JSON encoding,
which previously was only indicated in the documentation.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The comments explain in some detail, but basically we were displaying
the types for booleans incorrectly, and the types for strings in a
somewhat confusing fashion. Fix this with comments explaining the logic.
Using JSON dumping also results in our showing strings inside
quotation marks in our examples, which seems net helpful.
Thanks to ArunSankarKs for finding where we needed to change the
codebase.
Fixes #18021.
This commit adds backend code for passing the can_invite_others_to_realm
field to clients using fetch_initial_state_data, in the page_params
object.
Though this field is not used by the webapp as of now, it will be used
to fix a bug of incorrectly showing the invite users option in the
settings overlay in the next commit.
We add moderators and full members options to invite_to_realm_policy
by using COMMON_POLICY_TYPES and use the can_invite_others_to_realm
helper added in the previous commit. This commit only does the backend
work; frontend work will be done in a separate commit.
This commit replaces invite_by_admins_policy, which was a bool field,
with a new enum field invite_to_realm_policy.
Though the final goal is to add moderators and full members options
using COMMON_POLICY_TYPES, this will be done in a separate
commit to make this easy to review.
This commit implements a subtle optimization (described in more detail
in the comment) that can save a few hundred milliseconds, when sending
to very large streams, in how quickly the sender sees that their
message has been sent.
Fixes #17898.
The tests for can_create_streams and can_subscribe_other_users share a
lot of code, and we deduplicate the code by extracting most of it
as check_has_permission_policies, which will now be called by the two
tests test_can_create_streams and test_can_subscribe_other_users.
This will also help in avoiding the duplication of code when we
convert more policies to use COMMON_POLICY_TYPES.
We send the whole data set as a part of the event rather than
doing an add/remove operation for a couple of reasons:
* This would make the client logic simpler.
* The playground data is small enough for us to not worry
about performance.
Tweaked both `fetch_initial_state_data` and `apply_events` to
handle the new playground event.
Tests added to validate the event matches the expected schema.
Documented realm_playgrounds sections inside /events and
/register to support our openapi validation system in test_events.
Tweaked other tests like test_event_system.py and test_home.py
to account for the new event being generated.
Lastly, documented the changes to the API endpoints in
api/changelog.md and bumped API_FEATURE_LEVEL.
Tweaked by tabbott to add an `id` field in RealmPlayground objects
sent to clients, which is essential to sending the API request to
remove one.
Similar to the previous commit, we have added a `do_*` function
which does the deletion from the DB. The next commit handles sending
the events when both adding and deleting a playground entry.
Added the OpenAPI format data to zulip.yaml for DELETE
/realm/playgrounds/{playground_id}. Also added python and curl
examples to remove-playground.md.
Tests added.
This endpoint will allow clients to create a playground entry
containing the name, pygments language and url_prefix for the
playground of their choice.
Introduced the `do_*` function in charge of creating the entry in
the model. The process of sending events will be handled in a
follow-up commit.
Added the OpenAPI format data to zulip.yaml for POST
/realm/playgrounds. Also added python and curl examples for using
the endpoint in its markdown document (add-playground.md).
Tests added.
Tweaked exports.py to add the config object there so that our export
tool can include the table when exporting. Also includes all the
changes required to import the new table from the exported data.
Helper function `get_realm_playgrounds` added to fetch all
playgrounds in a realm.
Tests amended.
Realm administrators already get creation and deletion events for all
streams, including private streams. So these should be reflected in
the initial state data.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Adds backend code for the mute users feature.
This is just infrastructure work (database
interactions, helpers, tests, events, API docs
etc) and does not involve any behavioral/semantic
aspects of muted users.
Adds POST and DELETE endpoints, to keep the
URL scheme mostly consistent in terms of `users/me`.
TODOs:
1. Add tests for exporting `zulip_muteduser` database table.
2. Add dedicated methods to python-zulip-api to be used
in place of the current `client.call_endpoint` implementation.
We use the GIPHY web SDK to create a popover containing GIFs in a
grid format. Simply clicking on a GIF will insert the GIF in the
compose box.
We add GIPHY logo to compose box action icons which opens the GIPHY
picker popover containing GIFs with "Powered by GIPHY"
attribution.
Previously, if a user subscribed to a stream with
history_public_to_subscribers, and then was looking at old messages in
the stream, they would not get live-updates for that stream, because
of the structure in how notify_reaction_update only looked at
UserMessage rows (we had a previous workaround involving the
`historical` field in `UserMessage` which had already made it work if
the user themselves added the reaction).
We fix this by including all subscribers with history access in the
set of recipients for update events.
Fixes a bug that was confused with #16942.
Amazon SES has a limit on the size of address fields, and rejects
emails with too-long "From" combinations of name and address. This
limit is set to 320 bytes and comes from an RFC limitation on the
size of addresses. This RFC standard states that an email address
should not be composed of a local part (before the '@') longer than
64 bytes and a domain part (after the '@') longer than 255 bytes.
It is possible that Amazon SES misinterprets this limitation as it
checks the length of the combination of the name and the email
address of the sender.
To ensure that this problem is not encountered in the send_email
module of Zulip the length of this combination is now checked
against this limit and the from_name field is removed to only
keep the from_address field when it is necessary in order to
stay below 320 bytes.
If the from_address field alone is longer than 320 bytes the
sending process will raise an SMTPDataError exception.
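A self-contained sketch of the length guard; the helper name is
illustrative, and the 320-byte constant is the limit described above:
---------
from email.headerregistry import Address

MAX_FROM_LENGTH = 320  # Amazon SES rejects longer From name/address combinations.

def build_from_field(from_name: str, from_address: str) -> str:
    # Prefer "Name <address>", but fall back to the bare address when
    # the combination would exceed the limit.
    combined = str(Address(display_name=from_name, addr_spec=from_address))
    if len(combined.encode("utf-8")) <= MAX_FROM_LENGTH:
        return combined
    return from_address
----------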
Tests for this new check are added to the backend test suite in
order to test if build_email correctly outputs an email with filled
from_name and from_address fields when the total length is lower
than 320 bytes and that it correctly throws the from_name field
away when necessary.
Fixes: #17558.
We're renaming "stream deletion" language to "stream archiving"
and these pages were moved in the process, so we should keep redirects
for them for a while.
This reverts commit 9c6d8d9d81 (#16916).
This feature has known bugs, and also wants some design changes to
make it customizable like linkifiers, so we’re retargeting this to
post-4.x.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Adding an additional `!` to the stream name each time a stream is
deactivated, to a maximum of 21 times, effectively limits the number
of times a stream with a given name can be deactivated. This is
unlikely to come up in common usage, but may be confusing when testing.
Change what we prepend to deactivated stream names to something with
more entropy than just `!`, by instead prepending a substring of the
hash of the stream's ID. Using 128 bits of the hash means that it
will require more than 10^18 renames to have a 1% chance of collision.
Because too-long stream names are also truncated at 60 characters,
having this entropy at the beginning of the name also helps address
potential issues from stream names that differed only in, e.g., the
60th character.
Fixes #17016.
Instead of validating the `op` value later, this commit does that
in `REQ`.
This also helps avoid duplicating this validation when the
stream typing notifications feature is added.
The `widget_content` key is expected to contain a string which parses
as JSON; in the event that it does not, log the error and notify the
bot owner, instead of failing silently.
Fixes #16850.
In `validate_account_and_subdomain` we check
that the user's realm is not deactivated. If
this check fails, we raise our standard
JsonableError. While this works well in most
cases, it creates difficulties in the handling
of users with deactivated realms for
non-browser clients.
So we register a new REALM_DEACTIVATED error
code so that clients can distinguish whether
the error is because of a deactivated realm.
Following these changes,
`validate_account_and_subdomain` raises
RealmDeactivatedError if the user's realm
is deactivated.
This error is also documented in
`/api/rest-error-handling`.
Testing: I have mostly relied on automated
backend tests to test this.
Fixes #17763.
In validate_account_and_subdomain we check
that the user's account is not deactivated.
If this check fails, we raise our standard
JsonableError. While this works well in most
cases, it creates difficulties in the handling
of deactivated accounts for non-browser clients.
So we register a new USER_DEACTIVATED error
code so that clients can distinguish whether the
error is because of a deactivated account. Following
these changes `validate_account_and_subdomain`
raises UserDeactivatedError if user's account
is deactivated.
This error is also documented in
`/api/rest-error-handling`.
Testing: I have mostly relied on automated
backend tests to test this.
Partially addresses issue #17763.
We add a TUTORIAL_ENABLED setting for self-hosters who want to
disable the tutorial entirely on their system. For this, the
default value (True) is placed in default_settings.py, which
can be overwritten by adding an entry in /etc/zulip/settings.py.
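For example, a self-hoster who wants to disable it would add:
---------
# /etc/zulip/settings.py
TUTORIAL_ENABLED = False
----------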
Updated the database query to filter out deactivated streams from the
return of the get_topic_mutes method. Added an optional
include_deactivated parameter to the method to make this behavior the
default but overridable. Added a test case in test_muting for these
changes. Fixes blueslip warnings thrown by muting.js set_muted_topics
when passed deactivated streams via page_params.