To make it easier to check whether there is user information available
for the error report emails, we create a user object inside the report.
Now, to check whether we have the user's full name, email, etc., we just
need to look at report['user']['user_full_name'] rather than checking
each field one by one, because if the value of one key in
the report is not None, all the others will be set as well.
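A minimal sketch of the idea (the surrounding code and exact set of
fields are illustrative):

    # Sketch: the user dict is populated all-or-nothing, so checking
    # one key tells us whether any user information is available.
    if user_profile is not None:
        report["user"] = dict(
            user_full_name=user_profile.full_name,
            user_email=user_profile.delivery_email,
        )
    else:
        report["user"] = dict(
            user_full_name=None,
            user_email=None,
        )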
Per the API documentation[1], the following codes all correspond to
HTTP 404:
- `34`: **Sorry, that page does not exist.** The specified resource
was not found.
- `144`: **No status found with that ID.** The requested Tweet ID was
not found (if it ever existed, it was probably deleted).
- `421`: **This Tweet is no longer available.** The Tweet cannot be
retrieved. This may be for a number of reasons.
- `422`: **This Tweet is no longer available because it violated the
Twitter Rules.** The Tweet is not available in the API.
Treat all of these identically.
[1] https://developer.twitter.com/en/docs/basics/response-codes
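A minimal sketch of treating these codes uniformly (the names here are
illustrative, not from the actual code):

    # Sketch: any of these Twitter API error codes means the tweet is
    # effectively gone; handle them all like an HTTP 404.
    TWEET_IS_GONE_CODES = frozenset([34, 144, 421, 422])

    def tweet_is_gone(api_error_code: int) -> bool:
        return api_error_code in TWEET_IS_GONE_CODES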
The S3 data export tool's upload code path uses boto's callback
feature to show a progress bar, which is handy for the management
command. However, it's spammy/broken in production and in the backend
tests, so we make percent_callback a parameter that the caller passes
in, so it is only used in the contexts where it makes sense.
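Roughly the shape of the change (a sketch; everything except the
percent_callback parameter itself is illustrative):

    # Sketch: the callback is now injected by the caller (e.g. the
    # management command) instead of being hardcoded in the upload path.
    def upload_export_tarball(bucket, tarball_path, key, percent_callback=None):
        # boto3 invokes Callback with the number of bytes transferred,
        # which the management command can use to drive a progress bar.
        bucket.upload_file(tarball_path, key, Callback=percent_callback)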
This reverts commit c3779338c6 (part
of #14638), which incorrectly depended on commits from the future,
with the effect of either halting the flow of entropic time in an
irresolvable temporal paradox, summoning extradimensional beings to
rain destruction on the galaxy, or failing CI.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit modifies the /streams endpoint so that the web-public
streams are included in the default list of streams that users
have access to.
This is part of PR #14638, which aims to allow guest users to
browse and subscribe themselves to web-public streams.
Modifies filter_stream_authorization so that web-public streams are
included in the list of authorized streams that a guest user can
subscribe to.
This commit is part of PR #14638, which aims to allow guest users
to browse and subscribe to web-public streams.
In this commit, we grant guest users access to the stream history,
message sending, and common stream data of web-public streams.
This is part of PR #14638, which aims to allow guest users to
browse and subscribe to web-public streams.
This modification gives guest users access to the subscribers of
web-public streams, even if they aren't currently subscribed and
never were subscribed to that stream.
This commit is part of PR #14638, which aims to allow guest users to
browse and subscribe to web-public streams.
Now, gather_subscriptions includes web-public streams in the three
sets of streams it returns: subscribed, unsubscribed, and never
subscribed.
This is part of PR #14638, which aims to allow guest users to browse
and subscribe to web-public streams.
This change makes the flow more coherent: instead of checking, in the
last condition, whether the user isn't authorized to access that
stream, we check whether they are, as is done in the other checks.
Only if all the conditions are false, meaning the user doesn't have
access to that stream, is the stream added to the
unauthorized_streams list.
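A sketch of the resulting shape (the individual checks are
illustrative):

    # Sketch: each branch is now a positive authorization check.
    unauthorized_streams = []
    for stream in streams:
        if stream.is_web_public:
            continue
        if stream.id in subscribed_stream_ids:
            continue
        if not stream.invite_only and not user_profile.is_guest:
            continue
        # Every check failed, so the user has no access.
        unauthorized_streams.append(stream)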
Also add a Draft object-to-dictionary conversion method.
The following commits will provide an API around this
model, which our clients can use to sync drafts across each
other (if they so wish). As of making this commit, we
haven't finalized exactly how our clients will use this.
See https://chat.zulip.org/#narrow/stream/2-general/topic/drafts
for some of the discussion around this model and, more generally,
this feature.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
Unlike the other Python datetime to Unix timestamp conversion
function (`datetime_to_timestamp`), `datetime_to_precise_timestamp`
won't drop the microseconds.
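For illustration, a minimal sketch of the difference (assuming UTC
datetimes; not necessarily the exact implementation):

    import calendar
    from datetime import datetime

    def datetime_to_timestamp(dt: datetime) -> int:
        # Truncates to whole seconds, dropping microseconds.
        return calendar.timegm(dt.utctimetuple())

    def datetime_to_precise_timestamp(dt: datetime) -> float:
        # Keeps the microsecond component.
        return calendar.timegm(dt.utctimetuple()) + dt.microsecond / 1e6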
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
Edit the function `validate_against_openapi_schema` and add some
helper functions to allow for validation of documented events.
Also add OpenAPI response validation in `verify_action` as it is
called in a large number of `/events` tests.
The Stream.date_created field is now sent down to the clients
for both:
- client.get_streams()
- client.list_subscriptions()
API docs updated for stream and subscriptions.
Fixes #15410.
We also have the caller pass in the property name for an
additional sanity check.
Note that we don't yet handle the possibility of extra_data;
that will be a subsequent commit.
Also, the stream_id fields aren't in Realm.property_types,
so we specify their types in the checker.
This is a pretty big commit, but I really wanted it
to be atomic.
All realm_user/update events look the same from
the top:
    _check_realm_user_update = check_events_dict(
        required_keys=[
            ("type", equals("realm_user")),
            ("op", equals("update")),
            ("person", _check_realm_user_person),
        ]
    )
And then we have a bunch of fields for person that
are optional, and we usually only send user_id plus
one other field, with the exception of avatar-related
events:
    _check_realm_user_person = check_dict_only(
        required_keys=[
            # vertical formatting
            ("user_id", check_int),
        ],
        optional_keys=[
            ("avatar_source", check_string),
            ("avatar_url", check_none_or(check_string)),
            ("avatar_url_medium", check_none_or(check_string)),
            ("avatar_version", check_int),
            ("bot_owner_id", check_int),
            ("custom_profile_field", _check_custom_profile_field),
            ("delivery_email", check_string),
            ("full_name", check_string),
            ("role", check_int_in(UserProfile.ROLE_TYPES)),
            ("email", check_string),
            ("user_id", check_int),
            ("timezone", check_string),
        ],
    )
I would start the code review by just skimming the changes
to event_schema.py, to get the big picture of the complexity
here. Basically the schema is just the combined superset of
all the individual schemas that we remove from test_events.
Then I would read test_events.py.
The simplest diffs are basically of this form:
- schema_checker = check_events_dict([
- ('type', equals('realm_user')),
- ('op', equals('update')),
- ('person', check_dict_only([
- ('role', check_int_in(UserProfile.ROLE_TYPES)),
- ('user_id', check_int),
- ])),
- ])
# ...
- schema_checker('events[0]', events[0])
+ check_realm_user_update('events[0]', events[0], {'role'})
Instead of a custom schema checker, we use the "superset"
schema checker, but then we pass in the set of fields that we
expect to be there. Note that 'user_id' is always there.
So most of the heavy lifting happens in this new function
in event_schema.py:
    def check_realm_user_update(
        var_name: str, event: Dict[str, Any], optional_fields: Set[str],
    ) -> None:
        _check_realm_user_update(var_name, event)
        keys = set(event["person"].keys()) - {"user_id"}
        assert optional_fields == keys
But we still do some more custom checks in test_events.py.
custom profile fields: check keys of custom_profile_field
def test_custom_profile_field_data_events(self) -> None:
+ self.assertEqual(
+ events[0]['person']['custom_profile_field'].keys(),
+ {"id", "value", "rendered_value"}
+ )
+ check_realm_user_update('events[0]', events[0], {"custom_profile_field"})
+ self.assertEqual(
+ events[0]['person']['custom_profile_field'].keys(),
+ {"id", "value"}
+ )
avatar fields: check more specific types, since the superset
schema has check_none_or(check_string)
def test_change_avatar_fields(self) -> None:
+ check_realm_user_update('events[0]', events[0], avatar_fields)
+ assert isinstance(events[0]['person']['avatar_url'], str)
+ assert isinstance(events[0]['person']['avatar_url_medium'], str)
+ check_realm_user_update('events[0]', events[0], avatar_fields)
+ self.assertEqual(events[0]['person']['avatar_url'], None)
+ self.assertEqual(events[0]['person']['avatar_url_medium'], None)
Also note that avatar_fields is a set of four fields that
are set in event_schema.
full name: no extra work!
def test_change_full_name(self) -> None:
- schema_checker('events[0]', events[0])
+ check_realm_user_update('events[0]', events[0], {'full_name'})
test_change_user_delivery_email_email_address_visibilty_admins:
no extra work for delivery_email
check avatar fields more directly
roles (several examples) -- actually check the specific role
def test_change_realm_authentication_methods(self) -> None:
- schema_checker('events[0]', events[0])
+ check_realm_user_update('events[0]', events[0], {'role'})
+ self.assertEqual(events[0]['person']['role'], role)
bot_owner_id: no extra work!
- change_bot_owner_checker_user('events[1]', events[1])
+ check_realm_user_update('events[1]', events[1], {"bot_owner_id"})
- change_bot_owner_checker_user('events[1]', events[1])
+ check_realm_user_update('events[1]', events[1], {"bot_owner_id"})
- change_bot_owner_checker_user('events[1]', events[1])
+ check_realm_user_update('events[1]', events[1], {"bot_owner_id"})
timezone: no extra work!
- timezone_schema_checker('events[1]', events[1])
+ check_realm_user_update('events[1]', events[1], {"email", "timezone"})
Obviously, this file will soon grow--this
was the easiest way to start without introducing
noise into other commits.
It will soon be structurally similar
to frontend_tests/node_tests/lib/events.js--I
have some ideas there. But this should also
help for things like API docs.
This change makes our handling of YouTube URL previews consistent
with how we handle our inline images. This allows the previews to
render next to the paragraph that links to the YouTube video.
Follow-up to PR #15773.
This commit rewrites the way addresses are collected. If
the header with the address is not an AddressHeader (for instance,
Delivered-To and Envelope-To), we take its string representation.
Fixes: #15864 ("Error in email_mirror - _UnstructuredHeader has no attribute addresses").
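A sketch of the collection logic (assuming the message was parsed with
email.policy.default, so structured headers are AddressHeader
instances; the helper name is illustrative):

    from email.headerregistry import AddressHeader
    from email.message import EmailMessage
    from typing import List

    def collect_addresses(message: EmailMessage) -> List[str]:
        addresses: List[str] = []
        for header_name in ("To", "Cc", "Delivered-To", "Envelope-To"):
            for header in message.get_all(header_name, []):
                if isinstance(header, AddressHeader):
                    addresses.extend(a.addr_spec for a in header.addresses)
                else:
                    # e.g. an _UnstructuredHeader: fall back to its
                    # string representation.
                    addresses.append(str(header))
        return addresses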
This commit re-adds the integration for canarytokens.org, now separate
from the primary Thinkst integration.
Signed-off-by: David Wood <david@davidtw.co>
This commit fixes the Thinkst Canary integration which - based on the
schema in upstream documentation - incorrectly assumed that some fields
would always be sent, which meant that the integration would fail. In
addition, this commit adjusts support for canarytokens to only support
the canarytoken schema with Thinkst Canaries (not Thinkst's
canarytokens.org).
Signed-off-by: David Wood <david@davidtw.co>
A few major themes here:
- We remove short_name from UserProfile
and add the appropriate migration.
- We remove short_name from various
cache-related lists of fields.
- We allow import tools to continue to
write short_name to their export files,
and then we simply ignore the field
at import time.
- We change functions like do_create_user,
create_user_profile, etc.
- We keep short_name in the /json/bots
API. (It actually gets turned into
an email.)
- We don't modify our LDAP code much
here.
We generally want to avoid having two sibling test
suites depend on each other, unless there's a real
compelling reason to share code. (And if there is
code to share, we can usually promote it to either
test_helpers or ZulipTestCase, as I did here.)
This commit is also prep for the next commit, where
I try to simplify all of the helpers in EmojiReactionBase.
Especially now that we have f-strings, it is usually
better to just call api_post explicitly than to
obscure the mechanism with thin wrappers around
api_post. Our url schemes are pretty stable, so it's
unlikely that the helpers are actually gonna prevent
future busywork.
This issue isn't something a system administrator needs to take action
on -- it's a likely minor logic bug around organization
administrators moving topics between streams.
As a result, it shouldn't send error emails to administrators.
This is a hacky fix to avoid spoiler content leaking in emails. The
general idea here is to tell people to open Zulip to view the actual
message in full.
We create a mini-markdown parser here that strips away the fence content
that has the 'spoiler' tag for the text emails.
Our handling of HTML emails is much better in comparison, since we
can use lxml to parse the spoiler blocks.
We include tests for the new implementation to avoid churning the
codebase too much so this can be easily reverted when we are able to
re-enable the feature.
We now do something sensible for spoilers in notifications. A message
like:
```spoiler Luke's father is
Vader. Don't tell anyone else.
```
would be rendered as:
Luke's father is (...)
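A sketch of how the text-email stripping might work (illustrative; the
real mini-parser may differ):

    # Sketch: drop the body of ```spoiler fences, keeping the header.
    def strip_spoilers_for_text_email(content: str) -> str:
        output = []
        in_spoiler = False
        for line in content.splitlines():
            stripped = line.strip()
            if stripped.startswith("```spoiler"):
                in_spoiler = True
                header = stripped[len("```spoiler"):].strip()
                output.append(header + " (...)")
            elif in_spoiler and stripped == "```":
                in_spoiler = False
            elif not in_spoiler:
                output.append(line)
        return "\n".join(output)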
If the push_notification flag for the UserMessage is already active,
we don't send another push notification to the user. This may
happen due to race conditions.
Added and fixed test cases affected by this.
This is similar to our behavior with image previews, and helps
reduce clutter in the final rendered html.
We add the string 'Tweet: ' to our existing tests so those tests
remain the same.
This commit makes our handling of Twitter previews consistent with
how we handle our inline images, so that tweets render next to the
paragraph that links to the tweet.
We decouple the logic of insertion rules for inline links from
image preview logic. Now, we can use this same logic for other
kinds of link previews as well.
We also remove the post_process_data option.
The sanity check is just overkill at this point,
since the mechanism to find streams is very
direct due to a recent commit.
Before this change we would only export streams
that had actual subscribers, which is usually
harmless, but it was mostly a relic of a one-time
migration that we did when we were cleaning
up some dirty data in some of our very early
databases (circa 2016).
Now we work down the table hierarchy in a
more natural way:
- get Streams in Realm
- get Recipients matching above Streams
- get Subscriptions matching above Recipients
Note that for per-user exports, I kept the
same logic (users -> subscriptions -> recipients ->
streams) we had before.
One subtle detail here is that we make our
final Config blocks--which build the final
version of Recipient/Subscription--now hang
off of realm_config.
Fixes #15146.
This particular commit has been a long time coming. For reference,
!avatar(email) was an undocumented syntax that simply rendered an
inline 50px avatar for a user in a message, essentially allowing
you to create a user pill like:
`!avatar(alice@example.com) Alice: hey!`
---
Reimplementation
If we decide to reimplement this or a similar feature in the future,
we could use something like `<avatar:userid>` syntax which is more
in line with creating links in markdown. Even then, it would not be
a good idea to add this instead of supporting inline images directly.
Since any use cases of such a syntax are in automation, we do not need
to make it user-friendly, and something like the following is a better
implementation that doesn't need a custom syntax:
`![avatar for Alice](/avatar/1234?s=50) Alice: hey!`
---
History
We initially added this syntax back in 2012 and it was 'deprecated'
from the get-go. Here's what the original commit had to say about
the new syntax:
> We'll use this internally for the commit bot. We might eventually
> disable it for external users.
We eventually did start using this for our GitHub integrations in
2013, but since then those integrations have been neglected in favor
of our GitHub webhooks, which do not use this syntax.
When we copied `!gravatar` to add the `!avatar` syntax, we also noted
that we want to deprecate the `!gravatar` syntax entirely - in 2013!
Since then, we haven't advertised either of these syntaxes anywhere
in our docs, and the only two places where this syntax remains are
our game bots, which could easily do without it, and the git commit
integration, which we have deprecated anyway.
We do not have any evidence of someone asking about this syntax on
chat.zulip.org when developing an integration, and rightfully so -- only
the people who work on Zulip (and specifically, markdown) are likely
to stumble upon it and try it out.
This is also the only piece of code that required us to look up the
emails -> userid mapping in our backend markdown. By removing this,
we entirely remove the backend markdown's dependency on user emails
to render messages.
---
Relevant commits:
- Oct 2012, Initial commit c31462c278
- Nov 2013, Update commit bot 968c393826
- Nov 2013, Add avatar syntax 761c0a0266
- Sep 2017, Avoid email use c3032a7fe8
- Apr 2019, Remove from webhook 674fcfcce1
Log RealmAuditLog in do_set_realm_property and do_remove_realm_domain.
Tests for the changes are written in test_events, because that avoids
duplicating code for test_change_realm_property.
Added a new event type, STREAM_CREATED, to AbstractRealmAuditLog.
Since streams are ultimately created in the create_stream_if_needed
function in zerver/lib/streams.py, we log the RealmAuditLog entry
there.
Passed acting_user when the create_stream_if_needed or ensure_stream
function is called.
Added tests in test_audit_log.
We had been using the !time() syntax for timestamps so far. Since
it's an unreleased feature, we can make changes without affecting
many people.
Fixes #15442.
Prior to this commit, whenever convert was imported from
zerver.lib.markdown, it was aliased as markdown_convert for
readability.
This commit renames the convert function to markdown_convert so that
it can be imported directly, without aliasing and without
compromising readability.
This fixes our triggering a RabbitMQ event to send a push
notification removing an empty set of push notifications, which
resulted from not using the correct data structure to determine
which message IDs to look at.
This was causing a lot of visible exceptions when running
the `test_messages.py` test suite.
For users who are unsubscribed from the new stream but are in
the old stream, we delete the UserMessage.
We send the delete_message event only to guest users,
who have completely lost access to the moved messages. For other
users, we send the normal update_message event, which moves
the messages to the new, unsubscribed stream -- something that
would otherwise look broken to the user without reloading the
webpage.
Our previous OpenAPI schema validator, which we implemented
ourselves, was useful training wheels for understanding OpenAPI
properly, and was mostly correct. But given that we've finally
reached the point
where our OpenAPI file accurately describes the API, it makes sense to
switch to use an official OpenAPI validator. We lose some ability to
do exclude rules for particular elements, but those were primarily
important for us when we had a lot of them.
As part of this change, we need to add `additionalProperties: false`
for all of our dictionaries/objects where we've documented every
parameter; otherwise the OpenAPI schema checker won't know that we
expect every parameter to be documented.
This was hiding an actual type error in test_cache: a mismatch between
the object ID type, which is str, and the default id_fetcher, which
returns int.
Mypy’s insufficient support for default generic arguments basically
means we can’t use them without a lot of overloading, and there are
not enough callers here to justify that.
https://github.com/python/mypy/issues/3737
We avoid making the call sites super messy by adding
some less generic wrappers around generic_bulk_cached_fetch.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
According to @showell:
> All the slow decorators can die. That was a failed experiment of
> mine from 2014 days. I have meaning to kill them for a couple years
> now. I wrote this with the best of intentions, but I believe it's
> now just cruft. We never made a "fast" mode, for one. And we kept
> writing more and more slow tests, haha.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We remove support for old clients, which required an event for
each message to clear its notification.
This is justified since it has been around 1.5 years since we started
supporting the bulk operation (and so essentially nobody is using a
mobile app version so old that it doesn't support the batched
approach) and the unbatched approach has a maintenance and reliability
cost.
This commit removes the bugdown alias and does proper imports from
the markdown module. It also replaces the word bugdown with markdown
in comments.
This commit is part of a series of commits aimed at renaming bugdown
to markdown.
There is still some miscellaneous cleanup that
has to happen for things like analytics queries
and dead code in node tests, but this should
remove the main use of pointers in the backend.
(We will also still need to drop the DB field.)
Rename ZEPHYR_MIRROR_BUGDOWN_KEY to ZEPHYR_MIRROR_MARKDOWN_KEY and
DEFAULT_BUGDOWN_KEY to DEFAULT_MARKDOWN_KEY.
This commit is part of a series of commits aimed at renaming bugdown
to markdown.
This commit changes the class name of BugdownListPreprocessor to
MarkdownListPreprocessor. It also changes the corresponding
references in tests.
This is part of a series of commits aimed at renaming bugdown to
markdown.
This commit is the first of a few commits that aim to change all the
bugdown references to markdown. This commit renames the files and
file path mentions, and changes the imports.
Variables and other references to bugdown will be renamed in
subsequent commits.
We send user_id of the referrer instead of email in the invites dict.
Sending user_ids is more robust, as those are an immutable reference
to a user, rather than something that can change with time.
Updates to the webapp UI to display the inviters for more convenient
inspection will come in a future commit.
In zulip.yaml, add `deprecated` tags to all parameters/keys with
`Deprecated` in the description. Then add tests to ensure that deprecated
parameters/keys will always have the `deprecated` key. Also, in
the API docs, sort the parameters according to presence of `deprecated`
key, presenting the `deprecated` keys at the end and add a `deprecated`
tag next to them.
We send a 'remove' mobile push notification to users who are
no longer mentioned after the content of a message is edited.
This also corrects the notification count for the mobile apps
in the case where a user was previously mentioned in a muted
stream/topic, the message was edited, and the user is no longer
mentioned.
This fixes the case where a user has read all their unreads
but the notification badge on the app is still positive.
Fixes #15428.
do_clear_mobile_push_notifications_for_ids can now be used to
clear push notifications for multiple users at once. This method
loops over users, so no performance optimization is gained.
We've been seeing an exception in server_event_dispatch.js in
production where in large organizations, sometimes when a new user
joined, every other browser in the organization would throw an
exception processing some sort of realm_user/update event.
It turns out the cause was that when a user copies their profile from
an existing user account with a user-uploaded avatar, the code path we
reused to set the avatar would send a realm_user/update event about
the avatar change -- for a user that hadn't been fully created and
certainly hadn't had the realm_user/add event sent for it.
We fix this and add tests and comments to prevent it recurring.
(Removed an incorrect docstring while working on this).
This fixes an issue that caused HTML entities inside of inline code
blocks to be converted rather than being displayed literally.
The upstream python-markdown now handles this correctly, so we just use
their implementation with our changes for removing .strip(). As a result
of this migration, we switch the backtick pattern to an inline
processor too.
Fixes #12056.
For the codeblock counterpart of this issue, we should follow the
upstream PR https://github.com/Python-Markdown/markdown/pull/990.
Co-authored-by: Rohitt Vashishtha <aero31aero@gmail.com>
Note that I don't actually convert the
checker from check_dict to check_dict_only,
because that would be a user-facing change,
but I think we can sweep a lot of things
like this after the next release.
Because of other validation on these values, I don't believe any of
these does anything different, but these changes improve readability
and likely make GitHub's code scanners happy.
We now have our muted topics use tuples internally,
which allows us to tighten up the annotation
for get_topic_mutes, as well as our schema
checking.
We want to deprecate sub_validator=None
for check_list, so we also introduce
check_tuple here. Now we also want to deprecate
check_tuple, but it's at least isolated now.
We will use this for data structures that are tuples,
but which are sent as lists over the wire. Fortunately,
we don't have too many of those.
The plan is to convert tuples to dictionaries,
but backward compatibility may be tricky in some
places.
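A sketch of what check_tuple might look like (illustrative, written
in the raise-on-error validator style described later in this log):

    from django.core.exceptions import ValidationError

    def check_tuple(sub_validators):
        def f(var_name, val):
            # Tuples arrive as JSON lists over the wire.
            if not isinstance(val, (list, tuple)):
                raise ValidationError(f"{var_name} is not a tuple")
            if len(val) != len(sub_validators):
                raise ValidationError(
                    f"{var_name} should have exactly "
                    f"{len(sub_validators)} items"
                )
            for i, (sub_validator, item) in enumerate(zip(sub_validators, val)):
                sub_validator(f"{var_name}[{i}]", item)
        return f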
This commit changes the do_get_user_invites function to not return
multiuse invites to non-admin users. We should only return multiuse
invites to admins, as we only allow admins to create them.
Streams can have lots of subscribers, meaning that the archiving process
will be moving tons of UserMessages per message. For that reason, using
a smaller batch size for stream messages is justified.
Some personal messages need to be added in test_scrub_realm to have
coverage of do_delete_messages_by_sender after these changes.
Currently, we use -1 as the Realm.message_retention_days value to retain
messages forever unless otherwise specified at the stream level; that
is, no policy is set at the realm level. But this is incoherent with
what we use for Stream.message_retention_days, where -1 means
> disable retention policy for this stream unconditionally
which can be confusing from an API standpoint.
So instead of trying some hack to reset the value to NULL, or using
some other value like -2 for RETAIN_MESSAGE_FOREVER in the API, it is
much more intuitive to use a string like 'forever' that can be mapped
to RETAIN_MESSAGE_FOREVER at the backend. This is similar to what we
do for stream settings as well.
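A sketch of the mapping (the constant name comes from the text above;
the helper itself is illustrative):

    from django.core.exceptions import ValidationError

    RETAIN_MESSAGE_FOREVER = -1

    def parse_message_retention_days(value):
        # Sketch: translate the API's special string into the
        # internal sentinel value.
        if value == "forever":
            return RETAIN_MESSAGE_FOREVER
        if isinstance(value, int) and value > 0:
            return value
        raise ValidationError("Invalid message_retention_days")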
To be more consistent with the meaning in the Stream model, and to make
it easier to have a reasonable settings API, we get rid of the None
value for Realm.message_retention_days in favor of the value -1 to
represent the "don't delete messages" default policy.
A generator that yields values without receiving or returning them is
an Iterator. Although every Iterator happens to be iterable, Iterable
is a confusing annotation for generators because a generator is only
iterable once.
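For example (a generic illustration, not code from this change):

    from typing import Iterator

    def fibonacci() -> Iterator[int]:
        # A generator that only yields values is an Iterator[int].
        # Annotating it as Iterable[int] would wrongly suggest it can
        # be iterated more than once.
        a, b = 0, 1
        while True:
            yield a
            a, b = b, a + b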
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Also enable warn_unused_ignores. I think the fact that there are so
few of these is good evidence that it’s not a significant burden for
people fixing type errors.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We only use this in a few places, but they're really important places
for understanding the types in the codebase, and so it's worth having
a bit of expository documentation explaining how we use it.
(And I expect we'll add more with time).
In implementing automatic theme detection, we make the following
changes in the backend, frontend, and documentation.
This replaces the previous night_mode boolean with an enum, with the
default value being to use the prefers-color-scheme feature of the
operating system to determine which theme to use.
Fixes: #14451.
Co-authored-by: @kPerikou <44238834+kPerikou@users.noreply.github.com>
We can now invite new users as realm owners. We restrict inviting
new users as owners to owners themselves, both for single invites
and multiuse invite links. Also, only owners can revoke or resend
owner invitations.
Old: a validator returns None on success and returns an error string
on error.
New: a validator returns the validated value on success and raises
ValidationError on error.
This allows mypy to catch mismatches between the annotated type of a
REQ parameter and the type that the validator actually validates.
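Schematically (a sketch of the convention change, not the exact code):

    from typing import Optional
    from django.core.exceptions import ValidationError

    # Old convention (sketch): return an error string or None.
    def check_int_old(var_name: str, val: object) -> Optional[str]:
        if not isinstance(val, int):
            return f"{var_name} is not an integer"
        return None

    # New convention (sketch): return the validated value or raise,
    # so mypy knows the type a validated REQ parameter actually has.
    def check_int(var_name: str, val: object) -> int:
        if not isinstance(val, int):
            raise ValidationError(f"{var_name} is not an integer")
        return val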
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Rename the validator to check_union, to conform
more to Python typing nomenclature.
And we rename one of the test helpers to the
simpler `check_types`. (The test helper
was using "variable" in the "var" sense.)
We assert that the post was successful, to give
more immediate feedback for tests that don't
bother to check the return value and may be
implicitly assuming this method just works in
all cases.
And we also make it more convenient for tests
that are happy-path tests--they don't have to
do the assertion themselves. (And they're still
free to do deeper checks on the json.)
We opt out with allow_fail=True. We probably want
a more direct API eventually for tests that are
clearly trying to test the failure path for
subscribing to streams.
It's possible that a couple tests here that I added
allow_fail=True to just have flawed data setup--
I don't have time to investigate all cases, but
hopefully they will at least stand out more.
Two things were broken here:
* we were using name(s) instead of id(s)
* we were always sending lists that only
had one element
Now we just send "stream_id" instead of "subscriptions".
If anything, we should start sending a list of users
instead of a list of streams. For example, see
the code below:
    if peer_user_ids:
        for new_user_id in new_user_ids:
            event = dict(type="subscription", op="peer_add",
                         stream_id=stream.id,
                         user_id=new_user_id)
            send_event(realm, event, peer_user_ids)
Note that this only affects the webapp, as mobile/ZT
don't use this.
Currently the API docs do not specify whether a given API parameter
is to be specified in `query` or in `path`. Edit the docs so as
to show the type of argument right beside the argument name.
The loop I added here in 5b49839b08 was
ill-conceived. The critical issue was that despite its name,
do_clear_mobile_push_notifications_for_ids does not immediately clear
push notifications (Except in our test suite, where `send_event`
immediately calls into the queue worker code!).
Instead, it queues work to clear those push notifications. Which
means that the first user to declare bankruptcy with a large number of
unreads will fill the queue, and then this will just be an infinite
loop adding more work to the queue.
This adds a new client_capability that clients such as the mobile apps
can use to avoid unreasonable network bandwidth consumed sending
avatar URLs in organizations with 10,000s of users.
Clients don't strictly need this data, as they can always use the
/avatar/{user_id} endpoint to fetch the avatar if desired.
This will be more efficient especially for realms with
10,000+ users because the avatar URLs would increase the
payload size significantly and cost us more bandwidth.
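A sketch of the server-side behavior (illustrative; get_avatar_url
here is a hypothetical helper):

    # Sketch: when the client has declared the
    # user_avatar_url_field_optional capability, skip computing and
    # sending the URL; clients can hit /avatar/{user_id} on demand.
    def format_user_row(row, user_avatar_url_field_optional: bool) -> dict:
        result = dict(
            user_id=row["id"],
            full_name=row["full_name"],
        )
        if not user_avatar_url_field_optional:
            result["avatar_url"] = get_avatar_url(row)  # hypothetical
        return result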
Fixes #15287.
We need this field to avoid O(N) database operations
while fetching realm user data for clients with
`user_avatar_url_field_optional` flag enabled.
Part of #15287.
With #14378, we regressed back to the state prior to 7e0ea61b00.
We fix this by getting our avatar bucket on
object initialization, and use the appropriate means
of gathering the network location for the urls.
Fixes #14484.
This commit adds backend support for setting message_retention_days
while creating streams and updating it for an existing stream. We only
allow organization owners to set/update it for a stream.
The 'message_retention_days' field for a stream existed previously as
well, but there was no way to set it while creating a stream, or to
update it for an existing stream, using any endpoint.
Previously, we had implemented:

    <span class="timestamp" data-timestamp="unix time">Original text</span>

The new syntax is:

    <time timestamp="ISO 8601 string">Original text</time>
    <span class="timestamp-error">Invalid time format: Original text</span>
Since the Python and JS interpretations of the ISO format are very
slightly different, we force both of them to drop milliseconds
and use 'Z' instead of '+00:00' to represent that the string is
in UTC. The resultant strings look like: 2011-04-11T10:20:30Z.
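On the Python side, producing such a string might look like this (a
sketch, assuming an aware datetime):

    from datetime import datetime, timezone

    def format_timestamp(dt: datetime) -> str:
        # Drop microseconds and use 'Z' instead of '+00:00',
        # e.g. 2011-04-11T10:20:30Z.
        dt = dt.astimezone(timezone.utc).replace(microsecond=0)
        return dt.isoformat().replace("+00:00", "Z")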
Fixes #15431.
Fixes #14498.
When a topic is moved to a different stream, the messages may no
longer be reachable by a guest user, if the user is not subscribed
to the new stream.
We used to send a message update event to the client in these cases,
which was confusing both to the client updating the message
and to the server sending push_notifications for it.
Now, we delete the UserMessage entries for these messages for the
user and send a delete_message event to the client, which makes
both the push_notification code and the event-handling client treat
the message as deleted, so no confusion arises in the code.
This makes the system store and track PushDeviceToken objects on
the local Zulip server when using the push notifications bouncer
and includes tests for this.
This is something we need to implement end-to-end encryption for
push notifications. We'll add the encryption key as an additional
property on the local PushDeviceToken object.
It also likely adds some value in the case that a server were to
switch between using the bouncer service and sending notifications
directly, though in practice that's unlikely to happen.
The most important change here is the one in the
maybe_send_to_registration codepath, as the insufficient validation
there could lead to fetching an expired PreregistrationUser that was
invited as an administrator even years ago, with this registration
ending up making the new user a realm administrator.
Combined with the buggy migration in
0198_preregistrationuser_invited_as.py, this led to users incorrectly
joining as organization administrators by accident. But even without
that bug, this issue could have allowed a user who was invited as an
administrator, but then had that invitation expire, to then join via
social authentication and incorrectly end up as an organization
administrator.
The second change is in ConfirmationEmailWorker, where this wasn't a
security problem, but if the server was stopped for long enough, with
some invites to send out email for in the queue, then after starting it
up again, the queue worker would send out emails for invites that
had already expired.
Google has retired the Google Hangouts brand, so we are removing
it as a video chat provider option.
This commit removes the Google Hangouts integration and adds a
migration that sets all realms that were using Hangouts as their
video chat provider back to the default, Jitsi.
With changes by tabbott to improve the overall video call documentation.
Fixes: #15298.
This adds support for a "spoiler" syntax in Zulip's markdown, which
can be used to hide content that one doesn't want to be immediately
visible without a click.
We use our own spoiler block syntax inspired by Zulip's existing quote
and math block markdown extensions, rather than requiring a token on
every line, as is present in some other markdown spoiler
implementations.
Fixes #5802.
Co-authored-by: Dylan Nugent <dylnuge@gmail.com>
This adds a new function `get_apns_badge_count()` to
fetch the badge count value for a user's push notifications,
and then sends that value with the APNs payload.
Once a message is read from the web app, the count is
decremented accordingly and a push notification with
`event: remove` is sent to the iOS clients.
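A sketch of how the count might be derived (an assumption for
illustration: the badge counts messages whose
active_mobile_push_notification flag is still set, via a flag filter
on UserMessage):

    # Sketch: badge = number of this user's messages that still have
    # an active mobile push notification (assumed flag filter).
    def get_apns_badge_count(user_profile) -> int:
        return UserMessage.objects.filter(
            user_profile=user_profile,
        ).extra(
            where=[UserMessage.where_active_push_notification()],
        ).count()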
Fixes #10271.
This line was effectively hardcoding a specific stream_post_policy,
overriding the value already present in the event, to no purpose.
(I believe it got here via cargo-culting induced by #13787.)
This commit removes the is_old_stream property from the stream
objects returned by the API. This property was unnecessary and is
essentially equivalent to 'stream_weekly_traffic != null'.
We compute sub.is_old_stream in stream_data.update_calculated_fields
in frontend code and it is used to check whether we have a non-null
stream_weekly_traffic or not.
Fixes #15181.
This likely fixes a bug that could leak thousands of messages into the
invalid state where:
* user_message.flags.active_mobile_push_notification is True
* user_message.flags.read is True
which is intended to be impossible except during the transient process
between marking messages as read and sending the "remove push
notifications" event.
The bug is that if a user who is declaring bankruptcy with 10,000s of
unreads ends up having the database query to mark all of those as read
take 60s, the Django/uwsgi request will time out and kill the process.
If the postgres transaction still completes, we'll end up with the
second half of this function never being run.
A safer ordering is to do the smaller queries first.
We do this in a loop for correctness in the unlikely event there are
more than 10,000 of these.
This is designed to have no user-facing change unless the client
declares bulk_message_deletion in its client_capabilities.
Clients that do so will receive a single bulk event for bulk deletions
of messages within a single conversation (topic or PM thread).
Backend implementation of #15285.
Whenever we use API queries to mark messages as read we now increment
two new LoggingCount stats, messages_read::hour and
messages_read_interactions::hour.
We add an early return to the do_increment_logging_stat function when
there are no changes (increment is 0), as an optimization to avoid
unnecessary database queries.
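The guard itself is simply (a sketch, not the full function):

    def do_increment_logging_stat(scope, stat, subgroup, event_time, increment=1):
        # Nothing to record; skip the database round trip entirely.
        if increment == 0:
            return
        ...  # existing upsert logic continues here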
We also log the messages_read_interactions::hour stat
as the number of API queries to mark messages as read.
We don't include tests for the case where do_update_pointer is called
because do_update_pointer will most likely be removed from the
codebase in the near future.
This adds a powerful end-to-end test for Zulip's API documentation:
For every documented API endpoint (with a few declared exceptions that
we hope to remove), we verify that every API response received by our
extensive backend test suite matches the declared schema.
This is a critical step towards being able to have complete, high
quality API documentation.
Fixes #15340.