We added a default Terms of Service for the development environment a
few months back; as a side effect, we now need to accept the ToS when
going through the development environment registration flow, including
for our one-click account creation buttons.
After a new user joins an active organization, it isn't obvious what
to do next; this change ensures there are recent unread messages in
the stream sidebar for the user to click on, to get a feel for what's
happening in the organization and to experiment with Zulip.
Fixes #6512.
This commit wraps up the major work that we held back when upgrading
py-markdown from 2.6.11 to 3.0.1. Since we were making custom changes
to the link syntax, at the time we stuck with the old method of
parsing links. This lays the groundwork for further changes to our
link and image link handling, and brings us on par with upstream.
Also, we now better document the ways in which our link handling
differs from upstream.
Previously, the unread_msgs data structure accounting (used for both
the web and mobile apps to determine the "Unread mentions" count
displayed in the UI) did not include wildcard mentions at all.
We fix this by adding the logic required to properly include that
data, with tests. As discussed in #6040, it makes sense to include
muted streams and topics for the purpose of this calculation.
Fixes part of #6040.
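To illustrate the accounting, here's a minimal sketch of the idea
(the flag and field names are illustrative of the per-user message
flags, not the exact internals):

    def count_unread_mentions(user_messages):
        # Count unread messages that mention the user directly or via
        # a wildcard mention (@all/@everyone); per the discussion in
        # #6040, muted streams and topics are included in this count.
        count = 0
        for um in user_messages:
            if not um.is_read and (um.mentioned or um.wildcard_mentioned):
                count += 1
        return count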
Apparently, get_active_presence_idle_user_ids, which is carefully
optimized to only fetch data for users who might actually need
notification processing, was only considering PMs and direct mentions,
not wildcard mentions or alert words.
This caused some pretty weird failure modes when working on adding
support for broader mention notifications, because users who had one
of these types of notifications would be treated as never
presence-idle, which was just confusing.
This is part of adding support for notifications for wildcard mentions
and alert words; it's worth merging this as an early commit because
the consequences of not doing this are very difficult to debug.
Rather than continually resetting the contents of an existing event
queue, we allocate a new one for each subtest.
We also fix a rather confusing bundle of comments.
We add an ensure_users function and use it in tests which have
hard-coded user ids, to make it clear which users the ids refer to
and to have that verified.
This makes it easier to see what's happening in these tests and to
keep track of any renumberings of user ids due to changes in how we
populate the database.
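A minimal sketch of what such a helper can look like (the signature
and the get_user_id lookup are stand-ins, not the exact code):

    from typing import List

    def ensure_users(ids: List[int], user_names: List[str]) -> None:
        # Verify that hard-coded user ids in a test still refer to the
        # users we expect; get_user_id stands in for the test suite's
        # actual lookup helper.
        expected = [get_user_id(name) for name in user_names]
        assert ids == expected, f"{ids} != {expected}"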
Hopefully this does a better job of spurring people to action, and also
suggests a self-service fix if they don't (i.e. contacting the person
who invited them).
Add the ability to search the entire message history of all public
streams at once. This includes messages in all subscribed and
unsubscribed public streams, and even historical public stream
messages sent before the user joined the organization or stream.
Fixes #8859.
Instead of having a hard-coded URL, it seems better to replace it with
get_gravatar_url - which returns the correct URL, without breaking if
the email/id of the example user changes.
One occasionally finds that a 1580-character string of SQL queries
might not most readably be presented on a single line.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Send the config_options for each supported incoming webhook bot along
with the initial state (not present in apply_events since this is
mostly just static data).
Without disturbing the flow of the existing code for configuring
embedded bots too much, we now use the config_options feature to
allow incoming webhook type bots to be configured via the "/bots"
endpoint of the API.
This is a prep commit to allow us to validate user provided bot
config data using the same function for incoming webhook type
bots alongside embedded bots (as opposed to creating a new
function just for incoming webhook bots).
In integrations.py we have a class called Integration, which we
usually subclass and use to define the metadata for all of our
integrations.
Now, we want to allow all of our bots, specifically incoming webhook
bots, to be configured (i.e. we should let the user provide
BotConfigData).
For this we create a new instance member of the Integration class
called config_options, which will be a list of tuples containing the
displayable integration name, the configuration key form of the
integration name, and the validator that its value is supposed to
adhere to.
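In rough outline, the shape is something like the following sketch
(the Validator type is illustrative; the real validators have their
own signature):

    from typing import Callable, List, Optional, Tuple

    Validator = Callable[[object], bool]  # illustrative

    class Integration:
        def __init__(
            self,
            name: str,
            config_options: Optional[List[Tuple[str, str, Validator]]] = None,
        ) -> None:
            # Each tuple: (displayable integration name, configuration
            # key form of that name, validator its value must adhere to)
            self.name = name
            self.config_options = config_options or []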
This was used as a helper to construct the final display_recipient when
fetching messages. With the new mechanism of constructing
display_recipient by fetching appropriate users/streams from the
database and cache, this shouldn't be needed anymore.
There is no need to fetch entire Stream or UserProfile objects, as
only a few fields are needed. We use Django's .values() method to only
get what's needed.
For UserProfiles, this means that what we get from the queries are
dictionaries already in the display_recipient form (UserDisplayRecipient
type) - so we can remove the user_profile_to_display_recipient_dict
function, as there's no need for this UserProfile ->
UserDisplayRecipient conversion anymore.
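As a rough sketch of the pattern (assuming the UserProfile Django
model and a list of user_ids in scope; the actual queryset logic is
more involved):

    # .values() returns plain dictionaries with just the requested
    # fields - already in the display_recipient shape - instead of
    # full UserProfile model instances.
    user_dicts = UserProfile.objects.filter(id__in=user_ids).values(
        "email", "full_name", "short_name", "id", "is_mirror_dummy"
    )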
Instead of having the rather unclear type Union[str,
List[UserDisplayRecipient]] where the display_recipient of message
dicts was involved, we use DisplayRecipientT (renamed from
DisplayRecipientCacheT, since there wasn't much reason to have the
word Cache in there), which makes the actual nature of these objects
clearer and gets rid of this pretty big type declaration.
Since the display_recipients dictionaries corresponding to users are
always dictionaries with keys email, full_name, short_name, id,
is_mirror_dummy - instead of using the overly general Dict[str, Any]
type, we can define a UserDisplayRecipient type,
using an appropriate TypedDict.
The type definitions are moved from display_recipient.py to types.py, so
that they can be imported in models.py.
Appropriate type adjustments are made in various places in the code
where we operate on display_recipients.
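Concretely, the type is roughly the following (the exact import
location for TypedDict depends on the Python version in use):

    from typing_extensions import TypedDict

    class UserDisplayRecipient(TypedDict):
        email: str
        full_name: str
        short_name: str
        id: int
        is_mirror_dummy: bool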
The user information in display_recipient in cached message_dicts
becomes outdated if the information is changed in any way.
In particular, since we don't have a way to find all the message
objects that might contain PMs after an organization toggles the
setting to hide user email addresses from other users, we had a
situation where clients might see inaccurate cached data from before
the transition for a period of up to hours.
We address this by using our generic_bulk_cached_fetch toolchain to
ensure we are always fetching display_recipient data from the database
(and/or a special recipient_id -> display_recipient cache, which we
can flush easily).
Fixes #12818.
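Schematically, the fetch now looks something like this sketch (the
helper names passed in are illustrative, not the exact signature):

    # Look up display_recipient data by recipient_id, hitting the
    # dedicated recipient_id -> display_recipient cache first and
    # falling back to the database for any misses.
    display_recipients = generic_bulk_cached_fetch(
        cache_key_function=display_recipient_cache_key,
        query_function=bulk_fetch_display_recipients,
        object_ids=recipient_ids,
    )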
It's not clear to me how this is intended to work in Mattermost's
system, given that they don't document this behavior, but some users have
`null` as their list of teams, and presumably are not meant to be
included in any team at all.
This restructures the API endpoints that we currently have implemented
more or less for exclusive use by the mobile and desktop apps (things
like checking what authentication methods are supported) to use a
system that can be effectively parsed by our test_openapi
documentation.
This brings us close to being able to eliminate
`buggy_documentation_endpoints` as a persistently nonempty list.
This adds some regular expression manipulation hacks to make it
possible for us to validate the documentation for the presence
endpoint with a slightly more complex regular expression capture
group.
Previously, our OpenAPI documentation validation was failing for some
endpoints because it didn't account for the `in: path` type of
parameter, resulting in a mismatch between what was declared via REQ
and what was declared in the OpenAPI docs.
We fix this by using the `path_only` flag to exclude the path type
parameters, in both places, from what the documentation validation
considers.
I doubt this is the correct long-term fix; in particular, I don't
think we're actually running the validators for these path-only
parameters. The examples that exist today are all IDs with validators
for being non-negative numbers, but longer-term I think we'll want to
do something different (possibly at the REQ layer, see the TODO).
Atlassian announced that it will no longer provide information about
comments along with their "issue" type event payloads for Jira. So
we must now update the Jira integration to appropriately respond to
"comment" type events (the reason why we didn't do this before was
that initially, the "comment" type event payloads didn't contain
sufficient information about their issues, but this payload has
since been improved).
Note: This commit does *not* remove support for the older "issue"
type event payloads where information about comments was included.
This way we can maintain compatibility with old versions of
self-hosted Jira (2016 and before).
Source:
https://developer.atlassian.com/cloud/jira/platform/change-notice-removal-of-comments-from-issue-webhooks/
Fixes #13012.
Instead of just mocking some fake events, we use the code
path that generates slow query events and publishes them
to SlowQueryWorker.
This test improvement would have caught a recent potential regression
in code review.
This new test runs each generated curl example against the Zulip API,
checking whether it returns successfully without errors.
Significantly modified by tabbott for simplicity.
Our new curl example generation logic was broken, in that it hardcoded
localhost:9991 (without an HTTP method or anything) as the API URL.
It requires a bit of plumbing to make this possible.
This refactor extracts the logic for checking whether a user can
access stream history into its own function, can_access_stream_history,
that takes in a user_profile and stream. Then we make a separate
function, can_access_stream_by_name, that takes in a stream_name,
retrieves the stream, and passes it to can_access_stream_history. This
will make it easy to later add a function that does the same thing
with a stream ID.
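A sketch of the resulting structure (simplified; it assumes the
UserProfile/Stream models and the get_stream helper, and the real
access checks and error handling are more involved):

    def can_access_stream_history(user_profile: UserProfile,
                                  stream: Stream) -> bool:
        # The core check, shared by the by-name variant below and a
        # future by-id variant.
        ...

    def can_access_stream_by_name(user_profile: UserProfile,
                                  stream_name: str) -> bool:
        try:
            stream = get_stream(stream_name, user_profile.realm)
        except Stream.DoesNotExist:
            return False
        return can_access_stream_history(user_profile, stream)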
This function was used only once, in exclude_muting_conditions, where
it returned a stream name that was then used to fetch the stream id.
Since we expect the narrow to also include a stream id, we refactor it
to return a stream object, and since we use
get_stream_by_narrow_operand_access_unchecked, we don't need to worry
about handling cases where a stream id is passed, since that function
handles it.
This lets us clean up the linter rule that excludes the use of
get_stream, and by adding access_unchecked to the name we make it
clear that it should be used with caution.
Refactoring idea by Tim Abbott.
Makes it obvious and readable, and as an added bonus takes up less
space.
Note, some types that used Iterable instead of List were changed to
use List, since the narrow_parameter converter returns a List.
zerver/openapi/python_examples.py:105: error: Argument 1 to "get_user_presence" of "Client" has incompatible type "str"; expected "Dict[str, Any]"
zerver/openapi/python_examples.py:563: error: Argument 1 to "add_reaction" of "Client" has incompatible type "Dict[str, object]"; expected "Dict[str, str]"
zerver/openapi/python_examples.py:576: error: Argument 1 to "remove_reaction" of "Client" has incompatible type "Dict[str, object]"; expected "Dict[str, str]"
zerver/worker/queue_processors.py:587: error: Argument "client" to "extract_query_without_mention" has incompatible type "EmbeddedBotHandler"; expected "ExternalBotHandler"
These were only missed because mypy daemon mode requires us to set
`follow_imports = skip` for the `zulip` package.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The typing for generic_bulk_cached_fetch is complicated, and was
recorded incorrectly previously for the case where a cache_transformer
function is required. We fix this by adding the new CacheItemT, and
additionally add comments explaining what's going on with these types
for future reference.
Thanks to Mateusz Mandera for raising this issue.
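The shape of the typing is roughly as follows (simplified; the real
signature has additional extractor/setter/id_fetcher parameters):

    from typing import Callable, Dict, List, TypeVar

    ObjKT = TypeVar("ObjKT")            # key identifying an object
    ItemT = TypeVar("ItemT")            # what the database query returns
    CacheItemT = TypeVar("CacheItemT")  # what actually lives in the cache

    def generic_bulk_cached_fetch(
        cache_key_function: Callable[[ObjKT], str],
        query_function: Callable[[List[ObjKT]], List[ItemT]],
        object_ids: List[ObjKT],
        *,
        # Without a distinct CacheItemT, the transformer's return type
        # was conflated with ItemT.
        cache_transformer: Callable[[ItemT], CacheItemT],
    ) -> Dict[ObjKT, CacheItemT]:
        ...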
Apparently, the filters written for send_password_reset_email (and
some other management commands) didn't correctly consider the case of
deactivated users.
While some commands, like syncing LDAP data (which can include whether
a user should be deactivated) want to process all users, other
commands generally only want to interact with active users. We fix
this and add some tests.
It seems possible that attempting to export large organizations could
result in high resource consumption that justifies having a technician
manage the exports manually.
The original seems to be unmaintained
(johnsensible/django-sendfile#65). Notably, this fixes a bug in the
filename parameter, which previously showed the Python 3 repr of a
byte string (johnsensible/django-sendfile#49).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
There’s an apparent contradiction between RFC 7230 §3.3.2
Content-Length:
“A server MUST NOT send a Content-Length header field in any response
with a status code of 1xx (Informational) or 204 (No Content).”
and RFC 7231 §4.3.7 OPTIONS:
“A server MUST generate a Content-Length field with a value of "0" if
no payload body is to be sent in the response.”
The only resolution within the existing language would be to disallow
all 204 responses to OPTIONS requests. However, I don’t think that
was the intention, so I submitted this erratum report:
https://www.rfc-editor.org/errata/eid5806
and updated the code accordingly.
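Illustratively (this is a framework-free sketch of the resolved
behavior, not the actual change):

    from typing import Dict, Tuple

    def build_options_response() -> Tuple[int, Dict[str, str]]:
        headers = {"Allow": "GET, POST, OPTIONS"}
        # Per erratum 5806, a 204 (No Content) reply to OPTIONS carries
        # no Content-Length header at all, not "Content-Length: 0".
        return 204, headers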
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Apparently, our edit-message events did not guarantee that the outer
wrapper dictionary, which is intended to be unique for each client,
was unique for every client (instead only ensuring it was unique for
each user).
This led to clients unexpectedly getting last_event_id validation
errors in this code path when a user had multiple connected clients,
because the linear ordering of event IDs within a given queue was
corrupted.
In fd2a63b049, we accidentally fixed
this issue with a different set of userdata events, without fixing the
edit-message event bug. This commit fixes the remaining issue.
The `users/me/subscriptions` endpoint accidentally started returning
subscriber information for each stream. This is convenient, but
unnecessarily costly for those clients which either don't need it
(most API apps) or already acquire this information via /register
(including Zulip's apps).
This change removes that data set from the default response. Clients
which had come to rely on it, or would like to rely on it in future,
may still access it via an additional documented API parameter.
Fixes #12917.
Use a decorator called openapi_test_function instead of the hard-coded
TEST_FUNCTIONS list for increased readability and maintainability (at
the cost of performance).
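A minimal sketch of such a decorator, assuming a simple registry dict
(the actual bookkeeping differs):

    from typing import Any, Callable, Dict

    TEST_FUNCTIONS: Dict[str, Callable[..., Any]] = {}

    def openapi_test_function(
        endpoint: str,
    ) -> Callable[[Callable[..., Any]], Callable[..., Any]]:
        # Register the decorated function as the example/test for an
        # endpoint, replacing an entry in a hard-coded list.
        def register(func: Callable[..., Any]) -> Callable[..., Any]:
            TEST_FUNCTIONS[endpoint] = func
            return func
        return register

    @openapi_test_function("/users/me:get")
    def get_profile(client: Any) -> None:
        ...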
I changed the class of the title in order to use the same styling as the
other similar pages (like `/accounts/go` or `/login`).
Changed the related test.
For the emails that are associated with an existing account in an
organisation, the avatars will be displayed on the email selection
page. This includes avatar data in what is passed to the page.
Added `avatar_urls` to the context in `test_templates.py`.
Apparently GitHub changed the email address for these; we need to
update our code accordingly.
One cannot receive emails at username@users.noreply.github.com, so
if someone tries creating an account with this email address, they
would not be able to verify the account.
It was allowing us to get away with wrong types on a few functions:
`check_send_typing_notification` and `send_notification_backend` can be
(and are) called with a list of `int` as `notification_to`, not just a
list of `str`.
The problem it was working around already had a better solution using
the dummy `type` argument. Use that.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The previous iteration still had the failure mode of not actually
testing anything, because it didn't trigger the data export code path
(and in fact was getting an HTTP 401 authentication denied error).
This test was broken due to using an empty `RealmAuditLog`
table. We fix this by mocking the creation of an export,
thus creating an entry, similar to what we do in our other
tests.
Delete trailing newlines from all files, except
tools/ci/success-http-headers.txt and tools/setup/dev-motd, where they
are significant, and static/third, where we want to stay close to
upstream.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Previous cleanups (mostly the removals of Python __future__ imports)
were done in a way that introduced leading newlines. Delete leading
newlines from all files, except static/assets/zulip-emoji/NOTICE,
which is a verbatim copy of the Apache 2.0 license.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Now that we can create cURL examples based on the OpenAPI
documentation, we can begin using simple one-line tags in
the documentation instead of manually creating cURL examples.
Fixes part of #12878.
Now we can also include extra keyword arguments to specify
modifications in how the example code should be generated
in the generate_code_example template tag.
E.g. generate_code_example(curl, exclude=["param1", "param2"])
This commit extends api_code_examples.py to support automatically
generating cURL examples from the OpenAPI documentation. This way
work won't have to be repeated and we can also drastically reduce
the chance of introducing faulty cURL examples (via an automated
test, which can now be easily created).
This commit progresses our efforts to reduce pending_endpoints
as well as to migrate away from templates/zerver/api/fixtures
and towards our OpenAPI documentation.
Similar to commit d62b75fc.
The current code looks like it's trying to redirect /integrations/doc/email
to /integrations when EMAIL_GATEWAY_PATTERN is not set.
I think it doesn't currently do this. The test for that pathway has a bug:
self.get_doc('integrations/doc-html/email', subdomain='zulip') needs a
leading slash, and putting the slash back in results in the test failing.
This redirection is not really desired behavior -- better is to
unconditionally show that the email integration exists, and just point the
user to https://zulip.readthedocs.io/en/latest/production/email-gateway.html
(this is done in a child commit).
This gives us access to typing_extensions.Deque, which was not added
to typing until 3.5.4.
(PROVISION_VERSION is not bumped because the transitive dependency set
in dev.txt hasn’t changed.)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This verifies that the client passed a last_event_id that actually
came from the queue instead of making up an ID from the future. It
turns out one of our tests was making up such an ID, but legitimate
clients are expected not to do so.
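Schematically, the check amounts to something like this (names here
are illustrative; the real code reports the failure to the client as
a JSON error):

    def verify_last_event_id(queue, last_event_id):
        # A legitimate client can only echo back an id the server
        # previously sent it, so anything newer than the newest id
        # this queue has issued must be made up.
        if last_event_id is not None and last_event_id > queue.newest_event_id:
            raise ValueError("last_event_id is from the future")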
The previous version of this commit (commit
e00d4be6d5, #12888) had to be reverted
(commit b86c5cc490) because it was
missing the `to_dict`/`from_dict` migration code.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Our implementation requires at least 1 space after the
'#' to not break existing linkifiers like '#123', etc.
that generally follow the convention we show in linkifier
examples.
- [valid] : # Hello
- [invalid]: #Hello
For the frontend, we have taken the code from v0.7.0 of
upstream marked and made minor changes to avoid having
to refactor a significant part of our marked code.
For the backend, we merely have to change the regex to
require a space after '#', and add hashheader to our
list of blockparsers.
Fixes #11418.
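A sketch of the backend regex change (simplified; the actual pattern
in the codebase handles more cases):

    import re

    # Upstream py-markdown accepts '#Hello' as a heading; requiring at
    # least one space after the '#'s keeps '#123'-style linkifiers
    # working.
    HEADING_RE = re.compile(r'^(?P<level>#{1,6})\s+(?P<header>.*?)#*$')

    assert HEADING_RE.match('# Hello')
    assert not HEADING_RE.match('#Hello')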
This fixes two issues:
* The syntax check logic we had for zerver.tornado.autoreload would
end up clearing _reload_hooks if one of the files that had changed
was zerver.tornado.autoreload itself (because we'd re-imported
the current module), which could be incredibly confusing when trying
to test the autoreload logic. It seems better to just not run the
syntax check for this file.
Similarly, because reloading event_queue.py would destroy the state
in the queues, we avoid that as well.
* We make sure to flush stdout after running any reload hooks, to make
sure their output reaches the user.