Also, fixed up the annotations for tornadoviews to better align with
how narrows are defined: `Iterable[Sequence[str]]` rather than
`List[Tuple[str, str]]`.
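For context, a minimal sketch of why the broader annotation fits (the
function name here is hypothetical): `Tuple[str, str]` is a
`Sequence[str]` and `List` is an `Iterable`, so the concrete
`List[Tuple[str, str]]` that callers pass type-checks cleanly.

    from typing import Iterable, Sequence

    def narrow_is_search(narrow):
        # type: (Iterable[Sequence[str]]) -> bool
        # Each element of a narrow is an (operator, operand) pair.
        return any(term[0] == 'search' for term in narrow)

    narrow_is_search([('stream', 'errors'), ('search', 'deploy')])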
Had to add some `type: ignore` annotations because, in the current
stubs, the pattern used in `match` doesn't affect the declared return
type. A fix for this issue has been pushed to typeshed:
https://github.com/python/typeshed/pull/244
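A hypothetical illustration of the workaround (the suppression, not the
stub details, is the point):

    import re

    match = re.match(u'(?P<stream>\\S+)', u'general')
    if match is not None:
        # With the buggy stubs, the declared type here didn't track the
        # pattern, so valid code needed a suppression until the fix landed.
        stream = match.group('stream')  # type: ignore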
These changes don't fix any bugs, because the mutable default argument
is never passed outside of the callable or mutated. But it's good
practice to avoid mutable defaults in case those invariants change in
the future.
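The pattern being avoided, as a quick sketch (names illustrative):

    # Risky: the default list is created once, at function definition
    # time, and shared across every call.
    def notify_users(event, user_ids=[]):
        user_ids.append(0)  # mutates the shared default
        return user_ids

    # Safer: use None as the sentinel and build a fresh list per call.
    def notify_users_fixed(event, user_ids=None):
        if user_ids is None:
            user_ids = []
        user_ids.append(0)
        return user_ids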
[Substantially revised by tabbott]
This probably still has some bugs in it, but having mostly complete
annotations for models.py will help a lot for the annotations folks
are adding to other files.
In order to genericize use of Zulip outside companies,
all instances of coworkers have been changed to users.
NOTABLE EXCEPTION: When the Zulip instance is domain-
locked, the reference to coworkers remains. The reason
for this is twofold: first, the majority of Zulip instances
which require a particular domain will be locked to a
company, and second, the template variable for the required
domain should be added to the alert so it is clear to the
user which domain is needed for access.
Fixes: #861.
Just render the templates without the actual workflow to see if they
don't return a 500 error; this lets us catch various classes of
template bugs automatically.
Fixes #784.
Add two options to the `test-backend` script:
1. verbose
   If given, the `test-backend` script will give detailed output.
2. no-shallow
   Default value is False. If given, the `test-backend` script will
   fail if it finds a template which is only shallowly tested.
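A sketch of how the flags might be wired up (the real script's option
parsing may differ):

    import argparse

    parser = argparse.ArgumentParser(description='Run the backend test suite.')
    parser.add_argument('--verbose', action='store_true',
                        help='Give detailed output while running the tests.')
    parser.add_argument('--no-shallow', action='store_true', default=False,
                        help='Fail if any template is only shallowly tested.')
    options = parser.parse_args()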
This stub file allows us to annotate view functions using the actual
types present in the bodies of the functions, rather than everything
having the type REQ.
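The idea, roughly: at runtime REQ() returns a marker object that the
`has_request_variables` decorator replaces with the parsed request
variable, while the stub declares REQ as returning `Any` so each
parameter can carry its real type. A hedged sketch (signatures
illustrative, not the stub's exact contents):

    # request.pyi (sketch)
    from typing import Any

    def REQ(whence: str = ..., converter: Any = ...,
            default: Any = ...) -> Any: ...

With that in place, a view can be annotated naturally:

    @has_request_variables
    def update_pointer(request, user_profile, pointer=REQ(converter=int)):
        # type: (HttpRequest, UserProfile, int) -> HttpResponse
        ...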
In the function bulk_add_subscriptions, some variables were named
`stream_name` even though their type is Stream, not a string. Rename
those variables to `stream`.
Like the recent change blocking JSON endpoints for deactivated users
and users in deactivated realms, this change is a hardening
improvement. Those users should be unable to get an active session
anyway, but if somehow one is leaked, this means they won't be able to
access any user data.
Previously, api_fetch_api_key would not give clear error messages if
password auth was disabled or the user's realm had been deactivated;
additionally, the account disabled error stopped triggering when we
moved the active account check into the auth decorators.
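A sketch of the clearer error handling (helper names, ordering, and
messages are illustrative; `json_error`/`json_success` follow Zulip's
JSON response helpers, and `get_realm_for_email` is hypothetical):

    def api_fetch_api_key(request, username, password):
        realm = get_realm_for_email(username)  # hypothetical lookup
        if realm is not None and realm.deactivated:
            return json_error('Your realm has been deactivated.')
        if realm is not None and not password_auth_enabled(realm):
            return json_error('Password auth is disabled in your team.')
        user_profile = authenticate(username=username, password=password)
        if user_profile is None:
            return json_error('Your username or password is incorrect.')
        if not user_profile.is_active:
            return json_error('Your account has been disabled.')
        return json_success({'api_key': user_profile.api_key})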
While in theory users should be unable to get a valid session in order
to access these endpoints in the first place, this provides an extra
layer of hardening to prevent a deactivated user with a session from
accessing data via the old-style JSON API.
The security model for deactivated users (and users in deactivated
realms) being unable to access the service is intended to work via two
mechanisms:
* All active user sessions are deleted, and all login code paths
(where a user could get a new session) check whether the user (or
realm) is inactive before authorizing the request, preventing the
user from accessing the website and AJAX endpoints.
* All API code paths (which don't require a session) check whether the
user (and realm) are active.
However, this security model was not implemented correctly. In
particular, the check for whether a user has an active account in the
login process was done inside the login form's validators, which meant
that authentication mechanisms that did not use the login form
(e.g. Google and REMOTE_USER auth) could succeed in granting a session
even with an inactive account. The Zulip homepage would still fail to
load because the code for / includes an API call to Tornado authorized
by the user's token that would fail, but this mechanism could allow an
inactive user to access realm data or users to access data in a
deactivated realm.
This fixes the issue by adding explicit checks for inactive users and
inactive realms in all authentication backends (even those that were
already protected by the login form validator).
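A sketch of the shape of the shared check (the helper name is
illustrative):

    def only_if_active(user_profile):
        # Applied at the end of every backend's authenticate(): refuse
        # inactive users and users in deactivated realms, no matter how
        # the credentials were verified.
        if user_profile is None or not user_profile.is_active:
            return None
        if user_profile.realm.deactivated:
            return None
        return user_profile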
Mirror dummy users are already inactive, so we can remove the explicit
code around mirror dummy users.
The following commits add a complete set of tests for Zulip's inactive
user and realm security model.
In a deactivated realm, webhooks would still successfully send
messages, since there was no check for whether the realm was active in
api_key_only_webhook_view.
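The fix amounts to one more check in the decorator; a hedged sketch
(`validate_api_key` stands in for the existing credential lookup):

    from functools import wraps

    def api_key_only_webhook_view(view_func):
        @wraps(view_func)
        def _wrapped_view_func(request, *args, **kwargs):
            user_profile = validate_api_key(request)  # existing lookup, sketched
            if user_profile.realm.deactivated:
                return json_error('Realm for account has been deactivated')
            return view_func(request, user_profile, *args, **kwargs)
        return _wrapped_view_func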
Long ago, there was work on an experimental integration model where
every user in a realm would have administrative control over all bots,
with the goal of simplifying the process of setting up communally
administered bots for smaller teams. While that new model was never
fully implemented (and thus never set up as an option), an error in
that original implementation meant that the data on all bots in a
realm, including their API keys, was sent to the browsers of users via
the `realm_bots` variable in `page_params`. The data wasn't displayed
in the UI for non-admin users, but was available via e.g. the
JavaScript console.
This commit updates this behavior to only send sensitive bot data like
API keys to the owner of the bot (and realm admins).
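A sketch of the resulting behavior (field names modeled on the
description, not the exact schema):

    def get_bot_dict(bot_profile, viewer):
        # Everyone can see that the bot exists; only its owner and realm
        # admins receive credentials like the API key.
        bot = dict(email=bot_profile.email, full_name=bot_profile.full_name)
        if viewer.is_realm_admin or bot_profile.bot_owner_id == viewer.id:
            bot['api_key'] = bot_profile.api_key
        return bot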
We may in the future implement a model simplifying communally
administered integrations, but if we do that, those bots should be
limited in their capabilities (e.g. only able to send webhook
messages).
This bug has been present since Zulip was released as open source.
The old code for this lookup was unnecessarily complicated because we
were working around Guardian, where the `is_realm_admin` check was
extremely expensive.
Previously we relied on having two matching lists of fields for
get_active_user_dicts_in_realm, one in the actual code and the other
in the caching system. By unifying these lists to have a single
source, we eliminate a class of caching bugs we might otherwise
regularly introduce.
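The single-source-of-truth pattern, sketched (names modeled on Zulip's
helpers; the exact field set is illustrative):

    # One module-level list drives both the database query and the cache
    # key/flush logic, so the two can no longer drift apart.
    active_user_dict_fields = ['id', 'full_name', 'short_name',
                               'email', 'is_bot']

    def get_active_user_dicts_in_realm(realm):
        return UserProfile.objects.filter(realm=realm, is_active=True) \
                                  .values(*active_user_dict_fields)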
This results in a substantial performance improvement for all of
Zulip's backend templates.
Changes in templates:
- Change `block.super` to `super()`.
- Remove `load` tag because Jinja2 doesn't support it.
- Use `minified_js()|safe` instead of `{% minified_js %}`.
- Use `compressed_css()|safe` instead of `{% compressed_css %}`.
- `forloop.first` -> `loop.first`.
- Use `{{ csrf_input }}` instead of `{% csrf_token %}`.
- Use `{# ... #}` instead of `{% comment %}`.
- Use `url()` instead of `{% url %}`.
- Use `_()` instead of `{% trans %}` because in Jinja `trans` is a block tag.
- Use `{% trans %}` instead of `{% blocktrans %}`.
- Use `{% raw %}` instead of `{% verbatim %}`.
Changes in tools:
- Check for `trans` block in `check-templates` instead of `blocktrans`.
Changes in backend:
- Create a custom `render_to_response` function which takes a `request`
  object instead of a `RequestContext` object (a sketch follows below).
  There are two reasons to do this:
  1. `RequestContext` is not compatible with Jinja2.
  2. `RequestContext` in `render_to_response` is deprecated.
- Add Jinja2 related support files in zproject/jinja2 directory. It
includes a custom backend and a template renderer, compressors for js
and css and Jinja2 environment handler.
- Enable `slugify` and `pluralize` filters in Jinja2 environment.
Fixes #620.
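A minimal sketch of the custom helper (assuming Django's `render`
shortcut; the real function may handle more keyword arguments):

    from django.shortcuts import render

    def render_to_response(template_name, context=None, request=None):
        # Pass the request itself; Django builds the context for the
        # configured backend (Jinja2 here), so no RequestContext is needed.
        return render(request, template_name, context)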
To avoid the potential for introducing regressions here, we carefully
pass a default to REQ or not based on how the existing webhook's
parsing code worked. In the longer term, we'll want to make the
behavior consistent.
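Concretely, the rule is mechanical; a sketch with illustrative
parameter names:

    @api_key_only_webhook_view
    @has_request_variables
    def api_example_webhook(request, user_profile,
                            stream=REQ(default='commits'),
                            topic=REQ()):
        # `stream` keeps the fallback the old parsing code applied;
        # `topic`, which the old code required, stays required, since
        # REQ without a default raises an error when the value is missing.
        ...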
This fixes an exception where `client_id` was never set in an error code
path. It shouldn't be needed, but I think this makes the code clearer
and this will help in debugging the actual problem.
Related to #753.
The error message for a test file that doesn't import properly was
previously pretty difficult to understand and it wasn't clear how to
debug the issue.
This commit adds the capability to keep track of and remove uploaded
files. Unclaimed attachments are files that have been uploaded to the
server but are not referenced in any messages. A management command to
remove old unclaimed files after a week is also included.
Tests for getting the files referenced in messages are also included.
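A sketch of such a management command (the `Attachment` model exists
per the description above; the field names used to filter are
assumptions):

    from datetime import timedelta

    from django.core.management.base import BaseCommand
    from django.utils import timezone

    from zerver.models import Attachment

    class Command(BaseCommand):
        help = 'Remove unclaimed attachments older than one week (sketch).'

        def handle(self, *args, **options):
            cutoff = timezone.now() - timedelta(weeks=1)
            old_unclaimed = Attachment.objects.filter(
                messages=None, create_time__lt=cutoff)
            self.stdout.write('Deleting %d attachments\n'
                              % (old_unclaimed.count(),))
            old_unclaimed.delete()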
Since we don't have a stable way to get the Dropbox preview failure
image (and it was sorta a weird setup anyway), it seems best to just
remove the condition.
Previously we needed to use a specified password when activating a
formerly mirror dummy user, in order for that user to be able to
(re)set their password and login. Now that we have our own password
reset form, this is no longer required.
Previously, if a user had only authenticated via Google auth, they
would be unable to reset their password in order to set one (which is
needed to setup the mobile apps, for example).