This commit adds the 'GET /user_groups/{user_group_id}/members'
endpoint to get the members of a user group. The "direct_member_only"
parameter can be passed as True to get only the direct members of the
user group, excluding the members of its subgroups.
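For illustration, a request against the new endpoint might look like
the following (a sketch using the python "requests" library; the realm
URL, credentials, group id, and the "members" response key are
assumptions for the example, not part of this commit):

    import requests

    # Fetch only the direct members of user group 15, skipping members
    # that only belong to its subgroups.
    response = requests.get(
        "https://chat.example.com/api/v1/user_groups/15/members",
        params={"direct_member_only": "true"},
        auth=("example-bot@chat.example.com", "EXAMPLE_API_KEY"),
    )
    response.raise_for_status()
    print(response.json()["members"])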
This commit adds the 'GET /user_groups/{id}/members/{id}' endpoint to
check whether a user is a member of a group.
This commit also adds a for_read parameter to access_user_group_by_id,
which if passed as True will provide read access to the user group even
if it is a system group or if the non-admin acting user is not part of
the group.
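A minimal sketch of how the new parameter changes the permission check
(the signature, helper names, and error message below are assumptions
about the surrounding code, not the exact implementation):

    def access_user_group_by_id(
        user_group_id: int, user_profile: UserProfile, for_read: bool = False
    ) -> UserGroup:
        user_group = get_user_group_by_id_in_realm(user_group_id, user_profile.realm)
        if for_read:
            # Read access is granted even for system groups, and even if a
            # non-admin acting user is not part of the group.
            return user_group
        if user_group.is_system_group:
            raise JsonableError(_("Insufficient permission"))
        if not user_profile.is_realm_admin and not is_user_in_group(user_group, user_profile):
            raise JsonableError(_("Insufficient permission"))
        return user_group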
This commit changes the invite API to accept the invitation
expiration time in minutes, since we are going to add a custom option
in subsequent commits which would allow a user to set the expiration
time in minutes, hours and weeks as well.
This comment was _originally_ for the `default` memcached cache, back
when it was added in 0a84d7ac62. 9e64750083
made it a lie, and edc718951c made it even more confusing when it
removed the `default` cache configuration block, leaving the wrong
comment next to the wrong cache configuration block.
Banish the comment.
This cache was added in da33b72848 to serve as a replacement for the
durable database cache, in development; the previous commit has
switched that to be the non-durable memcached backend.
The special case for "in-memory" in development is mostly unnecessary
compared to memcached -- `./tools/run-dev.py` flushes memcached on
every startup. The behaviour differs slightly, in that if the
codepath is changed and `run-dev` restarts Django, the cache is not
cleared. This seems an unlikely occurrence, however, and the code
cleanup from its removal is worth it.
Failure to pull the default "zulip" value here can lead to
accidentally applying a `postgres_password` value which is unnecessary
and may never work.
For consistency, always skip password auth attempts for the "zulip"
user on localhost, even if the password is set. This mirrors the
behavior of `process_fts_updates`.
When the credentials are provided by dint of being run on an EC2
instance with an assigned Role, we must be able to fetch the instance
metadata from IMDS -- which is precisely the type of internal-IP
request that Smokescreen denies.
While botocore supports a `proxies` argument to the `Config` object,
this is not actually respected when making the IMDS queries; only the
environment variables are honored. See
https://github.com/boto/botocore/issues/2644
As such, implement S3_SKIP_PROXY by monkey-patching the
`botocore.utils.should_bypass_proxies` function, to allow requests to
IMDS to be made without Smokescreen impeding them.
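Roughly, the monkey-patch looks like the following sketch (the IMDS
address is the standard 169.254.169.254 metadata endpoint; the exact
signature of should_bypass_proxies and the settings wiring are
simplified here):

    import botocore.utils
    from django.conf import settings

    original_should_bypass_proxies = botocore.utils.should_bypass_proxies

    def patched_should_bypass_proxies(url, *args, **kwargs):
        # Let requests to the EC2 instance metadata service (IMDS) skip
        # the Smokescreen proxy, so Role-based credentials can be fetched.
        if url.startswith("http://169.254.169.254"):
            return True
        return original_should_bypass_proxies(url, *args, **kwargs)

    if settings.S3_SKIP_PROXY:
        botocore.utils.should_bypass_proxies = patched_should_bypass_proxies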
Fixes #20715.
The flow seems to have changed a bit since these instructions were last
updated. Also, information on which scopes need to be authorized was
missing; that takes a bit of effort to figure out and thus should be
written out explicitly.
This will make it convenient to add a handful of organizations to the
beta of this feature during its first few weeks to try to catch bugs,
before we open it to everyone in Zulip Cloud.
A recent Postgres upstream release appears to have broken PGroonga.
While we wait for https://github.com/pgroonga/pgroonga/issues/203 to
be resolved, disable PGroonga in our automated tests so that Zulip
CI passes.
The new release adds commit 20ac22b96d, which allows us to get rid of
the entire ugly override that was needed to do this commit's job in
our code. What we do in this commit:
* Use django-scim2 0.17.1
* Revert the relevant parts of f5a65846a8
* Adjust the expected error message in test_exception_details_not_revealed_to_client
since the message thrown by django-scim2 in this release is slightly
different.
We do not have to add anything to set EXPOSE_SCIM_EXCEPTIONS, since
django-scim2 uses False as the default, which is what we want - and we
have the aforementioned test verifying that indeed information doesn't
get revealed to the SCIM client.
It’s built in to Jinja2 as of 2.9. Fixes “DeprecationWarning: The
'autoescape' extension is deprecated and will be removed in Jinja
3.1. This is built in now.”
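The change amounts to dropping the deprecated extension and relying on
the built-in option, roughly as follows (a generic Jinja2 sketch, not
Zulip's exact template configuration):

    from jinja2 import Environment, FileSystemLoader

    # Previously: Environment(..., extensions=["jinja2.ext.autoescape"])
    # With Jinja2 >= 2.9, autoescaping is a built-in Environment option:
    env = Environment(loader=FileSystemLoader("templates"), autoescape=True)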
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We recently ran into a payload in production that didn't contain
an event type at all. A payload where we can't figure out the event
type is quite rare. Instead of letting these payloads run amok, we
should raise a more informative exception for such unusual payloads.
If we encounter too many of these, then we can choose to conduct a
deeper investigation on a case-by-case basis.
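The shape of the check is roughly as follows (the field name, helper,
and exception class here are illustrative, not the exact ones used by
the webhook integration):

    class UnsupportedWebhookEventTypeError(Exception):
        """Raised when a webhook payload's event type cannot be determined."""

    def get_event_type(payload: dict) -> str:
        event_type = payload.get("event_type")
        if event_type is None:
            # Fail loudly and informatively, instead of letting such
            # unusual payloads run amok or produce an opaque crash.
            raise UnsupportedWebhookEventTypeError("no event type found in payload")
        return event_type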
With some changes by Tim Abbott.
This replaces the TERMS_OF_SERVICE and PRIVACY_POLICY settings with
just a POLICIES_DIRECTORY setting, in order to support installations
(like Zulip Cloud) that have more policies than just those two.
With minor changes by Eeshan Garg.
We do s/TOS/TERMS_OF_SERVICE/ on the name, and while we're at it,
remove the assumed zerver/ namespace for the template, which isn't
correct -- Zulip Cloud related content should be in the corporate/
directory.
Having wantMessagesSigned=True globally means that python3-saml also
applies it to regular authentication SAMLResponses, requiring the
whole response to be signed. This is an issue, because a feasible
alternative, which some IdPs (e.g. AzureAD) take by default, is to
sign specifically the assertions in the SAMLResponse. This is also
secure, and thus we generally want to accept it.
Without this, the setting of wantMessagesSigned=True globally
in 4105ccdb17 causes a
regression for deployments that have already set up SAML with providers
such as AzureAD, making Zulip stop accepting the SAMLResponses.
Testing that this new logic works is handled by
test_saml_idp_initiated_logout_invalid_signature, which verifies that
a LogoutRequest without a signature will be rejected.
A confirmation link takes a user to the check_prereg_key_and_redirect
endpoint, before getting redirected to POST to /accounts/register/. The
problem was that validation was happening in the check_prereg_key_and_redirect
part and not in /accounts/register/ - meaning that one could submit an
expired confirmation key and be able to register.
We fix this by moving validation into /accounts/register/.
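A sketch of the fix's shape, loosely using Zulip's confirmation
helpers (the function names, arguments, and imports below are
approximations for illustration, not the exact code):

    def accounts_register(request: HttpRequest) -> HttpResponse:
        key = request.POST.get("key", "")
        try:
            # Validate the confirmation key here too, so an expired or
            # invalid key POSTed directly to /accounts/register/ is
            # rejected rather than allowing registration.
            confirmation = get_object_from_key(key, Confirmation.USER_REGISTRATION)
        except ConfirmationKeyException as e:
            return render_confirmation_key_error(request, e)
        ...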
The warning was fixed in python-jose 3.3.0, which we pulled in with
commit 61e1e38a00 (#18705).
This reverts commit 1df725e6f1 (#18567).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We had an incident where someone didn't notice for a week that they'd
accidentally enabled a 30-day message retention policy, and thus we
were unable to restore the deleted content.
After some review of what other products do (e.g. Dropbox preserves
things in a restorable state for 30 days), we're adjusting this
setting's default value to be substantially longer, to give more time
for users to notice their mistake and correct it before data is
irrevocably deleted.
TOR users are legitimate users of the system; however, TOR can
also be used for abuse -- specifically, for evading IP-based
rate-limiting.
For the purposes of IP-based rate-limiting, add a
RATE_LIMIT_TOR_TOGETHER flag, defaulting to false, which lumps all
requests from TOR exit nodes into the same bucket. This may allow a
TOR user to deny other TOR users access to the find-my-account and
new-realm endpoints, but this is a low cost for cutting off a
significant potential abuse vector.
If enabled, the list of TOR exit nodes is fetched from their public
endpoint once per hour, via a cron job, and cached on disk. Django
processes load this data from disk, and cache it in memcached.
Requests are spared the burden of checking the disk on failure via a
circuitbreaker, which trips if there are two failures in a row, and
only begins trying again after 10 minutes.
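A rough sketch of the read-and-cache shape described above, using the
`circuitbreaker` package (the file path and function name are
illustrative):

    import json
    from circuitbreaker import circuit

    @circuit(failure_threshold=2, recovery_timeout=10 * 60)
    def get_tor_exit_nodes() -> set:
        # The hourly cron job refreshes this file from TOR's public
        # exit-node list; Django processes only read the on-disk copy
        # and then cache the result in memcached.
        with open("/var/lib/zulip/tor-exit-nodes.json") as f:
            return set(json.load(f))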
Using these tuples is clearly uglier than using classes for storing
these encoded streams. This can be built on further to implement the
various bits of fiddly logic around handling these objects inside
appropriate class methods.
This increases the possible maximum wait time to send exceptions
during shutdown. The larger value makes it possible to send larger
exceptions, and weather larger network hiccups, during shutdown. In
instances where a service is crash-looping, it is already not serving
requests reliably, and better ensuring those exceptions are captured
is of significant value.
Previously, our codebase contained links to various versions of the
Django docs, e.g.
https://docs.djangoproject.com/en/1.8/ref/request-response/#django.http.HttpRequest
and
https://docs.djangoproject.com/en/2.2/ref/settings/#std:setting-SERVER_EMAIL.
Opening a link to a doc with an outdated Django version would show the
warning "This document is for an insecure version of Django that is no
longer supported. Please upgrade to a newer release!".
Most of these links are inside comments.
Following the replacement of these links in our docs, this commit uses
a search with the regex "docs.djangoproject.com/en/([0-9].[0-9]*)/"
and replaces all matches with "docs.djangoproject.com/en/3.2/".
All the new links in this commit have been generated by the above
replacement, and each link has then been manually checked to ensure
that (1) the page still exists and has not been moved to a new
location, and (2) the anchor that we're linking to has not been
changed; no page or anchor turned out to have been moved or changed
like this.
One comment that mentioned a Django version in text before linking to
a page for that version has also been changed; the comment mentioned
the specific version when a change happened, and that history is no
longer relevant to us.
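The mechanical replacement corresponds to something like the following
(the actual edit was done with search-and-replace tooling; this snippet
just restates the regex above, with the dots escaped):

    import re

    def update_django_doc_links(text: str) -> str:
        return re.sub(
            r"docs\.djangoproject\.com/en/([0-9]\.[0-9]*)/",
            "docs.djangoproject.com/en/3.2/",
            text,
        )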
This more closely matches email_change_by_user and
password_reset_form_by_email limits; legitimate users are unlikely to
need to send more than 5 emails to themselves during a day.
Both `create_realm_by_ip` and `find_account_by_ip` send emails to
arbitrary email addresses, and as such can be used to spam users.
Lump their IP rate limits into the same bucket; most legitimate users
will likely not be using both of these endpoints at similar times.
The rate is set at 5 in 30 minutes, the more quickly-restrictive of
the two previous rates.
If the realm is web-public, spectators can now view the avatars of
other users.
We had to introduce a special exception in the REST model to allow
`/avatar`-style URLs for `anonymous` access, because they don't have
the /api/v1 prefix.
Fixes #19838.
As detailed in the comments, the default behavior is undesirable for us
because we can't really predict all possibilities of exceptions that may
be raised - and thus putting str(e) in the http response is potentially
insecure as it may leak some unexpected sensitive information that was
in the exception.
As a hypothetical example, a KeyError resulting from some buggy
some_dict[secret_string] call would leak information. Though of course
we aim to never write code like that.
None of the existing custom profile field types have the value as an
integer, as declared in many places - nor is it a string, as currently
declared in types.py. The correct type is Union[str, List[int]]. Rather
than tracking this in so many places throughout the codebase, we add a
new ProfileDataElementValue type and insert it where appropriate.
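The new alias is simply the union described above:

    from typing import List, Union

    # A custom profile field value is either a string (most field types)
    # or a list of integers.
    ProfileDataElementValue = Union[str, List[int]]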
This new setting both serves as a guard to allow us to merge API
support for web public streams to main before we're ready for this
feature to be available on Zulip Cloud, and also long term will
protect self-hosted servers from accidentally enabling web-public
streams (which could be a scary possibility for the administrators of
a corporate Zulip server).
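As a sketch, the guard is just a server-level boolean in settings; the
name WEB_PUBLIC_STREAMS_ENABLED and the check below are illustrative of
the idea, not the exact code:

    # zproject settings (the name is an assumption for illustration):
    WEB_PUBLIC_STREAMS_ENABLED = False

    # ...and the corresponding guard wherever a stream would be made
    # web-public (sketch):
    if not settings.WEB_PUBLIC_STREAMS_ENABLED:
        raise JsonableError(_("Web-public streams are not enabled."))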
Fixes #17456.
The main tricky part has to do with what values the attribute should
have. LDAP defines a Boolean as
Boolean = "TRUE" / "FALSE"
so ideally we'd always see exactly those values. However,
although the issue is now marked as resolved, the discussion in
https://pagure.io/freeipa/issue/1259 shows how this may not always be
respected - meaning it makes sense for us to be more liberal in
interpreting these values.
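In code, a liberal interpretation along these lines is what we want
(the helper name is illustrative, and how far to go beyond the RFC's
exact "TRUE" / "FALSE" spellings is an assumption here):

    def parse_ldap_boolean(value: str) -> bool:
        # The grammar is Boolean = "TRUE" / "FALSE", but be liberal about
        # casing and whitespace, since some servers have historically not
        # respected it exactly.
        normalized = value.strip().upper()
        if normalized == "TRUE":
            return True
        if normalized == "FALSE":
            return False
        raise ValueError(f"Invalid LDAP boolean value: {value!r}")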
This better matches the title of the page and more generally our
conventions around naming /help/ articles. We include a redirect
because this is referenced from Welcome Bot messages, and we
definitely don't want those links to break.
AuthnContextClassRef tells the IdP what forms of authentication the user
should use on the IdP's server for us to be okay with it. I don't think
there's a reason for us to enforce anything here and it should be up to
the IdP's configuration to handle authentication how it wants.
The default AuthnContextClassRef only allows PasswordProtectedTransport,
causing the IdP to e.g. reject authentication with Yubikey in AzureAD
SAML - which can be confusing for folks setting up SAML and is just not
necessary.
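In python3-saml's settings, this corresponds to not requesting any
specific AuthnContext at all, roughly as follows (a fragment of the
security options for illustration, not Zulip's full SAML configuration):

    # Part of the python3-saml "security" settings dict (sketch):
    SAML_SECURITY_SETTINGS = {
        # False means no AuthnContextClassRef is sent in the AuthnRequest,
        # leaving the choice of authentication method up to the IdP.
        "requestedAuthnContext": False,
    }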
The previous commit introduced logging of attempts for username+password
backends. For completeness, we should log, in the same format,
successful attempts via social auth backends.
These details are useful to log. This only makes sense for some auth
backends, namely email and ldap backends, because other backends are
"external" in the sense that they happen at some external provider's
server (Google, SAML IdP etc.) so the failure also happens there and we
don't get useful information about what happened.
SOCIAL_AUTH_SUBDOMAIN was potentially very confusing when opened by a
user, as it had various Login/Signup buttons as if there was a realm on
it. Instead, we want to display a more informative page to the user
telling them they shouldn't even be there. If possible, we just redirect
them to the realm they most likely came from.
To make this possible, we have to exclude the subdomain from
ROOT_SUBDOMAIN_ALIASES - so that we can give it special behavior.
This utilizes the generic `BaseNotes` we added for multipurpose
patching. With this migration as an example, we can further support
more types of notes to replace the monkey-patching approach we have used
throughout the codebase for type safety.
Until now, we've been forking django-auth-ldap at
https://github.com/zulip/django-auth-ldap to put the
LDAPReverseEmailSearch feature in it, hoping to get it merged upstream
in https://github.com/django-auth-ldap/django-auth-ldap/pull/150.
The efforts to get it merged have stalled for now, however, and we
don't want to be on the fork forever, so this commit puts the email
search feature as a clumsy workaround inside our codebase and switches
to using the latest upstream release instead of the fork.
This fixes an error found with django-stubs and is part of #18777.
Note that there are various remaining errors that need to be fixed
upstream or elsewhere in our codebase.
Closes #19287
This endpoint allows submitting multiple addresses so we need to "weigh"
the rate limit more heavily the more emails are submitted. Clearly e.g.
a request triggering emails to 2 addresses should weigh twice as much as
a request doing that for just 1 address.
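Conceptually, the endpoint now charges the rate limiter once per
submitted address rather than once per request; a sketch of the idea
(the function and domain names below are hypothetical, not the actual
rate-limiter API):

    # Charge the limiter proportionally to the number of invitees, so a
    # request with N addresses weighs N times as much as one with 1.
    for _address in invitee_emails:
        rate_limit_request_by_ip(request, domain="sends_email_by_ip")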