Uploads are well-positioned to use S3's "intelligent tiering" storage
class. Add a setting to let uploaded files declare their desired
storage class at upload time, and document how to move existing files
to the same storage class.
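As a rough sketch of what both operations could look like with boto3
(the bucket and key names here are placeholders, not Zulip's actual
code):

    # Sketch: upload a new file with a storage class, and retroactively
    # move an existing object via a self-copy.
    import boto3

    s3 = boto3.client("s3")

    # New uploads can declare a storage class at upload time:
    s3.put_object(
        Bucket="zulip-uploads",  # placeholder
        Key="path/to/file",
        Body=b"file contents",
        StorageClass="INTELLIGENT_TIERING",
    )

    # Existing files can be moved by copying the object over itself:
    s3.copy_object(
        Bucket="zulip-uploads",
        Key="path/to/file",
        CopySource={"Bucket": "zulip-uploads", "Key": "path/to/file"},
        StorageClass="INTELLIGENT_TIERING",
        MetadataDirective="COPY",
    )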
This in-progress feature was started in 2018 and hasn't
been worked on much since. It's already in a broken state, which
makes it hard to iterate on the existing search bar, since it's
unclear how those changes will affect search pills.
We do still want to add search pills eventually, and when
we work on that, we can refer to this diff to restore these changes.
This gives more flexibility on a server with multiple organizations and
SAML IdPs. Such a server can have some organizations handled by IdPs
with SLO set up, and some without it. In such a scenario, a generic
server-wide True/False setting is insufficient; instead, we need to be
able to specify which IdPs/orgs should use SLO.
Closes #20084
This is the flow that this implements (a code sketch follows the list):
1. A logged-in user clicks "Logout".
2. If they didn't auth via SAML, just do normal logout. Otherwise:
3. Form a LogoutRequest and redirect the user to
https://idp.example.com/slo-endpoint?SAMLRequest=<LogoutRequest here>
4. The IdP validates the LogoutRequest, terminates its own user session
and redirects the user to
https://thezuliporg.example.com/complete/saml/?SAMLRequest=<LogoutResponse>
with the appropriate LogoutResponse. In case of failure, the
LogoutResponse is expected to express that.
5. Zulip validates the LogoutResponse and if the response is a success
response, it executes the regular Zulip logout and the full flow is
finished.
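A rough sketch of steps 2-5 using python3-saml, which underlies
Zulip's SAML support; the Django view wiring and helper names here are
illustrative assumptions, not the actual implementation:

    # Hypothetical sketch of the SLO flow with python3-saml.
    from django.contrib.auth import logout
    from django.shortcuts import redirect

    from onelogin.saml2.auth import OneLogin_Saml2_Auth

    def logout_view(request):
        if not authenticated_via_saml(request):  # hypothetical helper
            logout(request)  # step 2: normal logout
            return redirect("/")
        auth = OneLogin_Saml2_Auth(prepare_saml_request(request), SAML_SETTINGS)
        # Step 3: redirect to the IdP's SLO endpoint with a LogoutRequest.
        return redirect(auth.logout(name_id=request.user.delivery_email))

    def complete_saml_view(request):
        auth = OneLogin_Saml2_Auth(prepare_saml_request(request), SAML_SETTINGS)
        # Step 5: validate the LogoutResponse; on success, log out locally.
        auth.process_slo(delete_session_cb=lambda: logout(request))
        if auth.get_errors():
            return saml_error_page(request)  # hypothetical helper
        return redirect("/")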
This implements the core of the rewrite described in:
It moves the backend data model for UserPresence to one that supports
much more efficient queries and is more correct around handling of
multiple
clients. The main loss of functionality is that we no longer track
which Client sent presence data (so we will no longer be able to say
using UserPresence "the user was last online on their desktop 15
minutes ago, but was online with their phone 3 minutes ago"). If we
consider that information important for the occasional investigation
query, we can construct that answer via UserActivity
already. It's not worth making Presence much more expensive/complex
to support it.
For slim_presence clients, this sends the same data format we sent
before, albeit with less complexity involved in constructing it. Note
that, at present, we will always send both last_active_time and
last_connected_time; we may revisit that in the future.
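For illustration, the per-user data for a slim_presence client might
look like the following (the exact wire format is an assumption here,
beyond the two field names):

    # Illustrative presence payload, keyed by user ID (not the exact format):
    presence_data = {
        "10": {
            "last_active_time": 1680000000,
            "last_connected_time": 1680000123,
        },
    }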
This commit doesn't include the finalizing migration, which drops the
UserPresenceOld table.
The way to deploy is to start the backfill migration with the server
down and then start the server *without* the user_presence queue worker,
to let the migration finish without having new data interfering with it.
Once the migration is done, the queue worker can be started, leading to
the presence data catching up to the current state as the queue worker
goes over the queued-up events and updates the UserPresence table.
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
Zulip already has server-side Sentry integration; however, it has
historically used the Zulip-specific `blueslip` library for monitoring
browser-side errors. The latter sends errors to email, as well as,
optionally, to an internal `#errors` stream. While this is sufficient
for low volumes of users, and useful in that it does not rely on
outside services, at higher volumes it is very difficult to do any
analysis or filtering of the errors. Client-side errors are
exceptionally noisy, with many false positives due to browser
extensions or similar, so picking out real errors from a flood of
un-grouped emails or stream messages is quite difficult.
Add a client-side JavaScript Sentry integration. To provide useful
backtraces, this requires extending the pre-deploy hooks to upload the
source-maps to Sentry. Additional keys are added to the non-public
API of `page_params` to control the DSN, realm identifier, and sample
rates.
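A minimal sketch of the server side of that wiring (the key and
setting names here are hypothetical, since these keys are part of the
non-public API):

    # Hypothetical sketch: exposing Sentry client config via page_params.
    from django.conf import settings

    def build_sentry_params(realm, page_params):
        page_params["server_sentry_dsn"] = settings.SENTRY_FRONTEND_DSN
        page_params["realm_sentry_key"] = realm.string_id  # realm identifier
        page_params["server_sentry_sample_rate"] = settings.SENTRY_FRONTEND_SAMPLE_RATE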
We're changing the ping interval from 50s to 60s, because that's what
the mobile apps have hardcoded currently, and backwards-compatibility
is more important there than the web app's previously hardcoded 50s.
For PRESENCE_PING_INTERVAL_SECS, the previous value hardcoded in both
clients was 140s, selected as "plenty of network/other latency more
than 2 x ACTIVE_PING_INTERVAL_MS". This is a pretty aggressive value;
even a single request being missed or 500ing can result in a user
appearing offline incorrectly. (There's a lag of up to one full ping
interval between when the other client checks in and when you check
in, and so we'll be at almost 2 ping intervals when you issue your
next request that might get an updated connection time from that
user).
To increase failure tolerance, we want to change the offline
threshold from 2 x ACTIVE_PING_INTERVAL + 20s to 3 x
ACTIVE_PING_INTERVAL + 20s, aka 140s => 200s, to be more robust to
temporary failures causing us to display other users as offline.
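In code form, the arithmetic above works out as follows (the constant
names are illustrative):

    # Worked arithmetic for the thresholds discussed above.
    ACTIVE_PING_INTERVAL_SECS = 60  # raised from the web app's previous 50s
    OLD_OFFLINE_THRESHOLD_SECS = 2 * ACTIVE_PING_INTERVAL_SECS + 20  # 140s
    NEW_OFFLINE_THRESHOLD_SECS = 3 * ACTIVE_PING_INTERVAL_SECS + 20  # 200s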
Since the mobile apps currently have 140s and 60s hardcoded, it should
be safe to make this particular change; the mobile apps will just
remain more aggressive than the web app in marking users offline until
they use the new API parameters.
The end result is that Zulip will be slightly less aggressive at
marking other users as offline if they go off the Internet. We will
likely be able to tune ACTIVE_PING_INTERVAL downwards once #16381 and
its follow-ups are completed, because it'll likely make these requests
much cheaper.
As of the previous commit, the server provides these parameters in
page_params. The defaults match the values previously hard-coded in
the web app, so we get rid of the hard-coded values in favor of taking
them from
page_params.
This old 300s value was meaningfully used in 2 places:
1. In the do_change_user_settings presence_enabled codepath when turning
a user invisible. The exact value doesn't matter there, since the
point is just to make clients see this user as offline, and 140s is
the threshold used by clients (see the presence.js constant).
2. For calculating whether to set the "offline" status in
result["presence"]["aggregated"] in get_presence_backend. It's fine
for this to become 140s, since clients shouldn't be looking at the
status value anymore anyway, and should just do their own calculation
based on the timestamps.
This adds a new endpoint /jwt/fetch_api_key that accepts a JWT and can
be used to fetch API keys for a certain user. The target realm is
inferred from the request and the user email is part of the JWT.
A JSON containing the user's API key, delivery email and (optionally)
raw user profile data is returned in the response.
The profile data in the response is optional and can be retrieved by
setting the POST param "include_profile" to "true" (default=false).
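A hedged sketch of calling the endpoint with PyJWT and requests (the
POST field name for the token, the JWT claim layout, and the response
keys are assumptions here):

    # Hypothetical client for /jwt/fetch_api_key; field names are assumptions.
    import jwt  # PyJWT
    import requests

    SHARED_KEY = "key shared with the server"  # placeholder
    token = jwt.encode({"email": "hamlet@example.com"}, SHARED_KEY, algorithm="HS256")
    response = requests.post(
        "https://zulip.example.com/jwt/fetch_api_key",
        data={"token": token, "include_profile": "true"},
    )
    print(response.json()["api_key"])  # assumed response key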
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
The `django-sendfile2` module unfortunately only supports a single
`SENDFILE` root path -- an invariant which subsequent commits need to
break. Since Zulip only runs with a single webserver, and thus a
single sendfile backend, the functionality is simple to inline.
It is worth noting that the following headers from the initial Django
response are _preserved_, if present, and sent unmodified to the
client; all other headers are overridden by those supplied by the
internal redirect[^1]:
- Content-Type
- Content-Disposition
- Accept-Ranges
- Set-Cookie
- Cache-Control
- Expires
As such, we explicitly unset the Content-Type header to allow nginx to
set it from the static file, but set Content-Disposition and
Cache-Control as we want them to be.
[^1]: https://www.nginx.com/resources/wiki/start/topics/examples/xsendfile/
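A minimal sketch of the inlined behavior (the internal location path
and helper name are illustrative):

    # Hypothetical sketch of an internal-redirect response for nginx.
    from django.http import HttpResponse

    def internal_nginx_redirect(path: str, filename: str) -> HttpResponse:
        response = HttpResponse()
        # Hand the actual file transfer off to nginx's internal location.
        response["X-Accel-Redirect"] = "/internal/" + path  # illustrative path
        # Unset Content-Type so nginx sets it from the static file itself.
        del response["Content-Type"]
        response["Content-Disposition"] = f'attachment; filename="{filename}"'
        response["Cache-Control"] = "private"
        return response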
This breaks an import cycle that prevented django-stubs from inferring
types for django.conf.settings.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit brings AzureAD config in line with other backends:
- SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET gets fetched in computed_settings.py
instead of default_settings, consistent with github/gitlab/etc.
- SOCIAL_AUTH_AZUREAD_OAUTH2_KEY gets fetched in default_settings via
get_secret(..., development_only=True) like other social backends, to
allow easier setup in the dev environment, via the dev-secrets.conf file.
- The secret gets renamed from azure_oauth2_secret to
social_auth_azuread_oauth2_secret to have a consistent naming scheme with
other social backends and with the SOCIAL_AUTH_AZUREAD_OAUTH2_KEY
name. This is backwards-incompatible.
The instructions for setting it up are updated to fit how this is
currently done in AzureAD.
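Roughly, the resulting configuration shape (a sketch based on the
description above, not the exact code; get_secret is Zulip's settings
helper):

    # Sketch: in default_settings, dev-friendly like other social backends.
    SOCIAL_AUTH_AZUREAD_OAUTH2_KEY = get_secret(
        "social_auth_azuread_oauth2_key", development_only=True
    )

    # Sketch: in computed_settings.py, consistent with github/gitlab/etc.
    SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET = get_secret("social_auth_azuread_oauth2_secret")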
The presence of `auto_signup` in idp_settings_dict in the test case
test_social_auth_registration_auto_signup is incompatible with the
previous type annotation of SOCIAL_AUTH_OIDC_ENABLED_IDPS, where `bool`
is not allowed.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Now that we can assume Python 3.6+, we can use the
email.headerregistry module to replace hacky manual email address
parsing.
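For example, structured address parsing with that module looks like:

    # Parsing an address with the stdlib email.headerregistry module.
    from email.headerregistry import Address

    address = Address(addr_spec="hamlet@zulip.example.com")
    assert address.username == "hamlet"
    assert address.domain == "zulip.example.com"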
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We fixed the handling of `JITSI_SERVER_URL` being `None`, but the
type annotation didn't get updated along with that fix:
2f9d4f5a96
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
django-stubs dynamically collects the type annotation for us from the
settings, informing mypy that `HOME_NOT_LOGGED_IN` is an
`Optional[str]`. Type narrowing with assertions does not play well with
the default value of the decorator, so we define the same setting
variable under a different name, `CUSTOM_HOME_NOT_LOGGED_IN`, to bypass
this restriction.
Filed python/mypy#13087 to track this issue.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
When the credentials are provided by dint of being run on an EC2
instance with an assigned Role, we must be able to fetch the instance
metadata from IMDS -- which is precisely the type of internal-IP
request that Smokescreen denies.
While botocore supports a `proxies` argument to the `Config` object,
this is not actually respected when making the IMDS queries; only the
environment variables are read from. See
https://github.com/boto/botocore/issues/2644
As such, implement S3_SKIP_PROXY by monkey-patching the
`botocore.utils.should_bypass_proxies` function, to allow requests to
IMDS to be made without Smokescreen impeding them.
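Roughly, the patch amounts to the following (the exact wiring of the
S3_SKIP_PROXY setting is a sketch):

    # Sketch: make botocore treat every URL as proxy-exempt, so credential
    # fetches to IMDS (169.254.169.254) are not routed through Smokescreen.
    import botocore.utils
    from django.conf import settings

    if settings.S3_SKIP_PROXY:
        botocore.utils.should_bypass_proxies = lambda url: True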
Fixes #20715.
This will make it convenient to add a handful of organizations to the
beta of this feature during its first few weeks to try to catch bugs,
before we open it to everyone in Zulip Cloud.
This replaces the TERMS_OF_SERVICE and PRIVACY_POLICY settings with
just a POLICIES_DIRECTORY setting, in order to support installations
(like Zulip Cloud) that have more policies than just those two.
With minor changes by Eeshan Garg.
We do s/TOS/TERMS_OF_SERVICE/ on the name, and while we're at it,
remove the assumed zerver/ namespace for the template, which isn't
correct -- Zulip Cloud related content should be in the corporate/
directory.
We had an incident where someone didn't notice for a week that they'd
accidentally enabled a 30-day message retention policy, and thus we
were unable to restore the deleted content.
After some review of what other products do (e.g. Dropbox preserves
things in a restorable state for 30 days), we're adjusting this
setting's default value to be substantially longer, to give more time
for users to notice their mistake and correct it before data is
irrevocably deleted.
TOR users are legitimate users of the system; however, TOR can also
be used for abuse -- specifically, by evading IP-based
rate-limiting.
For the purposes of IP-based rate-limiting, add a
RATE_LIMIT_TOR_TOGETHER flag, defaulting to false, which lumps all
requests from TOR exit nodes into the same bucket. This may allow a
TOR user to deny other TOR users access to the find-my-account and
new-realm endpoints, but this is a low cost for cutting off a
significant potential abuse vector.
If enabled, the list of TOR exit nodes is fetched from their public
endpoint once per hour, via a cron job, and cached on disk. Django
processes load this data from disk, and cache it in memcached.
Requests are spared the burden of checking the disk on failure via a
circuit breaker, which trips if there are two failures in a row, and
only begins trying again after 10 minutes.
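A sketch of the guarded disk read described above, using the
circuitbreaker library (the file path and function name are
illustrative):

    # Illustrative: disk read of the cron-refreshed exit-node list, guarded
    # by a circuit breaker that trips after 2 consecutive failures and
    # retries after 10 minutes.
    import json

    from circuitbreaker import circuit

    @circuit(failure_threshold=2, recovery_timeout=60 * 10)
    def get_tor_ips() -> set:
        with open("/var/lib/zulip/tor-exit-nodes.json") as f:  # illustrative path
            return set(json.load(f))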
Previously, our codebase contained links to various versions of the
Django docs, e.g.
https://docs.djangoproject.com/en/1.8/ref/request-response/#django.http.HttpRequest
and
https://docs.djangoproject.com/en/2.2/ref/settings/#std:setting-SERVER_EMAIL
Opening a link to a doc with an outdated Django version would show a
warning: "This document is for an insecure version of Django that is
no longer supported. Please upgrade to a newer release!"
Most of these links are inside comments.
Following the replacement of these links in our docs, this commit uses
a search with the regex "docs.djangoproject.com/en/([0-9].[0-9]*)/"
and replaces all matches with "docs.djangoproject.com/en/3.2/".
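Mechanically, that replacement is equivalent to something like the
following (with the dots in the quoted pattern escaped):

    # Sketch of the mechanical replacement over one file's contents.
    import re
    from pathlib import Path

    path = Path("zerver/views/auth.py")  # example file
    text = re.sub(
        r"docs\.djangoproject\.com/en/[0-9]\.[0-9]*/",
        "docs.djangoproject.com/en/3.2/",
        path.read_text(),
    )
    path.write_text(text)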
All the new links in this commit have been generated by the above
replace and each link has then been manually checked to ensure that
(1) the page still exists and has not been moved to a new location
(and it has been found that no page has been moved like this), (2)
that the anchor that we're linking to has not been changed (and it has
been found that no anchor has been changed like this).
One comment that mentioned a Django version in text before linking to
a page for that version has also been changed; the comment mentioned
the specific version in which a change happened, and that history is
no longer relevant to us.
This new setting both serves as a guard to allow us to merge API
support for web-public streams to main before we're ready for this
feature to be available on Zulip Cloud, and also long term will
protect self-hosted servers from accidentally enabling web-public
streams (which could be a scary possibility for the administrators of
a corporate Zulip server).
SOCIAL_AUTH_SUBDOMAIN was potentially very confusing when opened by a
user, as it had various Login/Signup buttons as if there were a realm
on it. Instead, we want to display a more informative page to the user
telling them they shouldn't even be there. If possible, we just redirect
them to the realm they most likely came from.
To make this possible, we have to exclude the subdomain from
ROOT_SUBDOMAIN_ALIASES - so that we can give it special behavior.
This fixes an error found with django-stubs; it is part of #18777.
Note that there are various remaining errors that need to be fixed in
upstream or elsewhere in our codebase.