This makes it possible to send notifications to more than one app ID
from the same server: for example, the main Zulip mobile app and the
new Flutter-based app, which has a separate app ID for use through its
beta period so that it can be installed alongside the existing app.
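For illustration, a minimal sketch of per-app-ID fan-out; the Device
type and delivery helper below are hypothetical stand-ins, not the
actual bouncer code:

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Device:
        token: str
        app_id: str  # e.g. the main app's ID vs. the beta Flutter app's ID

    def send_apns_batch(app_id: str, tokens: list, payload: dict) -> None:
        # Stand-in for the real APNs delivery call.
        print(f"push to {len(tokens)} token(s) for {app_id}: {payload}")

    def send_push_notifications(devices: list, payload: dict) -> None:
        # Group registered tokens by app ID, so one server can deliver
        # to both the main mobile app and the beta Flutter app.
        by_app_id: dict = defaultdict(list)
        for device in devices:
            by_app_id[device.app_id].append(device.token)
        for app_id, tokens in by_app_id.items():
            send_apns_batch(app_id, tokens, payload)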
Originally, this was how the notification emails worked, but that was changed
in 797a7ef97b, with this old behavior
available as an option.
The footer and from address of emails sent when this setting is set to
True are confusing, especially when more people are involved in a
stream. Since we have changed the way we send emails, and the setting
is not widely used, it should be removed.
Fixes #26609.
Adds typing notification constants to the response given by
`POST /register`. Until now, these were hardcoded by clients
based on the documentation for implementing typing notifications
in the main endpoint description for `api/set-typing-status`.
This commit also updates the web-app frontend code to use the new
constants from the register response.
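For illustration, a client-side sketch; the server_typing_* field
names and the fallback values below are assumptions based on the
documented constants, not quoted from this commit:

    import requests

    def fetch_typing_constants(site: str, email: str, api_key: str) -> dict:
        # POST /register returns the constants alongside the rest of
        # the register-response state.
        data = requests.post(
            f"{site}/api/v1/register", auth=(email, api_key)
        ).json()
        return {
            "start_wait_ms": data.get(
                "server_typing_started_wait_period_milliseconds", 10000
            ),
            "stop_wait_ms": data.get(
                "server_typing_stopped_wait_period_milliseconds", 5000
            ),
            "expiry_ms": data.get(
                "server_typing_started_expiry_period_milliseconds", 15000
            ),
        }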
Co-authored-by: Samuel Kabuya <samuel.mwangikabuya@kibo.school>
Co-authored-by: Wilhelmina Asante <wilhelmina.asante@kibo.school>
Uploads are well-positioned to use S3's "intelligent tiering" storage
class. Add a setting to let uploaded files declare their desired
storage class at upload time, and document how to move existing files
to the same storage class.
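For example, with boto3; the S3_UPLOADS_STORAGE_CLASS setting name
below is an assumption for illustration:

    import boto3

    # Hypothetical setting value; see the documentation added by this
    # commit for moving existing files to the same class.
    S3_UPLOADS_STORAGE_CLASS = "INTELLIGENT_TIERING"

    s3 = boto3.client("s3")
    s3.upload_file(
        "avatar.png",
        "example-zulip-uploads",
        "avatars/avatar.png",
        ExtraArgs={"StorageClass": S3_UPLOADS_STORAGE_CLASS},
    )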
This in-progress feature was started in 2018 and hasn't
been worked on much since. It's already in a broken state,
which makes it hard to iterate on the existing search bar
since it's hard to know how those changes will affect search
pills.
We do still want to add search pills eventually, and when we work on
that, we can refer to this diff to re-add these changes.
This gives more flexibility on a server with multiple organizations and
SAML IdPs. Such a server can have some organizations handled by IdPs
with SLO set up, and some without it set up. In such a scenario, having
a generic True/False server-wide setting is insufficient and instead
being able to specify the IdPs/orgs for SLO is needed.
Closes #20084
This is the flow that this implements:
1. A logged-in user clicks "Logout".
2. If they didn't auth via SAML, just do normal logout. Otherwise:
3. Form a LogoutRequest and redirect the user to
https://idp.example.com/slo-endpoint?SAMLRequest=<LogoutRequest here>
4. The IdP validates the LogoutRequest, terminates its own user session
and redirects the user to
https://thezuliporg.example.com/complete/saml/?SAMLRequest=<LogoutResponse>
with the appropriate LogoutResponse. In case of failure, the
LogoutResponse is expected to express that.
5. Zulip validates the LogoutResponse and if the response is a success
response, it executes the regular Zulip logout and the full flow is
finished.
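A sketch of the per-IdP configuration this enables in
/etc/zulip/settings.py; the sp_initiated_logout_enabled flag name and
slo_url key are assumptions for illustration, not quoted from this
commit:

    SOCIAL_AUTH_SAML_ENABLED_IDPS = {
        "idp_with_slo": {
            "entity_id": "https://idp.example.com/saml/metadata",
            "url": "https://idp.example.com/sso-endpoint",
            "slo_url": "https://idp.example.com/slo-endpoint",
            # Opt this IdP into SP-initiated logout.
            "sp_initiated_logout_enabled": True,
        },
        "idp_without_slo": {
            "entity_id": "https://other-idp.example.com/saml/metadata",
            "url": "https://other-idp.example.com/sso-endpoint",
            # No SLO configured; "Logout" does a normal Zulip logout.
        },
    }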
This implements the core of the rewrite, moving the backend data model
for UserPresence to one that supports much more efficient queries and
is more correct around handling of multiple
clients. The main loss of functionality is that we no longer track
which Client sent presence data (so we will no longer be able to say
using UserPresence "the user was last online on their desktop 15
minutes ago, but was online with their phone 3 minutes ago"). If we
consider that information important for the occasional investigation
query, we can already construct that answer via UserActivity. It's not
worth making Presence much more expensive/complex to support it.
For slim_presence clients, this sends the same data format we sent
before, albeit with less complexity involved in constructing it. Note
that we at present will always send both last_active_time and
last_connected_time; we may revisit that in the future.
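An illustrative sketch of the per-user payload, using the field names
from the description above (the exact wire format may differ):

    presences = {
        "10": {  # keyed by user_id
            "last_active_time": 1680000000,
            "last_connected_time": 1680000050,
        },
    }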
This commit doesn't include the finalizing migration, which drops the
UserPresenceOld table.
The way to deploy is to start the backfill migration with the server
down and then start the server *without* the user_presence queue worker,
to let the migration finish without having new data interfering with it.
Once the migration is done, the queue worker can be started; the
presence data then catches up to the current state as the worker goes
over the queued-up events and updates the UserPresence table.
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
Zulip already has a server-side Sentry integration; however, it has
historically used the Zulip-specific `blueslip` library for monitoring
browser-side errors. The latter sends errors to email, as well as,
optionally, to an internal `#errors` stream.
While this is sufficient for low volumes of users, and useful in that
it does not rely on outside services, at higher volumes it is very
difficult to do any analysis or filtering of the errors. Client-side
errors are exceptionally noisy, with many false positives due to
browser extensions or similar, so picking the real errors out of a
flood of un-grouped emails or stream messages is quite difficult.
Add a client-side JavaScript Sentry integration. To provide useful
backtraces, this requires extending the pre-deploy hooks to upload the
source-maps to Sentry. Additional keys are added to the non-public
API of `page_params` to control the DSN, realm identifier, and sample
rates.
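A sketch of the server-side settings feeding those page_params keys;
the exact setting names here are assumptions:

    # /etc/zulip/settings.py (names illustrative)
    SENTRY_FRONTEND_DSN = "https://<public-key>@sentry.example.com/42"
    SENTRY_FRONTEND_SAMPLE_RATE = 1.0  # fraction of client errors reported
    SENTRY_FRONTEND_TRACE_RATE = 0.1  # fraction of sessions traced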
We're changing the ping interval from 50s to 60s, because that's what
the mobile apps have hardcoded currently, and backwards-compatibility
is more important there than the web app's previously hardcoded 50s.
For PRESENCE_PING_INTERVAL_SECS, the previous value hardcoded in both
clients was 140s, selected as "plenty of network/other latency more
than 2 x ACTIVE_PING_INTERVAL_MS". This is a pretty aggressive value;
even a single request being missed or 500ing can result in a user
appearing offline incorrectly. (There's a lag of up to one full ping
interval between when the other client checks in and when you check
in, and so we'll be at almost 2 ping intervals when you issue your
next request that might get an updated connection time from that
user).
To increase failure tolerance, we want to change the offline
threshold from 2 x ACTIVE_PING_INTERVAL + 20s to 3 x
ACTIVE_PING_INTERVAL + 20s, aka 140s => 200s, to be more robust to
temporary failures causing us to display other users as offline.
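As constants, the arithmetic is:

    ACTIVE_PING_INTERVAL_SECS = 60
    OFFLINE_THRESHOLD_SECS = 3 * ACTIVE_PING_INTERVAL_SECS + 20  # 200s, was 140s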
Since the mobile apps currently have 140s and 60s hardcoded, it should
be safe to make this particular change; the mobile apps will just
remain more aggressive than the web app in marking users offline until
they use the new API parameters.
The end result is that Zulip will be slightly less aggressive at
marking other users as offline if they go off the Internet. We will
likely be able to tune ACTIVE_PING_INTERVAL downwards once #16381 and
its follow-ups are completed, because it'll likely make these requests
much cheaper.
As of the previous commit, the server provides these parameters in
page_params. The defaults match the values previously hard-coded in the
web app, so we can get rid of the hard-coded values in favor of taking
them from page_params.
This old 300s value was meaningfully used in 2 places:
1. In the do_change_user_settings presence_enabled codepath when turning
   a user invisible. It doesn't matter there; 140s works just as well,
   since the point is to make clients see this user as offline. And
   140s is the threshold used by clients (see the presence.js constant).
2. For calculating whether to set "offline" "status" in
result["presence"]["aggregated"] in get_presence_backend. It's fine
   for this to become 140s, since clients shouldn't be looking at the
   status value anymore anyway, and should instead do their own
   calculation based on the timestamps.
This adds a new endpoint /jwt/fetch_api_key that accepts a JWT and can
be used to fetch API keys for a certain user. The target realm is
inferred from the request and the user email is part of the JWT.
A JSON object containing the user's API key, delivery email, and
(optionally) raw user profile data is returned in response.
The profile data in the response is optional and can be retrieved by
setting the POST param "include_profile" to "true" (default=false).
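For illustration, a minimal client sketch; the shared secret, the
"token" POST parameter name, and the PyJWT usage are assumptions here:

    import jwt  # PyJWT
    import requests

    # The email claim identifies the user; the realm is inferred from
    # the request's subdomain.
    token = jwt.encode(
        {"email": "iago@example.com"}, "shared-secret", algorithm="HS256"
    )
    resp = requests.post(
        "https://zulip.example.com/jwt/fetch_api_key",
        data={"token": token, "include_profile": "true"},
    )
    print(resp.json())  # API key, delivery email, optional profile data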
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
The `django-sendfile2` module unfortunately only supports a single
`SENDFILE` root path -- an invariant which subsequent commits need to
break. Since Zulip only runs with a single webserver, and thus a
single sendfile backend, the functionality is simple to inline.
It is worth noting that the following headers from the initial Django
response are _preserved_, if present, and sent unmodified to the
client; all other headers are overridden by those supplied by the
internal redirect[^1]:
- Content-Type
- Content-Disposition
- Accept-Ranges
- Set-Cookie
- Cache-Control
- Expires
As such, we explicitly unset the Content-Type header to allow nginx to
set it from the static file, but set Content-Disposition and
Cache-Control as we want them to be.
[^1]: https://www.nginx.com/resources/wiki/start/topics/examples/xsendfile/
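The inlined pattern looks roughly like this in a Django view (names
illustrative):

    from django.http import HttpResponse

    def serve_upload(request, path: str) -> HttpResponse:
        response = HttpResponse()
        # Hand the actual file transfer off to nginx via an internal
        # redirect.
        response["X-Accel-Redirect"] = "/internal/uploads/" + path
        # Unset Content-Type so nginx derives it from the file; set the
        # other preserved headers to the values we want.
        del response["Content-Type"]
        response["Content-Disposition"] = 'attachment; filename="file.txt"'
        response["Cache-Control"] = "private, immutable"
        return response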
This breaks an import cycle that prevented django-stubs from inferring
types for django.conf.settings.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit brings AzureAD config in line with other backends:
- SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET gets fetched in computed_settings.py
instead of default_settings, consistent with github/gitlab/etc.
- SOCIAL_AUTH_AZUREAD_OAUTH2_KEY gets fetched in default_settings via
get_secret(..., development_only=True) like other social backends, to
allow easier set up in dev environment, in the dev-secrets.conf file.
- The secret gets renamed from azure_oauth2_secret to
social_auth_azuread_oauth2_secret to have a consistent naming scheme with
other social backends and with the SOCIAL_AUTH_AZUREAD_OAUTH2_KEY
name. This is backwards-incompatible.
The instructions for setting it up are updated to fit how this is
currently done in AzureAD.
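Sketch of the resulting settings code; the secret name for the key is
an assumption following the naming scheme above:

    from zproject.config import get_secret

    # zproject/default_settings.py:
    SOCIAL_AUTH_AZUREAD_OAUTH2_KEY = get_secret(
        "social_auth_azuread_oauth2_key", development_only=True
    )

    # zproject/computed_settings.py:
    SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET = get_secret(
        "social_auth_azuread_oauth2_secret"
    )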
The presence of `auto_signup` in idp_settings_dict in the test case
test_social_auth_registration_auto_signup is incompatible with the
previous type annotation of SOCIAL_AUTH_OIDC_ENABLED_IDPS, where `bool`
is not allowed.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Now that we can assume Python 3.6+, we can use the
email.headerregistry module to replace hacky manual email address
parsing.
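For example:

    from email.headerregistry import Address

    address = Address(addr_spec="king.hamlet@example.com")
    print(address.username)  # "king.hamlet"
    print(address.domain)    # "example.com"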
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We fixed the handling of `JITSI_SERVER_URL` being `None` in
2f9d4f5a96, but the type annotation didn't get updated along with the
fix.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
django-stubs dynamically collects the type annotation for us from the
settings, informing mypy that `HOME_NOT_LOGGED_IN` is an
`Optional[str]`. Type narrowing with assertions does not play well with
the default value of the decorator, so we define the same setting
variable with a different name as `CUSTOM_HOME_NOT_LOGGED_IN` to bypass
this restriction.
Filed python/mypy#13087 to track this issue.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
When the credentials are provided by dint of being run on an EC2
instance with an assigned Role, we must be able to fetch the instance
metadata from IMDS -- which is precisely the type of internal-IP
request that Smokescreen denies.
While botocore supports a `proxies` argument to the `Config` object,
this is not actually respected when making the IMDS queries; only the
environment variables are read from. See
https://github.com/boto/botocore/issues/2644
As such, implement S3_SKIP_PROXY by monkey-patching the
`botocore.utils.should_bypass_proxies` function, to allow requests to
IMDS to be made without Smokescreen impeding them.
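A minimal sketch of the patch; the wrapped function's exact signature
is an assumption:

    import botocore.utils
    from django.conf import settings

    if settings.S3_SKIP_PROXY:
        # Claim every URL bypasses the proxy, so IMDS credential
        # fetches are not routed through Smokescreen.
        botocore.utils.should_bypass_proxies = lambda *args, **kwargs: True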
Fixes #20715.
This will make it convenient to add a handful of organizations to the
beta of this feature during its first few weeks to try to catch bugs,
before we open it to everyone in Zulip Cloud.
This replaces the TERMS_OF_SERVICE and PRIVACY_POLICY settings with
just a POLICIES_DIRECTORY setting, in order to support settings (like
Zulip Cloud) where there's more policies than just those two.
With minor changes by Eeshan Garg.
We do s/TOS/TERMS_OF_SERVICE/ on the name, and while we're at it,
remove the assumed zerver/ namespace for the template, which isn't
correct -- Zulip Cloud related content should be in the corporate/
directory.
We had an incident where someone didn't notice for a week that they'd
accidentally enabled a 30-day message retention policy, and thus we
were unable to restore the deleted content.
After some review of what other products do (e.g., Dropbox preserves
things in a restorable state for 30 days), we're adjusting this
setting's default value to be substantially longer, to give more time
for users to notice their mistake and correct it before data is
irrevocably deleted.