Replace the separate subprocess call, which started `node` from
scratch, with an optional standalone Node.js Express service that performs the
rendering. In benchmarking, this reduces the overhead of a KaTeX call
from 120ms to 2.8ms. This is notable because enough calls to KaTeX in
a single message would previously time out the whole message
rendering.
The service is optional because the majority of deployments do not use
enough LaTeX to merit the additional memory usage (60MB).
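A sketch of the difference, with hypothetical names throughout (the
CLI filename, service URL, and request shape are illustrative, not
Zulip's actual interface):

```python
import subprocess

import requests  # assumes the requests library is available

# Hypothetical service address; the real port/path may differ.
KATEX_SERVER_URL = "http://localhost:9700/render"


def render_via_subprocess(tex: str) -> str:
    # Old approach: spawn a fresh `node` process per call, paying
    # ~120ms of interpreter startup every time.
    return subprocess.check_output(["node", "katex-cli.js"], input=tex, text=True)


def render_via_service(tex: str) -> str:
    # New approach: POST to the long-lived Express service, paying only
    # a local HTTP round trip (~2.8ms).
    response = requests.post(KATEX_SERVER_URL, data={"content": tex})
    response.raise_for_status()
    return response.text
```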
Fixes: #17425.
- More consistent export/import vs backup bullets at the top.
- Remove misleading documentation regarding the `zulip_org_id` reuse
problems. This documentation was written for Zulip 2.1.0 in
c6fe6cf0a4 and largely made obsolete
in d800ac33a0 (Zulip 5.0).
- Light editing for readability/crispness.
Fixes #28925.
- Makes "Deployment options" easier to navigate by splitting the
"Reverse proxies" and "System configuration" sections out into
dedicated pages.
Fixes #28928.
That specific piece of the instructions makes it sound like /auth/ is
always supposed to be part of the URL. But newer (Quarkus-based)
versions of Keycloak don't have it -- e.g.
https://keycloak.example.com/auth/realms/myrealm on older versions vs.
https://keycloak.example.com/realms/myrealm on newer ones -- so
mention that explicitly, to avoid creating a wrong expectation.
The "nothing else" line is accurate at a high level but more ambigious
than I'd like for sensitive documentation -- we're not trying to make
an extreme claim that we've disabled all forms of short-term logging.
These metadata are essentially all publicly available anyway, and
making uploading them unconditional will simplify some things.
The documentation is not quite accurate in that it claims the server
will upload some metadata that is not actually uploaded yet (but will
be soon). This seems harmless.
The other option would be to run the cron job ourselves, but I feel
like different organizations with different policies might prefer very
different frequencies (daily, hourly, etc.), and it's not easy to make
that configurable with a cron file declared in Puppet.
Fixes #27866.
The original behavior of this setting was to disable LDAP
authentication for any realms not configured to use it. This was an
arbitrary choice, and its only value was to potentially help catch
typos for users who are lazy about testing their configuration.
Since it makes it very inconvenient to host multiple organizations
with different LDAP configurations, remove that behavior.
This makes it possible to send notifications to more than one app ID
from the same server: for example, the main Zulip mobile app and the
new Flutter-based app, which has a separate app ID for use through its
beta period so that it can be installed alongside the existing app.
This fixes the explanation of the setting's syntax to be more precise
(which doesn't mean "easily understandable", because the setting is a
bit tricky) and adds an example to illustrate it.
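Purely as an illustration of the shape such a setting might take (the
name and keys below are assumptions, not the documented syntax):

```python
# Hypothetical sketch: one push bouncer serving multiple APNs app IDs.
# The setting name and value shape are illustrative only.
APNS_CREDENTIALS_BY_APP_ID = {
    "org.zulip.Zulip": "/etc/zulip/apns-main.pem",
    "com.zulip.flutter": "/etc/zulip/apns-flutter-beta.pem",
}
```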
nginx sets the value of the `$http_host` variable to the empty string
when using HTTP/3, as there is technically no `Host:` header sent:
https://github.com/nginx-quic/nginx-quic/issues/3
Users with a browser that supports HTTP/3 will send their first request
to nginx over HTTP/2, and get the expected HTTP 200 -- but any
subsequent requests will fail with an HTTP 400, since the browser will
have upgraded to HTTP/3, which sends an empty `Host` header, which
Zulip rejects.
Switch to the `$host` variable, which works for all HTTP versions.
Co-authored-by: Alex Vandiver <alexmv@zulip.com>
Restore the default django.utils.log.AdminEmailHandler when
ERROR_REPORTING is enabled. Those with more sophisticated needs can
turn it off and use Sentry or a Sentry-compatible system.
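For reference, Django's stock handler is wired up roughly like this (a
minimal sketch; Zulip's actual LOGGING configuration is more involved):

```python
LOGGING = {
    "version": 1,
    "handlers": {
        # Emails ERROR-and-above log records to the addresses in ADMINS.
        "mail_admins": {
            "level": "ERROR",
            "class": "django.utils.log.AdminEmailHandler",
        },
    },
    "loggers": {
        "django": {"handlers": ["mail_admins"], "level": "ERROR"},
    },
}
```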
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Uploads are well-positioned to use S3's "intelligent tiering" storage
class. Add a setting to let uploaded files declare their desired
storage class at upload time, and document how to move existing files
to the same storage class.
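In settings terms this amounts to a single knob; a sketch, assuming
the setting is named `S3_UPLOADS_STORAGE_CLASS` (check the
documentation this commit adds for the exact name):

```python
# In /etc/zulip/settings.py; "INTELLIGENT_TIERING" is the S3 storage
# class identifier.
S3_UPLOADS_STORAGE_CLASS = "INTELLIGENT_TIERING"
```

Existing objects can be moved with the AWS CLI's `--storage-class`
flag to `aws s3 cp`, copying each object over itself.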
Users can, quite understandably, assume that upgrading Zulip also
upgrades the underlying PostgreSQL version. Though this is mentioned at
the top of the page, repeating it here clarifies that it is an
additional step.
- Updates instances of "private message", "PM", and "private_message",
excluding historical references in `overview/changelog.md`.
- Also excludes `/docs/translating` since we would need new
translations for "direct messages" and "DMs".
Previously, `X-Forwarded-Proto` did not need to be set, and failure to
set `loadbalancer.ips` would merely result in bad IP-address
rate-limiting and incorrect access logs; after 0935d388f0, however,
failure to do either of those, if Zulip is deployed with `http_only`,
will lead to infinite redirect loops after login. These are
accompanied by a misleading error, from Tornado, of:
Forbidden (Origin checking failed - https://zulip.example.com does not match any trusted origins.): /json/events
This is most common with Docker deployments, where deployments use
another docker container, such as nginx or Traefik, to do SSL
termination. See zulip/docker-zulip#403.
Update the documentation to reinforce that `loadbalancer.ips` also
controls trust of `X-Forwarded-Proto`, and that failure to set it will
cause the application to not function correctly.
04cf68b45e made nginx responsible for downloading (and caching)
files from S3. As noted in that commit, nginx implements its own
non-blocking DNS resolver, since the standard resolver call is
blocking, and thus requires an explicit nameserver configuration. That
commit used
127.0.0.53, which is provided by systemd-resolved, as the resolver.
However, that service may not always be enabled and running, and may
in fact not even be installed (e.g. on Docker). Switch to parsing
`/etc/resolv.conf` and using the first-provided nameserver. In many
deployments, this will still be `127.0.0.53`, but for others it will
provide a working DNS server which is external to the host.
In the event that a server is misconfigured and has no resolvers in
`/etc/resolv.conf`, it will error out:
```console
Error: Evaluation Error: Error while evaluating a Function Call, No nameservers found in /etc/resolv.conf! Configure one by setting application_server.nameserver in /etc/zulip/zulip.conf (file: /home/zulip/deployments/current/puppet/zulip/manifests/app_frontend_base.pp, line: 76, column: 70) on node example.zulipdev.org
```
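The actual parsing lives in a Puppet function; in Python terms, the
logic is roughly:

```python
def first_nameserver(path: str = "/etc/resolv.conf") -> str:
    # Return the first `nameserver` entry, mirroring the Puppet
    # function's behavior described above.
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                return fields[1]
    raise RuntimeError(
        f"No nameservers found in {path}! Configure one by setting "
        "application_server.nameserver in /etc/zulip/zulip.conf"
    )
```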
This is a useful improvement in general for making correct
LogoutRequests to IdPs, and a necessary one to make SP-initiated logout
work properly in the desktop application. During the desktop auth
flow, the user goes through the browser, where they log in through
their IdP. This gives them a logged-in browser session at the IdP.
However, SAML SP-initiated logout is conducted entirely within the
desktop application. This means that proper information needs to be
given to the IdP in the LogoutRequest to let it associate the
LogoutRequest with the logged-in session that was established in the
browser. SessionIndex
is exactly the tool for that in the SAML spec.
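Zulip's SAML integration is built on python3-saml, whose logout API
takes the session index directly; roughly (a sketch, with request
plumbing omitted):

```python
from onelogin.saml2.auth import OneLogin_Saml2_Auth


def begin_sp_initiated_logout(request_data, saml_settings, name_id, session_index):
    # Including session_index in the LogoutRequest lets the IdP match
    # it to the browser session established during login.
    auth = OneLogin_Saml2_Auth(request_data, saml_settings)
    # Returns the IdP SLO URL (with the signed SAMLRequest) to redirect to.
    return auth.logout(name_id=name_id, session_index=session_index)
```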
This gives more flexibility on a server with multiple organizations and
SAML IdPs. Such a server can have some organizations handled by IdPs
with SLO set up, and some without it set up. In such a scenario, having
a generic True/False server-wide setting is insufficient and instead
being able to specify the IdPs/orgs for SLO is needed.
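A sketch of what per-IdP opt-in looks like in the settings (the
`sp_initiated_logout_enabled` key name is an assumption for
illustration; the IdP dicts otherwise follow the existing
`SOCIAL_AUTH_SAML_ENABLED_IDPS` shape):

```python
SOCIAL_AUTH_SAML_ENABLED_IDPS = {
    "idp_with_slo": {
        "entity_id": "https://idp-a.example.com",
        # ... the usual IdP options ...
        "sp_initiated_logout_enabled": True,  # assumed key name
    },
    "idp_without_slo": {
        "entity_id": "https://idp-b.example.com",
        # SLO not configured; users get the normal logout flow.
    },
}
```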
Closes #20084
This is the flow that this implements:
1. A logged-in user clicks "Logout".
2. If they didn't auth via SAML, just do normal logout. Otherwise:
3. Form a LogoutRequest and redirect the user to
https://idp.example.com/slo-endpoint?SAMLRequest=<LogoutRequest here>
4. The IdP validates the LogoutRequest, terminates its own user session
and redirects the user to
https://thezuliporg.example.com/complete/saml/?SAMLRequest=<LogoutResponse>
with the appropriate LogoutResponse. In case of failure, the
LogoutResponse is expected to express that.
5. Zulip validates the LogoutResponse and if the response is a success
response, it executes the regular Zulip logout and the full flow is
finished.
Django 4.0 and higher began checking the `Origin` header, which made
it important that Zulip know accurately if the request came over HTTPS
or HTTP; failure to do so would result in "CSRF verification failed"
errors.
For Zulip servers which are accessed via proxies, this means that
`X-Forwarded-Proto` must be set accurately. Adjust the documentation
for the suggested configurations to add the header.
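The Django-side mechanism is `SECURE_PROXY_SSL_HEADER`, which tells
Django to trust the proxy's header when deciding whether a request was
HTTPS (standard Django configuration; Zulip's exact wiring may differ):

```python
# With this set, request.is_secure() -- and hence the Origin/CSRF
# checks -- follow what the proxy reports in X-Forwarded-Proto.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
```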
Fixes: #24599.
Co-authored-by: Alex Vandiver <alexmv@zulip.com>
1c76036c61 raised the number of `minfds` in Supervisor from 40k to
1M. If Supervisor cannot guarantee that number of available file
descriptors, it will fail to start; `/etc/security/limits.conf` was
hence adjusted upwards as well. However, on some virtualized
environments, including Proxmox LXC, setting
`/etc/security/limits.conf` may not be enough to raise the
system-level limits. This causes `supervisord` with the larger
`minfds` to fail to start.
The limit of 1000000 was chosen to be arbitrarily high, assuming it
came without cost; it is not expected to ever be reached on any
deployment. 262b19346e already lowered one aspect of that
changeset, upon determining it did come with a cost. Potentially
breaking virtualized deployments during upgrade is another cost of
that change.
Lower `minfds` back down to 40k, partially reverting
1c76036c61, but allow adjusting it upwards for extremely large
deployments. We do not expect any except the largest deployments to
ever hit the 40k limit, and a frictionless deployment for the
vanishingly small number of huge deployments is not worth the
potential upgrade hiccups for the much more frequent smaller
deployments.
Instead of copying over a mostly-unchanged `postgresql.conf`, we
transition to deploying a `conf.d/zulip.conf` which contains the
only material changes we made to the file, which were previously
appended to the end.
While shipping a separate `postgresql.conf` file for each supported
version is useful if there is large variety in supported options
between versions, there is no such variation at present, and the
burden of overriding the entire default configuration is that it must
be kept up to date with the package's version.
Taking backups on the database primary adds additional disk load,
which can impact the performance of the application.
Switch to taking backups on replicas, if they exist. Some deployments
may have multiple replicas, and taking backups on all of them is
wasteful and potentially confusing; add a flag to inhibit taking
nightly snapshots on the host.
If the deployment is a single instance of PostgreSQL, with no
replicas, it takes backups as before, modulo the extra flag that
allows skipping them.
Since logrotate runs in a daily cron, this practically means "daily,
but only if the log is larger than 500M." For large installs with
heavy traffic, this is effectively daily rotation with 10 days of
retention; for small installs, logs are kept for an unknown amount of
time.
Switch to daily logfiles, defaulting to 14 days to match nginx; this
can be overridden using a zulip.conf setting. This makes it easier to
ensure that access logs are only kept for a bounded period of time.
Increasing worker_connections has a memory cost, unlike the rest of
the changes in 1c76036c61d8; setting it to 1 million caused nginx to
consume several GB of memory.
Reduce the default down to 10k, and allow deployments to configure it
upwards if necessary. `worker_rlimit_nofile` is left at 1M, since it has no
impact on memory consumption.
This is the behaviour inherited from Django[^1]. While setting the
password to empty (`email_password = `) in
`/etc/zulip/zulip-secrets.conf` also would suffice, it's unclear what
the user would have been putting into `EMAIL_HOST_USER` in that
context.
Because we previously did not warn when `email_password` was not
present in `zulip-secrets.conf`, having the error message clarify the
correct configuration for disabling SMTP auth is important.
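The inherited behavior boils down to Django's SMTP backend only
authenticating when both credentials are non-empty; paraphrased:

```python
def maybe_smtp_login(connection, username: str, password: str) -> None:
    # Paraphrase of django.core.mail.backends.smtp.EmailBackend: SMTP
    # auth is attempted only when *both* values are non-empty, so an
    # empty password silently disables authentication entirely.
    if username and password:
        connection.login(username, password)
```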
Fixes: #23938.
[^1]: https://docs.djangoproject.com/en/4.1/ref/settings/#std-setting-EMAIL_HOST_USER
The documentation for restoring backups mentioned that the restore
needed to be to the same version of PostgreSQL, but did not explain
how to do that.
Link to the relevant section of the installer documentation, and name
the flag explicitly.
Fixes: #23691
These hooks are run immediately around the critical section of the
upgrade. If the upgrade fails for preparatory reasons, the pre-deploy
hook may not be run; if it fails during the upgrade, the post-deploy
hook will not be run. Hooks are called from the CWD of the new
deploy, with arguments of the old version and the new version. If
they exit with a non-zero exit code, the deploy aborts.
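Hooks can be any executable; for example, a minimal pre-deploy hook in
Python might look like this (illustrative only):

```python
#!/usr/bin/env python3
# Example pre-deploy hook: called from the new deployment's directory
# with the old and new versions as arguments.  A non-zero exit aborts
# the deploy.
import os
import sys

old_version, new_version = sys.argv[1], sys.argv[2]
print(f"Upgrading from {old_version} to {new_version}")

# Illustrative guard: let an operator block deploys by touching a file.
if os.path.exists("/etc/zulip/hold-deploys"):
    print("Deploys are on hold; aborting.")
    sys.exit(1)
```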