We will hopefully be able to use this in #16208 to document what
users need to configure in order to do this manually, but the content
here will be useful for anyone who hasn't set that up regardless.
This adds a new endpoint /jwt/fetch_api_key that accepts a JWT and can
be used to fetch API keys for a certain user. The target realm is
inferred from the request and the user email is part of the JWT.
A JSON object containing a user API key, delivery email, and
(optionally) raw user profile data is returned in the response.
The profile data in the response is optional and can be retrieved by
setting the POST param "include_profile" to "true" (default=false).
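For illustration, a minimal Python sketch of calling the endpoint; the
"token" field name, the realm URL, and the response field names are
assumptions here, not confirmed parts of the API:

```python
import requests

# Hypothetical realm URL; the "token" parameter name and the response
# field names below are assumptions for illustration.
response = requests.post(
    "https://example.zulipchat.com/jwt/fetch_api_key",
    data={
        "token": "<signed JWT whose claims include the user's email>",
        "include_profile": "true",  # optional; defaults to "false"
    },
)
response.raise_for_status()
result = response.json()
api_key = result["api_key"]       # assumed field name
delivery_email = result["email"]  # assumed field name
```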
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
- Renames "Customize Zulip" to "Server configuration".
- Cross-links "Server configuration" with "System and deployment
configuration".
Fixes part of #23984.
The `postfix.mailname` setting in `/etc/zulip.conf` was previously
only used for incoming mail, to identify in Postfix configuration
which messages were "local."
Also set `/etc/mailname`, which is used by Postfix to set how it
identifies to other hosts when sending outgoing email.
Co-authored-by: Alex Vandiver <alexmv@zulip.com>
When file uploads are stored in S3, Zulip serves a 302 redirect to S3.
Because browsers do not cache redirects, no image contents can be
cached -- and upon every page load or reload, every recently-posted
image must be re-fetched. This incurs extra load on the Zulip server,
as well as potentially excessive bandwidth usage from S3 and on the
client's connection.
Switch to fetching the content from S3 in nginx, and serving the
content from nginx. These have `Cache-control: private, immutable`
headers set on the response, allowing browsers to cache them locally.
Because nginx fetching from S3 can be slow, and requests for uploads
will generally be bunched around when a message containing them is
first posted, we instruct nginx to cache the contents locally. This
is safe because uploaded file contents are immutable; access control
is still mediated by Django. The nginx cache key is the URL without
query parameters, as those parameters include a time-limited signed
authentication parameter which lets nginx fetch the non-public file.
This adds a number of nginx-level configuration parameters to control
the caching which nginx performs, including the amount of in-memory
index for the cache, the maximum storage of the cache on disk, and how
long data is retained in the cache. The currently-chosen figures are
reasonable for small to medium deployments.
The most notable effect of this change is in allowing browsers to
cache uploaded image content; however, while there will be many fewer
requests, it also improves request latency. The
following tests were done with a non-AWS client in SFO, a server and
S3 storage in us-east-1, and with 100 requests after 10 requests of
warm-up (to fill the nginx cache). The mean and standard deviation
are shown.
| | Redirect to S3 | Caching proxy, hot | Caching proxy, cold |
| ----------------- | ------------------- | ------------------- | ------------------- |
| Time in Django | 263.0 ms ± 28.3 ms | 258.0 ms ± 12.3 ms | 258.0 ms ± 12.3 ms |
| Small file (842b) | 586.1 ms ± 21.1 ms | 266.1 ms ± 67.4 ms | 288.6 ms ± 17.7 ms |
| Large file (660k) | 959.6 ms ± 137.9 ms | 609.5 ms ± 13.0 ms | 648.1 ms ± 43.2 ms |
The hot-cache performance is faster for both large and small files,
since it saves the client from having to make a second request to a
separate host. This performance improvement remains at least 100ms
even if the client is on the same coast as the server.
Cold nginx caches are only slightly slower than hot caches, because
VPC access to S3 endpoints is extremely fast (assuming it is in the
same region as the host), and nginx can pool connections to S3 and
reuse them.
However, all of the 648ms taken to serve a cold-cache large file is
spent in nginx, as opposed to only 263ms spent in nginx when using
redirects to S3. This means that, for nginx to spend less time overall
responding to uploaded-file requests, clients will
need to find files in their local cache, and skip making an
uploaded-file request, at least 60% of the time. Modeling shows a
reduction in the number of client requests by about 70% - 80%.
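A quick back-of-the-envelope check on that 60% figure, under the
simplifying assumption that every proxied request pays the full
cold-cache cost:

```python
# Assumed simplification: every proxied request is a cold-cache miss
# (648 ms of nginx time), versus 263 ms per redirect-based request.
redirect_ms = 263
cold_proxy_ms = 648

# nginx spends less total time once enough requests are skipped
# entirely (served from the browser's local cache):
#   cold_proxy_ms * (1 - skip_fraction) <= redirect_ms
break_even_skip_fraction = 1 - redirect_ms / cold_proxy_ms
print(f"{break_even_skip_fraction:.0%}")  # 59%, i.e. "at least 60% of the time"
```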
The `Content-Disposition` header logic can now also be entirely shared
with the local-file codepath, as can the `url_only` path used by
mobile clients. While we could provide the direct-to-S3 temporary
signed URL to mobile clients, we choose to provide the
served-from-Zulip signed URL, to better control caching headers on it,
and for greater consistency. In doing so, we adjust the salt used for
the
URL; since these URLs are only valid for 60s, the effect of this salt
change is minimal.
Moving `/user_avatars/` to being served partially through Django
removes the need for the `no_serve_uploads` nginx reconfiguring when
switching between S3 and local backends. This is important because a
subsequent commit will move S3 attachments to being served through
nginx, which would make `no_serve_uploads` an entirely nonsensical
name.
Serve the files through Django, with an offload for the actual image
response to an internal nginx route. In development, serve the files
directly in Django.
We do _not_ mark the contents as immutable for caching purposes, since
the path for avatar images is hashed only by the user-id and a salt,
and as such is reused when a user's avatar is updated.
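A hedged sketch of that offload pattern, using nginx's
`X-Accel-Redirect` mechanism; the view name, file paths, and the
`/internal/avatars/` location are hypothetical, not the actual Zulip
routes:

```python
from django.conf import settings
from django.http import FileResponse, HttpResponse


def serve_avatar(request, path: str) -> HttpResponse:
    # In development, serve the file contents directly from Django.
    if settings.DEBUG:
        # Hypothetical on-disk path, for illustration only.
        return FileResponse(open(f"/srv/zulip-avatars/{path}", "rb"))

    # In production, Django handles dispatch (and any access checks),
    # then hands the actual image response off to an internal nginx
    # location ("/internal/avatars/" is a hypothetical name).
    response = HttpResponse()
    response["X-Accel-Redirect"] = f"/internal/avatars/{path}"
    # Deliberately not "immutable": the same path is reused when the
    # user uploads a new avatar.
    response["Cache-Control"] = "private"
    return response
```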
The authenticate_by_username limit of 5 attempts per 30 minutes can be
annoying in cases where the user really has forgotten their password
and should be allowed to keep trying with admin approval - so we
should document the command that allows unblocking them.
The documentation included the full policy for the file uploads
bucket, but only one additional statement for the avatars bucket; the
reader needed to assemble the full policy themselves.
Switch to explicitly providing the full policy for both.
Fixes #23110.
Since this is being moved to admin-facing documentation, also adds a
paragraph about the main concern with enabling this on a server that's
not zulip.com.
We should rearrange Zulip's developer docs to make it easier to
find the documentation that new contributors need.
Name changes:
Rename "Code contribution guide" section -> "Contributing to Zulip".
Rename "Contributing to Zulip" page -> "Contributing guide".
Organizational changes to the newly-named "Contributing to Zulip":
Move up "Contributing to Zulip", as the third link in sidebar index.
Move up renamed "Contributing guide" page to the top of this section.
Move up "Zulip code of Conduct", as the second link of this section.
Move down "Licensing", as the last link of this section.
Move "Accessibility" just below "HTML and CSS" in Subsystems section.
Update all links according to the changes above.
Redirects should be added as needed.
Fixes: #22517.
This is not a feature intended to be used outside zulip.com, since it
just sets your server to have the zulip.com landing pages. I think
it's only been turned on by people who were confused by this text.
The default value in uwsgi is 4k; receiving more than this amount from
nginx leads to a 502 response (though, happily, the backend uwsgi does not
terminate).
ab18dbfde5 originally increased it from the unstated uwsgi default
of 4096, to 8192; b1da797955 made it configurable, in order to allow
requests from clients with many cookies, without causing 502's[1].
nginx defaults to a limitation of 1k, with 4 additional 8k header
lines allowed[2]; any request larger than that returns a response of
`400 Request Header Or Cookie Too Large`. The largest header size
theoretically possible from nginx, by default, is thus 33k, though
that would require packing four separate headers to exactly 8k each.
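For reference, the arithmetic behind that 33k figure:

```python
# nginx defaults: one 1k client_header_buffer_size buffer, plus up to
# four 8k large_client_header_buffers lines.
max_nginx_header_kb = 1 + 4 * 8
print(max_nginx_header_kb)  # 33 -- well above uwsgi's default 4k buffer-size
```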
Remove the gap between nginx's limit and uwsgi's, which could trigger
502s, by removing the uwsgi configurability, and setting a 64k size in
uwsgi (the max allowable), which is larger than nginx's default limit.
uWSGI's documentation of `buffer-size` ([3], [4]) also notes that "It
is a security measure too, so adapt to your app needs instead of
maxing it out." Python has no security issues with buffers of 64k,
and there is no appreciable memory footprint difference from having a
larger buffer available in uwsgi.
[1]: https://chat.zulip.org/#narrow/stream/31-production-help/topic/works.20in.20Edge.20not.20Chrome/near/719523
[2]: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size
[3]: https://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
[4]: https://uwsgi-docs.readthedocs.io/en/latest/Options.html#buffer-size
Backups are written every 16MiB of WAL archive, and by default there
is no upper limit on how out of date they can be, as `archive_timeout`
defaults to 0.
Also emphasize that these are streaming backups, not just one
point-in-time backup daily.
Fixes #21976.
Notable changes:
- Describe `X-Forwarded-For` by name.
- Switch each specific proxy to numbered steps.
- Link back to the `X-Forwarded-For` section in each proxy.
- Default to using HTTPS, not HTTP, for the backend.
- Include the HTTP-to-HTTPS redirect code for all proxies; it is
important that it happen at the proxy, as the backend is unaware of
it.
- Call out Apache2 modules which are necessary.
- Specify where the dhparam.pem file can be found.
- Call out the `Host:` header forwarding necessary, and document
`USE_X_FORWARDED_HOST` if that is not possible.
- Standardize on 20 minutes of connection timeout.
As a consequence:
- Bump minimum supported Python version to 3.8.
- Move Vagrant environment to Ubuntu 20.04, which has Python 3.8.
- Move CI frontend tests to Ubuntu 20.04.
- Move production build test to Ubuntu 20.04.
- Move 3.4 upgrade test to Ubuntu 20.04.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
On the Debian 10 -> 11 upgrade, the server is running Zulip 4.x, which
lets us pass `--audit-fts-indexes` to `upgrade-zulip-stage-2` rather
than run the command as a separate step.
The reindex-textual-data tool needs the venv to be able to run;
switch the order of the last two steps, making them now match the
Debian 9 -> 10 and 10 -> 11 upgrades.
Ref #21296.
When the credentials are provided by dint of being run on an EC2
instance with an assigned Role, we must be able to fetch the instance
metadata from IMDS -- which is precisely the type of internal-IP
request that Smokescreen denies.
While botocore supports a `proxies` argument to the `Config` object,
this is not actually respected when making the IMDS queries; only the
environment variables are consulted. See
https://github.com/boto/botocore/issues/2644
As such, implement S3_SKIP_PROXY by monkey-patching the
`botocore.utils.should_bypass_proxies` function, to allow requests to
IMDS to be made without Smokescreen impeding them.
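A minimal sketch of that monkey-patch, assuming a boolean
`S3_SKIP_PROXY` setting; the lambda accepts arbitrary arguments
because the exact signature of `should_bypass_proxies` has varied
across botocore versions:

```python
import botocore.utils
from django.conf import settings

if settings.S3_SKIP_PROXY:
    # Treat every URL -- including the IMDS endpoint at
    # 169.254.169.254 -- as exempt from proxying, so the instance-role
    # credential fetch is not routed through (and blocked by)
    # Smokescreen.
    botocore.utils.should_bypass_proxies = lambda *args, **kwargs: True
```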
Fixes #20715.