Uploads are well-positioned to use S3's "intelligent tiering" storage
class. Add a setting which lets uploaded files declare their desired
storage class at upload time, and document how to move existing files
to the same storage class.
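As a sketch of what declaring the storage class at upload time might look like with boto3 (the bucket, key, and file names below are illustrative, not Zulip's actual configuration):
```python
import boto3

# Illustrative value; the real setting proposed above would be read from
# Zulip's configuration.
desired_storage_class = "INTELLIGENT_TIERING"

s3 = boto3.client("s3")
with open("photo.png", "rb") as f:
    s3.upload_fileobj(
        f,
        "example-uploads-bucket",
        "user_uploads/photo.png",
        # ExtraArgs are passed through to the underlying PutObject call.
        ExtraArgs={"StorageClass": desired_storage_class},
    )
```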
04cf68b45e made nginx responsible for downloading (and caching)
files from S3. As noted in that commit, nginx implements its own
non-blocking DNS resolver, since the base syscall is blocking, and
thus requires an explicit nameserver configuration. That commit used
127.0.0.53, which is provided by systemd-resolved, as the resolver.
However, that service may not always be enabled and running, and may
in fact not even be installed (e.g. on Docker). Switch to parsing
`/etc/resolv.conf` and using the first-provided nameserver. In many
deployments, this will still be `127.0.0.53`, but for others it will
provide a working DNS server which is external to the host.
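The actual parsing happens in Puppet, but the logic is roughly equivalent to this Python sketch (not the shipped code):
```python
def first_nameserver(path: str = "/etc/resolv.conf") -> str:
    # Return the first "nameserver" entry, mirroring the behavior of
    # taking the first-provided resolver.
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                return fields[1]
    raise RuntimeError(
        "No nameservers found in /etc/resolv.conf! Configure one by "
        "setting application_server.nameserver in /etc/zulip/zulip.conf"
    )
```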
In the event that a server is misconfigured and has no resolvers in
`/etc/resolv.conf`, it will error out:
```console
Error: Evaluation Error: Error while evaluating a Function Call, No nameservers found in /etc/resolv.conf! Configure one by setting application_server.nameserver in /etc/zulip/zulip.conf (file: /home/zulip/deployments/current/puppet/zulip/manifests/app_frontend_base.pp, line: 76, column: 70) on node example.zulipdev.org
```
When file uploads are stored in S3, Zulip responds to uploaded-file
requests with a 302 redirect to S3. Because browsers do not cache
redirects, no image contents can be cached -- upon every page load or
reload, every recently-posted image must be re-fetched. This incurs
extra load on the Zulip server, as well as potentially excessive
bandwidth usage from S3 and on the client's connection.
Switch to fetching the content from S3 in nginx, and serving the
content from nginx. These responses have `Cache-Control: private,
immutable` headers set, allowing browsers to cache them locally.
Because nginx fetching from S3 can be slow, and requests for uploads
will generally be bunched around when a message containing them is
first posted, we instruct nginx to cache the contents locally. This
is safe because uploaded file contents are immutable; access control
is still mediated by Django. The nginx cache key is the URL without
query parameters, as those parameters include a time-limited signed
authentication parameter which lets nginx fetch the non-public file.
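For illustration, the time-limited signed URL which nginx uses to fetch the non-public object might be generated along these lines (a sketch, not Zulip's exact code); the cache key is everything before the `?`:
```python
import boto3

s3 = boto3.client("s3")
# Short-lived signed URL authorizing nginx to fetch the non-public object.
signed_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-uploads-bucket", "Key": "user_uploads/photo.png"},
    ExpiresIn=60,  # seconds
)
# The query string carries the signature and expiry and thus changes on
# every request; keying the cache on the URL without it lets nginx reuse
# the cached contents, which are immutable.
cache_key = signed_url.split("?", 1)[0]
```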
This adds a number of nginx-level configuration parameters to control
the caching which nginx performs, including the size of the in-memory
index for the cache, the maximum storage of the cache on disk, and how
long data is retained in the cache. The currently-chosen figures are
reasonable for small to medium deployments.
The most notable effect of this change is in allowing browsers to
cache uploaded image content; however, beyond there being many fewer
requests, it also improves request latency. The
following tests were done with a non-AWS client in SFO, a server and
S3 storage in us-east-1, and with 100 requests after 10 requests of
warm-up (to fill the nginx cache). The mean and standard deviation
are shown.
|                     | Redirect to S3      | Caching proxy, hot  | Caching proxy, cold |
| ------------------- | ------------------- | ------------------- | ------------------- |
| Time in Django      | 263.0 ms ± 28.3 ms  | 258.0 ms ± 12.3 ms  | 258.0 ms ± 12.3 ms  |
| Small file (842 B)  | 586.1 ms ± 21.1 ms  | 266.1 ms ± 67.4 ms  | 288.6 ms ± 17.7 ms  |
| Large file (660 KB) | 959.6 ms ± 137.9 ms | 609.5 ms ± 13.0 ms  | 648.1 ms ± 43.2 ms  |
The hot-cache performance is faster for both large and small files,
since it saves the client from having to make a second request to a
separate host. This performance improvement remains at least 100ms
even if the client is on the same coast as the server.
Cold nginx caches are only slightly slower than hot caches, because
VPC access to S3 endpoints is extremely fast (assuming it is in the
same region as the host), and nginx can pool connections to S3 and
reuse them.
However, all of the 648ms taken to serve a cold-cache large file is
occupied in nginx, as opposed to only 263ms spent in nginx when using
redirects to S3. This means that, for nginx to spend less time overall
responding to uploaded-file requests, clients will need to find files
in their local cache, and skip making an uploaded-file request, at
least 60% of the time. Modeling shows a reduction in the number of
client requests of about 70%-80%.
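To make the 60% break-even figure concrete: if h is the fraction of
uploaded-file fetches which browsers satisfy from their local cache,
nginx spends less time overall once (1 - h) × 648ms ≤ 263ms, i.e.
h ≥ 1 - 263/648 ≈ 0.59, using the cold-cache large-file numbers above.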
The `Content-Disposition` header logic can now also be entirely shared
with the local-file codepath, as can the `url_only` path used by
mobile clients. While we could provide the direct-to-S3 temporary
signed URL to mobile clients, we choose to provide the
served-from-Zulip signed URL, to better control caching headers on it,
and for greater consistency. In doing so, we adjust the salt used for the
URL; since these URLs are only valid for 60s, the effect of this salt
change is minimal.
Moving `/user_avatars/` to being served partially through Django
removes the need for the `no_serve_uploads` nginx reconfiguration when
switching between S3 and local backends. This is important because a
subsequent commit will move S3 attachments to being served through
nginx, which would make `no_serve_uploads` an entirely nonsensical
name.
Serve the files through Django, with an offload for the actual image
response to an internal nginx route. In development, serve the files
directly in Django.
We do _not_ mark the contents as immutable for caching purposes, since
the path for avatar images is hashed only from the user ID and a salt,
and as such is reused when a user's avatar is updated.
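As a sketch of the offload pattern (the view and internal location names here are illustrative, not Zulip's actual routes):
```python
from django.http import HttpResponse

def serve_avatar(request, path: str) -> HttpResponse:
    # Access control is still performed in Django before any bytes are
    # served; authentication/authorization checks are elided here.
    response = HttpResponse()
    # Hand the body off to an internal-only nginx location, which fetches
    # (and caches) the object from S3.
    response["X-Accel-Redirect"] = "/internal/s3-avatars/" + path
    # Avatar paths are reused when a user's avatar changes, so the
    # response is cacheable but not marked immutable.
    response["Cache-Control"] = "private"
    return response
```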
The documentation included the full policy for the file uploads
bucket, but only one additional statement for the avatars bucket; the
reader needed to assemble the full policy themselves.
Switch to explicitly providing the full policy for both.
Fixes #23110.
When the credentials are provided by dint of being run on an EC2
instance with an assigned Role, we must be able to fetch the instance
metadata from IMDS -- which is precisely the type of internal-IP
request that Smokescreen denies.
While botocore supports a `proxies` argument to the `Config` object,
this is not actually respected when making the IMDS queries; only the
environment variables are read from. See
https://github.com/boto/botocore/issues/2644
As such, implement S3_SKIP_PROXY by monkey-patching the
`botocore.utils.should_bypass_proxies` function, to allow requests to
IMDS to be made without Smokescreen impeding them.
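The workaround amounts to roughly the following (a sketch of the approach):
```python
import botocore.utils
from django.conf import settings

if settings.S3_SKIP_PROXY:
    # Make botocore bypass the configured proxies (e.g. Smokescreen) for
    # every request, including the IMDS credential lookups which ignore
    # the `proxies` option on the Config object.
    botocore.utils.should_bypass_proxies = lambda url: True
```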
Fixes #20715.
Boto3 does not allow setting the endpoint URL from the config file.
Thus we create a Django setting (`S3_ENDPOINT_URL`) which is passed to
service clients and resources of `boto3.Session`.
We also update the uploads-backend documentation and remove the config
environment variable, as AWS now supports the SIGv4 signature format
by default. The region name is passed as a parameter instead of
creating a config file for just this value.
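For example, roughly (a sketch; setting names other than `S3_ENDPOINT_URL` are illustrative):
```python
import boto3
from django.conf import settings

session = boto3.Session(
    aws_access_key_id=settings.S3_KEY,              # illustrative name
    aws_secret_access_key=settings.S3_SECRET_KEY,   # illustrative name
)
# Both clients and resources accept the endpoint and region directly, so
# no boto config file is needed for these values.
uploads_bucket = session.resource(
    "s3",
    region_name=settings.S3_REGION,                 # illustrative name
    endpoint_url=settings.S3_ENDPOINT_URL,
).Bucket("example-uploads-bucket")
```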
Fixes #16246.
This section at the top was clearly written before the documentation
at the bottom existed, and hasn't been updated to point to the
now-existent docs below.
Add the link, rather than directing to #production-help.
- Updated 260+ links from ".html" to ".md" to reduce the number of issues
reported about hyperlinks not working when viewing docs on GitHub.
- Removed temporary workaround that suppressed all warnings reported
by sphinx build for every link ending in ".html".
Details:
The recent upgrade to recommonmark==0.5.0 supports auto-converting
".md" links to ".html" so that the resulting HTML output is correct.
Notice that links pointing to a heading, e.g. "../filename.html#heading",
were not updated because recommonmark does not auto-convert them.
These links do not generate build warnings and do not cause any issues.
However, there are about 100 such links that might still get misreported
as broken links. This will be a follow-up issue.
Background:
docs: pip upgrade recommonmark and CommonMark #13013
docs: Allow .md links between doc pages #11719
Fixes #11087.
Previously, we were hardcoding the domain s3.amazonaws.com. Given
that we already have an interface for configuring the host in
/etc/zulip/boto.cfg (which, in turn, automatically configures boto), we
just need to actually use boto's configured value for the S3 hostname.
We don't have tests for this new use case, in part because they're
likely annoying to write with `moto` and there hasn't been a huge
amount of demand for it. Since this doesn't regress existing S3
backend support, it seems worth merging.
Sphinx/ReadTheDocs supports automatically translating links written
to `.md` files to point to the corresponding `.html` files, so this
migration does not change the resulting HTML output in ReadTheDocs.
But it does fix apparent broken links on GitHub.
This doesn't prevent people from reading the documentation on GitHub
(so doesn't mitigate the fact that some rtd-specific syntax does not
render properly on GH), but it will prevent us from getting erroneous
issues reported about the hyperlinks not working.
Fixes: #11087.
This is required in some AWS regions.
The right long-term fix is to move to boto3, which doesn't have this
problem.
Allows us to downgrade the priority of #9376.
This moves the documentation for this feature out of
prod_settings_template.py, so that we can edit it more easily.
We also add a bucket policy, which is part of what one would want in
order to use this in production.
This addresses much, but not all, of #9361.