# File upload backends

Zulip in production supports a couple different backends for storing
files uploaded by users of the Zulip server (messages, profile
pictures, organization icons, custom emoji, etc.).

The default is the `LOCAL_UPLOADS_DIR` backend, which stores files on
disk in the specified directory on the Zulip server. This backend
doesn't work with multiple Zulip servers and doesn't scale, but it's
great for getting a Zulip server up and running quickly. You can later
migrate the uploads to S3 by
[following the instructions here](#migrating-from-local-uploads-to-amazon-s3-backend).

We also support an `S3` backend, which uses the Python `boto` library
to upload files to Amazon S3 (or an S3-compatible block storage
provider supported by the `boto` library).

## S3 backend configuration

Here, we document the process for configuring Zulip's S3 file upload
backend. To enable this backend, you need to do the following:

1. In the AWS management console, create a new IAM account (aka API
   user) for your Zulip server, and two buckets in S3, one for uploaded
   files included in messages, and another for user avatars. You need
   two buckets because the "user avatars" bucket is generally configured
   as world-readable, whereas the "uploaded files" one is not.

1. Set `s3_key` and `s3_secret_key` in `/etc/zulip/zulip-secrets.conf`
   to be the S3 access and secret keys for the IAM account.
   Alternately, if your Zulip server runs on an EC2 instance, attach an
   IAM role with those S3 permissions to the EC2 instance.

1. Set the `S3_AUTH_UPLOADS_BUCKET` and `S3_AVATAR_BUCKET` settings in
   `/etc/zulip/settings.py` to be the names of the S3 buckets you
   created (e.g. `"exampleinc-zulip-uploads"`).

1. Comment out the `LOCAL_UPLOADS_DIR` setting in
   `/etc/zulip/settings.py` (add a `#` at the start of the line).

1. If you are using a non-AWS block storage provider, you need to set
   the `S3_ENDPOINT_URL` setting to your endpoint URL
   (e.g. `"https://s3.eu-central-1.amazonaws.com"`).

   For certain AWS regions, you may need to set the `S3_REGION`
   setting to your default AWS region's code (e.g. `"eu-central-1"`).

1. Finally, restart the Zulip server so that your settings changes
   take effect
   (`/home/zulip/deployments/current/scripts/restart-server`).

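Taken together, the `settings.py` changes above end up looking roughly like the following sketch. The bucket names and path are placeholders; the `s3_key`/`s3_secret_key` pair goes in `/etc/zulip/zulip-secrets.conf`, not in this file:

```python
# /etc/zulip/settings.py -- bucket names below are placeholders.
S3_AUTH_UPLOADS_BUCKET = "exampleinc-zulip-uploads"
S3_AVATAR_BUCKET = "exampleinc-zulip-avatars"

# Only needed for non-AWS providers and certain AWS regions:
# S3_ENDPOINT_URL = "https://s3.eu-central-1.amazonaws.com"
# S3_REGION = "eu-central-1"

# Commented out to disable the local-disk backend:
# LOCAL_UPLOADS_DIR = "/home/zulip/uploads"
```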
It's simplest to just do this configuration when setting up your Zulip
server for production usage. Note that if you had any existing
uploaded files, this process does not upload them to Amazon S3; see the
[migration instructions](#migrating-from-local-uploads-to-amazon-s3-backend)
below for those steps.

## S3 local caching

For performance reasons, Zulip stores a cache of recently served user
uploads on disk locally, even though the durable storage is kept in
S3. There are a number of parameters which control the size and usage
of this cache, which is maintained by nginx:

- `s3_memory_cache_size` controls the in-memory size of the cache
  _index_; the default is 1MB, which is enough to store about 8 thousand
  entries.
- `s3_disk_cache_size` controls the on-disk size of the cache
  _contents_; the default is 200MB.
- `s3_cache_inactive_time` controls the longest amount of time an
  entry will be cached since last use; the default is 30 days. Since
  the contents of the cache are immutable, this serves only as a
  potential additional limit on the size of the contents on disk;
  `s3_disk_cache_size` is expected to be the primary control for cache
  sizing.

These defaults are likely sufficient for small-to-medium deployments.
Large deployments, or deployments with image-heavy use cases, will
want to increase `s3_disk_cache_size`, potentially to be several
gigabytes. `s3_memory_cache_size` should potentially be increased,
based on estimating the number of files that the larger disk cache
will hold.

You may also wish to increase the cache sizes if the S3 storage (or
S3-compatible equivalent) is not closely located to your Zulip server,
as cache misses will be more expensive.

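As a sketch, an image-heavy deployment might raise these limits in `/etc/zulip/zulip.conf`. The sizes below are illustrative, and placing these parameters in the `[application_server]` section with nginx-style size/time units is an assumption based on how they are passed through to nginx:

```ini
# /etc/zulip/zulip.conf -- illustrative cache sizing for an image-heavy server
[application_server]
s3_memory_cache_size = 2M
s3_disk_cache_size = 20G
s3_cache_inactive_time = 90d
```

After editing, run `/home/zulip/deployments/current/scripts/zulip-puppet-apply` so that the nginx configuration is regenerated with the new values.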
## nginx DNS nameserver configuration

The S3 cache described above is maintained by nginx. nginx's configuration
requires an explicitly-set DNS nameserver to resolve the hostname of the S3
servers; Zulip defaults this value to the first nameserver found in
`/etc/resolv.conf`, but this resolver can be [adjusted in
`/etc/zulip/zulip.conf`][s3-resolver] if needed. If you adjust this value, you
will need to run `/home/zulip/deployments/current/scripts/zulip-puppet-apply` to
update the nginx configuration for the new value.

[s3-resolver]: system-configuration.md#nameserver

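Overriding the resolver looks something like the following sketch; the nameserver address is a placeholder:

```ini
# /etc/zulip/zulip.conf -- pin nginx to a specific DNS resolver
[application_server]
nameserver = 10.0.0.2
```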
## S3 bucket policy

The best way to do the S3 integration with Amazon is to create a new IAM user
just for your Zulip server with limited permissions. For both the user uploads
bucket and the user avatars bucket, you'll need to adjust the [S3 bucket
policy](https://awspolicygen.s3.amazonaws.com/policygen.html).

The file uploads bucket should have a policy of:

```json
{
  "Version": "2012-10-17",
  "Id": "Policy1468991802320",
  "Statement": [
    {
      "Sid": "Stmt1468991795370",
      "Effect": "Allow",
      "Principal": {
        "AWS": "ARN_PRINCIPAL_HERE"
      },
      "Action": [
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::BUCKET_NAME_HERE/*"
    },
    {
      "Sid": "Stmt1468991795371",
      "Effect": "Allow",
      "Principal": {
        "AWS": "ARN_PRINCIPAL_HERE"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::BUCKET_NAME_HERE"
    }
  ]
}
```

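If you manage the buckets from the command line, a policy like the one above can be applied with the AWS CLI; `BUCKET_NAME_HERE` is the same placeholder used in the policy:

```shell
# Apply a bucket policy saved locally as policy.json to the uploads bucket.
aws s3api put-bucket-policy \
  --bucket BUCKET_NAME_HERE \
  --policy file://policy.json
```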
The file-uploads bucket should not be world-readable. See the
[documentation on the Zulip security model](security-model.md) for
details on the security model for uploaded files.

However, the avatars bucket is intended to be world-readable, so its
policy should be:

```json
{
  "Version": "2012-10-17",
  "Id": "Policy1468991802321",
  "Statement": [
    {
      "Sid": "Stmt1468991795380",
      "Effect": "Allow",
      "Principal": {
        "AWS": "ARN_PRINCIPAL_HERE"
      },
      "Action": [
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::BUCKET_NAME_HERE/*"
    },
    {
      "Sid": "Stmt1468991795381",
      "Effect": "Allow",
      "Principal": {
        "AWS": "ARN_PRINCIPAL_HERE"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::BUCKET_NAME_HERE"
    },
    {
      "Sid": "Stmt1468991795382",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::BUCKET_NAME_HERE/*"
    }
  ]
}
```

## Migrating from local uploads to Amazon S3 backend

As you scale your server, you might want to migrate the uploads from
your local backend to Amazon S3. Follow these instructions, step by
step, to do the migration.

1. First, [set up the S3 backend](#s3-backend-configuration) in the settings
   (all the auth stuff), but leave `LOCAL_UPLOADS_DIR` set -- the
   migration tool will need that value to know where to find your uploads.

2. Run `./manage.py transfer_uploads_to_s3`. This will upload all the
   files from the local uploads directory to Amazon S3. By default,
   this command runs on 6 parallel processes, since uploading is a
   latency-sensitive operation. You can control this parameter using
   the `--processes` option.

3. Once the transfer script completes, disable `LOCAL_UPLOADS_DIR`, and
   restart your server (continuing the last few steps of the S3
   backend setup instructions).

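Run from the deployment directory as the `zulip` user, the migration steps above look something like the following; the process count is illustrative (the default is 6):

```shell
# Transfer local uploads to S3 with extra parallelism.
cd /home/zulip/deployments/current
./manage.py transfer_uploads_to_s3 --processes 10
# After commenting out LOCAL_UPLOADS_DIR in /etc/zulip/settings.py:
./scripts/restart-server
```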
Congratulations! Your uploaded files are now migrated to S3.

**Caveat**: The current version of this tool does not migrate an
uploaded organization avatar or logo.

## S3 data storage class

In general, uploaded files in Zulip are accessed frequently at first, and then
age out of frequent access. Amazon S3 offers the [S3
Intelligent-Tiering][s3-it] [storage class][s3-storage-class], which provides
cheaper storage for less frequently accessed objects, and may provide overall
cost savings for large deployments.

You can configure Zulip to store uploaded files using Intelligent-Tiering by
setting `S3_UPLOADS_STORAGE_CLASS` to `INTELLIGENT_TIERING` in `settings.py`.
This setting can take any of the following [storage class
values][s3-storage-class-constant]:

- `STANDARD`
- `STANDARD_IA`
- `ONEZONE_IA`
- `REDUCED_REDUNDANCY`
- `GLACIER_IR`
- `INTELLIGENT_TIERING`

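In `/etc/zulip/settings.py`, the Intelligent-Tiering configuration described above is a one-line sketch:

```python
# /etc/zulip/settings.py -- store newly uploaded files in Intelligent-Tiering
S3_UPLOADS_STORAGE_CLASS = "INTELLIGENT_TIERING"
```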
Setting `S3_UPLOADS_STORAGE_CLASS` does not affect the storage class of existing
objects. In order to change those, for example to `INTELLIGENT_TIERING`, perform
an in-place copy:

```
aws s3 cp --storage-class INTELLIGENT_TIERING --recursive \
  s3://your-bucket-name/ s3://your-bucket-name/
```

Note that changing the lifecycle of existing objects will incur a [one-time
lifecycle transition cost][s3-pricing].

[s3-it]: https://aws.amazon.com/s3/storage-classes/intelligent-tiering/
[s3-storage-class]: https://aws.amazon.com/s3/storage-classes/
[s3-storage-class-constant]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#AmazonS3-PutObject-request-header-StorageClass
[s3-pricing]: https://aws.amazon.com/s3/pricing/