mirror of https://github.com/zulip/zulip.git
uploads: Serve S3 uploads directly from nginx.
When file uploads are stored in S3, Zulip serves them as a 302 redirect to S3. Because browsers do not cache redirects, no image contents can be cached -- and upon every page load or reload, every recently-posted image must be re-fetched. This incurs extra load on the Zulip server, potentially excessive bandwidth usage from S3, and load on the client's connection.

Switch to fetching the content from S3 in nginx, and serving the content from nginx. These responses have `Cache-Control: private, immutable` headers set, allowing browsers to cache them locally.

Because nginx fetching from S3 can be slow, and requests for uploads will generally be bunched around when a message containing them is first posted, we instruct nginx to cache the contents locally. This is safe because uploaded file contents are immutable; access control is still mediated by Django. The nginx cache key is the URL without query parameters, as those parameters include a time-limited signed authentication parameter which lets nginx fetch the non-public file.

This adds a number of nginx-level configuration parameters to control the caching which nginx performs, including the amount of in-memory index for the cache, the maximum storage of the cache on disk, and how long data is retained in the cache. The currently-chosen figures are reasonable for small to medium deployments.

The most notable effect of this change is in allowing browsers to cache uploaded image content; however, while there will be many fewer requests, it also improves request latency. The following tests were done with a non-AWS client in SFO, a server and S3 storage in us-east-1, and with 100 requests after 10 requests of warm-up (to fill the nginx cache). The mean and standard deviation are shown.
|                   | Redirect to S3      | Caching proxy, hot  | Caching proxy, cold |
| ----------------- | ------------------- | ------------------- | ------------------- |
| Time in Django    | 263.0 ms ± 28.3 ms  | 258.0 ms ± 12.3 ms  | 258.0 ms ± 12.3 ms  |
| Small file (842b) | 586.1 ms ± 21.1 ms  | 266.1 ms ± 67.4 ms  | 288.6 ms ± 17.7 ms  |
| Large file (660k) | 959.6 ms ± 137.9 ms | 609.5 ms ± 13.0 ms  | 648.1 ms ± 43.2 ms  |

The hot-cache performance is faster for both large and small files, since it saves the client from making a second request to a separate host. This performance improvement remains at least 100ms even if the client is on the same coast as the server.

Cold nginx caches are only slightly slower than hot caches, because VPC access to S3 endpoints is extremely fast (assuming it is in the same region as the host), and nginx can pool connections to S3 and reuse them.

However, all of the 648ms taken to serve a cold-cache large file is occupied in nginx, as opposed to the only 263ms which was spent in Django when using redirects to S3. This means that, to spend less time overall responding to uploaded-file requests in nginx, clients will need to find files in their local cache, and skip making an uploaded-file request, at least 60% of the time. Modeling shows a reduction in the number of client requests by about 70% - 80%.

The `Content-Disposition` header logic can now also be entirely shared with the local-file codepath, as can the `url_only` path used by mobile clients. While we could provide the direct-to-S3 temporary signed URL to mobile clients, we choose to provide the served-from-Zulip signed URL, for better control of its caching headers, and greater consistency. In doing so, we adjust the salt used for the URL; since these URLs are only valid for 60s, the effect of this salt change is minimal.
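The 60% break-even figure above follows directly from the measured latencies; a quick arithmetic sketch using the large-file numbers from the table:

```python
# Break-even browser-cache hit rate for the nginx caching proxy, using
# the measured large-file latencies from the table above (milliseconds).
cold_proxy_total = 648.1  # cold nginx cache: time spent in Zulip's nginx
django_only = 263.0       # time spent in Django when redirecting to S3

# With a browser-cache hit rate h, the expected nginx-side time per
# uploaded-file view is (1 - h) * cold_proxy_total; under redirects it
# was django_only.  Solving (1 - h) * 648.1 <= 263.0 for h:
break_even = 1 - django_only / cold_proxy_total
print(f"break-even hit rate: {break_even:.0%}")  # ~59%, i.e. "at least 60%"
```

Since the modeled hit rate of 70% - 80% comfortably exceeds this threshold, the caching proxy is a net win even in the worst (all-cold-cache) case.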
This commit is contained in: parent 58dc1059f3, commit 04cf68b45e
@@ -684,6 +684,29 @@ all at once. This decreases the number of 502's served to clients, at
 the cost of slightly increased memory usage, and the possibility that
 different requests will be served by different versions of the code.
 
+#### `s3_memory_cache_size`
+
+Used only when the [S3 storage backend][s3-backend] is in use.
+Controls the in-memory size of the cache _index_; the default is 1MB,
+which is enough to store about 8 thousand entries.
+
+#### `s3_disk_cache_size`
+
+Used only when the [S3 storage backend][s3-backend] is in use.
+Controls the on-disk size of the cache _contents_; the default is
+200MB.
+
+#### `s3_cache_inactive_time`
+
+Used only when the [S3 storage backend][s3-backend] is in use.
+Controls the longest amount of time an entry will be cached since last
+use; the default is 30 days. Since the contents of the cache are
+immutable, this serves only as a potential additional limit on the
+size of the contents on disk; `s3_disk_cache_size` is expected to be
+the primary control for cache sizing.
+
+[s3-backend]: upload-backends.md
+
 #### `uwsgi_listen_backlog_limit`
 
 Override the default uwsgi backlog of 128 connections.
@@ -55,6 +55,36 @@ uploading files, this process does not upload them to Amazon S3; see
 [migration instructions](#migrating-from-local-uploads-to-amazon-s3-backend)
 below for those steps.
 
+## S3 local caching
+
+For performance reasons, Zulip stores a cache of recently served user
+uploads on disk locally, even though the durable storage is kept in
+S3. There are a number of parameters which control the size and usage
+of this cache, which is maintained by nginx:
+
+- `s3_memory_cache_size` controls the in-memory size of the cache
+  _index_; the default is 1MB, which is enough to store about 8 thousand
+  entries.
+- `s3_disk_cache_size` controls the on-disk size of the cache
+  _contents_; the default is 200MB.
+- `s3_cache_inactive_time` controls the longest amount of time an
+  entry will be cached since last use; the default is 30 days. Since
+  the contents of the cache are immutable, this serves only as a
+  potential additional limit on the size of the contents on disk;
+  `s3_disk_cache_size` is expected to be the primary control for cache
+  sizing.
+
+These defaults are likely sufficient for small-to-medium deployments.
+Large deployments, or deployments with image-heavy use cases, will
+want to increase `s3_disk_cache_size`, potentially to be several
+gigabytes. `s3_memory_cache_size` should potentially be increased,
+based on estimating the number of files that the larger disk cache
+will hold.
+
+You may also wish to increase the cache sizes if the S3 storage (or
+S3-compatible equivalent) is not closely located to your Zulip server,
+as cache misses will be more expensive.
+
 ## S3 bucket policy
 
 The best way to do the S3 integration with Amazon is to create a new IAM user
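The relationship between the two sizing knobs documented above can be estimated with quick arithmetic; a sketch, assuming the documented figure of roughly 8,000 index entries per megabyte of `s3_memory_cache_size`, and a hypothetical 100KB average upload size chosen purely for illustration:

```python
# Estimate the s3_memory_cache_size needed to index a full disk cache.
# Assumes ~8,000 cache-index entries per megabyte of keys_zone (per the
# docs above) and a hypothetical 100KB average uploaded-file size.
ENTRIES_PER_MB_INDEX = 8_000

def index_mb_needed(disk_cache_bytes: int, avg_file_bytes: int = 100 * 1024) -> float:
    """Megabytes of cache index needed if the disk cache fills up."""
    max_entries = disk_cache_bytes / avg_file_bytes
    return max_entries / ENTRIES_PER_MB_INDEX

# Default 200MB disk cache: ~2,048 such files, well within the 1MB index.
print(round(index_mb_needed(200 * 1024 * 1024), 2))       # 0.26
# A hypothetical 5GB disk cache: ~52,400 files, needing a ~6.5MB index.
print(round(index_mb_needed(5 * 1024 * 1024 * 1024), 2))  # 6.55
```

This is the estimation the docs suggest: scale `s3_memory_cache_size` with the number of files the enlarged disk cache is expected to hold.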
@@ -1,4 +1,44 @@
-location /internal/uploads {
+# Handle redirects to S3
+location ~ ^/internal/s3/([^/]+)/(.*) {
+    internal;
+    include /etc/nginx/zulip-include/headers;
+
+    set $download_url https://$1/$2;
+    proxy_set_header Host $1;
+
+    # Ensure that we only get _one_ of these headers: the one that
+    # Django added, not the one from S3.
+    proxy_hide_header Content-Disposition;
+    proxy_hide_header Cache-Control;
+    proxy_hide_header Expires;
+    proxy_hide_header Set-Cookie;
+    # We are _leaving_ S3 to provide Content-Type and Accept-Ranges
+    # headers, which are the two remaining headers which nginx would
+    # also pass through from the first response.  Django explicitly
+    # unsets the former, and does not set the latter.
+
+    # nginx does its own DNS resolution, which is necessary here to
+    # resolve the IP of the S3 server.  Point it at the local caching
+    # systemd resolved service.  The validity duration is set to match
+    # S3's DNS validity.
+    resolver 127.0.0.53 valid=300s;
+    resolver_timeout 10s;
+
+    proxy_pass $download_url$is_args$args;
+    proxy_cache uploads;
+    # If the S3 response doesn't contain Cache-Control headers (which
+    # we don't expect it to) then we assume they are valid for a very
+    # long time.  The size of the cache is controlled by
+    # `s3_disk_cache_size` and read frequency, set via
+    # `s3_cache_inactive_time`.
+    proxy_cache_valid 200 1y;
+    # Don't include query parameters in the cache key, since those
+    # include a time-based auth token
+    proxy_cache_key $download_url;
+}
+
+# Internal file-serving
+location /internal/local/uploads {
     internal;
     include /etc/nginx/zulip-include/api_headers;
     add_header Content-Security-Policy "default-src 'none'; style-src 'self' 'unsafe-inline'; img-src 'self'; object-src 'self'; plugin-types application/pdf;";
@@ -6,7 +46,7 @@ location /internal/uploads {
     alias /home/zulip/uploads/files;
 }
 
-location /internal/user_avatars {
+location /internal/local/user_avatars {
     internal;
     include /etc/nginx/zulip-include/headers;
     add_header Content-Security-Policy "default-src 'none' img-src 'self'";
@@ -69,6 +69,18 @@ class zulip::app_frontend_base {
     notify  => Service['nginx'],
   }
 
+  $s3_memory_cache_size = zulipconf('application_server', 's3_memory_cache_size', '1M')
+  $s3_disk_cache_size = zulipconf('application_server', 's3_disk_cache_size', '200M')
+  $s3_cache_inactive_time = zulipconf('application_server', 's3_cache_inactive_time', '30d')
+  file { '/etc/nginx/zulip-include/s3-cache':
+    require => [Package[$zulip::common::nginx], File['/srv/zulip-uploaded-files-cache']],
+    owner   => 'root',
+    group   => 'root',
+    mode    => '0644',
+    content => template('zulip/nginx/s3-cache.template.erb'),
+    notify  => Service['nginx'],
+  }
+
   file { '/etc/nginx/zulip-include/app.d/uploads-internal.conf':
     ensure  => file,
     require => Package[$zulip::common::nginx],
@@ -200,7 +212,12 @@ class zulip::app_frontend_base {
     group  => 'zulip',
     mode   => '0755',
   }
+  file { '/srv/zulip-uploaded-files-cache':
+    ensure => directory,
+    owner  => 'zulip',
+    group  => 'zulip',
+    mode   => '0755',
+  }
   file { '/var/log/zulip/queue_error':
     ensure => directory,
     owner  => 'zulip',
@@ -0,0 +1,6 @@
+# This cache is only used if S3 file storage is configured.
+proxy_cache_path /srv/zulip-uploaded-files-cache
+    levels=1:2
+    keys_zone=uploads:<%= @s3_memory_cache_size %>
+    inactive=<%= @s3_cache_inactive_time %>
+    max_size=<%= @s3_disk_cache_size %>;
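It can be handy to see what this ERB template produces once Puppet substitutes the `zulipconf` defaults ('1M', '200M', '30d'). A Python sketch of that substitution — the real rendering is done by Puppet's ERB engine, not this code:

```python
# Simulate rendering the s3-cache ERB template with the zulipconf
# defaults from the Puppet manifest above.
template = (
    "proxy_cache_path /srv/zulip-uploaded-files-cache\n"
    "    levels=1:2\n"
    "    keys_zone=uploads:<%= @s3_memory_cache_size %>\n"
    "    inactive=<%= @s3_cache_inactive_time %>\n"
    "    max_size=<%= @s3_disk_cache_size %>;"
)
defaults = {
    "@s3_memory_cache_size": "1M",
    "@s3_disk_cache_size": "200M",
    "@s3_cache_inactive_time": "30d",
}
rendered = template
for var, value in defaults.items():
    rendered = rendered.replace(f"<%= {var} %>", value)
print(rendered)
```

The resulting `proxy_cache_path` directive defines the `uploads` cache zone referenced by `proxy_cache uploads;` in the location block above.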
@@ -12,9 +12,11 @@ server {
 }
 <% end -%>
 
+include /etc/nginx/zulip-include/s3-cache;
 include /etc/nginx/zulip-include/upstreams;
 include /etc/zulip/nginx_sharding_map.conf;
+
 
 server {
 <% if @nginx_http_only -%>
     listen <%= @nginx_listen_port %>;
@@ -1,3 +1,4 @@
+include /etc/nginx/zulip-include/s3-cache;
 include /etc/nginx/zulip-include/upstreams;
 include /etc/zulip/nginx_sharding_map.conf;
 
@@ -5,6 +5,7 @@ server {
     return 301 https://$server_name$request_uri;
 }
 
+include /etc/nginx/zulip-include/s3-cache;
 include /etc/nginx/zulip-include/upstreams;
 include /etc/zulip/nginx_sharding_map.conf;
 
@@ -91,7 +91,7 @@ def upload_image_to_s3(
 )
 
 
-def get_signed_upload_url(path: str, download: bool = False) -> str:
+def get_signed_upload_url(path: str) -> str:
     client = boto3.client(
         "s3",
         aws_access_key_id=settings.S3_KEY,
@@ -99,16 +99,13 @@ def get_signed_upload_url(path: str, download: bool = False) -> str:
         region_name=settings.S3_REGION,
         endpoint_url=settings.S3_ENDPOINT_URL,
     )
-    params = {
-        "Bucket": settings.S3_AUTH_UPLOADS_BUCKET,
-        "Key": path,
-    }
-    if download:
-        params["ResponseContentDisposition"] = "attachment"
-
     return client.generate_presigned_url(
         ClientMethod="get_object",
-        Params=params,
+        Params={
+            "Bucket": settings.S3_AUTH_UPLOADS_BUCKET,
+            "Key": path,
+        },
         ExpiresIn=SIGNED_UPLOAD_URL_DURATION,
         HttpMethod="GET",
     )
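The presigned URL produced above is later handed to nginx as an `/internal/s3/...` path (the scheme is stripped), and the nginx regex `^/internal/s3/([^/]+)/(.*)` splits it back into a Host header and an object path, with query parameters traveling separately via `$args`. A sketch of both directions — the URL shown is a made-up example of an S3 presigned URL, not real output:

```python
import re

# Django side: turn a presigned S3 URL into the internal nginx path.
signed_url = (
    "https://example-bucket.s3.amazonaws.com/path/to/file.png"
    "?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=60"
)
internal_path = "/internal/s3/" + signed_url[len("https://") :]

# nginx side: the location regex splits the path back into $1 (used as
# the Host header) and $2 (the object path); the query string is not
# part of the cache key, since it holds the time-limited auth token.
path_only = internal_path.split("?", 1)[0]
m = re.match(r"^/internal/s3/([^/]+)/(.*)", path_only)
assert m is not None
host, key = m.groups()
print(host)  # example-bucket.s3.amazonaws.com
print(key)   # path/to/file.png
```

Excluding the query string from `proxy_cache_key` is what lets a later request (with a fresh signature) still hit the cached copy of the same object.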
@@ -949,7 +949,7 @@ class FileUploadTest(UploadSerializeMixin, ZulipTestCase):
         test_run, worker = os.path.split(os.path.dirname(settings.LOCAL_UPLOADS_DIR))
         self.assertEqual(
             response["X-Accel-Redirect"],
-            "/internal/uploads/" + fp_path + "/" + name_str_for_test,
+            "/internal/local/uploads/" + fp_path + "/" + name_str_for_test,
         )
         if content_disposition != "":
             self.assertIn("attachment;", response["Content-disposition"])
@@ -1882,7 +1882,7 @@ class LocalStorageTest(UploadSerializeMixin, ZulipTestCase):
         result = self.client_get(url)
         self.assertEqual(result.status_code, 200)
         internal_redirect_path = urlparse(url).path.replace(
-            "/user_avatars/", "/internal/user_avatars/"
+            "/user_avatars/", "/internal/local/user_avatars/"
         )
         self.assertEqual(result["X-Accel-Redirect"], internal_redirect_path)
         self.assertEqual(b"", result.content)
@@ -2098,20 +2098,31 @@ class S3Test(ZulipTestCase):
         uri = response_dict["uri"]
         self.assertEqual(base, uri[: len(base)])
 
+        # In development, this is just a redirect
         response = self.client_get(uri)
         redirect_url = response["Location"]
         path = urllib.parse.urlparse(redirect_url).path
         assert path.startswith("/")
-        key = path[1:]
+        key = path[len("/") :]
         self.assertEqual(b"zulip!", bucket.Object(key).get()["Body"].read())
 
+        prefix = f"/internal/s3/{settings.S3_AUTH_UPLOADS_BUCKET}.s3.amazonaws.com/"
+        with self.settings(DEVELOPMENT=False):
+            response = self.client_get(uri)
+            redirect_url = response["X-Accel-Redirect"]
+            path = urllib.parse.urlparse(redirect_url).path
+            assert path.startswith(prefix)
+            key = path[len(prefix) :]
+            self.assertEqual(b"zulip!", bucket.Object(key).get()["Body"].read())
 
         # Check the download endpoint
         download_uri = uri.replace("/user_uploads/", "/user_uploads/download/")
-        response = self.client_get(download_uri)
-        redirect_url = response["Location"]
+        with self.settings(DEVELOPMENT=False):
+            response = self.client_get(download_uri)
+            redirect_url = response["X-Accel-Redirect"]
         path = urllib.parse.urlparse(redirect_url).path
-        assert path.startswith("/")
-        key = path[1:]
+        assert path.startswith(prefix)
+        key = path[len(prefix) :]
         self.assertEqual(b"zulip!", bucket.Object(key).get()["Body"].read())
 
         # Now try the endpoint that's supposed to return a temporary URL for access
@@ -2119,14 +2130,25 @@
         result = self.client_get("/json" + uri)
         data = self.assert_json_success(result)
         url_only_url = data["url"]
-        path = urllib.parse.urlparse(url_only_url).path
-        assert path.startswith("/")
-        key = path[1:]
 
         self.assertNotEqual(url_only_url, uri)
         self.assertIn("user_uploads/temporary/", url_only_url)
         self.assertTrue(url_only_url.endswith("zulip.txt"))
         # The generated URL has a token authorizing the requestor to access the file
         # without being logged in.
         self.logout()
-        self.client_get(url_only_url)
+        with self.settings(DEVELOPMENT=False):
+            response = self.client_get(url_only_url)
+        redirect_url = response["X-Accel-Redirect"]
+        path = urllib.parse.urlparse(redirect_url).path
+        assert path.startswith(prefix)
+        key = path[len(prefix) :]
         self.assertEqual(b"zulip!", bucket.Object(key).get()["Body"].read())
 
+        # Note: Depending on whether the calls happened in the same
+        # second (resulting in the same timestamp+signature),
+        # url_only_url may or may not equal redirect_url.
         # The original uri shouldn't work when logged out:
-        result = self.client_get(uri)
-        self.assertEqual(result.status_code, 403)
+        with self.settings(DEVELOPMENT=False):
+            result = self.client_get(uri)
+            self.assertEqual(result.status_code, 403)
 
         hamlet = self.example_user("hamlet")
         self.subscribe(hamlet, "Denmark")
@@ -88,42 +88,40 @@ def internal_nginx_redirect(internal_path: str) -> HttpResponse:
     return response
 
 
-def serve_s3(
-    request: HttpRequest, url_path: str, url_only: bool, download: bool = False
-) -> HttpResponse:
-    url = get_signed_upload_url(url_path, download=download)
-    if url_only:
-        return json_success(request, data=dict(url=url))
+def serve_s3(request: HttpRequest, path_id: str, download: bool = False) -> HttpResponse:
+    url = get_signed_upload_url(path_id)
+    assert url.startswith("https://")
 
-    return redirect(url)
+    if settings.DEVELOPMENT:
+        # In development, we do not have the nginx server to offload
+        # the response to; serve a redirect to the short-lived S3 URL.
+        # This means the content cannot be cached by the browser, but
+        # this is acceptable in development.
+        return redirect(url)
+
+    response = internal_nginx_redirect("/internal/s3/" + url[len("https://") :])
+    patch_disposition_header(response, path_id, download)
+    patch_cache_control(response, private=True, immutable=True)
+    return response
 
 
-def serve_local(
-    request: HttpRequest, path_id: str, url_only: bool, download: bool = False
-) -> HttpResponseBase:
+def serve_local(request: HttpRequest, path_id: str, download: bool = False) -> HttpResponseBase:
     assert settings.LOCAL_FILES_DIR is not None
     local_path = os.path.join(settings.LOCAL_FILES_DIR, path_id)
     assert_is_local_storage_path("files", local_path)
     if not os.path.isfile(local_path):
         return HttpResponseNotFound("<p>File not found</p>")
 
-    if url_only:
-        url = generate_unauthed_file_access_url(path_id)
-        return json_success(request, data=dict(url=url))
-
-    mimetype, encoding = guess_type(local_path)
-    attachment = download or mimetype not in INLINE_MIME_TYPES
-
     if settings.DEVELOPMENT:
         # In development, we do not have the nginx server to offload
         # the response to; serve it directly ourselves.
         # FileResponse handles setting Content-Disposition, etc.
-        response: HttpResponseBase = FileResponse(open(local_path, "rb"), as_attachment=attachment)
+        response: HttpResponseBase = FileResponse(open(local_path, "rb"), as_attachment=download)
         patch_cache_control(response, private=True, immutable=True)
         return response
 
-    response = internal_nginx_redirect(quote(f"/internal/uploads/{path_id}"))
-    patch_disposition_header(response, local_path, attachment)
+    response = internal_nginx_redirect(quote(f"/internal/local/uploads/{path_id}"))
+    patch_disposition_header(response, local_path, download)
     patch_cache_control(response, private=True, immutable=True)
     return response
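Both `serve_local` and `serve_s3` hand the actual file-serving off to nginx via `internal_nginx_redirect`: Django returns an empty response carrying an `X-Accel-Redirect` header, which nginx intercepts and replaces with the body of the named internal location, while Django's caching headers survive onto the final response. A framework-free sketch of that contract — the header names are real, but the dict-based response is a stand-in for Django's `HttpResponse`:

```python
# Minimal model of the X-Accel-Redirect handoff between Django and nginx.
def internal_nginx_redirect_sketch(internal_path: str) -> dict:
    """Return headers for an empty response that nginx will intercept."""
    return {
        # nginx consumes this header and serves the internal location's
        # body (a local file, or the cached/proxied S3 object) instead.
        "X-Accel-Redirect": internal_path,
        # Set by Django (patch_cache_control in the diff above), and kept
        # on the response nginx ultimately sends, so browsers cache the
        # immutable upload contents locally.
        "Cache-Control": "private, immutable",
    }

headers = internal_nginx_redirect_sketch("/internal/local/uploads/2/ab/file.png")
print(headers["X-Accel-Redirect"])  # /internal/local/uploads/2/ab/file.png
```

This split is what keeps access control in Django while letting nginx (and its S3 cache) do all of the byte-shuffling.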
@@ -170,25 +168,32 @@ def serve_file(
         return HttpResponseNotFound(_("<p>File not found.</p>"))
     if not is_authorized:
         return HttpResponseForbidden(_("<p>You are not authorized to view this file.</p>"))
+    if url_only:
+        url = generate_unauthed_file_access_url(path_id)
+        return json_success(request, data=dict(url=url))
+
+    mimetype, encoding = guess_type(path_id)
+    download = download or mimetype not in INLINE_MIME_TYPES
+
     if settings.LOCAL_UPLOADS_DIR is not None:
-        return serve_local(request, path_id, url_only, download=download)
-
-    return serve_s3(request, path_id, url_only, download=download)
+        return serve_local(request, path_id, download=download)
+    else:
+        return serve_s3(request, path_id, download=download)
 
 
-LOCAL_FILE_ACCESS_TOKEN_SALT = "local_file_"
+USER_UPLOADS_ACCESS_TOKEN_SALT = "user_uploads_"
 
 
 def generate_unauthed_file_access_url(path_id: str) -> str:
-    signed_data = TimestampSigner(salt=LOCAL_FILE_ACCESS_TOKEN_SALT).sign(path_id)
+    signed_data = TimestampSigner(salt=USER_UPLOADS_ACCESS_TOKEN_SALT).sign(path_id)
     token = base64.b16encode(signed_data.encode()).decode()
 
     filename = path_id.split("/")[-1]
-    return reverse("local_file_unauthed", args=[token, filename])
+    return reverse("file_unauthed_from_token", args=[token, filename])
 
 
-def get_local_file_path_id_from_token(token: str) -> Optional[str]:
-    signer = TimestampSigner(salt=LOCAL_FILE_ACCESS_TOKEN_SALT)
+def get_file_path_id_from_token(token: str) -> Optional[str]:
+    signer = TimestampSigner(salt=USER_UPLOADS_ACCESS_TOKEN_SALT)
     try:
         signed_data = base64.b16decode(token).decode()
         path_id = signer.unsign(signed_data, max_age=timedelta(seconds=60))
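The temporary-URL tokens above are Django `TimestampSigner` output, base16-encoded to keep them URL-safe, and rejected after 60 seconds. A stdlib-only sketch of the same shape — the HMAC construction and key handling here are simplified stand-ins for Django's implementation, not its actual wire format:

```python
import base64
import hashlib
import hmac
import time

SECRET = b"server-secret"   # stand-in for Django's SECRET_KEY
SALT = b"user_uploads_"     # the salt chosen in the diff above

def sign_path(path_id: str) -> str:
    # Timestamped payload, HMAC-signed, then base16-encoded (URL-safe).
    payload = f"{path_id}:{int(time.time())}"
    sig = hmac.new(SALT + SECRET, payload.encode(), hashlib.sha256).hexdigest()[:16]
    return base64.b16encode(f"{payload}:{sig}".encode()).decode()

def unsign_token(token: str, max_age: int = 60) -> str:
    payload, _, sig = base64.b16decode(token).decode().rpartition(":")
    expected = hmac.new(SALT + SECRET, payload.encode(), hashlib.sha256).hexdigest()[:16]
    if not hmac.compare_digest(sig, expected):
        raise ValueError("Invalid token")
    path_id, _, ts = payload.rpartition(":")
    if time.time() - int(ts) > max_age:
        raise ValueError("Token expired")
    return path_id

token = sign_path("2/ab/file.png")
print(unsign_token(token))  # 2/ab/file.png
```

Because the salt is baked into the signature, renaming it from `local_file_` to `user_uploads_` invalidates outstanding tokens; as the commit message notes, those are only valid for 60s, so the disruption is minimal.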
@@ -198,14 +203,22 @@ def get_local_file_path_id_from_token(token: str) -> Optional[str]:
     return path_id
 
 
-def serve_local_file_unauthed(request: HttpRequest, token: str, filename: str) -> HttpResponseBase:
-    path_id = get_local_file_path_id_from_token(token)
+def serve_file_unauthed_from_token(
+    request: HttpRequest, token: str, filename: str
+) -> HttpResponseBase:
+    path_id = get_file_path_id_from_token(token)
     if path_id is None:
         raise JsonableError(_("Invalid token"))
     if path_id.split("/")[-1] != filename:
         raise JsonableError(_("Invalid filename"))
 
-    return serve_local(request, path_id, url_only=False)
+    mimetype, encoding = guess_type(path_id)
+    download = mimetype not in INLINE_MIME_TYPES
+
+    if settings.LOCAL_UPLOADS_DIR is not None:
+        return serve_local(request, path_id, download=download)
+    else:
+        return serve_s3(request, path_id, download=download)
 
 
 def serve_local_avatar_unauthed(request: HttpRequest, path: str) -> HttpResponseBase:
@@ -232,7 +245,7 @@ def serve_local_avatar_unauthed(request: HttpRequest, path: str) -> HttpResponse
     if settings.DEVELOPMENT:
         response: HttpResponseBase = FileResponse(open(local_path, "rb"))
     else:
-        response = internal_nginx_redirect(quote(f"/internal/user_avatars/{path}"))
+        response = internal_nginx_redirect(quote(f"/internal/local/user_avatars/{path}"))
 
     # We do _not_ mark the contents as immutable for caching purposes,
     # since the path for avatar images is hashed only by their user-id
@@ -165,9 +165,9 @@ from zerver.views.unsubscribe import email_unsubscribe
 from zerver.views.upload import (
     serve_file_backend,
     serve_file_download_backend,
+    serve_file_unauthed_from_token,
     serve_file_url_backend,
     serve_local_avatar_unauthed,
-    serve_local_file_unauthed,
     upload_file_backend,
 )
 from zerver.views.user_groups import (
@@ -639,8 +639,8 @@ urls += [
 urls += [
     path(
         "user_uploads/temporary/<token>/<filename>",
-        serve_local_file_unauthed,
-        name="local_file_unauthed",
+        serve_file_unauthed_from_token,
+        name="file_unauthed_from_token",
     ),
     rest_path(
         "user_uploads/download/<realm_id_str>/<path:filename>",