Currently, it handles two hook types: 'pre-create' (to verify that the
user is authenticated and the file size is within the limit) and
'pre-finish' (which creates an attachment row).
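A minimal sketch of what the Django side of such a hook handler might
look like (the view name and the exact hook payload fields here are
assumptions for illustration, not the actual implementation):

```python
# Hypothetical Django view dispatching tusd HTTP hooks; all names and the
# exact payload fields are illustrative assumptions.
import json

from django.http import HttpRequest, HttpResponse, JsonResponse

MAX_FILE_UPLOAD_SIZE = 25 * 1024 * 1024  # assumed per-file limit

def tusd_hook(request: HttpRequest) -> HttpResponse:
    hook = json.loads(request.body)
    if hook["Type"] == "pre-create":
        # tusd forwards the end-user's credentials, so this check behaves
        # exactly like any other authenticated end-user request.
        if not request.user.is_authenticated:
            return JsonResponse({"HTTPResponse": {"StatusCode": 401}})
        if hook["Event"]["Upload"]["Size"] > MAX_FILE_UPLOAD_SIZE:
            return JsonResponse({"HTTPResponse": {"StatusCode": 413}})
        return JsonResponse({})
    if hook["Type"] == "pre-finish":
        # ... create the Attachment row for the finished upload ...
        return JsonResponse({})
    return JsonResponse({"HTTPResponse": {"StatusCode": 400}})
```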
No secret is shared between Django and tusd for authentication of the
hooks endpoints, because none is necessary -- tusd forwards the
end-user's credentials, and the hook checks them like it would any
end-user request. An end-user gaining access to the endpoint would be
able to do no more harm than via tusd or the normal file upload API.
Regardless, the previous commit has restricted access to the endpoint
at the nginx layer.
Co-authored-by: Brijmohan Siyag <brijsiyag@gmail.com>
Zopfli[^1] performs very good, but time-intensive, zlib compression.
It is hence only suitable for pre-compressing objects, not for
on-the-fly compression.
Use a webpack plugin to write pre-compressed versions of JS and CSS
assets using Zopfli, and configure nginx to serve those assets when
`Accept-Encoding: gzip` is provided.
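The equivalent post-build step, sketched in Python rather than as the
webpack plugin itself (the output path is illustrative, and it assumes
the py-zopfli binding's `zopfli.gzip.compress`):

```python
# Sketch of pre-compressing built assets with Zopfli so that nginx, with
# `gzip_static` enabled, can serve the .gz siblings to gzip-capable clients.
from pathlib import Path

import zopfli.gzip  # assumes the py-zopfli binding

ASSET_DIR = Path("static/webpack-bundles")  # illustrative output directory

for asset in ASSET_DIR.rglob("*"):
    if asset.suffix not in (".js", ".css"):
        continue
    data = asset.read_bytes()
    compressed = zopfli.gzip.compress(data)
    # Only keep the pre-compressed sibling if it is actually smaller.
    if len(compressed) < len(data):
        asset.with_name(asset.name + ".gz").write_bytes(compressed)
```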
This reduces the size of the JS and CSS assets on initial pageload
from 1422872 bytes to 1108267 bytes, or about a 22% savings.
[^1]: https://github.com/google/zopfli
The default compression level is 1; increasing this to 3 takes a
small amount more CPU time (single-digit milliseconds on multi-MB
transfers), but results in a small but noticeable (4-7%) improvement
in the compression of JSON content.
Assuming a 25 megabit connection (the current average data rate for
cell phones in the U.S.), a 2MB file which is shrunk an additional 4%
saves approximately 25 milliseconds of transfer time; thus the
additional few milliseconds of CPU time are well worth it. For faster
connections (e.g. 100 megabit), the tradeoff is more or less a wash.
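For concreteness, the arithmetic behind that estimate (illustrative
numbers only):

```python
# Back-of-the-envelope check of the transfer-time savings claimed above.
file_bytes = 2_000_000        # a ~2MB response
extra_compression = 0.04      # 4% better compression at level 3
link_bps = 25 * 1_000_000     # 25 megabit cellular connection

saved_seconds = (file_bytes * 8 * extra_compression) / link_bps
print(f"{saved_seconds * 1000:.1f} ms saved")  # 25.6 ms, vs. single-digit ms of extra CPU
```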
There's no need for sharding, but this allows one to spend a bit of
extra memory to reduce image-processing latency when bursts of images
are uploaded at once.
A new table is created to track which attachments (by path_id) are
images, their image metadata, and which thumbnails have been
generated for them. Using path_id as the effective primary key lets
us ignore whether or not the attachment is archived, avoiding some
foreign-key messes.
A new worker is added to observe events when rows are added to this
table, and to generate and store thumbnails for those images in
differing sizes and formats.
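A minimal sketch of what such a tracking table might look like (the
model and field names here are hypothetical, not the actual schema):

```python
# Hypothetical model tracking which attachments are images, their metadata,
# and which thumbnails have been rendered; keyed by path_id rather than a
# foreign key, so archived and unarchived attachments are treated alike.
from django.db import models

class ImageAttachment(models.Model):
    path_id = models.TextField(unique=True)
    original_width_px = models.IntegerField()
    original_height_px = models.IntegerField()
    frames = models.IntegerField()  # >1 for animated images
    # List of size/format variants which have been generated so far.
    thumbnail_metadata = models.JSONField(default=list)
```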
Some S3 backends (e.g. garage or minio behind caddy) are unable to
respond to TLS requests that only have the Host header set. This makes
sure those configurations are supported going forward.
Clients making requests to Zulip with an `Authorization: Basic ...`
header, for an upload stored in S3, have all of their request headers
passed along to the S3 backend -- causing errors of the form:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>InvalidArgument</Code>
<Message>Only one auth mechanism allowed; only the X-Amz-Algorithm
query parameter, Signature query string parameter or the
Authorization header should be specified</Message>
<ArgumentName>Authorization</ArgumentName>
<ArgumentValue>Basic ...</ArgumentValue>
<RequestId>...</RequestId>
<HostId>...</HostId>
</Error>
```
Strip off all request headers which AWS reports that S3 may read[^1].
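An illustrative version of that filtering step (the helper name is an
assumption, and the set below is only a representative subset of the
headers AWS documents):

```python
# Drop any client-supplied header that S3 itself might interpret before the
# request is forwarded to the storage backend; see the footnote for AWS's
# full list of common request headers.
S3_REQUEST_HEADERS = {
    "authorization",
    "content-md5",
    "date",
    "expect",
    "x-amz-content-sha256",
    "x-amz-date",
    "x-amz-security-token",
}

def strip_s3_request_headers(headers: dict[str, str]) -> dict[str, str]:
    return {
        name: value
        for name, value in headers.items()
        if name.lower() not in S3_REQUEST_HEADERS
    }
```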
Fixes: #30180.
[^1]: https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonRequestHeaders.html
This adds `--automated` and `--no-automated` flags to all Zulip
management commands, whose default depends on whether STDIN is a TTY.
This enables cron jobs and supervisor-run commands to continue
reporting to Sentry, while manually-run commands (where reporting to
Sentry provides no value, since the user can already see the errors)
do not.
Note that this only applies to Zulip's own commands -- core Django
commands (e.g. the built-in `./manage.py` subcommands) do not grow
support for `--automated` and will always report exceptions to
Sentry.
`manage.py` subcommands in the `upgrade` and `restart-server` paths
are marked as `--automated`, since those may be run semi-unattended,
and they are useful to log to Sentry.
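A minimal sketch of how such a flag could be wired up on a shared
command base class (the class name and option wiring here are
illustrative, not the actual implementation):

```python
# Hypothetical base class adding --automated/--no-automated to every Zulip
# management command, defaulting based on whether STDIN is a TTY.
import sys

from django.core.management.base import BaseCommand, CommandParser

class ZulipBaseCommand(BaseCommand):
    def add_arguments(self, parser: CommandParser) -> None:
        parser.add_argument(
            "--automated",
            action="store_true",
            # Non-interactive runs (cron, supervisor) default to reporting.
            default=not sys.stdin.isatty(),
            help="Report exceptions from this command to Sentry",
        )
        parser.add_argument(
            "--no-automated",
            dest="automated",
            action="store_false",
            help="Do not report exceptions from this command to Sentry",
        )
```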
This saves us the time of shelling out to a new Python process,
loading all of Django, and printing one value we could just have read
in-process. It is unclear why we ever did it this way.