django.security.DisallowedHost is only one of a set of exceptions that
are "SuspiciousOperation" exceptions; all return a 400 to the user
when they bubble up[1]; all of them are uninteresting to Sentry.
While they may, in bulk, indicate some sort of misconfiguration of
the application, such a failure should be detected via the increase
in 400s, not via these exceptions, which are individually
uninteresting.
While all of these are subclasses of SuspiciousOperation, we enumerate
them explicitly for a number of reasons:
- There is no one logger we can ignore that captures all of them.
Each of the errors uses its own logger, and django does not supply
a `django.security` logger that all of them feed into.
- Nor can we catch this by examining the exception object. The
SuspiciousOperation exception is raised too early in the stack for
us to catch the exception by way of middleware and check
`isinstance`. But at the Sentry level, in `add_context`, it is no
longer an exception but a log entry, and as such we have no
`isinstance` that can be applied; we only know the logger name.
- Finally, there is the semantic argument that while we have decided
to ignore this set of security warnings, we _may_ wish to log new
ones that may be added at some point in the future. It is better
to opt into those ignores than to blanket ignore all messages from
the security logger.
This moves the DisallowedHost `ignore_logger` to be adjacent to its
kin, and not on the middleware that may trigger it. Consistency is
more important than locality in this case.
Of these, the DisallowedHost logger is left as the only one that is
explicitly ignored in the LOGGING configuration in
`computed_settings.py`; it is by far the most frequent, and the least
likely to be malicious or impactful (unlike, say, RequestDataTooBig).
[1] https://docs.djangoproject.com/en/3.0/ref/exceptions/#suspiciousoperation
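A minimal sketch of the opt-in approach described above, assuming
sentry_sdk's logging integration; the list of loggers here is
illustrative rather than exhaustive:

    from sentry_sdk.integrations.logging import ignore_logger

    # Each SuspiciousOperation subclass logs to its own
    # django.security.* logger, so each one must be ignored explicitly.
    for logger_name in (
        "django.security.DisallowedHost",
        "django.security.RequestDataTooBig",
        "django.security.SuspiciousFileOperation",
    ):
        ignore_logger(logger_name)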
The OS upgrade paths which go through 2.1 do not call
`upgrade-zulip-stage-2` with `--audit-fts-indexes` because that flag
was added in 3.0.
Add an explicit step to do this audit after the 3.0 upgrade. Stating
it as another command to run, rather than attempting to tell readers
to add it to the `upgrade-zulip` call that we link to, seems easiest,
since that documentation does not dictate whether they should upgrade
to a release or from the tip of git.
We do not include a step describing this for the Trusty -> Xenial
upgrade, because the last step already chains into Xenial -> Bionic,
which itself describes auditing the indexes.
Fixes #15877.
Only Zulip 3.0 and above support the `--audit-fts-indexes` option to
`upgrade-zulip-stage-2`; saying "same as Bionic to Focal" on the
other steps, which are for Zulip 2.1 or 2.0, will result in errors.
Provide the full text of the updated `upgrade-zulip-stage-2` call in
step 5 for all non-3.0 upgrades. For Trusty to Xenial and Stretch to
Buster, we do not say "Same as Xenial to Bionic", because readers
would likely not notice that the Xenial to Bionic step does not
itself read "Same as Bionic to Focal."
These weren’t wrong since orjson.JSONDecodeError subclasses
json.JSONDecodeError which subclasses ValueError, but the more
specific ones express the intention more clearly.
(ujson raised ValueError directly, as did json in Python 2.)
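A sketch of the kind of change this describes; the surrounding
function is hypothetical, not taken from the diff:

    import orjson

    def parse_event(raw: bytes) -> object:
        try:
            return orjson.loads(raw)
        except orjson.JSONDecodeError:
            # Previously `except ValueError:` -- equivalent, since
            # orjson.JSONDecodeError subclasses json.JSONDecodeError,
            # which subclasses ValueError, but less explicit in intent.
            return None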
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This adds 'user_id' to the simple success response for the
'POST /users' API endpoint, to make it convenient for API clients to
get details about users they have just created. Appropriate changes
have been made in the docs and test_users.py.
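For illustration, a success response would now look roughly like
this; the user_id value is made up, and "result"/"msg" are Zulip's
standard success envelope:

    {
        "msg": "",
        "result": "success",
        "user_id": 25
    }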
Fixes #16072.
These escapes are valid YAML 1.2 (for JSON compatibility) but not
valid YAML 1.1, which means they don’t work with the faster
yaml.CSafeLoader that we’d like to transition to.
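For context, the faster loader is used roughly as below (a sketch
assuming libyaml is installed, which is what provides
yaml.CSafeLoader; the filename is illustrative):

    import yaml

    with open("zerver/openapi/zulip.yaml") as f:
        # CSafeLoader is backed by libyaml, which implements YAML 1.1,
        # so YAML 1.2-only escapes in the file make it fail to parse.
        data = yaml.load(f, Loader=yaml.CSafeLoader)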
Signed-off-by: Anders Kaseorg <anders@zulip.com>
datetime objects are not ordinarily JSON serializable. While both
ujson and orjson have special cases to serialize datetime objects,
they do it in different ways. So we want to fix the post-processing
code to do its job.
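A minimal sketch of the kind of explicit post-processing this refers
to; the helper name and the choice of conversion are illustrative,
not the actual Zulip code:

    from datetime import datetime, timezone

    import orjson

    def convert_datetimes(obj: object) -> object:
        # Normalize datetimes ourselves, instead of relying on whichever
        # special case the JSON library happens to implement.
        if isinstance(obj, datetime):
            return obj.timestamp()
        if isinstance(obj, dict):
            return {key: convert_datetimes(value) for key, value in obj.items()}
        if isinstance(obj, list):
            return [convert_datetimes(value) for value in obj]
        return obj

    payload = orjson.dumps(convert_datetimes({"sent": datetime.now(timezone.utc)}))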
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Also add the helper functions needed.
`select_item_via_typeahead` has been ported from casper and is
exactly the same in puppeteer, as I couldn't find any better way to
accomplish this.
I had a misconception about the hidden and visible options, and
thought `hidden: false` was the same as `visible: true`, and vice
versa.
But `hidden: false` or `visible: false` does nothing more than check
that the selector exists.
Also worth mentioning: the `visible: false` usages were fixed in
33e19fa7d1
It is more suited for `process_request`, since it should stop
execution of the request if the domain is invalid. This code was
likely added as a process_response (in ea39fb2556) because there was
already a process_response at the time (added 7e786d5426, and no
longer necessary since dce6b4a40f).
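A minimal sketch of the difference, not the actual
HostDomainMiddleware implementation; the host check here is a
placeholder:

    from django.http import HttpResponseNotFound
    from django.utils.deprecation import MiddlewareMixin

    class HostDomainMiddleware(MiddlewareMixin):
        def process_request(self, request):
            # Returning a response here stops the request immediately:
            # the view and any inner middleware never run.  A
            # process_response hook, by contrast, only runs after the
            # view has already done its work.
            if not self.host_is_valid(request):
                return HttpResponseNotFound("Invalid realm")

        def host_is_valid(self, request):
            # Placeholder for the real subdomain/realm lookup.
            return True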
It quiets an unnecessary warning when logging in at a non-existent
realm.
This stops performing unnecessary work when we are going to throw it
away and return a 404. The edge case is a request that _creates_ a
realm and is made using the URL of that new realm; this change would
reject the request before the realm creation occurs. While this does
arise in tests, the tests do not reflect reality -- real requests to
/accounts/register/ are made via POST to the same (default) realm,
redirected there from `confirm-preregistrationuser`. The tests are
adjusted to reflect real behavior.
Tweaked by tabbott to add a block comment in HostDomainMiddleware.
This redirect was never effective -- because of the
HostDomainMiddleware, all requests to invalid domains have their
actual results thrown away, and replaced by an "Invalid realm" 404.
These lines are nonetheless _covered_ by coverage, because they do
run; the redirect is simply ineffective. This can be seen in the test
that was added with them, in c8edbae21c, which actually checks the
contents for the invalid-realm wording, not the "find your accounts"
wording.
The exception trace only goes from where the exception was thrown up
to where the `logging.exception` call is; any context as to where
_that_ was called from is lost, unless `stack_info` is passed as well.
Having the stack is particularly useful for Sentry exceptions, which
gain the full stack trace.
Add `stack_info=True` on all `logging.exception` calls with a
non-trivial stack; we omit `wsgi.py`. Adjusts tests to match.
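For illustration, the difference in a single call; this is a
hypothetical example, not a call site from the diff:

    import logging

    def process() -> None:
        try:
            raise ValueError("boom")
        except ValueError:
            # The traceback alone only covers raise -> this handler;
            # stack_info=True also records how we got to process().
            logging.exception("Failed to process", stack_info=True)

    process()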
consume_time_seconds wasn't defined before the `try` block, so when
a BaseException that isn't a subclass of Exception is thrown, the
`finally:` block could be entered with it still undefined.
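A minimal sketch of the failure mode and the fix, with the worker's
actual logic reduced to a placeholder:

    import time

    def consume_message() -> None:
        # Stand-in for the worker's consume logic, interrupted mid-run.
        raise KeyboardInterrupt

    consume_time_seconds = None  # the fix: define the name up front
    start = time.time()
    try:
        consume_message()
        consume_time_seconds = time.time() - start
    except Exception:
        pass  # KeyboardInterrupt is a BaseException, not an Exception
    finally:
        # Before the fix, this could run with consume_time_seconds still
        # undefined, raising NameError and masking the KeyboardInterrupt.
        print("consume_time_seconds =", consume_time_seconds)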
By default, `requests` has no timeout on requests, which can lead to
waiting indefinitely. Add a half-second timeout on these; this is
applied _inside_ each retry, not overall -- that is, with retries any
of these functions may take a total of 1.5s.
Use the `no_proxy` proxy, which explicitly disables proxy usage for
particular hosts. This is a slightly cleaner solution than ignoring
all of the environment, as removing proxies is specifically what we
are attempting to accomplish.
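A sketch of the two settings in isolation; the URL is a placeholder,
not the actual Django-to-Tornado endpoint:

    import requests

    response = requests.get(
        "http://127.0.0.1:9993/internal/health",  # placeholder URL
        # Applied per attempt, so with retries each attempt gets its own
        # half-second budget rather than sharing one overall deadline.
        timeout=0.5,
        # Bypass any configured proxies for this host specifically,
        # instead of scrubbing proxy settings from the whole environment.
        proxies={"no_proxy": "127.0.0.1"},
    )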
The change in #2764 provided a better error message on one of the
three calls into Tornado, but left the other two with the old error
message. `raise_for_status` was used on two out of three.
Use a custom HTTPAdapter to apply this pattern to all requests from
Django to Tornado.
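A sketch of the adapter pattern being described; the class name,
default timeout, and retry policy are illustrative assumptions, not
the actual Zulip code:

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util import Retry

    class TornadoAdapter(HTTPAdapter):
        def send(self, request, **kwargs):
            # Apply a default per-attempt timeout to every request sent
            # through this adapter, unless the caller overrides it.
            if kwargs.get("timeout") is None:
                kwargs["timeout"] = 0.5
            response = super().send(request, **kwargs)
            # One place to turn HTTP errors into exceptions with a useful
            # message, instead of calling raise_for_status at only some
            # call sites.
            response.raise_for_status()
            return response

    session = requests.Session()
    session.mount("http://", TornadoAdapter(max_retries=Retry(total=3)))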