Similar to the previous commit, we should access request.user only
after it has been initialized, rather than having awkward hasattr
checks.
With updates to the settings comments about LogRequests by tabbott.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
`request.user` gets set in Django's `AuthenticationMiddleware`, which
runs after our `HostDomainMiddleware`.
This makes `hasattr` checks necessary in any code path that uses the
`request.user` attribute. In this case, there are functions in
`context_processors` that get called in the middleware.
Since neither `CsrfMiddleware` nor `HostDomainMiddleware` is required
to run before `AuthenticationMiddleware`, moving it two slots up in
`computed_settings` is sufficient to avoid the `hasattr` checks.
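For illustration, the resulting ordering looks roughly like this; this is
a sketch, not the literal computed_settings list, and the exact class
paths are illustrative:
```
MIDDLEWARE = [
    # ...
    "django.contrib.sessions.middleware.SessionMiddleware",
    # AuthenticationMiddleware now runs before the middlewares below, so
    # request.user is always set by the time they (or the context
    # processors they call) run.
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
    "zerver.middleware.HostDomainMiddleware",
    # ...
]
```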
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
In zilencer.management.commands.populate_db, we assign the value of
settings.CACHES["default"] to `default_cache`.
django-stubs infers `settings.CACHES` to be `Dict[str, object]`. We make
the type specific enough so that we can access `default_cache` as a
dict.
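A minimal sketch of the kind of narrowing involved (not necessarily the
exact annotation used in populate_db):
```
from typing import Any, Dict, cast

from django.conf import settings

# django-stubs types settings.CACHES as Dict[str, object], so narrow the
# "default" entry before treating it as a dict of cache options.
default_cache = cast(Dict[str, Any], settings.CACHES["default"])
backend = default_cache["BACKEND"]
```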
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This was added in 1fded25025, and is not
necessary for standard Zulip installs. While both Host: and
X-Forwarded-Host: are nominally untrusted, there is no reason to
complicate the deployment by defaulting it on.
We previously forked tornado.autoreload to work around a problem where
it would crash if you introduce a syntax error and not recover if you
fix it (https://github.com/tornadoweb/tornado/issues/2398).
A much more maintainable workaround for that issue, at least in
current Tornado, is to use tornado.autoreload as the main module.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit changes the invite API to accept invitation
expiration time in minutes since we are going to add a
custom option in further commits which would allow a user
to set expiration time in minutes, hours and weeks as well.
This cache was added in da33b72848 to serve as a replacement for the
durable database cache, in development; the previous commit has
switched that to be the non-durable memcached backend.
The special case for an "in-memory" cache in development provides
little benefit over memcached -- `./tools/run-dev.py` flushes memcached
on every startup anyway. The behaviour differs slightly, in that if the
codepath is changed and `run-dev` restarts Django, the cache is not
cleared. This seems an unlikely occurrence, however, and the code
cleanup from its removal is worth it.
Failure to pull the default "zulip" value here can lead to
accidentally applying a `postgres_password` value which is unnecessary
and may never work.
For consistency, always skip password auth attempts for the "zulip"
user on localhost, even if the password is set. This mirrors the
behavior of `process_fts_updates`.
It’s built in to Jinja2 as of 2.9. Fixes “DeprecationWarning: The
'autoescape' extension is deprecated and will be removed in Jinja
3.1. This is built in now.”
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We recently ran into a payload in production that didn't contain
an event type at all. A payload where we can't figure out the event
type is quite rare. Instead of letting these payloads run amok, we
should raise a more informative exception for such unusual payloads.
If we encounter too many of these, then we can choose to conduct a
deeper investigation on a case-by-case basis.
With some changes by Tim Abbott.
Having wantMessagesSigned=True globally means that it's also applied by
python3-saml to regular authentication SAMLResponses, making it require
the response itself to be signed. This is an issue because some IdPs
(e.g. AzureAD) instead sign specifically the assertions inside the
SAMLResponse by default. That approach is also secure, and thus we
generally want to accept it.
Without this, the setting of wantMessagesSigned=True globally
in 4105ccdb17 causes a
regression for deployments that have already set up SAML with providers
such as AzureAD, making Zulip stop accepting the SAMLResponses.
Testing that this new logic works is handled by
test_saml_idp_initiated_logout_invalid_signature, which verifies that a
LogoutRequest without signature will be rejected.
The warning was fixed in python-jose 3.3.0, which we pulled in with
commit 61e1e38a00 (#18705).
This reverts commit 1df725e6f1 (#18567).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
TOR users are legitimate users of the system; however, that system can
also be used for abuse -- specifically, by evading IP-based
rate-limiting.
For the purposes of IP-based rate-limiting, add a
RATE_LIMIT_TOR_TOGETHER flag, defaulting to false, which lumps all
requests from TOR exit nodes into the same bucket. This may allow a
TOR user to deny other TOR users access to the find-my-account and
new-realm endpoints, but this is a low cost for cutting off a
significant potential abuse vector.
If enabled, the list of TOR exit nodes is fetched from their public
endpoint once per hour, via a cron job, and cached on disk. Django
processes load this data from disk, and cache it in memcached.
Requests are spared from the burden of checking disk on failure via a
circuit breaker, which trips if there are two failures in a row, and
only begins trying again after 10 minutes.
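A minimal sketch of that failure handling, assuming the third-party
`circuitbreaker` package; the function and cache path are hypothetical,
not Zulip's actual code:
```
import json
from typing import Set

from circuitbreaker import circuit


# Trip after two consecutive failures, then fail fast (without touching
# disk) for the next 10 minutes before trying again.
@circuit(failure_threshold=2, recovery_timeout=600)
def load_tor_exit_nodes_from_disk() -> Set[str]:
    # Hypothetical cache file written hourly by the cron job.
    with open("/var/lib/zulip/tor-exit-nodes.json") as f:
        return set(json.load(f))
```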
Previously, our codebase contained links to various versions of the
Django docs, e.g.
https://docs.djangoproject.com/en/1.8/ref/request-response/#django.http.HttpRequest
and
https://docs.djangoproject.com/en/2.2/ref/settings/#std:setting-SERVER_EMAIL.
Opening a link to a doc for an outdated Django version would show the
warning "This document is for an insecure version of Django that is no
longer supported. Please upgrade to a newer release!".
Most of these links are inside comments.
Following the replacement of these links in our docs, this commit uses
a search with the regex "docs.djangoproject.com/en/([0-9].[0-9]*)/"
and replaces all matches with "docs.djangoproject.com/en/3.2/".
All the new links in this commit have been generated by the above
replacement, and each link has then been manually checked to ensure
that (1) the page still exists and has not been moved to a new location
(no page turned out to have moved), and (2) the anchor that we're
linking to has not been changed (no anchor turned out to have changed).
One comment that mentioned a Django version in text before linking to a
page for that version has also been changed; the comment referenced the
specific version in which a change happened, and that history is no
longer relevant to us.
This more closely matches email_change_by_user and
password_reset_form_by_email limits; legitimate users are unlikely to
need to send more than 5 emails to themselves during a day.
Both `create_realm_by_ip` and `find_account_by_ip` send emails to
arbitrary email addresses, and as such can be used to spam users.
Lump their IP rate limits into the same bucket; most legitimate users
will likely not be using both of these endpoints at similar times.
The rate is set at 5 in 30 minutes, the more quickly-restrictive of
the two previous rates.
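Illustratively, the shared bucket amounts to a single rule applied to
both endpoints; the setting name and rule format here are assumptions,
not the literal configuration:
```
RATE_LIMITING_RULES = {
    # One shared IP-based bucket for create_realm and find_account:
    # at most 5 requests per 30 minutes.
    "sends_email_by_ip": [(1800, 5)],  # (window in seconds, max requests)
}
```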
SOCIAL_AUTH_SUBDOMAIN was potentially very confusing when opened by a
user, as it had various Login/Signup buttons as if there was a realm on
it. Instead, we want to display a more informative page to the user
telling them they shouldn't even be there. If possible, we just redirect
them to the realm they most likely came from.
To make this possible, we have to exclude the subdomain from
ROOT_SUBDOMAIN_ALIASES - so that we can give it special behavior.
This fixes an error found with django-stubs, and is part of #18777.
Note that there are various remaining errors that need to be fixed in
upstream or elsewhere in our codebase.
Closes #19287.
This endpoint allows submitting multiple addresses so we need to "weigh"
the rate limit more heavily the more emails are submitted. Clearly e.g.
a request triggering emails to 2 addresses should weigh twice as much as
a request doing that for just 1 address.
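A sketch of the weighting idea (the function here is hypothetical, not
the actual view code): charge the limiter once per submitted address, so
a request inviting N users consumes N units of the budget.
```
from typing import Callable, List


def rate_limit_invites(
    invitee_emails: List[str], charge_one_unit: Callable[[], None]
) -> None:
    # Each submitted address counts as one hit against the shared
    # email-sending bucket, so 2 addresses cost twice as much as 1.
    for _ in invitee_emails:
        charge_one_unit()
```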
This allows for greater flexibility in values for "environment," and
avoids having to have duplicate definitions of STAGING in
`zproject/config.py` and `zproject/default_settings.py` (due to import
order restrictions).
It does overload the "deploy type" concept somewhat.
Follow-up to #19185.
If the user is logged in, we'll stick to rate limiting by the
UserProfile. In case of requests without authentication, we'll apply the
same limits but to the IP address.
Rename poll_timeout to event_queue_longpoll_timeout_seconds and change
its value from 90000 (milliseconds) to 90 (seconds). Expose its value
in the register API response when realm data is fetched.
Bump API_FEATURE_LEVEL to 74.
This adds basic support for `postgresql.database_user` and
`postgresql.database_name` settings in `zulip.conf`; the defaults if
unspecified are left as `zulip`.
Co-authored-by: Adam Birds <adam.birds@adbwebdesigns.co.uk>
This makes it parallel with deliver_scheduled_messages, and clarifies
that it is not used for simply sending outgoing emails (e.g. the
`email_senders` queue).
This also renames the supervisor job to match.
Thumbor and tc-aws have been dragging their feet on Python 3 support
for years, and even the alphas and unofficial forks we’ve been running
don’t seem to be maintained anymore. Depending on these projects is
no longer viable for us.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This is a straightforward upgrade in terms of changes needed.
Necessary changes were:
- Set `DEFAULT_AUTO_FIELD` (see the sketch after this list)
https://docs.djangoproject.com/en/3.2/releases/3.2/#customizing-type-of-auto-created-primary-keys
- `The default_app_config application configuration variable is deprecated, due
to the now automatic AppConfig discovery.`
https://docs.djangoproject.com/en/3.2/releases/3.2/#automatic-appconfig-discovery
To handle this one, we can remove default_app_config from
zerver/__init__.py because it satisfies what release notes describe in
https://docs.djangoproject.com/en/3.2/releases/3.2/#automatic-appconfig-discovery:
"Most pluggable applications define an AppConfig subclass in an apps.py
submodule. Many define a default_app_config variable pointing to this
class in their __init__.py. When the apps.py submodule exists and
defines a single AppConfig subclass, Django now uses that configuration
automatically, so you can remove default_app_config."
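For the first item above, the minimal settings change is a one-liner;
whether to pin AutoField (preserving the pre-3.2 default) or switch to
BigAutoField is a project choice, so treat this value as an assumption:
```
# Silence Django 3.2's warning about auto-created primary keys by pinning
# the pre-3.2 default explicitly.
DEFAULT_AUTO_FIELD = "django.db.models.AutoField"
```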
An important note is that rebuild-test-database needs to be run after
this upgrade in the dev environment. If tests are run with a test db
that was built on the previous version, they will fail due to a
mysterious bug (?): changing attributes of a user and .save()ing them
after logging in in the test via self.login_user causes the session to
be logged out - subsequent requests via self.client_get etc. are
unauthenticated for some reason, unless self.login_user is called
again. This behavior is no longer exhibited after rebuilding the test
db, and I can't reproduce it against the production or dev db. So this
can likely be reasonably dismissed as some quirk of the test client
system that won't be relevant in the future and doesn't impact
production.
It does not seem like an official version supporting Webpack 4 (to say
nothing of 5) will be released any time soon, and we can reimplement
it in very little code.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The `X-Forwarded-For` header is a list of proxies' IP addresses; each
proxy appends the remote address of the host it received its request
from to the list, as it passes the request down. A naïve parsing, as
SetRemoteAddrFromForwardedFor did, would thus interpret the first
address in the list as the client's IP.
However, clients can pass in arbitrary `X-Forwarded-For` headers,
which would allow them to spoof their IP address. `nginx`'s behavior
is to treat the addresses as untrusted unless they match an allowlist
of known proxies. By setting `real_ip_recursive on`, it also allows
this behavior to be applied repeatedly, moving from right to left down
the `X-Forwarded-For` list, stopping at the right-most that is
untrusted.
Rather than re-implement this logic in Django, pass the first
untrusted value that `nginx` computes down into Django via the
`X-Real-Ip` header. This allows consistent IP addresses in logs
between `nginx` and Django.
Proxied calls into Tornado (which don't use UWSGI) already passed this
header, as Tornado logging respects it.
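A self-contained sketch of the two parsing strategies described above
(not Zulip's or nginx's actual code):
```
from typing import List, Set


def naive_client_ip(x_forwarded_for: str) -> str:
    # Spoofable: the left-most entry is whatever the client chose to send.
    return x_forwarded_for.split(",")[0].strip()


def recursive_real_ip(x_forwarded_for: str, trusted_proxies: Set[str]) -> str:
    # Walk right to left, skipping addresses that belong to our own
    # proxies; the first untrusted address is treated as the client.
    addresses: List[str] = [a.strip() for a in x_forwarded_for.split(",")]
    for address in reversed(addresses):
        if address not in trusted_proxies:
            return address
    return addresses[0]
```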
This adds the is_user_active field with the appropriate code for
setting the value correctly in the future. In the following commit a
migration to backfill the value for existing Subscriptions will be
added.
To ensure correct user_profile.is_active handling also in tests, we
replace all direct .is_active mutation with calls to appropriate
functions.
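A sketch of the denormalization and the helper that tests are switched
to; the field options and helper shown here are assumptions, not the
actual migration or actions code:
```
from django.db import models


class Subscription(models.Model):
    class Meta:
        app_label = "zerver"  # sketch only; the real model lives in zerver/models.py

    user_profile = models.ForeignKey("zerver.UserProfile", on_delete=models.CASCADE)
    # Denormalized copy of UserProfile.is_active, kept in sync below.
    is_user_active = models.BooleanField()


def change_user_is_active(user_profile, value: bool) -> None:
    # The single entry point used instead of mutating .is_active directly,
    # so the denormalized flag on Subscription stays correct.
    user_profile.is_active = value
    user_profile.save(update_fields=["is_active"])
    Subscription.objects.filter(user_profile=user_profile).update(is_user_active=value)
```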
https://docs.djangoproject.com/en/3.1/releases/3.1/
- django.contrib.postgres.fields.JSONField is deprecated and should be
replaced with models.JSONField
- The internals of the implementation in the postgresql backend have
changed a bit in
f48f671223
and thus we need to make an ugly tweak in test_runner.
- app_directories.Loader.get_dirs() now returns a list of PosixPath so
we need to make a small tweak in TwoFactorLoader for that (PosixPath
is not iterable)
Fixes #16010.
Boto3 does not allow setting the endpoint URL from the config file.
Thus we create a Django setting (`S3_ENDPOINT_URL`) which is passed to
service clients and resources of `boto3.Session`.
We also update the uploads-backend documentation and remove the config
environment variable, since AWS now supports the SIGv4 signature format
by default. The region name is passed as a parameter instead of
creating a config file for just this value.
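A minimal sketch of how such a setting gets threaded through; setting
names other than S3_ENDPOINT_URL are assumptions:
```
import boto3
from django.conf import settings

# Region and endpoint are passed as parameters to the session's client,
# rather than being read from an AWS config file.
session = boto3.Session(
    aws_access_key_id=settings.S3_KEY,             # assumed setting name
    aws_secret_access_key=settings.S3_SECRET_KEY,  # assumed setting name
)
s3_client = session.client(
    "s3",
    region_name=settings.S3_REGION,                # assumed setting name
    endpoint_url=settings.S3_ENDPOINT_URL,
)
```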
Fixes #16246.
Having both of these is confusing; TORNADO_SERVER is used only when
there is one TORNADO_PORT. Its primary use is actually to be _unset_,
and signal that in-process handling is to be done.
Rename to USING_TORNADO, to parallel the existing USING_RABBITMQ, and
switch the places that used it for its contents to using
TORNADO_PORTS.
We can compute the intended number of processes from the sharding
configuration. In doing so, also validate that all of the ports are
contiguous.
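A sketch of the derivation and the contiguity check (the helper name is
hypothetical):
```
from typing import List


def tornado_process_count(tornado_ports: List[int]) -> int:
    # The number of processes is simply the number of sharded ports, but
    # insist that they form a contiguous run (e.g. 9800, 9801, 9802).
    first = tornado_ports[0]
    assert tornado_ports == list(range(first, first + len(tornado_ports)))
    return len(tornado_ports)
```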
This removes a discrepancy between `scripts/lib/sharding.py` and other
parts of the codebase about whether merely having a `[tornado_sharding]`
section is sufficient to enable sharding. Having behaviour which
changes merely based on whether an empty section exists is surprising.
This does require that a (presumably empty) `9800` configuration line
exist, but making that default explicit is useful.
After this commit, configuring sharding can be done by adding to
`zulip.conf`:
```
[tornado_sharding]
9800 = # default
9801 = other_realm
```
Followed by running `./scripts/refresh-sharding-and-restart`.
In development and test, we keep the Tornado port at 9993 and 9983,
respectively; this allows tests to run while a dev instance is
running.
In production, moving to port 9800 consistently removes an odd edge
case, when just one worker is on an entirely different port than if
two workers are used.
Calling `render()` in a middleware before LocaleMiddleware has run
will pick up the most-recently-set locale. This may be from the
_previous_ request, since the current language is thread-local. This
results in the "Organization does not exist" page occasionally being
in not-English, depending on the preferences of the request which that
thread just finished serving.
Move HostDomainMiddleware below LocaleMiddleware; none of the earlier
middlewares call `render()`, so are safe. This will also allow the
"Organization does not exist" page to be localized based on the user's
browser preferences.
Unfortunately, it also means that the default LocaleMiddleware catches
the 404 from the HostDomainMiddleware and helpfully tries to check if
the failure is because the URL lacks a language component (e.g.
`/en/`) by turning it into a 302 to that new URL. We must subclass
the default LocaleMiddleware to remove this unwanted functionality.
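A sketch of such a subclass (not necessarily Zulip's exact
implementation): keep the language-activation behavior but drop the
404-to-language-prefixed-URL redirect.
```
from django.middleware.locale import LocaleMiddleware
from django.utils import translation


class NonRedirectingLocaleMiddleware(LocaleMiddleware):
    def process_response(self, request, response):
        # Skip the parent's logic that turns 404s into redirects to
        # "/<language>/..." URLs; just record the language that was used.
        if "Content-Language" not in response:
            response["Content-Language"] = translation.get_language()
        return response
```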
Doing so exposes two places in tests that relied (directly or
indirectly) upon the redirection: '/confirmation_key'
was redirected to '/en/confirmation_key', since the non-i18n version
did not exist; and requests to `/stats/realm/not_existing_realm/`
incorrectly were expecting a 302, not a 404.
This regression likely came in during f00ff1ef62, since prior to
that, the HostDomainMiddleware ran _after_ the rest of the request had
completed.
Its functionality was added to Django upstream in 2.1. Also remove
the SESSION_COOKIE_SAMESITE = 'Lax' setting since it’s the default.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This undoes a small part of b8a2e6b5f8; namely, logs to
`zulip.zerver.webhooks`, which are all exceptions from webhooks except
UnsupportedWebhookEventType, should still be logged to the main error
loggers. This maintains the property that exceptions generating 500's
are all present in `errors.log`.
This clears it out of the data sent to Sentry, where it is duplicative
with the indexed metadata -- and potentially exposes PHI if Sentry's
"make this issue public" feature is used.
`zulip.zerver.lib.webhooks.common` was very opaque previously,
especially since none of the logging was actually done from that
module.
Adjust to a more explicit logger name.
Any exception is an "unexpected event", which means talking about
having an "unexpected event logger" or "unexpected event exception" is
confusing. As the error message in `exceptions.py` already explains,
this is about an _unsupported_ event type.
This also switches the path that these exceptions are written to,
accordingly.
This commit adds automatic detection of extra output (other than that
printed by the testing library or tools) in stderr and stdout from code
under test, when test-backend is run with the --ban-console-output
flag. It also prints the test that produced the extra console output.
Fixes: #1587.
The Apple developer webapp consistently refers to this as the App ID,
so using that name clears up any confusion that can occur.
Since python-social-auth only requires us to include the App ID in
_AUDIENCE (a list), we do that in computed settings, making it easier
for server admins, and we make it much clearer by setting it from
APP_ID instead of BUNDLE_ID.
Uses the git release, as version 3.4.0 is not released to PyPI.
This is required for removing some overridden functions of the Apple
auth backend class AppleAuthBackend.
With the update we also make the following changes:
* Fix full name being populated as "None None".
c5c74f27dd, which is included in the update, assigns first_name and
last_name to None when no name is provided by Apple. Due to this, our
code was filling return_data['full_name'] with 'None None'.
This commit fixes it by making the first and last name empty strings.
* Remove decode_id_token override.
Python social auth merged the PR we sent including the changes we made
to the decode_id_token function, so there is no longer any need for
the override.
* Add _AUDIENCE setting in computed_settings.py.
`decode_id_token` is dependent on this setting.
This lets us test the recursion bug behavior of this logging handler
without resulting in `logging.error` output being printed to the
console in the event that the test passes.
Use the default configuration, which catches Error logging and
exceptions. This is placed in `computed_settings.py` to match the
suggested configuration from Sentry[1], which places it in `settings.py`
to ensure it is consistently loaded early enough.
It is placed behind a check for SENTRY_DSN so as to not incur the
additional overhead of importing the `sentry_sdk` modules if Sentry is
not configured.
[1] https://docs.sentry.io/platforms/python/django/
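A minimal sketch of that default configuration, using the standard
`sentry_sdk` Django integration; where SENTRY_DSN actually comes from
is left out here:
```
SENTRY_DSN = None  # in real settings this would come from configuration

if SENTRY_DSN:
    # Import only when configured, so unconfigured installs skip the cost.
    import sentry_sdk
    from sentry_sdk.integrations.django import DjangoIntegration

    sentry_sdk.init(dsn=SENTRY_DSN, integrations=[DjangoIntegration()])
```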
We use the EMAIL_TIMEOUT Django setting to time out after 15s of trying
to send an email. This will nicely lead to retries in the email_senders
queue, due to the retry_send_email_failures decorator.
The smtplib documentation suggests that socket.timeout can be raised as
the result of timing out, though in my attempts I'm getting
smtplib.SMTPServerDisconnected. Either way, it seems appropriate to add
socket.timeout to the exceptions that we catch.
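A sketch of the failure handling in question; the worker function and
requeue hook are hypothetical, not the actual queue-worker code:
```
import smtplib
import socket

from django.core.mail import EmailMessage, get_connection


def send_with_retry(message: EmailMessage, requeue) -> None:
    # With EMAIL_TIMEOUT = 15, a stalled SMTP server raises here; catching
    # socket.timeout alongside smtplib errors lets the queue retry later.
    try:
        get_connection().send_messages([message])
    except (smtplib.SMTPException, socket.timeout):
        requeue(message)
```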
Fixes #2665.
Regenerated by tabbott with `lint --fix` after a rebase and change in
parameters.
Note from tabbott: In a few cases, this converts technical debt in the
form of unsorted imports into different technical debt in the form of
our largest files having very long, ugly import sequences at the
start. I expect this change will increase pressure for us to split
those files, which isn't a bad thing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Automatically generated by the following script, based on the output
of lint with flake8-comma:
import re
import sys

last_filename = None
last_row = None
lines = []

for msg in sys.stdin:
    m = re.match(
        r"\x1b\[35mflake8 \|\x1b\[0m \x1b\[1;31m(.+):(\d+):(\d+): (\w+)", msg
    )
    if m:
        filename, row_str, col_str, err = m.groups()
        row, col = int(row_str), int(col_str)

        if filename == last_filename:
            assert last_row != row
        else:
            if last_filename is not None:
                with open(last_filename, "w") as f:
                    f.writelines(lines)
            with open(filename) as f:
                lines = f.readlines()
            last_filename = filename
        last_row = row

        line = lines[row - 1]
        if err in ["C812", "C815"]:
            lines[row - 1] = line[: col - 1] + "," + line[col - 1 :]
        elif err in ["C819"]:
            assert line[col - 2] == ","
            lines[row - 1] = line[: col - 2] + line[col - 1 :].lstrip(" ")

if last_filename is not None:
    with open(last_filename, "w") as f:
        f.writelines(lines)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>