BACKEND_DATABASE_TEMPLATE was introduced in a507a47778.
This setting is only used by the test cases, so there is little need
for it to be configurable.
We define it as a global variable in zerver.lib.test_fixtures.
This avoids requiring mypy_django_plugin to know the type of
settings.BACKEND_DATABASE_TEMPLATE for type checking purposes, since
settings.test_extra_settings is not available in the
production/development setup.
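A minimal sketch of the resulting definition (the template database
name here is an assumption, not taken from the commit):

    # zerver/lib/test_fixtures.py: a plain module-level constant rather
    # than a Django setting, so mypy_django_plugin need not type it.
    BACKEND_DATABASE_TEMPLATE = "zulip_test_template"  # assumed value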
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
The presence of `auto_signup` in idp_settings_dict in the test case
test_social_auth_registration_auto_signup is incompatible with the
previous type annotation of SOCIAL_AUTH_OIDC_ENABLED_IDPS, where `bool`
is not allowed.
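A sketch of the kind of widened annotation this implies; the alias and
field names are illustrative, not the actual definitions:

    from typing import Dict, List, Union

    # Each IdP's settings dict must now admit bool values such as
    # auto_signup alongside strings (and possibly lists).
    OIDCIdPDict = Dict[str, Union[str, bool, List[str]]]
    SOCIAL_AUTH_OIDC_ENABLED_IDPS: Dict[str, OIDCIdPDict] = {
        "testoidc": {
            "oidc_url": "https://example.com/oidc",
            "auto_signup": True,  # a bool, rejected by the old annotation
        },
    }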
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
We previously forked tornado.autoreload to work around a problem where
it would crash if you introduced a syntax error and would not recover
once you fixed it (https://github.com/tornadoweb/tornado/issues/2398).
A much more maintainable workaround for that issue, at least in
current Tornado, is to use tornado.autoreload as the main module.
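Concretely, that means launching the server under Tornado's own
autoreload wrapper, along these lines (the wrapped module name is
illustrative):

    python -m tornado.autoreload -m zproject.tornado_main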
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This comment was _originally_ for the `default` memcached cache, back
when it was added all the way back in 0a84d7ac62. 9e64750083
made it a lie, and edc718951c made it even more confusing when it
removed the `default` cache configuration block, leaving the wrong
comment next to the wrong cache configuration block.
Banish the comment.
Both `create_realm_by_ip` and `find_account_by_ip` send emails to
arbitrary email addresses, and as such can be used to spam users.
Lump their IP rate limits into the same bucket; most legitimate users
will likely not be using both of these endpoints at similar times.
The rate is set at 5 in 30 minutes, the more quickly-restrictive of
the two previous rates.
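Assuming a (seconds, max_requests) rule format like Zulip's rate-limit
rules, the shared bucket could look roughly like this (the domain name
is an assumption):

    # Both endpoints charge requests against the same per-IP bucket,
    # so their combined usage is what gets limited.
    RATE_LIMITING_RULES = {
        "sends_email_by_ip": [
            (1800, 5),  # at most 5 requests per 30 minutes
        ],
    }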
These details are useful to log. This only makes sense for some auth
backends, namely the email and LDAP backends, because other backends
are "external" in the sense that authentication happens at some
external provider's server (Google, a SAML IdP, etc.), so the failure
also happens there and we don't get useful information about what
happened.
This fixes errors found with django-stubs and is a part of #18777.
Note that there are various remaining errors that need to be fixed in
upstream or elsewhere in our codebase.
Closes #19287
This endpoint allows submitting multiple addresses, so we need to
"weigh" the rate limit more heavily the more emails are submitted.
For example, a request triggering emails to 2 addresses should weigh
twice as much as a request doing so for just 1 address.
If the user is logged in, we'll stick to rate limiting by the
UserProfile. For requests without authentication, we'll apply the same
limits to the IP address instead.
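A self-contained sketch of the weighting idea (an in-memory stand-in,
not the actual rate limiter):

    import time
    from collections import defaultdict
    from typing import Dict, List

    WINDOW_SECONDS = 1800
    MAX_UNITS = 5
    _spent: Dict[str, List[float]] = defaultdict(list)  # key -> timestamps

    def check_and_spend(key: str, weight: int) -> bool:
        # Each submitted address costs one unit, so a request for N
        # addresses weighs N times as much as one for a single address.
        now = time.time()
        recent = [t for t in _spent[key] if now - t < WINDOW_SECONDS]
        if len(recent) + weight > MAX_UNITS:
            _spent[key] = recent
            return False  # over budget; reject the request
        _spent[key] = recent + [now] * weight
        return True

    # key is the UserProfile id when authenticated, else the client IP:
    # check_and_spend(key, weight=len(emails))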
The old type in default_settings wasn't right: limit_to_subdomains is
a List[str]. We define a TypedDict to capture the typing of the
settings dict more correctly and to allow future addition of
configurable attributes of other non-str types.
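A sketch of the TypedDict approach; apart from limit_to_subdomains,
the class and field names are assumptions:

    from typing import List, TypedDict

    class IdPConfigDict(TypedDict, total=False):
        url: str
        display_name: str
        limit_to_subdomains: List[str]  # previously mis-typed as str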
Rename poll_timeout to event_queue_longpoll_timeout_seconds
and change its value from 90000 ms to 90 seconds. Expose its
value in the register API response when realm data is fetched.
Bump API_FEATURE_LEVEL to 74.
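A hedged sketch of a client reading the newly exposed field via the
Zulip Python bindings (the exact call pattern is illustrative):

    import zulip

    client = zulip.Client(config_file="~/.zuliprc")
    result = client.register()  # includes realm data by default
    # No longer hard-coded client-side; now 90 (seconds), not 90000 ms.
    timeout = result["event_queue_longpoll_timeout_seconds"]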
This will be useful for deployments that want to just use the full
name provided by the IdP and thus skip the registration form. In
combination with disabling name changes in the organization, this can
also force users to use that name without being able to change it.
Thumbor and tc-aws have been dragging their feet on Python 3 support
for years, and even the alphas and unofficial forks we’ve been running
don’t seem to be maintained anymore. Depending on these projects is
no longer viable for us.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
It does not seem like an official version supporting Webpack 4 (to say
nothing of 5) will be released any time soon, and we can reimplement
it in very little code.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Having both of these is confusing; TORNADO_SERVER is used only when
there is one TORNADO_PORT. Its primary use is actually to be _unset_,
and signal that in-process handling is to be done.
Rename to USING_TORNADO, to parallel the existing USING_RABBITMQ, and
switch the places that used it for its contents to using
TORNADO_PORTS.
This commit adds automatic detection of extra output (i.e. anything
not printed by the testing library or tools) in stderr and stdout from
the code under test, when test-backend is run with the flag
--ban-console-output.
It also prints the test that produced the extra console output.
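A hedged sketch of the detection idea (not the actual test-backend
implementation):

    import io
    from contextlib import redirect_stderr, redirect_stdout
    from typing import Callable

    def run_banning_console_output(test: Callable[[], None], name: str) -> None:
        # Capture everything the test writes to stdout/stderr, and fail
        # loudly if any unexpected output shows up.
        out, err = io.StringIO(), io.StringIO()
        with redirect_stdout(out), redirect_stderr(err):
            test()
        extra = out.getvalue() + err.getvalue()
        if extra:
            raise AssertionError(f"{name} produced extra console output:\n{extra}")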
Fixes: #1587.
The Apple developer webapp consistently refers to this as the App ID,
so this clears up any confusion that can occur.
Since python-social-auth only requires us to include the App ID in
_AUDIENCE (a list), we do that in computed_settings, making it easier
for server admins, and we make it clearer by setting it to APP_ID
instead of BUNDLE_ID.
Use the Git release, as version 3.4.0 is not released to PyPI.
This is required for removing some overridden functions of the Apple
auth backend class AppleAuthBackend.
With the update, we also make the following changes:
* Fix the full name being populated as "None None" (see the sketch
after this list).
c5c74f27dd, which is included in the update, assigns first_name and
last_name to None when no name is provided by Apple. Due to this, our
code was filling return_data['full_name'] with 'None None'.
This commit fixes it by making the first and last names empty strings.
* Remove the decode_id_token override.
python-social-auth merged the PR we sent including the changes we made
to the decode_id_token function, so there is no longer any need for
the override.
* Add _AUDIENCE setting in computed_settings.py.
`decode_id_token` is dependent on this setting.
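A hedged sketch of the full-name fix from the first bullet (the helper
is hypothetical, not the actual backend code):

    def build_full_name(first_name: object, last_name: object) -> str:
        # The updated library can assign None to either part; coerce to
        # "" so full_name never becomes the literal "None None".
        parts = [str(first_name or ""), str(last_name or "")]
        return " ".join(p for p in parts if p)

    # build_full_name(None, None) == "" rather than "None None"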
This lets us test the recursion bug behavior of this logging handler
without resulting in `logging.error` output being printed to the
console in the event that the test passes.
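A minimal sketch of that approach, using unittest's assertLogs (test
and message names are illustrative):

    import logging
    import unittest

    class HandlerRecursionTest(unittest.TestCase):
        def test_recursion_is_logged_once(self) -> None:
            # assertLogs swallows the logging.error output, so nothing
            # is printed to the console when the test passes.
            with self.assertLogs(level="ERROR") as captured:
                logging.error("simulated recursive handler failure")
            self.assertEqual(len(captured.output), 1)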
Fixes #2665.
Regenerated by tabbott with `lint --fix` after a rebase and change in
parameters.
Note from tabbott: In a few cases, this converts technical debt in the
form of unsorted imports into different technical debt in the form of
our largest files having very long, ugly import sequences at the
start. I expect this change will increase pressure for us to split
those files, which isn't a bad thing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Automatically generated by the following script, based on the output
of lint with flake8-comma:
import re
import sys

last_filename = None
last_row = None
lines = []

for msg in sys.stdin:
    # Parse "file:row:col: code" out of the colorized lint output.
    m = re.match(
        r"\x1b\[35mflake8 \|\x1b\[0m \x1b\[1;31m(.+):(\d+):(\d+): (\w+)", msg
    )
    if m:
        filename, row_str, col_str, err = m.groups()
        row, col = int(row_str), int(col_str)
        if filename == last_filename:
            assert last_row != row
        else:
            # Flush the previous file before loading the next one.
            if last_filename is not None:
                with open(last_filename, "w") as f:
                    f.writelines(lines)
            with open(filename) as f:
                lines = f.readlines()
            last_filename = filename
        last_row = row
        line = lines[row - 1]
        if err in ["C812", "C815"]:
            # Missing trailing comma: insert one at the reported column.
            lines[row - 1] = line[: col - 1] + "," + line[col - 1 :]
        elif err in ["C819"]:
            # Prohibited trailing comma: remove it.
            assert line[col - 2] == ","
            lines[row - 1] = line[: col - 2] + line[col - 1 :].lstrip(" ")

if last_filename is not None:
    with open(last_filename, "w") as f:
        f.writelines(lines)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>