Tweaked by tabbott to add a test and fix a super subtle issue where the
relative_settings_link variable remained set after the first time a
/help article was rendered.
A common path is that a new user goes to realm_uri, gets redirected to
realm_uri/login, and clicks the Google auth button thinking it is a
registration button.
This commit just changes the wording on the page they land on to be
friendlier for that use case.
This piece of code was used when we used the Django template engine.
Since we moved to the Jinja2 template engine, we wrote a newer version
of this function (minified_js) in 'zproject/jinja2/compressors.py',
which is now used in our templates. This newer function essentially
retired the old function definition, and thus the old code became dead.
We probably missed this cleanup at the time we migrated to the Jinja2
template engine.
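
For context, here is a minimal sketch of how the Jinja2 side can expose
such a helper as a template global; the environment() hook and the stub
body below are illustrative, and only the minified_js name and its
module path come from the text above.

    from jinja2 import Environment

    def minified_js(entrypoint):
        # Stand-in for zproject.jinja2.compressors.minified_js, which
        # returns the <script> tag(s) for a compressed JS bundle.
        return "<script src='/static/min/%s.js'></script>" % (entrypoint,)

    def environment(**options):
        # Django's Jinja2 backend builds the environment through a hook
        # like this; registering the helper as a global lets templates
        # call {{ minified_js("app") }} directly, leaving the old Django
        # template tag with no callers.
        env = Environment(**options)
        env.globals["minified_js"] = minified_js
        return env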
This should make it easier to find the templates that are actually
part of the core webapp, instead of having them all mixed together
with the portico pages.
This completes the effort to ensure that all of our webhooks that do
parsing of the third-party message format log something that we can
use to debug cases where we're not parsing the payloads correctly.
The main change here is to send a proper confirmation link to the
frontend in the `confirm_continue_registration` code path even if the
user didn't request signup, so that we don't need to re-authenticate
the user's control over their email address in that flow.
This also lets us delete some now-unnecessary code: the
`invalid_email` case is now handled by HomepageForm.is_valid(), which
has nice error handling, so we no longer need logic in the context
computation or template for `confirm_continue_registration` to cover
the corner case where the user somehow authenticated with an invalid
email address.
We split one GitHub auth backend test to now cover both corner cases
(invalid email for realm, and valid email for realm), and rewrite the
Google auth test for this code path as well.
Fixes #5895.
This test class is basically a poor version of the end-to-end tests
that we have in `test_auth_backends.py`, and didn't really add any
value other than making it difficult to refactor.
By moving all of the logic related to the is_signup flag into
maybe_send_to_registration, we make the login_or_register_remote_user
function quite clean and readable.
The next step is to make maybe_send_to_registration less of a
disaster.
The code in maybe_send_to_registration incorrectly used the
`get_realm_from_request` function to fetch the subdomain. This usage
was incorrect in a way that should have been irrelevant, because that
function only differs if there's a logged-in user, and in this code
path, a user is never logged in (it's the code path for logged-out
users trying to sign up).
However, this bug could confuse unit tests that might run with a logged-in
client session. This made it possible for several of our GitHub auth
tests to have a totally invalid subdomain value (the root domain).
Fixing that bug in the tests, in turn, let us delete a code path in
the GitHub auth backend logic in `backends.py` that is impossible in
production, and had just been left around for these broken tests.
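
As a simplified illustration of the distinction (the function name and
the naive host parsing below are hypothetical; the real code uses proper
subdomain helpers with root-domain handling):

    def subdomain_for_signup(host: str) -> str:
        # A logged-out signup flow must derive the subdomain from the
        # request's Host header alone.  get_realm_from_request() instead
        # consults the logged-in user's realm when one exists, which is
        # how tests running with a logged-in client session ended up
        # with the root domain here.
        return host.split(".", 1)[0]

    # Example: subdomain_for_signup("lear.zulip.example.com") == "lear"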
This code path has actually been dead for a while (since
`invalid_subdomain` gets set to True only when `user_profile` is
`None`). We might want to re-introduce it later, but for now, we
eliminate it and the artificial test that provided it with test
coverage.
This is done mainly because this backend has the simplest code path
for calling login_or_register_remote_user, more than because we expect
this case to come up. It'll make it easier to write unit tests for
the `invalid_subdomain` corner case.
This is a simple computed field. It's intended to more clearly
capture the meaning of this restriction for the users in zephyr mirror
realms, and eventually support guest user accounts in normal Zulip
realms.
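
As a rough sketch of the computed-field pattern meant here (the class
shapes and the is_restricted_user name are hypothetical; only
is_zephyr_mirror_realm comes from the surrounding text):

    from dataclasses import dataclass

    @dataclass
    class Realm:
        is_zephyr_mirror_realm: bool

    @dataclass
    class UserProfile:
        realm: Realm

        @property
        def is_restricted_user(self) -> bool:
            # Today this is true only for users in zephyr mirror realms;
            # a future guest-account flag on normal Zulip realms could
            # feed the same property without touching its callers.
            return self.realm.is_zephyr_mirror_realm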
This is part of the effort to remove the use of is_zephyr_mirror_realm
from code paths covering situations that might be relevant for other
users. It helps keep the code readable.
When importing with --destroy-rebuild-database, we need to check
subdomain availability after we've cleared out the database; otherwise,
trying to reuse the same subdomain doesn't work.