Tweaked by tabbott to add a test and fix a super subtle issue: the
relative_settings_link variable stayed set once the first /help
article had been rendered.
A common path is that a new user goes to realm_uri, gets redirected to
realm_uri/login, and clicks the Google auth button thinking it is a
registration button.
This commit just changes the wording on the page they land on to be
friendlier for that use case.
This piece of code was used when we used the Django template engine.
Since we moved to the Jinja2 template engine, we wrote a newer version
of this function (minified_js) in 'zproject/jinja2/compressors.py',
which is now used in our templates. This newer function essentially
retired the old function definition, and thus the old code became
dead. We probably missed this cleanup at the time we migrated to the
Jinja2 template engine.
This should make it easier to find the templates that are actually
part of the core webapp, instead of having them all mixed together
with the portico pages.
This completes the effort to ensure that all of our webhooks that do
parsing of the third-party message format log something that we can
use to debug cases where we're not parsing the payloads correctly.
The main change here is to send a proper confirmation link to the
frontend in the `confirm_continue_registration` code path even if the
user didn't request signup, so that we don't need to re-authenticate
the user's control over their email address in that flow.
This also lets us delete some now-unnecessary code: The
`invalid_email` case is now handled by HomepageForm.is_valid(), which
has nice error handling, so we no longer need logic in the context
computation or template for `confirm_continue_registration` for the
corner case where the user somehow has an invalid email address
authenticated.
We split one GitHub auth backend test to now cover both corner cases
(invalid email for realm, and valid email for realm), and rewrite the
Google auth test for this code path as well.
Fixes #5895.
This test class is basically a poor version of the end-to-end tests
that we have in `test_auth_backends.py`, and didn't really add any
value other than making it difficult to refactor.
By moving all of the logic related to the is_signup flag into
maybe_send_to_registration, we make the login_or_register_remote_user
function quite clean and readable.
The next step is to make maybe_send_to_registration less of a
disaster.
The code in maybe_send_to_registration incorrectly used the
`get_realm_from_request` function to fetch the subdomain. This usage
was incorrect in a way that should have been irrelevant, because that
function only differs if there's a logged-in user, and in this code
path, a user is never logged in (it's the code path for logged-out
users trying to sign up).
This bug could confuse unit tests that might run with a logged-in
client session. This made it possible for several of our GitHub auth
tests to have a totally invalid subdomain value (the root domain).
Fixing that bug in the tests, in turn, let us delete a code path in
the GitHub auth backend logic in `backends.py` that is impossible in
production, and had just been left around for these broken tests.
This code path has actually been dead for a while (since
`invalid_subdomain` gets set to True only when `user_profile` is
`None`). We might want to re-introduce it later, but for now, we
eliminate it and the artificial test that provided it with test
coverage.
This is done mainly because this backend has the simplest code path
for calling login_or_register_remote_user, more than because we expect
this case to come up. It'll make it easier to write unit tests for
the `invalid_subdomain` corner case.
This is a simple computed field. It's intended to more clearly
capture the meaning of this restriction for the users in zephyr mirror
realms, and eventually support guest user accounts in normal Zulip
realms.
This is part of the effort to remove uses of is_zephyr_mirror_realm in
code paths that might be relevant for other kinds of users. It helps
keep the code readable.
When you're importing with --destroy-rebuild-database, we need to
check subdomain availability after we've cleared out the database;
otherwise, trying to reuse the same subdomain doesn't work.
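A minimal sketch of the corrected ordering, with hypothetical helper
names standing in for the real import code:

```py
def do_import(options):
    # Hypothetical helpers; only the ordering is the point here.
    if options.destroy_rebuild_database:
        wipe_and_rebuild_database()  # assumed stand-in for the rebuild step
    # The availability check must come *after* the wipe; otherwise a
    # subdomain owned by the about-to-be-destroyed data is rejected.
    check_subdomain_available(options.subdomain)
    import_realm_data(options.export_path, options.subdomain)
```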
This commit sends the event for renaming of a private stream to
organization admins of the realm, in addition to the obvious list of
subscribers of the private stream.
Normally, admins can manage a private stream (e.g. unsubscribing a
user). But when an admin tried to unsubscribe a user from a
previously renamed stream, we were throwing a JS error, as the webapp
hadn't been notified about the new stream name.
Fixes #9034.
In this commit:
Two new URLs are added, to make all realms' stats accessible to server
admins: one for the stats page itself, and another for fetching chart
data, i.e. chart data API requests.
Corresponding view functions are added for these two new URLs.
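A sketch of what the URL wiring might look like; the regexes and view
names below are assumptions in Django 1.x style, not necessarily the
exact ones added:

```py
# analytics/urls.py (sketch; names are assumptions)
from django.conf.urls import url
import analytics.views

urlpatterns = [
    # Stats page for an arbitrary realm, for server admins.
    url(r'^stats/realm/(?P<realm_str>[\S]+)/$',
        analytics.views.stats_for_realm),
    # Chart data API requests for that realm.
    url(r'^json/analytics/chart_data/realm/(?P<realm_str>[\S]+)$',
        analytics.views.get_chart_data_for_realm),
]
```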
This was stored as a fixture file under zerver/fixtures, which caused
problems, since we don't ship that directory in production (as it's
part of the test system).
The simplest emergency fix here would be to just move the file, but
when looking at it, it's clear that we don't need or want a fixture
file here; we want a Python object, so we just do that.
A valuable follow-up improvement to this block would be to create an
actual new Realm object (not saved to the database), and dump it with
the same code we use in the export tool; that should handle the vast
majority of these correctly.
Fixes #9123.
This logging was apparently broken when sorting imports; it's a fairly
unique thing in our codebase that this would be a problem. Prevent
future regressions by adding this exception explicitly to the isort
configuration.
We let Markdown increment the list step numbers, which is more
reliable than keeping track of numbered steps manually.
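For instance, writing every item with the same literal number still
renders as an incrementing list (the step text here is just an
example):

```md
1. Create the bot user.
1. Construct the webhook URL.
1. Add the URL to your circle.yml.
```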
Also, instead of linking to the CircleCI docs, we now have full
instructions for how to set up a webhook by modifying the circle.yml
file.
Ancient GitLab from several years ago doesn't include the
HTTP_X_GITLAB_EVENT header (and seems to have a different format), so
we should ignore its requests.
Might be good to document the version threshold, but it's very hard
to tell from Googling what it is.
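A sketch of the guard inside the webhook view (how the request gets
ignored is an assumption; only the header name comes from this
commit):

```py
# Sketch: ancient GitLab versions omit the X-Gitlab-Event header entirely.
event = request.META.get('HTTP_X_GITLAB_EVENT')
if event is None:
    # Ignore the request rather than crash trying to dispatch on the event.
    return json_success()
```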
The size information of an avatar is not required during the import.
See the functions 'import_uploads_local' and 'import_uploads_s3'
in 'export.py' for this.
We should still short-circuit the iteration in
`add_missing_messages` if the unsubscription was the last
thing to happen to the user before soft deactivation.
Rishi and I decided that it makes sense to get rid of the Facebook
integration for a few reasons, some of which are:
* The setup process is too complicated on Facebook's end. The users
will surely have to browse Facebook's huge API reference before even
having a vague idea of what they want.
* Slack chooses not to have a Facebook integration, but relies on
Zapier for it. Zaps that integrate with Facebook are much more
streamlined and the setup process isn't as much of a pain. Zapier's
Facebook Zaps are much more fine-tuned and there are different Zaps
for different parts of the FB API, a luxury that would likely span
2K+ lines of code on our end if we were to implement it from
scratch. So, I think we should relegate integration with Facebook to
Zapier as well!
* After thoroughly testing the setup process, we concluded that the
person who submitted the FB integration didn't really test it
thoroughly because there were some gaping holes in the docs (missing
steps, user permissions, etc.).
This extends the /user_uploads API endpoint to support passing the
authentication credentials via the URL, not the HTTP_AUTHORIZATION
headers. This is an important workaround for the fact that React
Native's Webview system doesn't support setting HTTP_AUTHORIZATION;
the app will be responsible for rewriting URLs for uploaded files
directly to add this parameter.
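A sketch of the client-side rewriting this enables; the api_key query
parameter name is an assumption here:

```py
from urllib.parse import urlencode

def rewrite_upload_url(realm_uri: str, path: str, api_key: str) -> str:
    # Hypothetical helper: since the WebView cannot set an Authorization
    # header, carry the credentials in the URL itself.
    return f"{realm_uri}{path}?{urlencode({'api_key': api_key})}"

# rewrite_upload_url("https://example.zulipchat.com",
#                    "/user_uploads/1/ab/file.png", "APIKEY")
# -> "https://example.zulipchat.com/user_uploads/1/ab/file.png?api_key=APIKEY"
```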
This commit increases the rendered_content limit from 2x to 10x of the
original message length.
Earlier, we had placed a limit of MAX_MESSAGE_LENGTH * 2 for the
rendered content (explained in commit
77addc5456). That limit was based on
the assumption that in most cases, the rendered content wouldn't cause
a large increase in message length. However, that assumption fails
quite prominently for syntax-highlighted code blocks, and this caused
the limit condition to be hit for long messages composed primarily of
code blocks.
Example: The following message would render close to 10x its original size.
```py
if x:
    def f():
        print("x", var)
x = y
```
Because syntax-highlighted markup is extremely compressible, having
rendered_content reach up to 100KB doesn't create a network
performance problem.
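A minimal sketch of the relaxed check, assuming MAX_MESSAGE_LENGTH is
10000 (which is what makes the new cap 100KB):

```py
MAX_MESSAGE_LENGTH = 10000  # assumption for illustration

def check_rendered_size(rendered_content: str) -> None:
    # Was MAX_MESSAGE_LENGTH * 2; syntax-highlighted code blocks can
    # legitimately blow well past that.
    if len(rendered_content) > MAX_MESSAGE_LENGTH * 10:
        raise ValueError("rendered content too long")
```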
This fixes a set of XSS issues with Zulip's frontend markdown
processor, which is used in a limited set of contexts, such as local
echo of messages and the drafts feature.
The implementation of several syntax elements, including the <em>
syntax, user and stream mentions, and some others failed to properly
escape the content inside the syntax.
Fix this, and add tests for each corrected code path.
Thanks to w2w for reporting this issue.
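The frontend processor is JavaScript; this Python sketch just
illustrates the class of fix (escape user-controlled text before
wrapping it in markup):

```py
import html

def render_em(content: str) -> str:
    # Escape first, then wrap; otherwise input like "<script>..." is
    # injected verbatim into the generated HTML.
    return "<em>" + html.escape(content) + "</em>"

assert render_em("<script>alert(1)</script>") == \
    "<em>&lt;script&gt;alert(1)&lt;/script&gt;</em>"
```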
This is a mobile-specific endpoint used for logging into a dev server.
On mobile, without this realm_uri it's impossible to send a login
request to the corresponding realm on the dev server and proceed
further; we could only guess, which doesn't work when multiple realms
are in use.
Also rename the endpoint to reflect the additional data.
Testing Plan:
Sent a request to the endpoint, and inspected the result.
[greg: renamed function to match, squashed renames with data change,
and adjusted commit message.]
We now delete the medium-sized avatar image, if it exists, before
calling 'ensure_medium_avatar_image', to avoid potentially confusing
problems with left-over medium-size avatar images from a previous run
being used when repeatedly importing the same realm in a development
environment.
Fixes #8949.
This one is one of the most tedious to set up and get working.
We now also rely on the Trello scripts available as part of the
`python-zulip-api/zulip` API package to make the setup process
easier.
After some thinking, I don't think there's any actual value to doing
the ../ style relative links here, whereas there is actual harm from
the links being slightly broken in the current model. We fix this by
just using /#settings as the URL.
Fixes #8978.
For certain queries where both include_history and
use_first_unread_anchor are set to True, we were excluding
historical rows. Now we only use the use_first_unread_anchor
flag to filter rows that we use to find the anchor, without
having it filter the actual search results.
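Schematically, the fix separates the two uses of the flag (the names
below are illustrative, not the real query-builder code):

```py
# use_first_unread_anchor only affects how we *find* the anchor...
anchor = find_first_unread_anchor(sa_conn, user_profile, anchor_conditions)

# ...while the message query itself keeps historical rows whenever
# include_history is True, instead of inheriting the anchor's filters.
results = run_message_query(search_conditions, anchor=anchor,
                            include_history=True)
```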
The bug went unreported for a long time, because it only
affected mobile users who had newly subscribed to streams.
Note that we make a small change to the test called
test_use_first_unread_anchor_with_muted_topics, which has
a very scary comment about being "arcane" and "be
absolutely sure you know what you're doing." I think it's
fine.
Also, the new test code would fail before this fix, so it
should help prevent future regressions.
Fixes #8958
This is a bit more than a pure refactor, because we duplicate a
chunk of code to calculate a query inside of
find_first_unread_anchor(), so we're doing a bit more work
than before.
We need this refactoring to start decoupling find_first_unread_anchor
from get_messages_backend for the case where include_history is
True. This will happen in a subsequent commit.
The only test that changes here is a direct test on
find_first_unread_anchor(). All other tests pass without
modification, and we have decent coverage on get_messages_backend.
We use an array now to build up the list of search operands and
then consolidate the special search handling after the loop (which
means setting the flag, putting two more columns in the query, and
using ' '.join to build the string).
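Schematically, the loop and the consolidated handling look like this
(variable and helper names are assumptions):

```py
# Collect all search operands during the main loop over the narrow.
search_operands = []
for term in narrow:
    if term['operator'] == 'search':
        search_operands.append(term['operand'])

# Handle the special search logic once, after the loop.
if search_operands:
    is_search = True                       # set the flag
    query = add_search_columns(query)      # the two extra columns
    search_term = dict(operator='search',
                       operand=' '.join(search_operands))
```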
We have a debugging statement for some obscure errors we get
when narrows have search terms. We now show all the narrow
operators. This isn't really to improve debugging; it's more
to make it easier in the next commit to extract a function
that would make search_term have to be passed back in a tuple.
But it shouldn't hurt debugging either.
Refactoring in this file had resulted in the logic for
html_settings_link being duplicated and extra logic being needed to
ensure these variables were set where they were needed.
This fixes subscriptions_html not being rendered properly in the /help
and /api pages, in addition to removing duplicate code.
This is a pure refactoring and just pulls the function out
to the top level of the module. (The prior commit extracted
it inside a larger function to make a nicer diff.)
Remove models.get_user_profiles_by_ids(), and obtain the user's bot
profiles in actions.get_service_dicts_for_bots() via
users.user_ids_to_users() instead.
Fixes #8939
We don't intend for this server-level setting to exist in the long
term; the purpose of this is just to make it easy to test this code
path for development purposes.
This implements much of the Message side part of #2745.
The new name can_access_stream_history_by_name gets to the point of
what this function actually does. And passing in a user object lets
us define what this does based on the user's subscriptions.
The comments explain this pretty well, but basically because we
rewrite the realm ID during the import process, we need to edit all
the message bodies that link to an attachment to instead link to the
post-processed URL where that file will be hosted on the new server.
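A minimal sketch of the rewriting pass, assuming a mapping from old
upload URLs to their post-processed equivalents has already been
computed:

```py
def fix_upload_links(content: str, url_map: dict) -> str:
    # url_map: old /user_uploads/... path -> path under the new realm ID.
    # (Plain substring replacement, for illustration only.)
    for old_url, new_url in url_map.items():
        content = content.replace(old_url, new_url)
    return content

# fix_upload_links("see [file](/user_uploads/9/ab/x.png)",
#                  {"/user_uploads/9/ab/x.png": "/user_uploads/2/ab/x.png"})
```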
Fixes #8926.
Implement a few optimizations for reading an admin's bot dicts from
the database in a constant number of queries:
- add models.get_user_profiles_by_ids() for reading bot profiles
with a single database query
- add models.get_services_for_bots() for reading services for bots
with a single database query
- add bot_config.get_bot_configs() for reading config data for bots
with a single database query
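The general shape of these changes, in Django ORM terms (a sketch, not
the actual helper code; bot_ids is assumed to be the list of the
admin's bot user IDs):

```py
# Before: one query per bot (an N+1 pattern).
profiles = [UserProfile.objects.get(id=bot_id) for bot_id in bot_ids]

# After: a single query fetches all of the bot profiles at once.
profiles = list(UserProfile.objects.filter(id__in=bot_ids))
```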
Fixes #8838
This fixes a bug where the endpoint for editing bot users would allow
an organization administrator to edit the full name of a bot user.
A combination of this and another recently fixed bug made it possible
for this process to set a `bot_owner` for a non-bot user; so we also
include a migration to fix that for any users that might have had our
model invariants corrupted in that way.
This was a user-reported bug and a very subtle and painful one
to track down.
Previously, if payload['push']['changes'][i]['closed'] was True,
we assumed that a branch was removed. Looking at whether `closed`
was set to True or not was our way to tell whether a push removed
a branch or not.
However, this is wrong! `closed` being set to True can also mean
that the pull request associated with the branch was approved but
the branch itself was not deleted. According to the Bitbucket docs,
the correct way to see if a branch is deleted is to check if `new`
is null.
This bug was leading to KeyErrors about not being able to find
the `commits` key, which shouldn't happen anymore!
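Concretely, the check becomes (a sketch of the logic described above):

```py
def branch_removed(change: dict) -> bool:
    # `closed` is True both when a branch is deleted and when its pull
    # request is merged/approved; per the Bitbucket docs, only a null
    # `new` value means the branch itself is gone.
    return change.get('new') is None
```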