The fixture changes are because self.upgrade used to cause a page load
of /billing, which in turn calls Customer.retrieve.
If we ran the full test suite with GENERATE_STRIPE_FIXTURES=True, we would
likely see several more Customer.retrieve.N.json files being deleted. But
we're keeping them there for now to keep the diff small.
This commit works by vendoring the couple of functions we still use from
puppetlabs stdlib (join and range), but removing the rest of the
puppetlabs codebase, and of course cleaning up our linter rules in the
process.
Fixes #7423.
This optimizes tools/provision by not running
`tools/update-authors-json --use-fixture` unless either the script
itself or its fixtures file (zerver/tests/fixtures/authors.json) was
changed.
Fixes #10991.
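Roughly, this kind of check amounts to hashing the watched files and
comparing against a cached digest. A minimal Python sketch of the idea
(the helper names and cache path here are hypothetical, not the actual
tools/provision code):

    import hashlib
    import os

    # Hypothetical paths; the real script and cache location may differ.
    WATCHED_FILES = [
        "tools/update-authors-json",
        "zerver/tests/fixtures/authors.json",
    ]
    CACHE_FILE = "var/authors-json-hash"

    def current_digest():
        # Hash the contents of every watched file into one digest.
        sha = hashlib.sha1()
        for path in WATCHED_FILES:
            with open(path, "rb") as f:
                sha.update(f.read())
        return sha.hexdigest()

    def needs_update_authors_json():
        # Re-run update-authors-json only if the script or its fixture
        # changed since the last successful provision.
        digest = current_digest()
        if os.path.exists(CACHE_FILE):
            with open(CACHE_FILE) as f:
                if f.read().strip() == digest:
                    return False
        with open(CACHE_FILE, "w") as f:
            f.write(digest)
        return True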
This release is from 2018-08-22, a little over 100 days ago.
It was the first release with the important fix ensuring that, when the
server tells the app to stop displaying a notification because the user
has read the message (as the SEND_REMOVE_PUSH_NOTIFICATIONS server
setting enables), the app doesn't replace the notification
with a broken one reading "null". We have that setting running now
on chat.zulip.org, and intend to roll it out more broadly soon.
The `# take 0` thing is a slightly absurd workaround for the fact
that our funky out-of-line way of marking lines to ignore doesn't
work right if there are multiple such lines in a given file that
are equal modulo leading and trailing whitespace.
This option causes test-documentation to only verify internal links
that we control and are important to be correct. This prevents
test-documentation from flaking in CI due to issues with the dozens of
third-party sites that we link to from various parts of our
documentation.
Tweaked by tabbott for comment clarity, and to also include
github.com/zulip links.
Fixes #10942.
The goal here was to enforce 100% coverage on
parse_narrow, but the code has an unreachable line
and is overly tolerant of bogus URLs. This will
be fixed in the next commit.
This fixes an actual user-facing issue in our mobile push
notifications documentation (where we were incorrectly failing to
quote the argument to `./manage.py register_server`, making it not
work), as well as preventing similar issues from occurring
again via a linter rule.
We now let color_data keep its own state for
unused_colors, so that we no longer have to pass in
a large list of unused_colors every time we want
to assign a new stream color.
This mostly matters at startup, where we might
be cycling through 5000 streams. We claim all
the unused colors up front.
Each operation now has a bounded cost,
where the worst case scenario is popping
off the first element of a list of <= 24 colors.
The algorithm is now deterministic, too, to make
it easier to test. It's unclear whether random color
assignment ever had much benefit, and it made unit
testing the algorithm difficult. Now we have 100%
line coverage.
Fixes part of #10902.
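The actual module is frontend code, but the scheme described above is
simple enough to sketch; this is an illustrative Python approximation,
not the real implementation:

    # Illustrative approximation of the deterministic scheme described above.
    STREAM_ASSIGNMENT_COLORS = ["#76ce90", "#fae589", "#a6c7e5"]  # really ~24 colors

    unused_colors = []  # module-level state, claimed once at startup

    def claim_colors(used_colors):
        # At startup, claim every palette color not already assigned,
        # instead of recomputing the unused list on every assignment.
        global unused_colors
        used = set(used_colors)
        unused_colors = [c for c in STREAM_ASSIGNMENT_COLORS if c not in used]

    def pick_color():
        # Worst case: pop the first element of a list of <= 24 colors.
        if unused_colors:
            return unused_colors.pop(0)
        # Palette exhausted; fall back deterministically (recycling details omitted).
        return STREAM_ASSIGNMENT_COLORS[0]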
We've had a few unpleasant bugs with real documentation links being
broken, so we're going to make this more aggressive for now.
I think we instead want a more subtle option for suppressing failures
in some places but not others.
This library was absolutely essential as part of our Python 2->3
migration process, but all of its calls should be either no-ops or
encode/decode operations.
Note also that the library has been wrong since the incorrect
refactoring in 1f9244e060.
Fixes #10807.
This seems like kind of a silly function to extract
to topic.py, but it will theoretically help us sweep
"subject" if we change the DB.
It had test coverage.
Even though you'd think these regexes would be
cached, compiling the regex outside of looping
through lines makes a difference.
My timings are 8.4s -> 6.0s. (You need to hack
on the linter to isolate the custom checks.)
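The change is simply hoisting re.compile() out of the per-line loop.
A minimal sketch (the rule shown here is made up for illustration):

    import re

    # Compile once, outside the loop, instead of passing a string pattern
    # to re.search() for every line in every file.
    TRAILING_WHITESPACE = re.compile(r"\s+$")

    def check_file(lines):
        errors = []
        for i, line in enumerate(lines, start=1):
            if TRAILING_WHITESPACE.search(line):
                errors.append((i, "trailing whitespace"))
        return errors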
We (lexically) remove "subject" from the conversion code. The
`build_message` helper calls `set_topic_name` under the hood,
so things still have "subject" in the JSON.
There was good code coverage on `build_message`.
Previously, a string ending in "... 😄" was reported as an
error and the linter complained that there should be a space
after the last ':'. This commit changes the pattern so that the
linter only checks for colons that are preceded by an opening
double-quote (").
We now prevent adding "subject" to any code in
zerver/lib, unless you specifically exempt it.
The new set called `FILES_WITH_LEGACY_SUBJECT`
also has comments that give a roadmap of what
to fix.
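A hedged sketch of how such a rule can be wired up (the set name follows
the commit; the file entry and checker plumbing are illustrative):

    # Files in zerver/lib that may still mention "subject"; in the real
    # set, comments next to each entry sketch the roadmap for removing it.
    FILES_WITH_LEGACY_SUBJECT = {
        "zerver/lib/example_legacy_module.py",  # hypothetical entry
    }

    def uses_legacy_subject(fn, line):
        # Flag "subject" anywhere under zerver/lib unless the file is exempted.
        if not fn.startswith("zerver/lib/"):
            return False
        if fn in FILES_WITH_LEGACY_SUBJECT:
            return False
        return "subject" in line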
In tools/pre-commit line 18:
if [ -z "$VIRTUAL_ENV" ] && `which vagrant > /dev/null` && [ -e .vagrant ]; then
^-- SC2092: Remove backticks to avoid executing output.
^-- SC2006: Use $(..) instead of legacy `..`.
^-- SC2230: which is non-standard. Use builtin 'command -v' instead.
In tools/pre-commit line 23:
./tools/lint --no-gitlint --force $changed_files || true
^-- SC2086: Double quote to prevent globbing and word splitting.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/setup/install-aws-server line 25:
zulip_root=${ZULIP_ROOT:-$HOME/zulip}
^-- SC2034: zulip_root appears unused. Verify use (or export if used externally).
In tools/setup/install-aws-server line 40:
if [ -n "$zulip_confdir" ]; then
^-- SC2154: zulip_confdir is referenced but not assigned.
In tools/setup/install-aws-server line 55:
VIRTUALENV_NEEDED=$(if $(echo "$type" | grep -q app_frontend); then echo -n yes; else echo -n no; fi)
^-- SC2091: Remove surrounding $() to avoid executing output.
In tools/setup/install-aws-server line 60:
SSH_OPTS=(-o HostKeyAlgorithms=ssh-rsa)
^-- SC2191: The = here is literal. To assign by index, use ( [index]=value ) with no spaces. To keep as literal, quote it.
In tools/setup/install-aws-server line 69:
ssh "${SSH_OPTS[@]}" "$server" -t -i "$amazon_key_file" -lroot <<EOF
^-- SC2087: Quote 'EOF' to make here document expansions happen on the server side rather than on the client.
In tools/setup/install-aws-server line 86:
ssh "${SSH_OPTS[@]}" "$server" -t -i "$amazon_key_file" -lroot <<EOF
^-- SC2087: Quote 'EOF' to make here document expansions happen on the server side rather than on the client.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/setup/postgres-init-dev-db line 10:
ROOT_POSTGRES="sudo -i -u "$DEFAULT_USER" psql"
^-- SC2027: The surrounding quotes actually unquote this. Remove or escape them.
In tools/setup/postgres-init-dev-db line 46:
echo 'ERROR: Try `sudo service postgresql start`?'
^-- SC2016: Expressions don't expand in single quotes, use double quotes for that.
In tools/setup/postgres-init-dev-db line 64:
PGPASS_ESCAPED_PREFIX="*:\*:\*:$USERNAME:"
^-- SC1117: Backslash is literal in "\*". Prefer explicit escaping: "\\*".
^-- SC1117: Backslash is literal in "\*". Prefer explicit escaping: "\\*".
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/django-template-graph line 10:
for t in $(find -name '*.html' -printf '%P\n'); do
^-- SC2044: For loops over find output are fragile. Use find -exec or a while read loop.
^-- SC2185: Some finds don't have a default path. Specify '.' explicitly.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/deploy-branch line 17:
[ $? -ne 0 ] && error_out "Unknown branch: $branch"
^-- SC2181: Check exit code directly with e.g. 'if mycmd;', not indirectly with $?.
In tools/deploy-branch line 23:
if [ $? -eq 0 ]; then
^-- SC2181: Check exit code directly with e.g. 'if mycmd;', not indirectly with $?.
In tools/deploy-branch line 35:
[ $? -ne 0 ] && error_out "Rebase onto origin/master failed"
^-- SC2181: Check exit code directly with e.g. 'if mycmd;', not indirectly with $?.
In tools/deploy-branch line 39:
[ $? -ne 0 ] && error_out "Push of master to origin/master failed"
^-- SC2181: Check exit code directly with e.g. 'if mycmd;', not indirectly with $?.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/commit-msg line 9:
if [ $(grep '^[^#]' .git/COMMIT_EDITMSG --count) -ne 0 ]; then
^-- SC2046: Quote this to prevent word splitting.
In tools/commit-msg line 10:
lint_cmd="cd ~/zulip && cat \"$1\" | python -m gitlint.cli"
^-- SC2089: Quotes/backslashes will be treated literally. Use an array.
In tools/commit-msg line 11:
if [ -z "$VIRTUAL_ENV" ] && `which vagrant > /dev/null` && [ -e .vagrant ]; then
^-- SC2092: Remove backticks to avoid executing output.
^-- SC2006: Use $(..) instead of legacy `..`.
^-- SC2230: which is non-standard. Use builtin 'command -v' instead.
In tools/commit-msg line 14:
$lint_cmd
^-- SC2090: Quotes/backslashes in this variable will not be respected.
In tools/commit-msg line 17:
if [ $? -ne 0 ]; then
^-- SC2181: Check exit code directly with e.g. 'if mycmd;', not indirectly with $?.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/clean-branches line 33:
echo -n "Deleting local branch $(echo "$ref" | sed 's!^refs/heads/!!')"
^-- SC2001: See if you can use ${variable//search/replace} instead.
In tools/clean-branches line 41:
echo -n "Deleting local branch $(echo "$ref" | sed 's!^refs/heads/!!')"
^-- SC2001: See if you can use ${variable//search/replace} instead.
In tools/clean-branches line 49:
remote_name="$(echo "$ref" | sed 's!^refs/remotes/origin/!!')"
^-- SC2001: See if you can use ${variable//search/replace} instead.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/build-release-tarball line 50:
for i in `cat "$TMPDIR/$prefix/tools/release-tarball-exclude.txt"`; do
^-- SC2013: To read lines rather than words, pipe/redirect to a 'while read' loop.
^-- SC2006: Use $(..) instead of legacy `..`.
In tools/build-release-tarball line 51:
rm -r --interactive=never "$TMPDIR/$prefix/$i";
^-- SC2115: Use "${var:?}" to ensure this never expands to / .
In tools/build-release-tarball line 97:
echo; echo -ne "\033[33mRunning update-prod-static failed. "
^-- SC1117: Backslash is literal in "\0". Prefer explicit escaping: "\\0".
In tools/build-release-tarball line 98:
echo -e "Check $TMPDIR/update-prod-static.log for more information.\033[0m"
^-- SC1117: Backslash is literal in "\0". Prefer explicit escaping: "\\0".
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/build-docs line 3:
cd "$(dirname "$0")"/../docs
^-- SC2164: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
This commit adds a test for the payload that is generated when
a Task is moved from one user story to another on Taiga's Sprint
Taskboard UI.
This commit also gets up this webhook's test coverage up to 100%.
We drop support for using `icon-vector` as a base class when
including icons from the font awesome icons package.
From now on, only icons as specified in font awesome v4.7.0 can be used
in the code base.
This module makes it really easy to create are-you-sure
dialogs for dangerous operations.
Basically it's one function with five parameters. You
give three chunks of HTML, a callback function, and
a parent container.
The first use of this will be in settings_user_groups,
coming up in a couple commits.
IFTTT allows custom templating for their payloads, so the onus is
on the user to ensure that their custom templates conform to the
expectations outlined in our IFTTT webhook docs. For that reason,
these payloads weren't generated, but were manually edited.
After discovering a couple of bugs, I decided to thoroughly test
and rewrite this integration from scratch. The older code wasn't
generating coherent messages.
This commit also gets this integration up to 100% test coverage.
Test coverage was improved by removing an unused function and
removing some code (written by me) that was actually handling
Test Hook event types incorrectly.
It was a painful amount of work to generate the actual payload.
Since the only difference was a small build URL, I manually
edited the payload and used that for testing.
This commit gets our GitHub webhook up to 100% test coverage.
Note that Freshdesk allows custom templating for outgoing payloads
in their webhook UI. Therefore, the payloads added in this commit
did not have to be official payloads from Freshdesk.
The lack of coverage was due to:
* A function that was never used anywhere.
* get_commit_status_changed_body was using a regex where it didn't
really need to use one. And there was an if statement that
assumed that the payload might NOT contain the URL to the commit.
However, I checked the payload and there shouldn't be any instances
where a commit event is generated but there is no URL to the commit.
* get_push_tag_body had an `else` condition that really can't happen
in any payload. I verified this by checking the BitBucket webhook
docs.
We shouldn't just ignore exceptions when encoding the incoming
auth credentials. Even if the incoming credentials are properly
encoded, it is better to know when that is the case or if
something else fails.
Apparently, Travis removed the Heroku bundle of packages from their
servers, which made the build start failing when trying to configure
apt to hold their versions (sigh). This commit removes the
problematic packages.
The companion tool `tools/reset-to-pull-request` has a handy feature
to maintain a local ref tracking the PR: e.g., pr/1234 for PR 1234.
If this were a normal remote-tracking branch maintained by `git fetch`,
it'd get updated on `git push`. Do the same thing here.
This helps keep a view like `gitk --all @` a bit tidier, by causing
merged PRs to stop pointing at side branches of the main history.
Before this change, the way we loaded
webpack for various tools was brittle.
First, I addressed test-api and test-help-documentation.
These tools used to be unable to run standalone on a
clean provision, because they were (indirectly)
calling tools/webpack without the `--test` option.
The problem was a bit obscure, since running things
like `./tools/test-backend` or `./tools/test-all` in
your workflow would create `./var/webpack-stats-test.json`
for the broken tools (and then they would work).
The tools themselves weren't broken; they were just
relying on the common `test_server_running` helper.
And even that helper wasn't broken; it was just that
`run-dev.py` wasn't respecting the `--test` option.
So I made it so that `./tools/run-dev` passes in `--test` to
`./tools/webpack`.
To confuse matters even more, for some reason Casper
uses `./webpack-stats-production.json` via various
hacks for its webpack configuration, so when I fixed
the other tests, it broke Casper.
Here is the Casper-related hack in zproject/test_settings.py,
which was in place before my change and remains
after it:
    if CASPER_TESTS:
        WEBPACK_FILE = 'webpack-stats-production.json'
    else:
        WEBPACK_FILE = os.path.join('var', 'webpack-stats-test.json')
I added similar logic in tools/webpack:
if "CASPER_TESTS" in os.environ:
build_for_prod_or_casper(args.quiet)
I also made the helper functions in `./tools/webpack` have
nicer names.
So, now tools should all be able to run standalone and not
rely on previous tools creating webpack stats files for
them and leaving them in the file system. That's good.
Things are still a bit janky, though. It's not completely
clear to me why `test-js-with-casper` should work off of
a different webpack configuration than the other tests.
For now most of the jankiness is around Casper, and we have
hacks in two different places, `zproject/test_settings.py` and
`tools/webpack` to force it to use the production stats
file instead of the "test" one, even though Casper uses
test-like settings for other things like which database
you're using.
We don't use input.create_non_editable_pill() in our
code yet. If we add this back, we'll want to have node
tests on it.
Removing this unused code brings us to 100% line
coverage for input_pill.js.
This directly reverts 5c11ab85 with the small addition
of adding input_pill to our list of fully covered
modules.
These repositories (`zulip-ios-legacy` and `zulip-android`) are
deprecated, and as such should not have their own tabs, but still
should be included in the total contributions count.
Apparently, `puppet-lint` on Ubuntu trusty throws warnings for certain
quoting patterns that are OK in modern `puppet-lint`. I believe the
old Zulip code was actually correct (i.e. the old `puppet-lint`
implementation was the problem), but it seems worth changing anyway to
suppress the warnings.
We also exclude more of puppet-apt from linting, since it's
third-party code.
Instead of using a hardcoded value for spritesheet dimensions,
automatically calculate it using `emoji_data`. This will free
us from updating it on every emoji datasource update, as well as
allow us to add the google blob emojiset.
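A minimal sketch of deriving the dimensions from the data (assuming
emoji_data is the parsed emoji-datasource list with sheet coordinates;
the exact formula in our build code may differ):

    def get_square_size(emoji_data):
        # Each datasource entry carries its sheet coordinates; the sheet is
        # a square of (max coordinate + 1) cells on a side, so a datasource
        # update that grows the sheet changes the result automatically.
        return max(max(e["sheet_x"], e["sheet_y"]) for e in emoji_data) + 1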
Since this class was built, folks have always chosen
to subclass JsonableError for situations where
the default of ErrorCode.BAD_REQUEST is insufficient.
So now we simplify the use cases, which also gets
us 100% coverage on this core module.
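For context, the subclassing pattern looks roughly like this (a hedged
sketch; attribute names and the example subclass are approximations, not
a verbatim copy of the module):

    class ErrorCode:
        BAD_REQUEST = "BAD_REQUEST"
        RATE_LIMIT_HIT = "RATE_LIMIT_HIT"

    class JsonableError(Exception):
        # Defaults cover the common case; subclasses override only what differs.
        code = ErrorCode.BAD_REQUEST
        http_status_code = 400

        def __init__(self, msg):
            super().__init__(msg)
            self.msg = msg

    class RateLimited(JsonableError):
        # Subclass only because the default error code is insufficient here.
        code = ErrorCode.RATE_LIMIT_HIT
        http_status_code = 429

        def __init__(self):
            super().__init__("API usage exceeded rate limit")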
For building Zulip in an environment where a custom CA certificate is
required to access the public Internet, one needs to be able to
specify that CA certificate for all network access done by the Zulip
installer/build process. This change allows configuring that via the
environment.
We start to use puppet-lint to lint puppet modules by default by
adding it to tools/lint (which controls our linter tool chain).
We also define a few puppet-lint rules to exclude.
Fixes: #9185.
This will help us significantly reduce the size of the release
tarball. I have refrained from changing the `EMOJISETS`
constant in `emoji_setup_utils.py`, as that controls the
emojisets that we want to support. Since we want to re-enable
the feature of changing emojisets sometime in the future,
that variable should be kept as it is; it also controls several
other things, like the emoji scripts that we use to generate emoji
names. Changing it might cause hard-to-catch bugs.
`emoji-datasource` package v4.0.4 introduced the concept of qualified
and non-qualified emoji codes. Since chat programs don't need to use
the emoji representation selector, we migrated our infrastructure
to use non-qualified emoji codes. But we missed the fact that the
emoji file names in the emoji farm are based on the emoji data's 'unified'
field, and the value of this field has changed. Consequently, the image
file names have also changed. We used `emoji_code` while
converting the span tags to img tags when processing notifications.
But since `emoji_code` now refers to the non-qualified code while image
file names are based on the qualified code, we need to rename the images
to do the conversion correctly. This commit just fixes this.
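A hedged sketch of the kind of renaming pass this implies (the
'unified'/'non_qualified' field names come from the emoji-datasource
JSON; the directory layout and file extension are assumptions):

    import os

    def rename_to_non_qualified(emoji_data, emoji_farm_dir):
        # Image files were generated from the (qualified) 'unified' codes, but
        # our spans now carry non-qualified codes, so rename the files so the
        # notification converter can look them up by non-qualified code.
        for emoji in emoji_data:
            qualified = emoji["unified"].lower()
            non_qualified = (emoji.get("non_qualified") or emoji["unified"]).lower()
            if qualified == non_qualified:
                continue
            src = os.path.join(emoji_farm_dir, qualified + ".png")
            dst = os.path.join(emoji_farm_dir, non_qualified + ".png")
            if os.path.exists(src):
                os.rename(src, dst)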
Right now it only has one function, but the function
we removed never really belonged in actions.py, and
now we have better test coverage on actions.py, which
is an important module to get to 100%.
This ancient tool predates our practice of collecting test fixtures
for third-party integrations, which is a better general system for the
problem this solved.
After the messages have been imported, set the rendered_content of the
messages instead of leaving it as None.
This is important to ensure that:
(1) Performance for users is good after completing the import.
(2) The database's full-text indexes have all of the imported messages
(which only happens properly when Message rows have their
rendered_content field edited).
Fixes #9168.
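A hedged sketch of the post-import rendering pass (the rendering helper
is passed in rather than imported, since the exact function name is an
assumption):

    def render_imported_messages(imported_messages, render_markdown):
        # After the bulk import, fill in rendered_content so users don't pay
        # the rendering cost at read time, and so editing the rows lets the
        # database build its full-text index over the imported messages.
        for message in imported_messages:
            if message.rendered_content is None:
                message.rendered_content = render_markdown(message.content)
                message.save(update_fields=["rendered_content"])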
Now reading API keys from a user is done with the get_api_key wrapper
method, rather than directly fetching it from the user object.
Also, every place where an action should be done for each API key is now
using get_all_api_keys. For the moment, this method returns a single-item
list containing the specified user's API key.
This commit is the first step towards allowing users to have multiple API
keys.
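The wrappers described above are roughly this simple (a hedged sketch;
the real helpers' module and exact signatures may differ):

    def get_api_key(user_profile):
        # Single point of access for reading a user's API key.
        return user_profile.api_key

    def get_all_api_keys(user_profile):
        # For now every user has exactly one key; callers that act on each
        # key loop over this list so multiple keys can be supported later.
        return [user_profile.api_key]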