zulip/frontend_tests/run-casper


#!/usr/bin/env python3
import argparse
import subprocess
import sys
import os
import glob
import shlex
#
# In order to use remote casperjs debugging, pass the --remote-debug flag
# This will start a remote debugging session listening on port 7777
#
# See https://zulip.readthedocs.io/en/latest/testing/testing-with-casper.html
# for more information on how to use remote debugging
#
ZULIP_PATH = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
os.environ["CASPER_TESTS"] = "1"
os.environ["PHANTOMJS_EXECUTABLE"] = os.path.join(ZULIP_PATH, "node_modules/.bin/phantomjs")
os.environ.pop("http_proxy", "")
os.environ.pop("https_proxy", "")
usage = """test-js-with-casper [options]
test-js-with-casper # Run all test files
test-js-with-casper 09-navigation.js # Run a single test file
test-js-with-casper 09 # Run a single test file 09-navigation.js
test-js-with-casper 01-login.js 03-narrow.js # Run a few test files
test-js-with-casper 01 03 # Run a few test files, 01-login.js and 03-narrow.js here"""
parser = argparse.ArgumentParser(usage)
parser.add_argument('--skip-flaky-tests', dest='skip_flaky',
                    action="store_true",
                    default=False, help='Skip flaky tests')
parser.add_argument('--interactive', dest='interactive',
                    action="store_true",
                    default=False, help='Run tests interactively')
parser.add_argument('--force', dest='force',
                    action="store_true",
                    default=False, help='Run tests despite possible problems.')
parser.add_argument('--verbose',
                    help='Whether or not to enable verbose mode',
                    action="store_true",
                    default=False)
parser.add_argument('--remote-debug',
                    help='Whether or not to enable remote debugging on port 7777',
                    action="store_true",
                    default=False)
parser.add_argument('--xunit-export', dest='xunit_export',
                    action="store_true",
                    default=False, help='Export the results of the test suite to an XUnit XML file.')
parser.add_argument('tests', nargs=argparse.REMAINDER,
                    help='Specific tests to run; by default, runs all tests')
options = parser.parse_args()
sys.path.insert(0, ZULIP_PATH)
# check for the venv
from tools.lib import sanity_check
sanity_check.check_venv(__file__)
from tools.lib.test_script import assert_provisioning_status_ok
from tools.lib.test_server import test_server_running
from typing import Iterable, List
assert_provisioning_status_ok(options.force)
os.chdir(ZULIP_PATH)
subprocess.check_call(['node', 'node_modules/phantomjs-prebuilt/install.js'])
os.makedirs('var/casper', exist_ok=True)
for f in glob.glob('var/casper/casper-failure*.png'):
    os.remove(f)
def run_tests(files: Iterable[str], external_host: str) -> None:
    test_dir = os.path.join(ZULIP_PATH, 'frontend_tests/casper_tests')
    test_files = []
    for file in files:
        # Allow test files to be specified by a prefix, e.g. "09" for
        # 09-navigation.js.
        for file_name in os.listdir(test_dir):
            if file_name.startswith(file):
                file = file_name
                break
        if not os.path.exists(file):
            file = os.path.join(test_dir, file)
        test_files.append(os.path.abspath(file))

    if not test_files:
        test_files = sorted(glob.glob(os.path.join(test_dir, '*.js')))

    # 10-admin.js is too flaky!
    if options.skip_flaky:
        test_files = [fn for fn in test_files if '10-admin' not in fn]

    remote_debug = []  # type: List[str]
    if options.remote_debug:
        remote_debug = ["--remote-debugger-port=7777", "--remote-debugger-autorun=yes"]

    verbose = []  # type: List[str]
    if options.verbose:
        verbose = ["--verbose", "--log-level=debug"]

    xunit_export = []  # type: List[str]
    if options.xunit_export:
        xunit_export = ["--xunit=var/xunit-test-results/casper/result.xml"]
    def run_tests() -> int:
        # Run each test file in its own casperjs process, to avoid
        # flakes from test interactions; stop at the first failure.
        for test_file in test_files:
            test_name = os.path.basename(test_file)
            cmd = ["node_modules/.bin/casperjs"] + remote_debug + verbose + xunit_export + ["test", test_file]
            print("\n\n===================== %s\nRunning %s\n\n"
                  % (test_name, " ".join(map(shlex.quote, cmd))), flush=True)
            ret = subprocess.call(cmd)
            if ret != 0:
                return ret
        return 0
    with test_server_running(options.force, external_host):
        # Important: do this next call inside the `with` block, when Django
        # will be pointing at the test database.
        subprocess.check_call('tools/setup/generate-test-credentials')

        if options.interactive:
            response = input('Press Enter to run tests, "q" to quit: ')
            ret = 1
            while response != 'q':
                ret = run_tests()
                if ret != 0:
                    response = input('Tests failed. Press Enter to re-run tests, "q" to quit: ')
        else:
            ret = run_tests()

    if ret != 0:
        print("""
The Casper frontend tests failed!  For help debugging, read:
https://zulip.readthedocs.io/en/latest/testing/testing-with-casper.html""", file=sys.stderr)
        if os.environ.get("CIRCLECI"):
            print("", file=sys.stderr)
            print("In CircleCI, the Artifacts tab contains screenshots of the failure.", file=sys.stderr)
            print("", file=sys.stderr)
        sys.exit(ret)
external_host = "zulipdev.com:9981"
run_tests(options.tests, external_host)
sys.exit(0)