zulip/frontend_tests/run-casper


#!/usr/bin/env python
from __future__ import print_function
import optparse
import subprocess
import sys
import os
import glob
try:
    # We don't actually need typing, but it's a good guard for being
    # outside a Zulip virtualenv.
    from typing import Iterable
except ImportError as e:
    print("ImportError: {}".format(e))
    print("You need to run the Zulip tests inside a Zulip dev environment.")
    print("If you are using Vagrant, you can `vagrant ssh` to enter the Vagrant guest.")
    sys.exit(1)

#
# In order to use remote casperjs debugging, pass the --remote-debug flag
# This will start a remote debugging session listening on port 7777
#
# See https://wiki.zulip.net/wiki/Testing_the_app for more information
# on how to use remote debugging
#
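# A typical invocation (assuming this script is run from the repository
# root, where it lives at frontend_tests/run-casper) might look like:
#
#     ./frontend_tests/run-casper --remote-debug 09-navigation.js
#
# after which you can attach a debugger to port 7777.
#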
os.environ["CASPER_TESTS"] = "1"
os.environ["PHANTOMJS_EXECUTABLE"] = os.path.join(os.path.dirname(__file__), "../node_modules/.bin/phantomjs")
usage = """%prog [options]
    test-js-with-casper # Run all test files
    test-js-with-casper 09-navigation.js # Run a single test file
    test-js-with-casper 09 # Run a single test file, 09-navigation.js
    test-js-with-casper 01-login.js 03-narrow.js # Run a few test files
    test-js-with-casper 01 03 # Run a few test files, 01-login.js and 03-narrow.js here"""
parser = optparse.OptionParser(usage)
parser.add_option('--skip-flaky-tests', dest='skip_flaky',
                  action="store_true",
                  default=False, help='Skip flaky tests')
parser.add_option('--force', dest='force',
                  action="store_true",
                  default=False, help='Run tests despite possible problems.')
parser.add_option('--remote-debug',
                  help='Whether or not to enable remote debugging on port 7777',
                  action="store_true",
                  default=False)
(options, args) = parser.parse_args()
TOOLS_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.dirname(TOOLS_DIR))
from tools.lib.test_script import get_provisioning_status
from tools.lib.test_server import test_server_running
if not options.force:
    ok, msg = get_provisioning_status()
    if not ok:
        print(msg)
        print('If you really know what you are doing, use --force to run anyway.')
        sys.exit(1)

os.chdir(os.path.join(os.path.dirname(os.path.realpath(__file__)), '..'))
subprocess.check_call('tools/setup/generate-test-credentials')
subprocess.check_call(['mkdir', '-p', 'var/casper'])
subprocess.check_call(['rm', '-f'] + glob.glob('var/casper/casper-failure*.png'))
LOG_FILE = 'var/casper/server.log'
if os.path.exists(LOG_FILE) and os.path.getsize(LOG_FILE) < 100000:
    log = open(LOG_FILE, 'a')
    log.write('\n\n')
else:
    log = open(LOG_FILE, 'w')

def run_tests(realms_have_subdomains, files, external_host):
    # type: (bool, Iterable[str], str) -> None
    test_dir = os.path.join(os.path.dirname(__file__), '../frontend_tests/casper_tests')
    test_files = []
    for file in files:
        for file_name in os.listdir(test_dir):
            if file_name.startswith(file):
                file = file_name
                break
        if not os.path.exists(file):
            file = os.path.join(test_dir, file)
        test_files.append(os.path.abspath(file))
    if not test_files:
        test_files = sorted(glob.glob(os.path.join(test_dir, '*.js')))
    # 10-admin.js is too flaky!
    if options.skip_flaky:
        test_files = [fn for fn in test_files if '10-admin' not in fn]

    remote_debug = ""
    if options.remote_debug:
        remote_debug = "--remote-debugger-port=7777 --remote-debugger-autorun=yes"
    with test_server_running(options.force, external_host, log, dots=True):
        ret = 1
        for test_file in test_files:
            cmd = "node_modules/.bin/casperjs %s test --subdomains=%s %s" % (
                remote_debug, realms_have_subdomains, test_file)
            print("\n\nRunning %s" % (cmd,))
            ret = subprocess.call(cmd, shell=True)
            if ret != 0:
                break
    if ret != 0:
        print("""
Oops, the frontend tests failed. Tips for debugging:
 * Check the frontend test server logs at %s
 * Check the screenshots of failed tests at var/casper/casper-failure*.png
 * Try remote debugging the test web browser as described in docs/testing-with-casper.md
""" % (LOG_FILE,), file=sys.stderr)
        sys.exit(ret)

external_host = "localhost:9981"
# First, run all tests with REALMS_HAVE_SUBDOMAINS set to False
run_tests(False, args, external_host)
# Now run a subset of the tests with REALMS_HAVE_SUBDOMAINS set to True
os.environ["REALMS_HAVE_SUBDOMAINS"] = "True"
external_host = "zulipdev.com:9981"
if len(args) == 0:
    run_tests(True, ["00-realm-creation.js", "01-login.js", "02-site.js"], external_host)
else:
    run_tests(True, args, external_host)
sys.exit(0)