tools/travis/py3k used to check only files whose names ended with .py.
Now it also checks Python scripts that don't have an extension.
It uses tools/lister.py to get a list of all Python files.
This has the side effect of making lint-all check all shell scripts,
not just those under scripts/, tools/, and bin/.
[commit message expanded by tabbott]
Add a module, tools/lister.py, which lists all the files in a directory
that are tracked by git. This is useful because lister.py will be used
by other scripts in the future that need to introspect files in the
repository, such as linters and static code checkers.
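A minimal sketch of the idea (the helper name and filtering details
here are illustrative, not the exact implementation):

    import subprocess

    def list_python_files(path='.'):
        # Ask git for every tracked file under the given path.
        output = subprocess.check_output(['git', 'ls-files', path])
        for filename in output.decode('utf-8').split('\n'):
            if not filename:
                continue
            if filename.endswith('.py'):
                yield filename
            else:
                # Extensionless scripts: sniff the shebang line instead.
                with open(filename, 'rb') as f:
                    first_line = f.readline()
                if first_line.startswith(b'#!') and b'python' in first_line:
                    yield filename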
This should fix a problem we've been having with errors downloading
the PhantomJS packages from their original hosting service.
Eventually we should move them to an S3 bucket.
This change drops the memory used for Python processes run by Zulip in
development from about 1GB to 300MB on my laptop.
On the safety front, http://pika.readthedocs.org/en/latest/faq.html
explains: "Pika does not have any notion of threading in the code. If
you want to use Pika with threading, make sure you have a Pika
connection per thread, created in that thread. It is not safe to share
one Pika connection across threads." Since this code only connects
to RabbitMQ inside the individual threads, I believe this should be
safe.
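The per-thread pattern looks roughly like this (a sketch, not the
actual queue code; the queue name and host are placeholders):

    import threading

    import pika

    def worker():
        # Each thread creates and owns its own connection; connections
        # are never shared across threads.
        connection = pika.BlockingConnection(
            pika.ConnectionParameters('localhost'))
        channel = connection.channel()
        channel.queue_declare(queue='test')
        channel.basic_publish(exchange='', routing_key='test', body='hello')
        connection.close()

    for i in range(3):
        threading.Thread(target=worker).start()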
Progress towards #32.
This automatically loads settings, zerver.models.* and
zerver.lib.actions.* when you start `manage.py shell`, which should
save a bit of time basically every time someone uses it.
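One way to implement this kind of auto-loading (a sketch; the real
implementation may differ) is to override the shell management command
and pre-populate its namespace:

    # e.g. zerver/management/commands/shell.py (hypothetical path)
    import code

    from django.conf import settings
    from django.core.management.base import BaseCommand

    class Command(BaseCommand):
        help = 'Like the default shell, but with zerver modules pre-imported.'

        def handle(self, *args, **options):
            import zerver.models
            import zerver.lib.actions
            namespace = {'settings': settings}
            # Copy every public name from the commonly used modules.
            for module in [zerver.models, zerver.lib.actions]:
                for name in dir(module):
                    if not name.startswith('_'):
                        namespace[name] = getattr(module, name)
            code.interact(local=namespace)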
Fixes #275.
Add call to tools/generate-fixtures in tools/test-backend before
starting the tests. Previously, test-backend could fail if called
after tools/test-js-with-casper had failed.
Fixes#501.
manage_args is set to a list of arguments a few lines later in the
function, making this initialization to the empty string useless and
confusing.
Discovered using mypy.
It's not clear whether this will end up being net negative in value in
the long term since it's kinda hard to understand the output, but in
the short term it should prevent regressions.
It's possible we should just eliminate this mechanism, but this fixes
a proximal problem where the multi-line get_subscribers endpoint
description was being handled wrong.
Travis CI's model of installing every version of postgres on the test
VM and then shutting down all the versions other than the one
requested seems to not work very well with doing apt upgrades. It
seems the best way to resolve this is to just uninstall the versions
we don't need.
The point of the lock is to prevent two deployments happening at the
same time and racing with each other, not to prevent doing any future
deployments after an error happens (which is what the current
implementation does in practice).
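Concretely, the lock should be held only for the duration of the
deploy and released even on failure, along these lines (a sketch; the
lock path and deploy entry point are placeholders):

    import fcntl

    with open('/var/lock/zulip-deploy.lock', 'w') as lock_file:
        # Block until any in-progress deployment finishes.
        fcntl.flock(lock_file, fcntl.LOCK_EX)
        try:
            run_deployment()  # hypothetical deploy entry point
        finally:
            # Release the lock even if the deployment failed, so a
            # later deploy attempt isn't blocked forever.
            fcntl.flock(lock_file, fcntl.LOCK_UN)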
Addresses part of #208.
We ran into a bug with the Travis CI infrastructure where postgres
9.1 is installed on the system, and so when we'd do an apt upgrade
with a new version of 9.1, the 9.1 daemon would end up getting started
and conflict with the 9.3 daemon we were trying to run.
Django's `manage.py runserver` prints a relatively low-information log
line for every request of the form:
[14/Dec/2015 00:43:06]"GET /static/js/message_list.js HTTP/1.0" 200 21969
This is pretty spammy, especially given that we already have our own
middleware printing a more detailed version of the same log lines:
2015-12-14 00:43:06,935 INFO 127.0.0.1 GET 200 0ms /static/js/message_list.js (unauth via ?)
Since runserver doesn't have support for controlling whether these log
lines are printed, we wrap it with a small bit of code that silences
the log lines for 200/304 requests (aka the uninteresting ones).
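One way to do that wrapping (a sketch, not the exact code; the regex
is a guess at runserver's access-log format) is to filter what gets
written to stderr before handing control to runserver:

    import re
    import sys

    class FilteredStderr(object):
        def __init__(self, stream):
            self.stream = stream

        def write(self, data):
            # Drop runserver access-log lines for 200/304 responses.
            if re.search(r'" (200|304) ', data):
                return
            self.stream.write(data)

        def flush(self):
            self.stream.flush()

    sys.stderr = FilteredStderr(sys.stderr)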
Several of these rules apply to only one of Python and JavaScript, so
splitting them up simplifies the logic and should make our linter code
more readable.
In the process, we add support for per-rule/file pair exclusions to
handle the tab exception for codehilite.py.
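Conceptually, each rule can then carry its own exclusion set, e.g. (an
illustrative structure, not the exact one; the codehilite.py path is a
guess):

    import re

    python_rules = [
        {'pattern': r' +$',
         'description': 'Trailing whitespace'},
        {'pattern': r'\t',
         'description': 'No tabs allowed',
         # Per-rule/file exclusion: this file legitimately contains tabs.
         'exclude': set(['zerver/lib/bugdown/codehilite.py'])},
    ]

    def check_line(filename, line, rules):
        for rule in rules:
            if filename in rule.get('exclude', set()):
                continue
            if re.search(rule['pattern'], line):
                print('%s: %s' % (filename, rule['description']))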
The #! line processing interpreted the argument to pass to `env` as
"python2.7 -u", which obviously isn't a real program.
We fix this by setting the PYTHONUNBUFFERED environment variable
inside the program, which has the same effect.
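The fix is on the order of (a sketch; any Python subprocesses spawned
afterwards inherit the variable and run unbuffered):

    import os

    # Child Python processes inherit this and run unbuffered, matching
    # the intent of the old `#!/usr/bin/env python2.7 -u` shebang.
    os.environ['PYTHONUNBUFFERED'] = 'y'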
Thanks to Dan Fedele for the bug report and suggested solution!
This test caught a few bugs where refactoring had made management
commands fail (and would have caught a few more recent ones).
Ideally we'd replace this with a more advanced test that actually
verifies that the management commands do something useful, but it's a
start.
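A sketch of what such a smoke test can look like (the class name and
app filter are illustrative):

    from django.core.management import get_commands, load_command_class
    from django.test import TestCase

    class TestCommandsCanStart(TestCase):
        def test_commands_can_be_loaded(self):
            # Loading each command class catches refactoring breakage
            # (bad imports, renamed helpers) without running the command.
            for name, app_name in get_commands().items():
                if app_name.startswith('zerver'):
                    load_command_class(app_name, name)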
The node packages 'jQuery' and 'jquery' are different: 'jQuery' is the
legacy support package that Zulip needs, so the require statements in
the tests were updated accordingly.
Travis uses node 4.0 by default and we are using 0.10, so the command to
install the correct version had to be added to the .travis.yml file.
This tests whether a new patch introduces any regressions related to
any of the Python 3 compatibility fixers we've run in the past, so
that we can make continuous forward progress on our path towards
Python 3 compatibility.
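The core of such a check can be as simple as running each fixer over a
file and seeing whether it would change anything; roughly (a sketch
using the stdlib lib2to3 API):

    from lib2to3.refactor import RefactoringTool

    def fixer_would_change(fixer_name, filename):
        # fixer_name is a full dotted path, e.g. 'lib2to3.fixes.fix_except'.
        tool = RefactoringTool([fixer_name])
        with open(filename) as f:
            source = f.read()
        # lib2to3 requires its input to end with a newline.
        if not source.endswith('\n'):
            source += '\n'
        new_source = str(tool.refactor_string(source, filename))
        return new_source != source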
This produces error output that looks like this:
"""
Testing for additions of Python 2 patterns we've removed as part of moving towards Python 3 compatibility.
Running Python 3 compatibility test lib2to3.fixes.fix_apply
Running Python 3 compatibility test lib2to3.fixes.fix_except
diff --git a/zerver/views/__init__.py b/zerver/views/__init__.py
index b5c0102..2defd46 100644
--- a/zerver/views/__init__.py
+++ b/zerver/views/__init__.py
@@ -296,7 +296,7 @@ def accounts_register(request):
do_activate_user(user_profile)
do_change_password(user_profile, password)
do_change_full_name(user_profile, full_name)
- except UserProfile.DoesNotExist, e:
+ except UserProfile.DoesNotExist as e:
user_profile = do_create_user(email, password, realm, full_name, short_name,
prereg_user=prereg_user,
newsletter_data={"IP": request.META['REMOTE_ADDR']})
Python 3 compatibility error(s) detected! See diff above for what you need to change.
"""
Running the check-templates test fails when there are 'blocktrans'
tags in Django templates. The fix is to add 'blocktrans' to the
is_django_block_tag function in check-templates.
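The fix amounts to something like this (the surrounding tag list is
illustrative):

    def is_django_block_tag(tag):
        # 'blocktrans' opens a block closed by 'endblocktrans', so the
        # template checker must treat it like the other block tags.
        return tag in ['autoescape', 'block', 'comment', 'for', 'if',
                       'ifequal', 'verbatim', 'blocktrans', 'with']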
With this change, we are now testing the production static asset
pipeline and installation process in a new testing job (and also run
the frontend/backend tests separately).
This means that changes that break the Zulip static asset pipeline or
production installation process are more likely to fail tests. The
testing is imperfect in that it does not have proper isolation -- we
build a complete Zulip development environment and then install a
Zulip production environment on top of it, so e.g. any apt
dependencies installed for Zulip development will still be available
for the Zulip production environment. But, it's better than nothing!
A good v2 of this would be to have the production setup process just
install the minimum stuff needed to run `build-release-tarball` and
then uninstall it / clean it up, so that we can do a cleaner
production installation, but that's more work.
This fixes an annoying issue where one tries to rebuild the database,
and it fails because there are existing connections.
The one thing that is potentially scary about this implementation is
that it means it's now a lot easier to accidentally drop your
production database by running the wrong script; might be worth adding
a "--force" flag controlling this behavior or something.
Thanks to Nemanja Stanarevic and Neeraj Wahi for prototypes of this
implementation! They did most of the work and testing for this.
This fixes some issues that we've had where commands would fail in
confusing ways after the database is rebuilt, because data from before
the database was dropped was still in the memcached cache.
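One straightforward way to address this is to flush the cache whenever
the database is rebuilt, e.g. (a sketch using Django's cache API):

    from django.core.cache import cache

    # Drop every cached entry so nothing references rows from the old,
    # now-dropped database.
    cache.clear()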