Minor fixes that make it possible to:
- Re-run fetch-rebase-pull-request.
- Specify the name of the remote repo as an optional second parameter.
The default remains 'upstream'.
This saves a bunch of time when building release tarballs, provisioning,
and upgrading Zulip from Git, time that was previously spent regenerating
the Zulip emoji sprite sheet.
[commit message tweaked by tabbott]
(Most of this work was done by acrefoot in an earlier branch.
I took over the branch to fix casper tests that were broken during
the upgrade (which were fixed in a different commit). I also
made most of the changes to run-casper.)
This also upgrades phantomjs to 2.1.7.
The huge structural change here is that we no longer vendor casperjs
or download phantomjs with our own script. Instead, we just use
casperjs and phantomjs from npm, via package.json.
Another thing that we do now is run casperjs tests individually, so
that we don't get strange test flakes from test interactions. (Tests
can still influence each other in terms of changing data, since we
don't yet have code to clear the test database in between tests.)
A lot of this diff is just removing files and obsolete configurations.
The main new piece is in package.json, which causes npm to install the
new version.
Also, run-casper now runs files individually, as mentioned above.
We had vendored casperjs in the past. I didn't bring over any of our
changes. Some of the changes were performance-related (primarily
5fd58cf249), so the upgraded version may
be slower in some instances. (I didn't do much measurement of that,
since most of our slowness when running tests is about the setup
environment, not casper itself.) Any bug fixes that we may have
implemented in the past were either magically fixed by changes to
casper itself or by improvements we have made in the tests themselves
over the years.
Tim tested the Casper suite on his machine and running the full Casper
test suite is faster than it was before this change (1m30 vs. 1m50),
so we're at least not regressing overall performance.
Previously, running `tools/test-backend analytics/` (or any other test suite
name ending with a '/') would give a cryptic error about modules not
importing properly. This commit strips the trailing slash (via rstrip)
from test suite names given on the command line.
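For illustration, the fix amounts to something like the following sketch
(the helper name is hypothetical; the real argument handling in
tools/test-backend is more involved):

    def normalize_suite_name(name):
        # type: (str) -> str
        # "analytics/" (as produced by shell tab-completion) becomes
        # "analytics", which the test runner can actually import.
        return name.rstrip('/')

    assert normalize_suite_name('analytics/') == 'analytics'
    assert normalize_suite_name('analytics') == 'analytics'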
When running tools/provision.py in python3 mode, we used to create
a python2 venv called zulip-py2-twisted-venv. This was needed because
Zulip couldn't run tools/run-dev.py in python3. So we switched to
this virtualenv when running tools/run-dev.py.
Now that Zulip can run tools/run-dev.py in python3, we don't need
to create this virtualenv anymore.
- All necessary strings were converted to bytestrings
- Added Twisted as a py3 dependency
- Changed the type annotation for the getChild method of the Resource class
- Removed the section of run-dev.py that activates the python2 env
Fixes #1256
generate-secrets.py now requires --development for development environment
setup or --production for production environment setup (and one of these
options is mandatory).
This solves the problem that it was somewhat easy to accidentally run
generate-secrets.py without the `-d` option while doing manual development
environment setup.
Fixes: #1911.
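One way to express the new requirement with argparse (a sketch, not
necessarily the exact code in generate-secrets.py):

    import argparse

    parser = argparse.ArgumentParser(description="Generate Zulip secrets")
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument('--development', action='store_true',
                       help="generate secrets for a development environment")
    group.add_argument('--production', action='store_true',
                       help="generate secrets for a production environment")
    # Running the script with neither flag now fails immediately with a
    # usage error instead of silently doing the wrong kind of setup.
    args = parser.parse_args()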
The new messages make it more obvious which services are started
from run-dev.py, and explicitly call out where to access the web
proxy to reach the Zulip web UI. This is a common confusion for
new administrators/developers. Messages are output before the
processes are launched, as run-dev.py does not currently have a
way to know if they started successfully.
Example output:
Starting Zulip services on ports: web proxy: 9991, Django: 9992, Tornado: 9993, webpack: 9994
Note: only port 9991 is exposed to the host in a Vagrant environment.
Alternate behavior for automated testing:
If run-dev.py is invoked with --test, don't include the webpack
port as it isn't used.
Tested on Ubuntu 14.04, by running run-dev.py at a shell prompt and
via the test-all script.
Fixes #1861
Previously, the generate-fixtures shell script called into Django
multiple times in order to check whether the database was in a
reasonable state. Since there's a lot of overhead to starting up
Django, this resulted in `test-backend` and `test-js-with-casper`
being quite slow to run a single small test (2.8s or so) even on my
very fast laptop.
We fix this by moving the checks into a new Python library, so that
we can avoid paying the Django startup overhead 3 times unnecessarily.
The result saves about 1.2s (~40%) from the time required to run a
single backend test.
Fixes #1221.
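A rough sketch of the approach, with hypothetical names (the actual
library has more checks, and the settings module shown is an assumption):
pay the Django startup cost once and answer all the questions about the
test database in a single process.

    import os
    import django
    from django.db import connection
    from django.db.migrations.executor import MigrationExecutor

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'zproject.test_settings')
    django.setup()

    def database_is_current():
        # type: () -> bool
        # True when every migration has already been applied, so the
        # fixture-generation step can be skipped entirely.
        executor = MigrationExecutor(connection)
        plan = executor.migration_plan(executor.loader.graph.leaf_nodes())
        return not plan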
This is a first pass at building a framework for collecting various
stats about realms, users, streams, etc. Includes:
* New analytics tables for storing counts data
* Raw SQL queries for pulling data from zerver/models.py tables
* Aggregation functions for aggregating hourly stats into daily stats, and
aggregating user/stream level stats into realm level stats
* A management command for pulling the data
Note that counts.py was added to the linter exclude list due to errors
around %%s.
This makes it more convenient to run individual or small groups of
backend tests (e.g. ./tools/test-backend
zerver.tests.test_bugdown.FencedBlockPreprocessorTest.test_simple_quoting)
by allowing the following syntaxes:
./tools/test-backend zerver/tests/test_bugdown.py
./tools/test-backend zerver.tests.test_bugdown.py
./tools/test-backend zerver/tests/test_bugdown
./tools/test-backend zerver.tests.test_bugdown
./tools/test-backend test_bugdown.py
./tools/test-backend test_bugdown
./tools/test-backend FencedBlockPreprocessorTest
./tools/test-backend FencedBlockPreprocessorTest.test_simple_quoting
Fixes #1670.
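A sketch of the path/module normalization involved (bare file or class
names like test_bugdown or FencedBlockPreprocessorTest additionally
require searching the test tree, which this sketch omits):

    def normalize_test_label(arg):
        # type: (str) -> str
        # Turn "zerver/tests/test_bugdown.py" or "zerver.tests.test_bugdown.py"
        # into the dotted label the test runner expects.
        label = arg
        if label.endswith('.py'):
            label = label[:-len('.py')]
        return label.replace('/', '.')

    assert normalize_test_label('zerver/tests/test_bugdown.py') == 'zerver.tests.test_bugdown'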
A bug in the node_cache.py code resulted in the node_modules symlink
in Zulip development environments being incorrectly owned by root.
This causes that bug to be fixed the next time a user provisions.
Ideally we would check for any number of spaces that isn't 4, but that's
a bit more work to do with our linter framework, and in practice
basically every CSS whitespace error we see is 2-space.
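The check is roughly of this shape (illustrative, not the exact linter
code):

    import re

    # Flag the common mistake: a line indented by exactly two spaces.
    two_space_indent = re.compile(r'^  \S')

    def check_line(fn, line_number, line):
        # type: (str, int, str) -> None
        if two_space_indent.match(line):
            print('%s:%d: 2-space indentation in CSS' % (fn, line_number))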
This adds an event listener (by way of delegation) to the
.message_inline_image elements that pops up the overlay and hides it
when the overlay exit is clicked.
Fixes #654.
NVM takes a specific node version and installs the node package and
a corresponding compatible npm package.
We use it in a somewhat hackish way to install node/npm globally with
a pinned version, since that's how we actually want to consume node in
our development environment.
Other details:
- Travis CI now is configured to use the version of node installed by
provision; the easiest way to do this was to sabotage the existing node
installation.
- jsdom is upgraded to a current version, which both requires recent
node and also is required for the tests to pass with recent node.
This fixes running the node tests on Xenial.
Fixes #1498.
[tweaked by tabbott]
This commit extracts compose_views() from update_subscriptions_backend(),
and it implements the correct behavior for forcing transactions to roll
back, which is to raise an exception.
There were really three steps in this commit:
- Extract buggy code to compose_views().
- Add tests on compose_views().
- Fix bugs exposed by the new tests by converting errors to exceptions.
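The shape of the resulting code, roughly (the real compose_views()
signature and error type may differ; the exception class here is a
placeholder):

    import json
    from django.db import transaction

    class ComposeViewFailure(Exception):
        # Any exception raised inside the atomic block makes Django
        # roll the whole transaction back.
        pass

    def compose_views(request, user_profile, method_kwarg_pairs):
        json_dict = {}
        with transaction.atomic():
            for method, kwargs in method_kwarg_pairs:
                response = method(request, user_profile, **kwargs)
                if response.status_code != 200:
                    # Raising, rather than returning the error response,
                    # is what forces the rollback.
                    raise ComposeViewFailure(response.content)
                json_dict.update(json.loads(response.content.decode('utf-8')))
        return json_dict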
We're now at the point where 100% of the functions checked by mypy are
fully annotated; to avoid regressions, we're enforcing the requirement
that it stay this way. We still have a moderate amount of code that
is neither checked by mypy nor annotated, but it seems reasonable to
annotate that code at the same time as we get a chance to fix the mypy
issues in it.
This is implemented by using the --disallow-untyped-defs option in
mypy by default.
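For reference, this is the difference the flag enforces (using the
comment-style annotations this codebase uses for Python 2 compatibility):

    # Rejected by `mypy --disallow-untyped-defs`:
    #
    #     def add(x, y):
    #         return x + y
    #
    # Accepted:
    def add(x, y):
        # type: (int, int) -> int
        return x + y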
Because of some recent changes to the tokenizer, we no longer
need to call is_special_html_tag() to filter out special tags.
I also tried to make the start/end logic for pushing/popping
the stack more obvious.
This code is not directly related to the template parser, so it
can safely live in its own file.
The only significant change to the code is to the signature of
`html_branches` so that it can be called without requiring a file.
Since it's only used in html_grep, that has been updated to reflect
this change.
Fixes: #1774.
This hasn't been used since before Zulip was open source, and isn't
super reusable, so we can remove it. It'll always be there in the
history if someone ends up wanting it.
While we're at it, we remove the GitPython dependency (only used for
this tool) and the example MSMTP config for the review tool.
I don't think the black/white fallback code is actually used by
default, and thus when it is used, it's usually a sign that something
is broken. We should throw an error, but at the very least it makes
sense to print a warning.
In d583710f7c, I apparently broke the
color emoji handling, which was masked (for test purposes) by the fact
that we catch an exception if color doesn't work and in that case fall
back to black and white emoji.
This adds support for using VMWare Fusion as the Vagrant provider,
which has better performance than VirtualBox at the price of being
nonfree (in all senses of the term).
We haven't done solid benchmarking as to how much faster it is than
the VirtualBox provider.
Fixed an IndexError when there is nothing but (zero or more) whitespace
characters between < and >. (str.split() returns an empty list in this
case, which means there is no index 0.)
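The failure mode and the guard, sketched with a hypothetical helper (the
real code lives in tools/lib/template_parser.py):

    def get_tag_name(tag_text):
        # type: (str) -> str
        # tag_text is everything between '<' and '>', e.g. "div class='foo'".
        parts = tag_text.split()
        if not parts:
            # "<>" or "<   >": split() returned [], so parts[0] would have
            # raised the IndexError this commit fixes.
            raise Exception('tag name missing between < and >')
        return parts[0]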
* Replace generic Exception with TemplateParserException.
* Add tests to cover many of the uncovered lines in
tools/lib/template_parser.py.
* Add an exclusion line to the naïve pattern for checking for missing
tuples in format strings, to keep the linter happy.
This adds support for using PGroonga to back the Zulip full-text
search feature. The built-in PostgreSQL full-text search doesn't
support languages that don't put spaces between terms, such as Japanese
and Chinese; PGroonga supports all languages, including Japanese and
Chinese.
Developers will need to re-provision when rebasing past this patch for
the tests to pass, since provision is what installs the PGroonga
package and extension.
PGroonga is enabled by default in development but not in production;
the hope is that after the PGroonga support is tested further, we can
enable it by default.
Fixes #615.
[docs and tests tweaked by tabbott]
test-js-with-node: Move istanbul test coverage to var/node-coverage.
This commit moves js test coverage generated through istanbul to
var/node-coverage.
We set the COVERAGE_FILE environment variable which controls the
output file path for the .coverage file produced by python-coverage,
and also move the mypy coverage file to that location as well.
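Roughly what the COVERAGE_FILE part looks like in the test runner (paths
are illustrative):

    import os
    import subprocess

    # coverage.py honors COVERAGE_FILE when deciding where to write its
    # .coverage data file, so point it at var/ instead of the repo root.
    os.environ['COVERAGE_FILE'] = os.path.join('var', '.coverage')
    subprocess.check_call(['coverage', 'run', './tools/test-backend'])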
Details:
Previously, this hook required that you either be inside the Vagrant
Zulip dev virtual machine when you ran git commit or that you had set up
your Zulip dev environment manually.
Now the script runs the linter via vagrant ssh if the following
conditions are met:
- VIRTUAL_ENV is not set
- vagrant is installed and a .vagrant directory exists in the repo
Otherwise the linter is run as it was before.
[tweaked to fix a few style things by tabbott]
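A Python sketch of the decision logic (the actual hook may be a shell
script, and /srv/zulip is assumed here to be where the repo is mounted in
the VM):

    import os
    import subprocess

    def run_lint_all():
        # type: () -> int
        outside_venv = 'VIRTUAL_ENV' not in os.environ
        has_vagrant_dir = os.path.isdir('.vagrant')
        if outside_venv and has_vagrant_dir:
            # Lint inside the VM, where the dev environment is provisioned.
            return subprocess.call(['vagrant', 'ssh', '-c',
                                    'cd /srv/zulip && ./tools/lint-all'])
        return subprocess.call(['./tools/lint-all'])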
This starts to address #1533. I still think the </p> tags
should be on their own line lined up with the start tag,
so the linter won't let through the specific example
shown in the ticket.
This fixes a confusing bug that we ran into where the output from
certain child processes wouldn't appear when running `lint-all` via
`ssh` (without a tty) into the Vagrant environment.
The previous model for these Nagios checks was kinda crazy -- every
minute, we'd run a full `rabbitmqctl list_consumers` for each of the
dozen+ consumers that we have, and then do the exact same parsing
logic for each to determine whether the target queue has a running
consumer to write out a state file.
Because `rabbitmqctl list_consumers` takes a small amount of resources,
on systems where CPU is very limited (e.g. t2 style AWS instances),
this minor CPU wastage could be problematic.
Now we just do that `rabbitmqctl list_consumers` once per minute, and
output all the state files from a single command.
Further TODO items on this front include removing the hardcoded list
of queues.
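Very roughly, the single-pass version looks like this (the state-file
location and format, the rabbitmqctl path, and the queue names are
placeholders; the real check knows the expected list of queues):

    import subprocess
    from collections import defaultdict

    expected_queues = ['user_activity', 'missedmessage_emails']  # illustrative subset

    output = subprocess.check_output(['/usr/sbin/rabbitmqctl', 'list_consumers'],
                                     universal_newlines=True)
    consumer_counts = defaultdict(int)
    for line in output.splitlines():
        parts = line.strip().split('\t')
        if len(parts) < 2:
            continue  # skip the header and blank lines
        consumer_counts[parts[0]] += 1  # first column is the queue name

    for queue in expected_queues:
        # Queues with no consumers never appear in the output at all, which
        # is why we start from a known list of queues.
        state = 'OK' if consumer_counts[queue] > 0 else 'CRITICAL'
        with open('/var/lib/nagios_state/check-rabbitmq-consumers-%s' % (queue,), 'w') as f:
            f.write(state + '\n')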
Because rabbitmq doesn't support changing the nodename of a running
rabbitmq node, Zulip installations suffered a plague of issues where
e.g. a Zulip server would reboot, the hostname would change, and
suddenly the local rabbitmq instance being used by Zulip would stop
working.
We address this problem by using, by default, a fixed rabbitmq
nodename, but providing server administrators the option to set the
rabbitmq nodename used by Zulip however they choose.
To upgrade an existing server to use this new configuration, one will
need to add something like the following to /etc/zulip/zulip.conf:
[rabbitmq]
nodename = zulip@localhost
However, I don't believe we have the puppet code in place to make this
work correctly at initial installation unless rabbitmq-server is already
installed (but turned off); that is easy to set up in Travis CI, but I
haven't been willing to require it for the installer. So for now, this
just fixes our Travis CI problems.
Fixes: #1579.
Travis CI seems to have changed the way the snakeoil SSL certs are
generated in their infrastructure, so we need to update our expected
"success" HTTP headers accordingly.
It seems that we no longer get the message, 'zerver/lib/actions.py
modified; restarting server', but the server reloads successfully
nonetheless.
Fixes: #1341.
The find-add-class tool, when in lint mode, verifies that we can
understand all calls to addClass from our JS code.
When in non-lint mode, i.e. verbose mode, the tool prints out a
list of tuples of (fn, class) that we can use as we wish in other
tools.
We were ignoring singleton tags like "input" tags in
html-grep. This was an artifact of our tokenizer originally
being built to check indentation of templates, for which
singleton tags had been a distraction. This fix actually cleans up
the template checking logic as well, since it can now rely
on the tokenizer to classify special tags and singleton tags.
The tokenizer is more complete and more specific.
This reverts commit 3f95e567c1.
Apparently `apt-add-repository` fails periodically in CI. I suspect
this is some sort of silly networking problem, but given that all
we're saving is a few lines of code, the old version was better if
this fails basically ever.
Now, `tools/test-all` calls a new program called `tools/tests-tools`
that runs unit tests in `test_css_parser.py` and `test_template_parser.py`.
This puts 100% line coverage on tools/lib/css_parser.py.
This puts about 50% line coverage on tools/lib/template_parser.py.
`tools/lint-all` now calls the new `tools/check-css`
The css_parser library parses CSS into a data structure
that remembers line numbers and columns of semantically
meaningful tokens and adjoining white space/tokens. It
is intended to be used for various linting tasks.
The file `tools/check-css` runs a few files through the
parser and makes sure they round trip. This has some value
right away, as files that fail to parse will cause an
exception to be thrown and thus alert developers to syntax
errors. We expect to grow this into more advanced linting
tasks eventually.
`npm install` occasionally fails nondeterministically, and this change
makes such failures likely to be resolved automatically in most cases by
simply retrying.
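The retry wrapper amounts to something like this (names and retry count
are illustrative):

    import subprocess

    def npm_install(retries=3):
        # type: (int) -> None
        for attempt in range(1, retries + 1):
            if subprocess.call(['npm', 'install']) == 0:
                return
            print('npm install failed (attempt %d of %d)' % (attempt, retries))
        raise RuntimeError('npm install failed %d times' % (retries,))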
When running ./tools/test-backend, the script to generate
fixtures, ./tools/setup/generate-fixtures, looks for a file
called migration-status to determine whether it can short
circuit doing database migrations. This file got moved as
part of the effort to put files in "var," but the existence
check was still looking for that file in its old location.
Run '/puppet/zulip/files/nagios_plugins/zulip_app_frontend/check_send_receive_time'
script as 'zulip' user so that the connection to the database can be
made correctly.
Initialize Record by using __init__ instead of setting attributes
in validate. This is needed because mypy complains when we set
new attributes outside __init__.
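The pattern, sketched with hypothetical attributes for Record:

    class Record(object):
        def __init__(self, line, col):
            # type: (int, int) -> None
            # All attributes are created here, so mypy can see them.
            self.line = line
            self.col = col

        def validate(self):
            # type: () -> None
            # Reads attributes that __init__ already declared, rather than
            # creating new ones (which mypy would flag).
            assert self.line >= 1 and self.col >= 0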
Factor out the code in tools/provision.py which installs a python2
and python3 venv into a module (tools/setup/setup_venvs.py) which
can also be used as a script.
Twisted is not python 3 compatible, so for now, when running
provision.py in python3 mode, we create a python2 venv, install Twisted
in it, and use Twisted from that venv.
Clear memcached when tools/run-dev.py is run. This prevents
errors on using a different python version because values are
pickled before being stored in memcached and different python
versions implement pickling differently.
Also provide a command-line option --no-clear-mc to prevent
memcached from being cleared.
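A sketch of both pieces, assuming the python-memcached client and the
default localhost memcached port (the real run-dev.py code may differ):

    import argparse
    import memcache

    parser = argparse.ArgumentParser()
    parser.add_argument('--no-clear-mc', dest='clear_memcached',
                        action='store_false', default=True,
                        help="don't clear memcached on startup")
    options = parser.parse_args()

    if options.clear_memcached:
        # Stale values pickled by a different Python version would
        # otherwise fail to unpickle cleanly.
        memcache.Client(['127.0.0.1:11211']).flush_all()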
tools/provision.py: Create directory var/uploads.
zproject/local_settings_template.py: Update Upload dir to var/uploads.
zproject/dev_settings.py: Update upload dir to var/uploads.
runtornado unbuffers its output using
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0).
This is not python 3 compatible since we can't specify
buffering on a text stream in python 3. So use the '-u'
option of python when calling runtornado.py to make output
unbuffered.
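Concretely, run-dev.py can pass -u when launching the Tornado process,
roughly like this (the actual command construction and arguments differ):

    import subprocess
    import sys

    # -u makes the child's stdout/stderr unbuffered without runtornado
    # having to reopen sys.stdout itself.
    cmd = [sys.executable, '-u', './manage.py', 'runtornado', 'localhost:9993']
    tornado = subprocess.Popen(cmd)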
The purpose of this is to move a lot of the log and other generated
files used by the Zulip development environment into a consistent
hierarchy.
We also need to create this in tools/build-release-tarball as well,
since that runs a development environment out of a temporary
directory.
Instead of checking only the .html files in the templates directory, we
now check .html files everywhere, including those used for Casper tests
and some static files.
* get_realm returns None if no matching realm is present, but
create_stream.py assumed it raises Realm.DoesNotExist.
* Encode/decode strings properly.
* Replace filter with a list comprehension.
* Add '# type: ignore' to statements which use attributes from the
`posix` module, since stubs for posix are missing on python 3.
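A simplified sketch of the get_realm fix (the exact lookup key and the
surrounding code in create_stream.py differ):

    from zerver.models import Realm, get_realm

    def lookup_realm(domain):
        # type: (str) -> Realm
        realm = get_realm(domain)
        if realm is None:
            # get_realm returns None instead of raising Realm.DoesNotExist,
            # so the old `except Realm.DoesNotExist:` branch never ran.
            raise ValueError('Unknown realm %s' % (domain,))
        return realm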