Because of some recent changes to the tokenizer, we no longer
need to call is_special_html_tag() to filter out special tags.
I also tried to make the start/end logic for pushing/popping
the stack more obvious.
This code is not directly related to the template parser, so it
can safely live in its own file.
The only significant change to the code is to the signature of
`html_branches` so that it can be called without requiring a file.
Since it's only used in html_grep, that tool has been updated to
reflect this change.
Fixes: #1774.
This hasn't been used since before Zulip was open source, and isn't
super reusable, so we can remove it. It'll always be there in the
history if someone ends up wanting it.
While we're at it, we remove the GitPython dependency (only used for
this tool) and the example MSMTP config for the review tool.
I don't think the black/white fallback code is actually used by
default, and thus when it is used, it's usually a sign that something
is broken. We should throw an error, but at the very least it makes
sense to print a warning.
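A minimal sketch of the warning-on-fallback idea; the callable names here are hypothetical stand-ins for the real emoji-dump helpers, not the actual functions in the build script:
import sys

def dump_emoji_with_fallback(dump_color, dump_black_and_white):
    # dump_color / dump_black_and_white are hypothetical callables standing
    # in for the real emoji-dump helpers.
    try:
        dump_color()
    except Exception as e:
        # The fallback should at least be loud: if we get here, something
        # in the color emoji pipeline is probably broken.
        print("WARNING: falling back to black-and-white emoji: %s" % (e,),
              file=sys.stderr)
        dump_black_and_white()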
In d583710f7c, I apparently broke the
color emoji handling, which was masked (for test purposes) by the fact
that we catch an exception if color doesn't work and in that case fall
back to black and white emoji.
This adds support for using VMware Fusion as the Vagrant provider,
which has better performance than VirtualBox at the price of being
nonfree (in all senses of the term).
We haven't done solid benchmarking as to how much faster it is than
the VirtualBox provider.
Fixed an IndexError when there is nothing but whitespace (possibly none
at all) between < and >. (str.split() returns an empty list in this case,
so there is no index 0.)
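A minimal sketch of the guard, assuming a helper that pulls the tag name out of the text between < and >; the helper name and the error it raises are made up for illustration:
def get_tag_name(tag_text):
    # tag_text is the text between '<' and '>'.  If it is empty or only
    # whitespace, str.split() returns [], so blindly indexing [0] raises
    # IndexError; guard against that case instead.
    parts = tag_text.split()
    if not parts:
        raise ValueError("tag is empty or whitespace-only: %r" % (tag_text,))
    return parts[0].strip('/')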
* Replace generic Exception with TemplateParserException (see the
  sketch after this list).
* Add tests to cover many of the uncovered lines in
tools/lib/template_parser.py.
* Add an exclusion line to the naïve pattern for checking for missing
tuples in format strings, to keep the linter happy.
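A rough sketch of what such an exception class and a raise site could look like; the class body and the validate_tag helper are assumptions, not the actual code in tools/lib/template_parser.py:
class TemplateParserException(Exception):
    def __init__(self, message):
        self.message = message

    def __str__(self):
        return self.message

# Raise the specific exception instead of a bare Exception, so callers
# (and tests) can catch template-parsing failures precisely.
def validate_tag(tag, line):
    if not tag:
        raise TemplateParserException('Empty tag on line %d' % (line,))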
This adds support for using PGroonga to back the Zulip full-text
search feature. Built-in PostgreSQL full-text search doesn't support
languages that don't put spaces between terms, such as Japanese and
Chinese; PGroonga supports all languages, including Japanese and
Chinese.
Developers will need to re-provision when rebasing past this patch for
the tests to pass, since provision is what installs the PGroonga
package and extension.
PGroonga is enabled by default in development but not in production;
the hope is that after the PGroonga support is tested further, we can
enable it by default.
Fixes #615.
[docs and tests tweaked by tabbott]
test-js-with-node: Move istanbul test coverage to var/node-coverage.
This commit moves js test coverage generated through istanbul to
var/node-coverage.
We set the COVERAGE_FILE environment variable, which controls the
output path for the .coverage file produced by python-coverage, and
move the mypy coverage file to the same location.
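A rough sketch of the mechanism; COVERAGE_FILE is honored by python-coverage, but the exact output path and test command here are assumptions:
import os
import subprocess

# Pointing COVERAGE_FILE into var/ keeps the generated .coverage data
# file out of the repository root.
os.environ['COVERAGE_FILE'] = os.path.join('var', '.coverage')  # assumed path
subprocess.check_call(['coverage', 'run', './manage.py', 'test'])  # assumed command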
Details:
Previously this hook required that you either be inside the Vagrant
Zulip dev virtual machine when you ran `git commit`, or that you had
set up your Zulip dev environment manually.
Now the script runs the linter via `vagrant ssh` if the following
conditions are met (roughly as sketched after the list):
- VIRTUAL_ENV is not set
- vagrant is installed and a .vagrant directory exists in the repo
Otherwise the linter is run as it was before.
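A rough sketch of that decision logic; the real hook may be a shell script, and the guest path and lint command are assumptions:
import os
import shutil
import subprocess

def run_linter():
    # Run inside the Vagrant guest only when we are clearly outside a
    # provisioned dev environment: no active virtualenv, but Vagrant is
    # installed and the repo has a .vagrant directory.
    have_virtualenv = bool(os.environ.get('VIRTUAL_ENV'))
    have_vagrant = (shutil.which('vagrant') is not None
                    and os.path.isdir('.vagrant'))
    if not have_virtualenv and have_vagrant:
        # Assumed guest mount point and lint entry point.
        subprocess.check_call(['vagrant', 'ssh', '-c',
                               'cd /srv/zulip && ./tools/lint-all'])
    else:
        subprocess.check_call(['./tools/lint-all'])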
[tweaked to fix a few style things by tabbott]
This starts to address #1533. I still think the </p> tags
should be on their own line lined up with the start tag,
so the linter won't let through the specific example
shown in the ticket.
This fixes a confusing bug that we ran into where the output from
certain child processes wouldn't appear when running `lint-all` via
`ssh` (without a tty) into the Vagrant environment.
The previous model for these Nagios checks was kinda crazy -- every
minute, we'd run a full `rabbitmqctl list_consumers` for each of the
dozen+ consumers that we have, and then do the exact same parsing
logic for each to determine whether the target queue has a running
consumer to write out a state file.
Because each `rabbitmqctl list_consumers` invocation takes a small but
nonzero amount of resources, this repeated CPU wastage could be
problematic on systems where CPU is very limited (e.g. t2-style AWS
instances).
Now we just do that `rabbitmqctl list_consumers` once per minute, and
output all the state files from a single command.
Further TODO items on this front include removing the hardcoded list
of queues.
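A rough sketch of the consolidated approach; the queue list, the parsing details, and the state-file path and format are all assumed for illustration:
import subprocess
import time

# Hypothetical queue list; the real check still hardcodes its own list.
QUEUES = ['missedmessage_emails', 'slow_queries', 'user_activity']

output = subprocess.check_output(['rabbitmqctl', 'list_consumers'],
                                 universal_newlines=True)
# Count active consumers per queue from the single listing.
consumers = {queue: 0 for queue in QUEUES}
for line in output.splitlines():
    queue_name = line.split('\t')[0]
    if queue_name in consumers:
        consumers[queue_name] += 1

# Write one Nagios state file per queue from this single invocation.
for queue, count in consumers.items():
    state = 'OK' if count > 0 else 'CRITICAL'
    with open('/var/lib/nagios_state/check-rabbitmq-consumers-%s' % (queue,),
              'w') as f:  # assumed path and format
        f.write('%s|%s|queue %s has %s consumers\n'
                % (int(time.time()), state, queue, count))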
Because rabbitmq doesn't support changing the nodename of a running
rabbitmq node, Zulip installations suffered a plague of issues where
e.g. a Zulip server would reboot, the hostname would change, and
suddenly the local rabbitmq instance being used by Zulip would stop
working.
We address this problem by using, by default, a fixed rabbitmq
nodename, but providing server administrators the option to set the
rabbitmq nodename used by Zulip however they choose.
To upgrade an existing server to use this new configuration, one will
need to add something like the following to /etc/zulip/zulip.conf:
[rabbitmq]
nodename = zulip@localhost
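A sketch of how that setting might be plumbed through; the NODENAME key in rabbitmq-env.conf is standard RabbitMQ configuration, but this plumbing itself is an assumption (Zulip actually does it via puppet):
import configparser

# Read the nodename from /etc/zulip/zulip.conf and hand it to RabbitMQ
# via the standard NODENAME setting in rabbitmq-env.conf.
config = configparser.ConfigParser()
config.read('/etc/zulip/zulip.conf')
nodename = config.get('rabbitmq', 'nodename', fallback='zulip@localhost')

with open('/etc/rabbitmq/rabbitmq-env.conf', 'w') as f:
    f.write('NODENAME=%s\n' % (nodename,))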
However, I don't believe we have the puppet code in place to make this
work correctly at initial installation unless rabbitmq-server is
already installed (but not running); that's easy to set up in Travis
CI, but I haven't been willing to require it for the installer. So for
now, this just fixes our Travis CI problems.
Fixes: #1579.
Travis CI seems to have changed the way the snakeoil SSL certs are
generated in their infrastructure, so we need to update our expected
"success" HTTP headers accordingly.