In tools/build-release-tarball line 50:
for i in `cat "$TMPDIR/$prefix/tools/release-tarball-exclude.txt"`; do
^-- SC2013: To read lines rather than words, pipe/redirect to a 'while read' loop.
^-- SC2006: Use $(..) instead of legacy `..`.
In tools/build-release-tarball line 51:
rm -r --interactive=never "$TMPDIR/$prefix/$i";
^-- SC2115: Use "${var:?}" to ensure this never expands to / .
In tools/build-release-tarball line 97:
echo; echo -ne "\033[33mRunning update-prod-static failed. "
^-- SC1117: Backslash is literal in "\0". Prefer explicit escaping: "\\0".
In tools/build-release-tarball line 98:
echo -e "Check $TMPDIR/update-prod-static.log for more information.\033[0m"
^-- SC1117: Backslash is literal in "\0". Prefer explicit escaping: "\\0".
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/build-docs line 3:
cd "$(dirname "$0")"/../docs
^-- SC2164: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
This commit adds a test for the payload that is generated when
a Task is moved from one user story to another on Taiga's Sprint
Taskboard UI.
This commit also gets this webhook's test coverage up to 100%.
We drop support for using `icon-vector` as a base class when
including icons from the Font Awesome icons package.
From now on, only icons as specified in Font Awesome v4.7.0 can be
used in the code base.
This module makes it really easy to create are-you-sure
dialogs for dangerous operations.
Basically it's one function with five parameters. You
give it three chunks of HTML, a callback function, and
a parent container.
The first use of this will be in settings_user_groups,
coming up in a couple of commits.
IFTTT allows custom templating for their payloads, so the onus is
on the user to ensure that their custom templates conform to the
expectations outlined in our IFTTT webhook docs. For that reason,
these payloads weren't generated, but were manually edited.
After discovering a couple of bugs, I decided to thoroughly test
and rewrite this integration from scratch. The older code wasn't
generating coherent messages.
This commit also gets this integration up to 100% test coverage.
Test coverage was improved by removing an unused function and
removing some code (written by me) that was actually handling
Test Hook event types incorrectly.
It was a painful amount of work to generate the actual payload.
Since the only difference was a small build URL, I manually
edited the payload and used that for testing.
This commit gets our GitHub webhook up to 100% test coverage.
Note that Freshdesk allows custom templating for outgoing payloads
in their webhook UI. Therefore, the payloads added in this commit
did not have to be official payloads from Freshdesk.
The lack of coverage was due to:
* A function that was never used anywhere.
* get_commit_status_changed_body was using a regex where it didn't
really need to use one. And there was an if statement that
assumed that the payload might NOT contain the URL to the commit.
However, I checked the payload and there shouldn't be any instances
where a commit event is generated but there is no URL to the commit.
* get_push_tag_body had an `else` condition that really can't happen
in any payload. I verified this by checking the BitBucket webhook
docs.
We shouldn't just ignore exceptions when encoding the incoming
auth credentials. Even if the incoming credentials are properly
encoded, it is better to know explicitly that this is the case,
and to find out when something else fails, than to silently
swallow the error.
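As a hedged illustration of the principle (placeholder code, not Zulip's
actual auth path), the difference is between silently swallowing every
failure and handling only the failure we expect:

    def encode_credentials(credentials: str) -> bytes:
        # Bad: a bare try/except around the encode would hide every
        # failure, including ones we didn't anticipate.
        # Better: let unexpected errors propagate, and turn the expected
        # failure into an explicit, descriptive one.
        try:
            return credentials.encode("utf-8")
        except UnicodeEncodeError as e:
            raise ValueError("malformed credentials") from e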
Apparently, Travis removed the Heroku bundle of packages from their
servers, which made the build start failing when trying to configure
apt to hold their versions (sigh). This commit removes the
problematic packages.
The companion tool `tools/reset-to-pull-request` has a handy feature
to maintain a local ref tracking the PR: e.g., pr/1234 for PR 1234.
If this were a normal remote-tracking branch maintained by `git fetch`,
it'd get updated on `git push`. Do the same thing here.
This helps keep a view like `gitk --all @` a bit tidier, by causing
merged PRs to stop pointing at side branches of the main history.
Before this change, the way we loaded
webpack for various tools was brittle.
First, I addressed test-api and test-help-documentation.
These tools used to be unable to run standalone on a
clean provision, because they were (indirectly)
calling tools/webpack without the `--test` option.
The problem was a bit obscure, since running things
like `./tools/test-backend` or `./tools/test-all` in
your workflow would create `./var/webpack-stats-test.json`
for the broken tools (and then they would work).
The tools themselves weren't broken; they were only
relying on the common `test_server_running` helper.
And even that helper wasn't broken; it was just that
`run-dev.py` wasn't respecting the `--test` option.
So I made it so that `./tools/run-dev` passes in `--test` to
`./tools/webpack`.
To confuse matters even more, for some reason Casper
uses `./webpack-stats-production.json` via various
hacks for its webpack configuration, so when I fixed
the other tests, it broke Casper.
Here is the Casper-related hack in zproject/test_settings.py,
which was in place before my change and remains
after it:
if CASPER_TESTS:
WEBPACK_FILE = 'webpack-stats-production.json'
else:
WEBPACK_FILE = os.path.join('var', 'webpack-stats-test.json')
I added similar logic in tools/webpack:
if "CASPER_TESTS" in os.environ:
build_for_prod_or_casper(args.quiet)
I also made the helper functions in `./tools/webpack` have
nicer names.
So, now tools should all be able to run standalone and not
rely on previous tools creating webpack stats files for
them and leaving them in the file system. That's good.
Things are still a bit janky, though. It's not completely
clear to me why `test-js-with-casper` should work off of
a different webpack configuration than the other tests.
For now most of the jankiness is around Casper, and we have
hacks in two different places, `zproject/test_settings.py` and
`tools/webpack` to force it to use the production stats
file instead of the "test" one, even though Casper uses
test-like settings for other things like which database
you're using.
We don't use input.create_non_editable_pill() in our
code yet. If we add this back, we'll want to have node
tests on it.
Removing this unused code brings us to 100% line
coverage for input_pill.js.
This directly reverts 5c11ab85, with the small addition
of putting input_pill on our list of fully covered
modules.
These repositories (`zulip-ios-legacy` and `zulip-android`) are
deprecated, and as such should not have their own tabs, but still
should be included in the total contributions count.
Apparently, `puppet-lint` on Ubuntu trusty throws warnings for certain
quoting patterns that are OK in modern `puppet-lint`. I believe the
old Zulip code was actually correct (i.e. the old `puppet-lint`
implementation was the problem), but it seems worth changing anyway to
suppress the warnings.
We also exclude more of puppet-apt from linting, since it's
third-party code.
Instead of using a hardcoded value for the spritesheet dimensions,
automatically calculate them using `emoji_data`. This frees us
from updating the value on every emoji datasource update, and also
allows us to add the Google blob emoji set.
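For illustration, one plausible way to derive the dimensions (assuming
each `emoji_data` entry carries the emoji-datasource `sheet_x`/`sheet_y`
coordinates; the actual build_emoji calculation may differ):

    def get_spritesheet_dimensions(emoji_data):
        # The sheet is (max_x + 1) columns wide and (max_y + 1) rows tall.
        max_x = max(emoji["sheet_x"] for emoji in emoji_data)
        max_y = max(emoji["sheet_y"] for emoji in emoji_data)
        return max_x + 1, max_y + 1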
Since this class was built, folks have always chosen
to subclass JsonableError for situations where
the default of ErrorCode.BAD_REQUEST is insufficient.
So now we simplify the use cases, which also gets
us 100% coverage on this core module.
For building Zulip in an environment where a custom CA certificate is
required to access the public Internet, one needs to be able to
specify that CA certificate for all network access done by the Zulip
installer/build process. This change allows configuring that via the
environment.
We start to use puppet-lint to lint puppet modules by default by
adding it to tools/lint (which controls our linter tool chain).
We also define a few puppet-lint rules to exclude.
Fixes: #9185.
This will help us significantly reduce the size of the release
tarball. I have refrained from changing the `EMOJISETS` constant
in `emoji_setup_utils.py`, as that controls the emoji sets we want
to support. Since we want to re-enable the ability to change emoji
sets sometime in the future, that variable should be kept as is;
it also controls several other things, like the emoji scripts we
use to generate emoji names, and changing it might cause
hard-to-catch bugs.
The `emoji-datasource` package v4.0.4 introduced the concept of
qualified and non-qualified emoji codes. Since chat programs don't
need to use the emoji representation selector, we migrated our
infrastructure to use non-qualified emoji codes. But we missed the
fact that the emoji file names in the emoji farm are based on the
emoji data's 'unified' field, and the value of this field has
changed, so the image file names have changed as well. We use
`emoji_code` when converting span tags to img tags while processing
notifications; but since `emoji_code` now refers to the non-qualified
code while the image file names are based on the qualified code, we
need to rename the images for the conversion to work correctly.
This commit fixes exactly that.
Right now it only has one function, but the function
we removed never really belonged in actions.py, and
now we have better test coverage on actions.py, which
is an important module to get to 100%.
This ancient tool predates our practice of collecting test fixtures
for third-party integrations, which is a better general system for the
problem this solved.
After the messages have been imported, set the rendered_content of the
messages instead of leaving its value as 'None'.
This is important to ensure that:
(1) Performance for users is good after completing the import.
(2) The database's full-text indexes have all of the imported messages
(which only happens properly when Message rows have their
rendered_content field edited).
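A rough sketch of the idea (with placeholder names, not the actual
import code):

    # 'Message' and 'render_markdown' stand in for whatever model and
    # rendering helper the import code actually uses.
    for message in Message.objects.filter(rendered_content=None):
        message.rendered_content = render_markdown(message, message.content)
        # Saving the rendered content is what lets the database update its
        # full-text index for the imported message.
        message.save(update_fields=["rendered_content"])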
Fixes #9168.
Now, reading a user's API key is done with the get_api_key wrapper
method, rather than by fetching it directly from the user object.
Also, every place where an action should be done for each API key now
uses get_all_api_keys. For the moment, this method returns a single-item
list containing the specified user's API key.
This commit is the first step towards allowing users to have multiple
API keys.
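A minimal sketch of what these wrappers look like (details of the real
helpers may differ):

    def get_api_key(user_profile) -> str:
        return user_profile.api_key

    def get_all_api_keys(user_profile) -> list:
        # Currently a single-item list; the interface leaves room for a
        # user to have multiple API keys later.
        return [user_profile.api_key]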
In tools/update-locked-requirements line 66:
compile_requirements requirements/prod.in $OUTPUT_BASE_DIR/prod.txt
^-- SC2086: Double quote to prevent globbing and word splitting.
In tools/update-locked-requirements line 67:
compile_requirements requirements/dev.in $OUTPUT_BASE_DIR/dev.txt
^-- SC2086: Double quote to prevent globbing and word splitting.
In tools/update-locked-requirements line 68:
compile_requirements requirements/mypy.in $OUTPUT_BASE_DIR/mypy.txt
^-- SC2086: Double quote to prevent globbing and word splitting.
In tools/update-locked-requirements line 69:
compile_requirements requirements/docs.in $OUTPUT_BASE_DIR/docs.txt
^-- SC2086: Double quote to prevent globbing and word splitting.
In tools/update-locked-requirements line 70:
compile_requirements requirements/thumbor.in $OUTPUT_BASE_DIR/thumbor.txt py2
^-- SC2086: Double quote to prevent globbing and word splitting.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/travis/production-helper line 24:
if ! apt-get dist-upgrade -y $APT_OPTIONS; then
^-- SC2086: Double quote to prevent globbing and word splitting.
In tools/travis/production-helper line 26:
apt-get dist-upgrade -y $APT_OPTIONS
^-- SC2086: Double quote to prevent globbing and word splitting.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/test-migrations line 18:
echo "$new_auto_named_migrations" | sed 's/\[[x ]\] / /'
^-- SC2001: See if you can use ${variable//search/replace} instead.
In tools/test-migrations line 27:
echo 'ERROR: Migrations are not consistent with models! Fix with `./tools/renumber-migrations`.'
^-- SC2016: Expressions don't expand in single quotes, use double quotes for that.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/test-install/destroy-all line 31:
| while read c
^-- SC2162: read without -r will mangle backslashes.
In tools/test-install/install line 57:
installer_dir="$(readlink -f $INSTALLER)"
^-- SC2086: Double quote to prevent globbing and word splitting.
In tools/test-install/lxc-wait line 30:
for i in {1..60}; do
^-- SC2034: i appears unused. Verify use (or export if used externally).
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/test-documentation line 6:
echo -e "\e[${color_code}m${message}\e[0m" >&2
^-- SC1117: Backslash is literal in "\e". Prefer explicit escaping: "\\e".
^-- SC1117: Backslash is literal in "\e". Prefer explicit escaping: "\\e".
In tools/test-documentation line 41:
scrapy crawl_with_status documentation_crawler $loglevel
^-- SC2086: Double quote to prevent globbing and word splitting.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/test-all-docker line 7:
source /home/zulip/.bash_profile
^-- SC1091: Not following: /home/zulip/.bash_profile: openBinaryFile: does not exist (No such file or directory)
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/test-all line 7:
TEMP=`getopt -o f --long force -- "$@"`
^-- SC2006: Use $(..) instead of legacy `..`.
In tools/test-all line 24:
echo "Running $@"
^-- SC2145: Argument mixes string and array. Use * or separate argument.
In tools/test-all line 26:
printf "\n\e[31;1mFAILED\e[0m $@\n"
^-- SC2059: Don't use variables in the printf format string. Use printf "..%s.." "$foo".
^-- SC1117: Backslash is literal in "\n". Prefer explicit escaping: "\\n".
^-- SC1117: Backslash is literal in "\e". Prefer explicit escaping: "\\e".
^-- SC1117: Backslash is literal in "\e". Prefer explicit escaping: "\\e".
^-- SC2145: Argument mixes string and array. Use * or separate argument.
^-- SC1117: Backslash is literal in "\n". Prefer explicit escaping: "\\n".
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/start-dockers line 7:
source /home/zulip/.bash_profile
^-- SC1091: Not following: /home/zulip/.bash_profile: openBinaryFile: does not exist (No such file or directory)
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/reset-to-pull-request line 25:
git fetch "$remote" +"pull/$request_id/head":"$target_ref"
^-- SC2140: Word is of the form "A"B"C" (B indicated). Did you mean "ABC" or "A\"B\"C"?
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/provision line 13:
FAIL="\033[91m"
^-- SC1117: Backslash is literal in "\0". Prefer explicit escaping: "\\0".
In tools/provision line 14:
WARNING="\033[93m"
^-- SC1117: Backslash is literal in "\0". Prefer explicit escaping: "\\0".
In tools/provision line 15:
ENDC="\033[0m"
^-- SC1117: Backslash is literal in "\0". Prefer explicit escaping: "\\0".
In tools/provision line 19:
PARENT_PATH=$( cd "$(dirname "${BASH_SOURCE}")" ; pwd -P )
^-- SC2128: Expanding an array without an index only gives the first element.
In tools/provision line 32:
if [ $failed = 1 ]; then
^-- SC2086: Double quote to prevent globbing and word splitting.
In tools/provision line 49:
echo 'or just close this shell and start a new one (with Vagrant, `vagrant ssh`).'
^-- SC2016: Expressions don't expand in single quotes, use double quotes for that.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/optimize-svg line 3:
if [ `node_modules/.bin/svgo -f static/images/integrations/logos | grep -o '\.[0-9]% = ' | wc -l` -ge 1 ]
^-- SC2046: Quote this to prevent word splitting.
^-- SC2006: Use $(..) instead of legacy `..`.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/find-unused-css line 5:
if [ $(git grep "$n" | grep -v '^static/styles/zulip.css' | wc -l) -eq 0 ]; then
^-- SC2046: Quote this to prevent word splitting.
^-- SC2126: Consider using grep -c instead of grep|wc -l.
In tools/find-unused-css line 6:
echo $n
^-- SC2086: Double quote to prevent globbing and word splitting.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/fetch-rebase-pull-request line 15:
git checkout -B "review-${request_id}" $remote/master
^-- SC2086: Double quote to prevent globbing and word splitting.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
In tools/do-destroy-rebuild-test-database line 6:
"`dirname "$0"`/../tools/setup/generate-fixtures" --force
^-- SC2006: Use $(..) instead of legacy `..`.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
We probably should have done this a while ago, even
though these functions are pretty tiny. The goal here
is to make it easier to have more consistent search
semantics.
Our first use case is subs.js. In this case we
are able to decouple a bit of generic string
matching from the subs-specific code.
I often find myself looking manually through the reflog of `master` to
find a commit I previously reset to with tools/reset-to-pull-request.
Sometimes I want to see an earlier version of a PR whose revision I'm
now reviewing; sometimes I want to look at two related PRs together.
So, here's a feature to automate that by saving each PR branch in its
own ref, with a name like `refs/remotes/pr/1234` -- or `pr/1234`, as
you'd normally refer to it.
To enable this, set the new config option:
$ git config zulip.prPseudoRemote pr
(Or you can pick another name.)
The reason I hesitate to just make this the behavior for everyone
immediately is that the resulting `pr/1234` refs will naturally
accumulate and may clutter up the view -- and because with the
`refs/remotes/` style of name I've chosen, it requires a bit of
Git plumbing to clean them up. (Use `git update-ref -d`.)
I'll play with it and iterate; comments welcome from other willing
early adopters.
This commit closes a long-pending issue by moving the
`EMOTICON_CONVERSION` mapping to the build_emoji infrastructure, so
that there is only one source of truth. This had been pending since
the time this feature was implemented.
This commit updates the `emoji-datasource` packages to version 4.0.4.
This update brings the following changes to the emoji infrastructure:
1: A fix for the bleeding sprite sheets.
2: The category of some emoji has changed. The category-wise breakdown
of net gains and losses is as follows:
Travel & Places: 58 (gain)
Symbols: 47 (loss)
Smileys & People: 52 (gain)
Objects: 11 (loss)
Food & Drink: 3 (gain)
Animals and Nature: 46 (gain)
Activities: 9 (loss)
3: There were some changes in the image farm of the package which were
breaking our old emoji farm. I fixed them by modifying the remapped
emoji map.
Fixes: #8235.
The Google emoji set's octopus is really cute, and the whole Zulip
community loves it. So, using a CSS hack, we hardcode the octopus emoji
to always use the image from Google's emoji set, irrespective of the
chosen emoji set.
Our CSS checker globs for .css files. Since the
SCSS cutover, it has been a no-op, so there's no
sense launching it. See #8894 for details on
future plans.
This migrates Zulip to use a dramatically better set of names and
aliases for our emoji set, defined in emoji_names.py (which is in turn
manually generated from our hand-curated CSV file).
This should significantly improve the experience of using Zulip's
emoji picker and emoji typeahead for finding what one is looking for.
Credits to @rishig, Alice Lai, and @rntharu for naming all the emoji.
Names are inspired by iamcal, gemoji, and unicode names, sources like
emojipedia and iemoji, google search results for articles about emoji,
and emoji usage on twitter.
We were already correctly including libssl-dev in Zulip's dependencies
in development environment provisioning, but (at least now) it's
needed to build certain Python packages like pycurl when building a
Zulip virtualenv in production. I haven't investigated why we didn't
need this on Ubuntu, but one possible reason would be that some other
library in our dependencies list happens to depend on it on Ubuntu.
We fix this by moving the dependency over to the shared
VENV_DEPENDENCIES list.
Fixes part of #9946.
This will allow us to begin to add our own stubs for external
libraries. Writing stubs can be surprisingly little work, and
can have high leverage in keeping our type annotations high-quality.
This implements right-to-left message automatic detection support in
the compose box as well as the message feed. Full unit tests and
support in the message-editing UI are for future work (as are
potentially more fancy things like supporting things like
right-to-left multi-word names for users/streams/etc.).
Fixes #3123.
This will be helpful in the upcoming changes, which will make use
of this extracted function to re-create zulip_test_template after
migrating the zulip_test db, so that we have the latest schema in
tests.
This commit fixes some modules that were erroneously left out while
transitioning app.js to webpack, exposing them via expose-loader or
by setting them directly on window.
This commit moves all files previously under the 'app' bundle in
the Django pipeline to being compiled by webpack under the 'app'
entry point. In the process, it moves assets under the app entry
to a file called app.js that consumes all relevant css and js files.
This commit also edits the webpack config to be able to expose certain
variables for third party libraries that are currently required by
some modules. This is bad coding form and should be refactored to
requiring whatever dependencies a module may have; we're just
deferring that to the future to simplify the series of transitions we
need to do here. The variable exposure is done using expose-loader in
webpack.
The app/index.html template is edited to override the newly introduced
'commonjs' block in the base template. This is done as a temporary
measure so as not to disrupt other pages on the app during the transition.
It also fixes the value of the 'this' context that was being inferred
as window by third party libraries. This is done using imports-loader
in the webpack config. This is also messy and probably isn't how we
want things to work long term.
The only changes visible at the AST level, checked using
https://github.com/asottile/astpretty, are
zerver/lib/test_fixtures.py:
'\x1b\\[(1|0)m' ↦ '\\x1b\\[(1|0)m'
'\\[[X| ]\\] (\\d+_.+)\n' ↦ '\\[[X| ]\\] (\\d+_.+)\\n'
which is fine because re treats '\\x1b' and '\\n' the same way as
'\x1b' and '\n'.
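A quick way to confirm that claim:

    import re

    # The regex engine treats the two-character sequence '\\n' and a real
    # newline '\n' identically, and likewise '\\x1b' and '\x1b', so the
    # change is behavior-preserving.
    assert re.match('\\n', '\n')
    assert re.match('\n', '\n')
    assert re.match('\\x1b', '\x1b')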
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Adds search_pill.js to the static asset pipeline. The items
for search pill contain 2 keys, display_value and search_string.
Adding all the operator information (i.e. the operator, operand, and
negated fields) along with the search_string and description was tried
out. It was dropped because it didn't provide any advantage, as one
always had to calculate the search_string and the description from the
operator.
The appropriate name for the remote pointing at the repo we maintain
may be `upstream` for most of our repos... but not when we're
downstream of someone else, e.g. for react-native. So, make it easy
to configure per-repo.
This results in a significant optimization in the performance of
re-provisioning Zulip if all that you're doing is rebasing onto a
newer version of master (which just adds new migrations).
The change carries some risk of generating unpleasant-to-debug
situations, because if we merge a buggy migration and then later fix
it, some clients may not have a properly migrated database (and also,
this changes how populate_db commutes with migrations). But it seems
worth it, given how much time is currently wasted by not having this.
Fixes: #9512.
In this commit we are adding run_generate_fixtures_if_required, a new
function which is meant to de-duplicate a bit of code between
test-server and test-backend that is responsible for rebuilding the
test database when required.
In this commit we are just refactoring the function
is_template_database_current to be called template_database_status,
adjusting the return values accordingly.
This is a preparatory commit for the upcoming commits, which will
enable us to avoid throwing away the entire database and rebuilding
from scratch when just running migrations would do the job.
This should avoid us creating duplicate webpack bundles every time we
do a deployment, even if none of the files in the bundles themselves
have changed at all.
This option (aka `--raw-output`) prints a string as itself, rather
than JSON-encoded, which makes it fit a bit better in a shell script,
saving us a layer of quoting.
This replaces ad4617c95 with a different fix for the same issue:
instead of stripping the `.git` off separately, we can just correct
the regex, using `+?` to avoid the classic regex pitfall we had
stepped into.
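Illustrative regexes (not the exact ones in the tool) showing the
difference:

    import re

    url = "https://github.com/zulip/zulip.git"
    # Greedy '.+' swallows the optional '.git' suffix into the repo name...
    greedy = re.match(r"https://github\.com/([^/]+)/(.+)(?:\.git)?$", url)
    # ...while non-greedy '.+?' stops as soon as the rest of the pattern
    # can match, leaving '.git' outside the capture group.
    lazy = re.match(r"https://github\.com/([^/]+)/(.+?)(?:\.git)?$", url)
    print(greedy.group(2))  # zulip.git
    print(lazy.group(2))    # zulip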
This is a performance optimization: Rather than copying these files
into the `prod-static` directory and then deleting them, we just don't
copy them over in the first place.
For styles, it might have once been the case that this did something,
but we moved them all to being managed by webpack some time ago.
For the js directory, I think it was never useful to copy and then
delete them; these files were always compiled via tools/minify-js,
and the raw JS files weren't needed, anyway.
This changes run-dev.py to ensure that we have in fact compiled
handlebars templates before running webpack, which is the right model.
Future work will likely include running the handlebars compiler from
webpack, and thus eliminating this extra process.
This improves the performance of these operations, by saving a ~50ms
Python process startup. While not a major performance improvement, it
seems worth it, given how often these commands get run.
Fixes #9571.
First, it's silly that these weren't in common.css in the first place,
since that meant these were a bunch of duplicated code, but
additionally, that meant that these weren't available on the
`/activity` page (or other pages that don't include the portico styles).
Fixes #9561.
Makes the i18n strings in this file much easier to translate by splitting
them into smaller chunks (which avoids having a lot of code in the tagged
strings), and adds a string that was missing as well.
We fix the issue of check-templates printing the diff between the
expected and found indentation of a file before mentioning the error
message and the file name. The output was appearing in the wrong
order, even though the code did things in the correct order, i.e.,
first print the error message along with the filename, and then the
actual diff between the expected and found file indentation.
Fixes: #9533.
A "zform" knows how to render data that follows our
schema for widget messages with form elements like
buttons and choices.
This code won't be triggered until a subsequent
server-side commit takes widget_content from
API callers such as the trivial chat bot and
creates submessages for us.
This starts the concept of a schema checker, similar to
zerver/lib/validator.py on the server. We can use this
to validate incoming data. Our server should filter most
of our incoming data, but it's useful to have client-side
checking to defend against things like upgrade
regressions (i.e. what if we change the name of the field
on the server side without updating all client uses).
I mistakenly pushed a PR when my tests failed. I ran with
the coverage option, so I saw this brightly colored summary
report that distracted me from the failure message.
This adds a couple newlines and some all caps.
The timezone environment variable was set to UTC initially. It was
changed to something other than UTC so that any local vs UTC
conversion issues will manifest in the tests.
Fixes: #5105.
We stop running create_realm_internal_bots during every provisioning
and move its operations to run from populate_db. In fact, to speed
things up a bit, we make populate_db call the functions that
create_realm_internal_bots calls behind the scenes.
Fixes: #9467.
This is required because the --settings=zproject.test_settings param
doesn't work with the migrate or dumpdata management commands. Thus,
until now, running just this tool left the test database not properly
set up. We never noticed this because test-backend ran this tool again
(after exporting DJANGO_SETTINGS_MODULE), making the tool work that
time.
I've often done this by hand -- basically typed out the last line,
with the variables found from looking at the PR page in a browser.
Seems nicer for both us maintainers and the contributor, in particular
because the PR gets marked as merged, instead of closed. But it's a
bit of a pain, and I do it maybe half the time or less; plus it's kind
of a subtle GitHub feature, and as a result I think other maintainers
of Zulip repos do this approximately never.
I've always figured this couldn't be hard to automate; today I decided
to take the 45 minutes to look up how, write out the script, QA it,
write up a nice usage message and some comments, and commit it. :)
This commit improves the output that blueslip produces while
showing error stack traces on the front-end. This is done by
using a library called error-stack-parser to format the stack
traces.
This commit also edits the webpack config to use a different
devtool setting since the previous one did not support sourcemaps
within stack traces. It also removes a plugin that was obviated
by this change.
This improves test coverage for a lot of our webhooks that relied
on ad-hoc methods to handle unexpected event types.
Note that I have deliberately skipped github_legacy; it isn't
advertised and is officially deprecated.
Also, I have refrained from making further changes to Trello; I
believe further improvements to test coverage should be covered
in separate per-webhook commits/PRs.
UnexpectedWebhookEventType is a generic exception that we may
now raise when we encounter a webhook event that is new or one
that we simply aren't aware of.
Now that we have tsearch_extras packages uploaded, this mostly works.
There's a few issues being debugged in #9460; they should be fixed
soon, and regardless, merging this will make it easier to develop.
This makes it possible to again use the *.zulipdev.com domains in the
development environment.
Ideally, we'd also read REALM_HOSTS to make this more flexible.
This adds a tour of Zulip to the bottom of the homepage.
In order to get the carousel nav, we use Bootstrap 2 from a CDN on
this page; this isn't ideal in the medium term, but upgrading
Bootstrap across the project is too much work for now.
Apparently, we were incorrectly appending each new hash onto the end
of the file, basically resulting in every run of provision being
treated as a miss for this cache.
Fixing this saves about 4s (over 1/3) of the no-op provision time.
Fixes #9233.
Uses nargs='*' instead of nargs=argparse.REMAINDER.
nargs=argparse.REMAINDER gathers all remaining terms as arguments,
even if a term is an option (e.g. --coverage), while '*' gathers
command-line arguments only until the next option is encountered.
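A minimal illustration of the difference (a hypothetical parser, not
the actual tool's options):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("targets", nargs="*")
    parser.add_argument("--coverage", action="store_true")

    args = parser.parse_args(["foo.py", "bar.py", "--coverage"])
    # With nargs="*": args.targets == ["foo.py", "bar.py"] and
    # args.coverage is True.  With nargs=argparse.REMAINDER, targets
    # would swallow everything after the first positional, including
    # "--coverage".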
The only slash command implemented in this initial
version is an extremely crippled version of a
"/stats" slash command that reports that you are
running 1 server.
This has a cool structure, but it's written against the long-dead
South API, and we can always pull it out of the Git history if we want
to use this approach in the future.
This leaves the wrapper script with very little left to do!
The main thing left is finding scripts by searching for shebang lines;
mypy itself would happily do the search for importable Python files.
Cleaned up add_user_list_args(). The "help" and "all_users_help"
parameters both have default values. As noted in an earlier commit,
"all_users_help" is always passed in, so we can get rid of
"all_users_arg". We keep the default for "all_users_help" so we don't
have to change the parameter order in the function definition.
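A rough sketch of the simplified helper (the default strings and
argument names here are illustrative, not the exact ones in the code):

    def add_user_list_args(parser,
                           help="A comma-separated list of email addresses.",
                           all_users_help="Act on all users in the realm."):
        parser.add_argument("-u", "--users", dest="users", help=help)
        parser.add_argument("-a", "--all-users", dest="all_users",
                            action="store_true", help=all_users_help)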