Runs a test at the end of tools/ci/backend to check if any untracked
files have been created during the CI tests. Exits with error code 1
if untracked files are found, and otherwise exits successfully.
Fixes #14691.
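A minimal sketch of such a check (illustrative only, not the actual
tools/ci/backend code):
```
import subprocess
import sys

# Files that are neither tracked nor ignored by .gitignore.
untracked = subprocess.check_output(
    ["git", "ls-files", "--others", "--exclude-standard"],
    universal_newlines=True,
).splitlines()

if untracked:
    print("Untracked files created during the CI tests:")
    for path in untracked:
        print("  " + path)
    sys.exit(1)
```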
We recently changed our droplet setup such that their
host names no longer include zulipdev.org. This caused
a few things to break.
The particular symptom that this commit fixes is that
we were trying to serve static assets from
showell:9991 instead of showell.zulipdev.org:9991,
which meant that you couldn't use the app locally.
(The server would start, but the site's pretty unusable
without static assets.)
Now we rely 100% on `dev_settings.py` to set
`EXTERNAL_HOST` for any droplet users who don't set
that var in their own environment. That allows us to
remove some essentially duplicate code in `run-dev.py`.
We also set `IS_DEV_DROPLET` explicitly, so that other
code doesn't have to make inferences or duplicate
logic to determine whether we're a droplet or not.
And then in `settings.py` we use `IS_DEV_DROPLET` to
know that we can use a prod-like method of calculating
`STATIC_URL`, instead of hard coding `localhost`.
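A rough sketch of the intended flow; the droplet-detection test here is
an illustrative assumption, not the real dev_settings.py code:
```
import os

# Assumption: IS_DEV_DROPLET is derived from some explicit marker; the
# real detection logic lives in dev_settings.py.
IS_DEV_DROPLET = os.environ.get("IS_DEV_DROPLET") == "1"

if "EXTERNAL_HOST" in os.environ:
    EXTERNAL_HOST = os.environ["EXTERNAL_HOST"]
elif IS_DEV_DROPLET:
    # e.g. showell -> showell.zulipdev.org:9991
    EXTERNAL_HOST = os.uname()[1] + ".zulipdev.org:9991"
else:
    EXTERNAL_HOST = "localhost:9991"

# settings.py: with IS_DEV_DROPLET known, STATIC_URL can be computed
# the prod-like way instead of hard-coding localhost.
STATIC_URL = "http://" + EXTERNAL_HOST + "/static/"
```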
We may want to iterate on this further--this was
sort of a quick fix to get droplets functional again.
It's possible we can re-configure droplets to have
folks get reasonable `EXTERNAL_HOST` settings in their
bash profiles, or something like that, although that
may have its own tradeoffs.
Avatar images for bots used by the tool to generate integration
documentation screenshots are pre-generated and committed to the repository.
The `generate-integration-docs-screenshot` tool now uses these images,
instead of trying to create these avatar images on the fly.
Also, deleted the unused `create_png_from_svg` function.
We now have helpers for the two places where
we create databases.
There was already one helper in place, and
I gave it a more concrete name, to match
its actual database name in postgres.
So `generate-fixtures` only ever did 9 lines of code (really
3 lines of actual code) in its normal mode of operation.
But it was cluttered with lots of stuff that really only
happened when you called it with the `--force` option, which
was only invoked by `rebuild-test-database`.
Now we inline most of the code into `rebuild-test-database`.
And now `generate-fixtures` is simple (and doesn't support
a `--force` flag).
We remove the `generate_fixtures` option here mostly
for simplicity, but in particular to facilitate
an upcoming commit to simplify the job of
`generate-fixtures` (and remove its `--force` option).
The command line option here for `test-backend`
was really calling `generate_fixtures --force`,
which we're about to rename `tools/rebuild-test-database`.
The `test-backend` tool is already smart about catching
up on migrations, so we generally don't need to tell it
to repair the database.
And if the database does get corrupt, you can just rebuild
it directly with `tools/rebuild-test-database`.
This eliminates the `use_force` flag in
`update_test_databases_if_required`, which was easy
to confuse with `rebuild_test_database`.
The other caller wasn't using `use_force`.
The new tools now have more concise, more parallel names:
- rebuild-dev-database
- rebuild-test-database
The actual implementations are still pretty different:

rebuild-dev-database:
    mostly delegates to 5 management scripts

rebuild-test-database:
    is a very thin wrapper for generate-fixtures
We'll try to clean that up a bit soon.
This tool was part of a very ad hoc investigation
during 2017 into our JS dependencies.
It's very out of date, and it has a non-trivial
maintenance cost, as these types of tools seem
to come up in every code sweep.
Commit 35577a1f66 (#9406) moved the
OpenAPI definitions to zerver/openapi, and this script was not
updated, so it has been validating nothing for two years.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Restart the postgres service if provision is called in the production
test suite. This is required because the terminate-psql-sessions script
(used in tools/ci/setup-production) throws an error if the postgres
service is not running.
Restart the rabbitmq service if provision is called in the production
test suite. This is done to start the node, as Circle CI doesn't start
services on installation.
Removed the memcached restart, as the flush-memcached script (which is
further used in tools/ci/production) throws UNKNOWN READ FAILURE if
memcached is restarted in development.
Since we now want to run the production suites on Circle CI, there
is no need to set TRAVIS in the environment while running scripts.
CIRCLECI is set by default in the environment of Circle CI builds,
so we can use it directly.
Also, Travis CI had rabbitmq-server installed, so we had to add a
workaround in the install script to avoid the error. That workaround
is removed.
Used postgres 10 in place of postgres 9.5, as that is what Bionic uses.
Upgraded the nginx version in success-http-headers to match the
CircleCI Bionic environment.
This seems to be a complicated wrapper around `sed`.
It's not used anywhere or documented anywhere. I
assume it was used in some code sweep or something.
See cec45b0ae5
I built this in 2013 to help me quickly build
some sample data for our very early node tests,
I believe.
Ever since then it's just been changed for various
code sweeps. Also, if we wanted to resurrect this
idea for some reason, we now have the template
parser that was written since then.
This tool was last modified in 2016, has no
documentation, and requires `dot` to run, so
I think we can remove it to reduce some clutter
in the tools directory.
This tool is hackish and incomplete, but it's still
worth looking at if somebody wants to hunt down
obsolete CSS. There are probably better ways to
tackle this problem, so we should eventually just
remove this tool, but it's pretty low maintenance.
Current output looks like this:
$ ./tools/find-unused-css
actions_hovered
actions_link
bookend_tr
btn-skip
company-name
flatpickr-months
label_for_text
loading_more_messages_indicator_box
loading_more_messages_indicator_box_container
logoimage
messages-collapse
messages-expand
numInputWrapper
page_loading_indicator_box
page_loading_indicator_box_container
skinny-user-gravatar
sp-input
summary_colorblock
summary_row
summary_row_private_message
tutorial-done-button
Otherwise, on each retry, the duplicate commits would be removed
again and again from the contributor's zulip/zulip commits.
This fixes a bug introduced in the original commit
6bed6ccdcf, which added the
functionality to remove duplicate commits.
We now have two functions related to digests
for processes:
is_digest_obsolete
write_digest_file
In most cases we now **wait** to write the
digest file until after we've successfully
run a process with its new inputs.
In one place, for database migrations, we
continue to write the digest optimistically.
We'll want to fix this, but it requires a
little more code cleanup.
Here is the typical sequence of events:

NEVER RUN -
    is_digest_obsolete returns True
    quickly (we don't compute a hash)
    write_digest_file does a write (duh)

AFTER NO CHANGES -
    is_digest_obsolete returns False
    after reading one file for the old hash
    and multiple files to compute the new hash
    most callers skip write_digest_file
    (no files are changed)

AFTER SOME CHANGES -
    is_digest_obsolete returns True
    after doing full checks
    most callers call write_digest_file
    *after* running a process
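A condensed sketch of the pattern (file layout and hashing details are
simplified assumptions, not the actual provision code):
```
import glob
import hashlib
import os

def compute_hash(paths):
    sha1 = hashlib.sha1()
    for path in sorted(paths):
        with open(path, "rb") as f:
            sha1.update(f.read())
    return sha1.hexdigest()

def is_digest_obsolete(digest_path, paths):
    if not os.path.exists(digest_path):
        return True  # NEVER RUN: answer quickly, no hash computed
    with open(digest_path) as f:
        old_hash = f.read().strip()
    return old_hash != compute_hash(paths)

def write_digest_file(digest_path, paths):
    with open(digest_path, "w") as f:
        f.write(compute_hash(paths))

# Typical caller: write the digest only after the process succeeds.
paths = glob.glob("requirements/*.txt")
if is_digest_obsolete("var/requirements.digest", paths):
    # ... run the process with its new inputs ...
    write_digest_file("var/requirements.digest", paths)
```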
There's no real reason to do the lazy import any
more, as we use this unconditionally inside `main`
(indirectly), and `provision_inner` runs after we
have set up the venv.
I make these all functions for consistency,
and in particular I want to continue to avoid
`glob.glob` calls until we are actually
computing hashes.
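For example (the name and globs here are hypothetical):
```
import glob

def migration_paths():
    # Wrapping the glob in a function defers the filesystem walk
    # until a caller actually computes hashes.
    return glob.glob("*/migrations/*.py") + ["manage.py"]
```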
This is mostly a prep to allow us to do
hashing in two separate places:
- check hashes
- update hashes
We would only update hashes **after** running
processes anew.
For `provision_inner` I considered using a
class to put the three path-related helpers
into a mini namespace, but it felt too heavy.
It wouldn't be completely implausible here
to extract something like a JSON config
file that has a list of globs for each
process that we do path-hashing for, but I
want to clean up other stuff first.
Importing cairosvg fails in production, because `libgtk-3-dev` is not
available. This commit moves the import for `cairosvg` into the function
where it is used. This code will soon be moved into a separate script that
will not be run on production.
This guarantees that we don’t accidentally upgrade one without the
other, which could happen for example due to different third-party
version constraints between the two.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We no longer need to maintain duplicate code
related to where we set up the emoji
cache directory.
And we no longer need two extra steps for
people doing advanced (i.e. manual) setup.
There was no clear benefit to having provision
build the cache directory for `build_emoji`,
when it was easy to make `build_emoji` more
self-sufficient. The `build_emoji` tool
was already importing the library that has
`run_as_root`, and it was already responsible
for 99% of the create-directory kind of tasks.
(We always call `build_emoji` unconditionally from
`provision`, so there's no rationale in terms
of avoiding startup time or something.)
ASIDE:
It's not completely clear to me why we need
to put this directory in "/srv", instead of
somewhere more local (like we already do for
Travis), but maybe it's just to be like
its siblings in "/srv":
node_modules
yarn.lock
zulip-emoji-cache
zulip-npm-cache
zulip-py3-venv
zulip-thumbor-venv
zulip-venv-cache
zulip-yarn
I guess the caches that we keep in var are
dev-only, although I think some of what's under
`zulip-emoji-cache` is also dev-only in nature?
./var/webpack-cache
./var/mypy-cache
In `docs/subsystems/emoji.md` we say this:
```
The `build_emoji` tool generates the set of files under
`static/generated/emoji` (or really, it generates the
`/srv/zulip-emoji-cache/<sha1>/emoji` tree, and
`static/generated/emoji` is a symlink to that tree; we do this in
order to cache old versions to make provisioning and production
deployments super fast in the common case that we haven't changed the
emoji tooling). [...]
```
I don't really understand that rationale for the development
case, since `static/generated` is as much ignored by `git` as
'/srv' is, without the complications of needing `sudo` to create it.
And in production, I'm not sure how much time we're really saving,
as it takes me about 1.4s to fully rebuild the cache in dev, not to
mention we're taking on upgrade risk by sharing files between versions.
So, `source_emoji_dump` is not the greatest variable
name, but at least we now define it relative to its
parent instead of the `.success-stamp` file. (And
then `success_stamp` is just another join.)
If the directory `templates/zerver/emails/compiled/`
is missing, then we need to run `inline_email_css`
again.
This can happen if somebody gets overzealous about
cleaning untracked files.
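The check itself can be as cheap as an existence test (a sketch, not
the actual provision code):
```
import os

def need_to_run_inline_email_css():
    # If an overzealous `git clean` removed the compiled templates,
    # rerun inline_email_css regardless of what the digest says.
    return not os.path.exists("templates/zerver/emails/compiled/")
```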
This is more encapsulated and more efficient.
In the cases where `is_force` is `True` or
`pygments_data.json` is missing, we now avoid
the unnecessary step of importing `pygments`, at
least up front.
(Of course, we probably import that once we generate
the artifacts.)
If somebody is having issues with provision, it's
plausible they'll do something like `git clean -fX`
to clean up old artifacts of earlier provision runs,
as part of debugging things.
We defend against this by detecting the most obvious
symptom as cheaply as possible.
I remove `is_force` from `file_or_package_hash_updated`
and modernize its mypy annotations.
If `is_force` is `True`, we now just run the thing
we want to force-run without having to call
`file_or_package_hash_updated` to expensively
and riskily return `True`.
Another nice outcome of this change is that if
`file_or_package_hash_updated` returns `True`,
you can know that the file or package has
indeed been updated.
For the case of `build_pygments_data` we also
skip an `os.path.exists` check when `is_force`
is `True`.
We will short-circuit more logic in the next
few commits, as well as cleaning up some of
the long/wrapper lines in the `if` statements.
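Sketched out, the short-circuit looks roughly like this (the paths and
the helper body are simplified stand-ins for the real provision code):
```
import hashlib
import os
import subprocess

def file_or_package_hash_updated(paths, digest_path):
    # Stand-in for the real helper: True iff the inputs changed since
    # the stored digest (which is then refreshed).
    sha1 = hashlib.sha1()
    for path in sorted(paths):
        with open(path, "rb") as f:
            sha1.update(f.read())
    new_hash = sha1.hexdigest()
    old_hash = None
    if os.path.exists(digest_path):
        with open(digest_path) as f:
            old_hash = f.read().strip()
    if old_hash == new_hash:
        return False
    with open(digest_path, "w") as f:
        f.write(new_hash)
    return True

def build_pygments_data_if_needed(is_force):
    output_path = "static/generated/pygments_data.json"
    if is_force or not os.path.exists(output_path):
        # Short-circuit: skip the expensive hash check entirely
        # (and, when is_force is True, the os.path.exists check too).
        subprocess.check_call(["tools/setup/build_pygments_data"])
    elif file_or_package_hash_updated(
        ["tools/setup/lang.json"], "var/pygments.digest"
    ):
        subprocess.check_call(["tools/setup/build_pygments_data"])
    else:
        print("No need to run `tools/setup/build_pygments_data`.")
```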
We change the message for skipping RabbitMQ
configuration to match nearby messages:
No need to run `tools/setup/build_pygments_data`.
No need to run `scripts/setup/inline_email_css.py`.
No need to run `scripts/setup/configure-rabbitmq`.
No need to regenerate the dev DB.
No need to regenerate the test DB.
No need to run `manage.py compilemessages`.
For upgrade-zulip-from-git to work, we need to be able to run
update-prod-static on production systems, which means provision code
like this cairosvg logic needs to be there for now.
When creating a new webhook integration or updating an existing one,
it is a pain to create or update the screenshots in the documentation.
This commit adds a tool that can trigger a sample notification for the
webhook, using a fixture that is likely already written for the tests.
Currently, the developer needs to take a screenshot manually, but this could
be automated using puppeteer or something like that.
Also, the tool does not support webhooks with basic auth, and only supports
webhooks that use json fixtures. These can be fixed in subsequent commits.
We figure out the dev host using the same logic as
dev_settings.py, so that we don't use wrong things
like 127.0.0.1 for droplet users.
And we display the link in cyan.
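For instance (a sketch; the cyan is a plain ANSI escape, and the host
resolution is assumed to happen upstream, as in dev_settings.py):
```
import os

CYAN = "\033[96m"
ENDC = "\033[0m"

def print_dev_link(external_host):
    # external_host should come from the same logic as dev_settings.py,
    # not a hard-coded 127.0.0.1.
    print("Starting Zulip at " + CYAN + "http://" + external_host + ENDC)

print_dev_link(os.environ.get("EXTERNAL_HOST", "localhost:9991"))
```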
Generated by `pyupgrade --py3-plus --keep-percent-format` on all our
Python code except `zthumbor` and `zulip-ec2-configure-interfaces`,
followed by manual indentation fixes.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
If a request fails, the tool sleeps for some time before making
further requests. The sleep time is a random number between
0 and 2^failures, capped at 64 seconds. More details about the
algorithm can be found at
https://chat.zulip.org/#narrow/stream/92-learning/topic/exponential.20backoff.20--.20with.20jitter
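A minimal version of the backoff (the function name is illustrative):
```
import random
import time

MAX_SLEEP = 64

def backoff_sleep(failures):
    # Sleep a random duration between 0 and 2**failures seconds,
    # capped at MAX_SLEEP, so retries spread out instead of stampeding.
    time.sleep(random.uniform(0, min(2 ** failures, MAX_SLEEP)))
```
With full jitter like this, concurrent clients retry at uncorrelated
times rather than in synchronized waves.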
This is a prep commit for generating /team page data
using a cron job. The zerver/tests directory is not present in
a production installation, so we move the file from the tests
directory to tools.
As described in the commit that added this function, this fixes one
quite annoying bug and one at least in-principle bug:
* On Windows, the simple version (lacking `git update-index
--refresh`) routinely gives false positives, making the tools
that rely on it basically unusable.
* If you have uncommitted changes in the index but manage to have
the worktree nevertheless match HEAD, the simple version will
give a false negative and we'd blow away those changes.
This is verbatim from Git upstream, at an older version. (The one
change since then is to add localization for the messages like "You
have unstaged changes" -- which complicates the code, is important and
worth it for Git itself, but for our tools we can do without.)
This function will replace our use of `git diff-index --quiet HEAD`
in several scripts. The key differences in behavior are:
* The `git update-index --refresh`. Without this, on Windows
apparently `git diff-index` routinely (but not all the time!)
reports that tons of files have changed. See report:
https://chat.zulip.org/#narrow/stream/9-issues/topic/.2E.2Ftools.2Ffetch-pull-request.20issue/near/834435
* Instead of one command comparing the worktree to HEAD, we
separately compare the worktree to the index and the index to
HEAD, and abort if either diff is nonempty. This one is obvious,
but rather an edge case (it matters only if you've managed to
make the worktree and HEAD agree while the index has some
changes), and the extra code is annoying if written out in every
script that needs it. But that's what a subroutine is for. :-)
We'll make a few tweaks before actually switching to use this.
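In Python terms, the check described above looks roughly like this (a
sketch of the shell logic, not the actual script):
```
import subprocess
import sys

def require_clean_work_tree():
    # Refresh the index first; without this, `git diff-index` on
    # Windows routinely reports phantom changes.
    subprocess.call(["git", "update-index", "-q", "--refresh"])

    # Compare worktree vs. index and index vs. HEAD separately, so we
    # also catch staged changes when the worktree happens to match HEAD.
    if subprocess.call(["git", "diff-files", "--quiet"]) != 0:
        print("You have unstaged changes.")
        sys.exit(1)
    if subprocess.call(
        ["git", "diff-index", "--cached", "--quiet", "HEAD", "--"]
    ) != 0:
        print("Your index contains uncommitted changes.")
        sys.exit(1)
```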
The Git commands we're invoking to do the real work are useful to
print, for transparency to see what's happening and that there's no
magic here.
The boring shell stuff like `remote=${2:-"upstream"}` is not so
helpful, and neither is the rather arcane and in any case read-only
command `git diff-index --quiet HEAD`. Those only add noise that
obscures the interesting parts. So, move the `set -x` down to when
we're done with the boring preparatory stuff and ready to perform
the commands that do the work.