We now have two functions related to digests
for processes:
- is_digest_obsolete
- write_digest_file
In most cases we now **wait** to write the
digest file until after we've successfully
run a process with its new inputs.
In one place, for database migrations, we
continue to write the digest optimistically.
We'll want to fix this, but it requires a
little more code cleanup.
Here is the typical sequence of events:
NEVER RUN -
is_digest_obsolete returns True
quickly (we don't compute a hash)
write_digest_file does a write (duh)
AFTER NO CHANGES -
is_digest_obsolete returns False
after reading one file for the old
hash and multiple files to compute
the new hash
most callers skip write_digest_file
(no files are changed)
AFTER SOME CHANGES -
is_digest_obsolete returns True
after doing full checks
most callers call write_digest_file
*after* running a process
There's no real reason to do the lazy import any
more, as we use this unconditionally inside `main`
(indirectly), and `provision_inner` runs after we
have set up the venv.
I make these all functions for consistency,
and in particular I want to continue to avoid
`glob.glob` calls until we are actually
computing hashes.
This is mostly a prep to allow us to do
hashing in two separate places:
- check hashes
- update hashes
We would only update hashes **after** running
processes anew.
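For reference, a minimal sketch of the two-phase pattern (the helper names match this commit, but `compute_hash` and its glob handling are illustrative assumptions, not the exact provision code):

```
import glob
import hashlib
import os

def compute_hash(globs):
    # Expand globs lazily -- only when we actually need a hash.
    sha = hashlib.sha1()
    for pattern in globs:
        for fn in sorted(glob.glob(pattern)):
            with open(fn, "rb") as f:
                sha.update(f.read())
    return sha.hexdigest()

def is_digest_obsolete(digest_file, globs):
    if not os.path.exists(digest_file):
        return True  # NEVER RUN: cheap, no hash computed
    with open(digest_file) as f:
        old_hash = f.read().strip()
    return old_hash != compute_hash(globs)

def write_digest_file(digest_file, globs):
    with open(digest_file, "w") as f:
        f.write(compute_hash(globs))

# Typical caller: check first, run the process, only then update.
# if is_digest_obsolete(digest_file, globs):
#     run_process()
#     write_digest_file(digest_file, globs)
```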
For `provision_inner` I considered using a
class to put the three path-related helpers
into a mini namespace, but it felt too heavy.
It wouldn't be completely implausible here
to extract something like a JSON config
file that has a list of globs for each
process that we do path-hashing for, but I
want to clean up other stuff first.
We no longer need to maintain duplicate code
related to where we set up the emoji
cache directory.
And we no longer need two extra steps for
people doing advanced (i.e. manual) setup.
There was no clear benefit to having provision
build the cache directory for `build_emoji`,
when it was easy to make `build_emoji` more
self-sufficient. The `build_emoji` tool
was already importing the library that has
`run_as_root`, and it was already responsible
for 99% of the create-directory kind of tasks.
(We always call `build_emoji` unconditionally from
`provision`, so there's no rationale in terms
of avoiding startup time or something.)
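Roughly, the self-sufficient version looks like this (a sketch only; the chown step and exact path handling are assumptions):

```
import os

from scripts.lib.zulip_tools import run_as_root

EMOJI_CACHE_PATH = "/srv/zulip-emoji-cache"

if not os.access(EMOJI_CACHE_PATH, os.W_OK):
    run_as_root(["mkdir", "-p", EMOJI_CACHE_PATH])
    run_as_root(["chown", "{}:{}".format(os.getuid(), os.getgid()), EMOJI_CACHE_PATH])
```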
ASIDE:
It's not completely clear to me why we need
to put this directory in "/srv", instead of
somewhere more local (like we already do for
Travis), but maybe it's just to be like
its siblings in "/srv":
node_modules
yarn.lock
zulip-emoji-cache
zulip-npm-cache
zulip-py3-venv
zulip-thumbor-venv
zulip-venv-cache
zulip-yarn
I guess the caches that we keep in var are
dev-only, although I think some of what's under
`zulip-emoji-cache` is also dev-only in nature?
./var/webpack-cache
./var/mypy-cache
In `docs/subsystems/emoji.md` we say this:
```
The `build_emoji` tool generates the set of files under
`static/generated/emoji` (or really, it generates the
`/srv/zulip-emoji-cache/<sha1>/emoji` tree, and
`static/generated/emoji` is a symlink to that tree; we do this in
order to cache old versions to make provisioning and production
deployments super fast in the common case that we haven't changed the
emoji tooling). [...]
```
I don't really understand that rationale for the development
case, since `static/generated` is as much ignored by `git` as
`/srv` is, without the complications of needing `sudo` to create it.
And in production, I'm not sure how much time we're really saving,
as it takes me about 1.4s to fully rebuild the cache in dev, not to
mention we're taking on upgrade risk by sharing files between versions.
If the directory `templates/zerver/emails/compiled/`
is missing, then we need to run `inline_email_css`
again.
This can happen if somebody gets overzealous about
cleaning untracked files.
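A minimal sketch of that defense (the helper and glob names here are hypothetical):

```
import os

def need_to_run_inline_email_css():
    # Cheap symptom check: the compiled directory vanishes when
    # somebody runs something like `git clean -fX`.
    if not os.path.exists("templates/zerver/emails/compiled"):
        return True
    # Otherwise fall back to the usual digest comparison
    # (hypothetical helper/glob names).
    return is_digest_obsolete("inline_email_css", EMAIL_SOURCE_GLOBS)
```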
This is more encapsulated and more efficient.
In the cases where `is_force` is `True` or
`pygments_data.json` is missing, we now avoid
the unnecessary step of importing `pygments`, at
least up front.
(Of course, we probably import that once we generate
the artifacts.)
If somebody is having issues with provision, it's
plausible they'll do something like `git clean -fX`
to clean up old artifacts of earlier provision runs,
as part of debugging things.
We defend against this by detecting the most obvious
symptom as cheaply as possible.
I remove `is_force` from `file_or_package_hash_updated`
and modernize its mypy annotations.
If `is_force` is `True`, we now just run the thing
we want to force-run, without having to call
`file_or_package_hash_updated` to expensively
(and riskily) return `True`.
Another nice outcome of this change is that if
`file_or_package_hash_updated` returns `True`,
you can know that the file or package has
indeed been updated.
For the case of `build_pygments_data` we also
skip an `os.path.exists` check when `is_force`
is `True`.
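The resulting call-site shape, as a hedged sketch (the names follow this commit's text; the paths and exact wiring are assumptions):

```
import os
import subprocess

def maybe_build_pygments_data(is_force, paths):
    # `is_force` short-circuits everything, including the
    # os.path.exists check and the expensive hashing.
    if (
        is_force
        or not os.path.exists("static/generated/pygments_data.json")
        or file_or_package_hash_updated(paths, "build_pygments_data")
    ):
        subprocess.check_call(["tools/setup/build_pygments_data"])
    else:
        print("No need to run `tools/setup/build_pygments_data`.")
```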
We will short-circuit more logic in the next
few commits, as well as cleaning up some of
the long/wrapped lines in the `if` statements.
We change the message for skipping RabbitMQ
configuration to match nearby messages:
No need to run `tools/setup/build_pygments_data`.
No need to run `scripts/setup/inline_email_css.py`.
No need to run `scripts/setup/configure-rabbitmq`.
No need to regenerate the dev DB.
No need to regenerate the test DB.
No need to run `manage.py compilemessages`.
Generated by `pyupgrade --py3-plus --keep-percent-format` on all our
Python code except `zthumbor` and `zulip-ec2-configure-interfaces`,
followed by manual indentation fixes.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This is verbatim from Git upstream, at an older version. (The one
change since then is to add localization for the messages like "You
have unstaged changes" -- which complicates the code, is important and
worth it for Git itself, but for our tools we can do without.)
This function will replace our use of `git diff-index --quiet HEAD`
in several scripts. The key differences in behavior are:
* The `git update-index --refresh` call. Without this, on Windows
apparently `git diff-index` routinely (but not all the time!)
reports that tons of files have changed. See report:
https://chat.zulip.org/#narrow/stream/9-issues/topic/.2E.2Ftools.2Ffetch-pull-request.20issue/near/834435
* Instead of one command comparing the worktree to HEAD, we
separately compare the worktree to the index and the index to
HEAD, and abort if either diff is nonempty. This one is obvious,
but rather an edge case (it matters only if you've managed to
make the worktree and HEAD agree while the index has some
changes), and the extra code is annoying if written out in every
script that needs it. But that's what a subroutine is for. :-)
We'll make a few tweaks before actually switching to use this.
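For reference, a rough Python rendering of the two checks (this is a sketch of the behavior described above, not the verbatim port from Git):

```
import subprocess
import sys

def check_git_pristine_index_and_workdir():
    # Refresh the index first; without this, `git diff-index` can
    # report spurious changes (notably on Windows).
    subprocess.call(["git", "update-index", "-q", "--refresh"])

    # Worktree vs. index:
    unstaged = subprocess.call(["git", "diff-files", "--quiet"]) != 0
    # Index vs. HEAD:
    staged = subprocess.call(
        ["git", "diff-index", "--quiet", "--cached", "HEAD"]
    ) != 0

    if unstaged or staged:
        print("You have uncommitted changes!")
        sys.exit(1)
```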
Add sgrep (sgrep.dev) to tooling and include a simple rule as a
proof of concept. The included rule detects use of the old Django
render function.

Also add a rule that looks for if-else statements where both
code paths are identical.
Folks can have issues connecting to Casper
as zulipdev.com when they are not connected to
the internet or just have a bad connection, since
the DNS record is on the internet. Folks can
work around this by just creating an /etc/hosts
entry for zulipdev.com, but people don't always
know.
This fix moves the symptom slightly earlier in
the process--we don't advertise that the server
is "up" if you can't actually connect to it as
"zulipdev.com".
In the past it has blocked Python library security updates with overly
strict version bounds, and we don’t use it as a library, only as a
binary.
Skip the PROVISION_VERSION bump because we can use the tx binary from
either location.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Previously, we only did apt updates when our sources.list files or
keys changed, which could result in provisioning errors for
development systems that don't routinely update their apt cache
(probably including ~all Vagrant environments).
Used the get_venv_dependencies function to return the correct
dependencies for RHEL, CentOS, and Fedora rather than importing them
as separate COMMON_YUM_DEPENDENCIES in provision and
create-production-venv.

Added a get_venv_dependencies() function in setup_venv.py which
returns VENV_DEPENDENCIES according to the vendor and os_version.
The reason for adding this function is that python-dev will be
deprecated in Focal but can be used as python2-dev, so when adding
support for Focal, VENV_DEPENDENCIES needs to be os_version dependent.
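A minimal sketch of the dispatch, with placeholder package lists (the real constants live in setup_venv.py and differ):

```
from typing import List

# Placeholder lists for illustration only.
VENV_DEPENDENCIES = ["python-dev", "libffi-dev"]
FOCAL_VENV_DEPENDENCIES = ["python2-dev", "libffi-dev"]
YUM_VENV_DEPENDENCIES = ["python3-devel", "libffi-devel"]

def get_venv_dependencies(vendor: str, os_version: str) -> List[str]:
    if vendor == "ubuntu" and os_version == "20.04":
        # Focal: python-dev goes away; python2-dev takes its place.
        return FOCAL_VENV_DEPENDENCIES
    if vendor in ("rhel", "centos", "fedora"):
        return YUM_VENV_DEPENDENCIES
    return VENV_DEPENDENCIES
```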
There were two problems with the previous code:

1) The code glob.glob("scripts/lib/build-") should have been
glob.glob("scripts/lib/build-*"); otherwise it would always return [].

2) The part of the code where we included scripts/lib/build-* in the
sha1 sum check would only run when debian is not in os_families(). This
wasn't correct, as we can have to build pgroonga from source even on
debian, so the condition needed improving.

Now, since build-pgroonga is the only such script there, it's better to
just directly hash its content, under the BUILD_PGROONGA_FROM_SOURCE
condition.
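Something like this minimal sketch (how the flag gets set is an assumption here):

```
import hashlib
import os

# Assumed derivation of the flag, for illustration only.
BUILD_PGROONGA_FROM_SOURCE = os.environ.get("BUILD_PGROONGA_FROM_SOURCE") == "true"

sha_sum = hashlib.sha1()
if BUILD_PGROONGA_FROM_SOURCE:
    with open("scripts/lib/build-pgroonga", "rb") as f:
        sha_sum.update(f.read())
```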
While it's a bit of extra complexity to do this check, which I'm not
excited about, we've had multiple folks spend significant time being
confused when rebasing past d7d8632525 deleted
`pygments_data.json` and provision didn't rebuild it, so this seems
worth merging as a transitional fix even if we decide to remove it in
2 months.
1) Created a new class `DatabaseType` and accessed its objects inside
`template_database_status()` instead of sending five arguments with
default values.

2) Made `check_files` and `setting_name` local variables instead of
function parameters, since they had the same value (None) for every call.
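An illustrative sketch of the refactor (attribute names are assumptions, not the exact fields):

```
class DatabaseType:
    def __init__(self, database_name, migration_status_path):
        self.database_name = database_name
        self.migration_status_path = migration_status_path

DEV_DATABASE = DatabaseType("zulip", "var/migration_status_dev")
TEST_DATABASE = DatabaseType("zulip_test_template", "var/migration_status_test")

def template_database_status(database):
    # One object now replaces five parameters with default values;
    # check_files and setting_name become locals inside here.
    ...
```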
Fixes #13845.
This adds Ubuntu 19.10 as a valid provisioning target.
The release test in setup-apt-repo was changed from a list of values to
a regex check for brevity.
In this commit, we basically match any kind of jinja2 start tag,
no matter its special kind (e.g. jinja2_whitespace_stripped_start),
to any kind of jinja2 end tag (e.g. jinja2_whitespace_stripped_end).
The idea is that special operators like `-` do not change the meaning
of the inline tag, and thus matching shouldn't depend upon them.
I added this tool a few years ago, and I did have
a vision for how it would improve our codebase, but
I can't remember exactly where I was going with it.
At this point the tool is just a little too noisy
to be helpful. An example of it creating confusion
was a recent PR where somebody was patching
user_circle_class in the PM list, and we already
had similar code in the buddy list, because they
use the same CSS. I mean, there was possibly a way
that the code could have been structured to remove
some of the duplication, but it probably would have
just moved the complexity around.
I just don't think it's worth maintaining the tool
at this point.
This experimental setting disables sending private messages in Zulip
in a crude way (i.e. users get an error when they try to send one).
It makes no effort to adjust the UI to avoid advertising the idea of
sending private messages.
Fixes #6617.
We'll be soon documenting a production workflow that involves using
it, and that means it needs to live under scripts/ (since tools/ isn't
present in release tarballs).
To replace DISTRIB_FAMILY, there’s now an os_families function using
the standard ID and ID_LIKE information in /etc/os-release.
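A minimal sketch, assuming /etc/os-release follows the freedesktop spec (the real helper may handle more edge cases):

```
from typing import Dict, Set

def parse_os_release() -> Dict[str, str]:
    info = {}
    with open("/etc/os-release") as f:
        for line in f:
            line = line.strip()
            if "=" in line and not line.startswith("#"):
                key, value = line.split("=", 1)
                info[key] = value.strip('"')
    return info

def os_families() -> Set[str]:
    # e.g. {"ubuntu", "debian"} on Ubuntu, since ID_LIKE=debian there.
    distro = parse_os_release()
    return {distro["ID"], *distro.get("ID_LIKE", "").split()}
```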
Fixes #13070; fixes #13071.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We no longer use tsearch_extras, and the camo patch is irrelevant on
systemd systems (Xenial and newer). So we no longer need to
provide/install a PPA at all.
Closes #13027.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Now that we've implemented tsearch_extras in pure postgres, we no
longer need a custom extension. This should help us considerably, as
it means we no longer need to ship custom apt packages at all.
Fixes #467.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Previous cleanups (mostly the removals of Python __future__ imports)
were done in a way that introduced leading newlines. Delete leading
newlines from all files, except static/assets/zulip-emoji/NOTICE,
which is a verbatim copy of the Apache 2.0 license.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This gives us access to typing_extensions.Deque, which was not added
to typing until 3.5.4.
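For example (a minimal usage sketch):

```
from collections import deque

from typing_extensions import Deque

recent_errors: Deque[str] = deque(maxlen=10)
recent_errors.append("provision failed")
```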
(PROVISION_VERSION is not bumped because the transitive dependency set
in dev.txt hasn’t changed.)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
When we add Plus, the first sentence should change to "Available on Zulip
Standard and Plus".
I copied the styling of .tip out of expediency, but it's also possible that
long term we'll want only 1 tip-like box styling.
The hover styling is a bit random, but I tried to copy other hover styles I
found in settings.scss.
Note that this renames .upgrade_realm_plan_type_suggestion to .upgrade-tip.
Mismatching imports from outside and inside the virtualenv in the same
process was causing segfaults after apparently benign changes to the
script!
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
`valid_indent_html` allows for replacing the incorrectly
indented file with the correct pretty-printed version
if `--fix` is passed to `tools/lint`.
Fixes #12641.
As part of dropping support, we add appropriate error messaging when a
user attempts to provision while using trusty. If the user is running
in Vagrant we append information on how to proceed.
set iteration order is randomized in Python ≥ 3.3. That might or
might not have had the potential for causing rare probabilistic bugs,
but if nothing else, it made build logs harder to compare.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
As of commit cff40c557b (#9300), these
files are no longer served directly to the browser. Disentangle them
from the static asset pipeline so we can refactor it without worrying
about them.
This has the side effect of eliminating the accidental duplication of
translation data via hash-naming in our release tarballs.
This reverts commit b546391f0b (#1148).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
A function was written in `test_fixtures.py` to drop a test database
template if the corresponding database id doesn't belong to a file.
Alongside this, every file that is written is removed after 60
minutes, meaning any potential database template can never exist
longer than one hour.
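A minimal sketch of the expiry half of that (paths and naming are assumptions):

```
import os
import time

EXPIRY_SECONDS = 60 * 60  # templates can never outlive one hour

def remove_expired_database_ids(directory):
    now = time.time()
    for fn in os.listdir(directory):
        path = os.path.join(directory, fn)
        if now - os.path.getmtime(path) > EXPIRY_SECONDS:
            # The orphaned template database gets dropped later, once
            # its id no longer belongs to a file.
            os.remove(path)
```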
This follow-up work was added to deal with the potential race
conditions when running `test-backend`. Ensuring that all templates
are properly dealt with.
Essentially rewritten by tabbott for cleanliness.
Fixes the remainder of #12426.
The test-backend parallel test runner system doesn't actually use the
zulip_test database; instead, it creates its own databases off the
zulip_test_template database.
We were accidentally running `tools/generate_fixtures` even when there
are no changes, because this function is shared with the
tools/lib/test_server.py codebase, which needs us to do the work of
creating a test database for it off the zulip_test_template database.
Fixing this saves about 1.5s / 4s of the runtime of a single test.