BACKEND_DATABASE_TEMPLATE was introduced in a507a47778.
This setting is only available to the test suite, and there is no
real need for it to be configurable.
We define it as a global variable in zerver.lib.test_fixtures.
This avoids requiring mypy_django_plugin to know the type of
settings.BACKEND_DATABASE_TEMPLATE for type checking, given that
settings.test_extra_settings is not available in the
production/development setup.
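The change itself is small; roughly this (a sketch, with the value
assumed from Zulip's test-database naming convention):

# zerver/lib/test_fixtures.py

# Previously settings.BACKEND_DATABASE_TEMPLATE. Only the test suite
# needs it, so a module-level constant avoids teaching
# mypy_django_plugin about a setting that exists only in the test
# configuration.
BACKEND_DATABASE_TEMPLATE = "zulip_test_template"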
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
When running some tests multiple times in the same call, they were
failing because of data duplication. This commit resolves that issue
by resetting the test environment (i.e., re-cloning the test database
and clearing the cache) after each run.
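Conceptually, the loop looks like this (a sketch; run_suites and
reset_test_database are illustrative stand-ins for the real
test-backend internals):

from django.core.cache import cache

def run_suites_repeatedly(suites: list, repeat_count: int) -> None:
    for i in range(repeat_count):
        run_suites(suites)  # Illustrative helper.
        if i < repeat_count - 1:
            # Re-clone the test database from its template and flush
            # the cache, so rows written by one run can't collide
            # with identical rows inserted by the next.
            reset_test_database()  # Illustrative helper.
            cache.clear()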
Fixes #17607.
Fixes #2665.
Regenerated by tabbott with `lint --fix` after a rebase and change in
parameters.
Note from tabbott: In a few cases, this converts technical debt in the
form of unsorted imports into different technical debt in the form of
our largest files having very long, ugly import sequences at the
start. I expect this change will increase pressure for us to split
those files, which isn't a bad thing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Automatically generated by the following script, based on the output
of lint with flake8-comma:
import re
import sys

last_filename = None
last_row = None
lines = []

for msg in sys.stdin:
    # Parse lines like "flake8 | <file>:<row>:<col>: <code> ..." out
    # of the colorized lint output.
    m = re.match(
        r"\x1b\[35mflake8 \|\x1b\[0m \x1b\[1;31m(.+):(\d+):(\d+): (\w+)", msg
    )
    if m:
        filename, row_str, col_str, err = m.groups()
        row, col = int(row_str), int(col_str)
        if filename == last_filename:
            # Two edits on one line would invalidate the column numbers.
            assert last_row != row
        else:
            # Moving on to a new file: flush the edits to the previous
            # one and read in the next.
            if last_filename is not None:
                with open(last_filename, "w") as f:
                    f.writelines(lines)
            with open(filename) as f:
                lines = f.readlines()
            last_filename = filename
        last_row = row
        line = lines[row - 1]
        if err in ["C812", "C815"]:
            # Missing trailing comma: insert one at the reported column.
            lines[row - 1] = line[: col - 1] + "," + line[col - 1 :]
        elif err in ["C819"]:
            # Prohibited trailing comma: remove it.
            assert line[col - 2] == ","
            lines[row - 1] = line[: col - 2] + line[col - 1 :].lstrip(" ")

if last_filename is not None:
    with open(last_filename, "w") as f:
        f.writelines(lines)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Generated by pyupgrade --py36-plus --keep-percent-format, but with the
NamedTuple changes reverted (see commit
ba7906a3c6, #15132).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This helps us write the new digest only if the db rebuild succeeds.
We were relying on the caller to be successful in building the db,
which was hacky and unreliable. We now write the new db digest once
the caller succeeds, which ensures that we write a new digest after
every successful attempt.
This fixes the anomaly we were facing where databases were rebuilt
on the 2nd provision attempt with no changes to files or migrations.
This was happening because we didn't write a new digest for the db
after the first provision (the case where the DB didn't exist yet).
During the 1st provision, we check the template_status() of both the
dev and test databases, but database_exists() obviously returned
False, so we rebuilt the databases but forgot to write_new_digest,
hence the anomaly in the second provision explained above.
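In other words, the digest write moves to just after a successful
rebuild (a sketch; the helper names mirror the ones mentioned above,
with their bodies elided):

def rebuild_database_if_needed(database) -> None:
    if database_exists(database) and not template_status(database):
        return  # Up to date; nothing to do.
    rebuild_database(database)  # Raises (or exits) on failure.
    # Only reached on success. Before this fix, the very first
    # provision rebuilt the database here but never recorded a
    # digest, so the second provision saw no digest and rebuilt again.
    write_new_digest(database)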
This ensures that if one deletes `zproject/dev-secrets.conf`, we end
up rebuilding the databases from scratch (which, critically, will
ensure the password that gets set up matches what's in the current
version of the configuration file).
This should address a category of issue we've had where deleting
`zproject/dev-secrets.conf` would result in provision failing.
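One way to get that behavior, consistent with the digest scheme
described below, is simply to include the secrets file among the
hashed inputs (a sketch, not necessarily the exact mechanism used):

# Files whose contents feed the dev-database digest (illustrative).
DEV_DB_CHECK_FILES = [
    "zilencer/management/commands/populate_db.py",
    # Deleting or regenerating the secrets file changes the digest,
    # which forces a rebuild with the current password.
    "zproject/dev-secrets.conf",
]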
We have two different digest schemes to make
sure we keep the database up to date. There
is the migration digest, which is NOT in the
scope of this commit, and which already
used the mechanism we use for other tools.
Here we are talking about the digest for
important files like `populate_db.py`.
Now our scheme is more consistent with how we
check file changes for other tools (as
well as the aforementioned migration files).
And we only write one hash file, instead of
seven.
And we only write the file when things have
actually changed.
And we are explicit about side effects.
Finally, we include a couple new bot settings
in the digest:
    INTERNAL_BOTS
    DISABLED_REALM_INTERNAL_BOTS
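In outline, the new scheme boils down to a single hash over the
relevant file contents plus the settings that affect populate_db
(a sketch; the hashing details are illustrative):

import hashlib

from django.conf import settings

def compute_db_digest(check_files: list) -> str:
    sha = hashlib.sha1()
    for filename in sorted(check_files):
        with open(filename, "rb") as f:
            sha.update(f.read())
    # These settings change what populate_db creates, so they belong
    # in the digest as well.
    sha.update(str(settings.INTERNAL_BOTS).encode())
    sha.update(str(settings.DISABLED_REALM_INTERNAL_BOTS).encode())
    return sha.hexdigest()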
NOTE: This will require a one-time transition,
where we rebuild both databases (dev/test).
It takes a little over two minutes for me,
so it's not super painful.
I bump the provision version here, even
though you don't technically need it (since
the relevant tools are actually using the
digest files to determine if they need to
rebuild the database). I figure it's just
good to explicitly make this commit trigger
a provision, and the user will then see
the one-time migration of the hash files
with a little bit less of a surprise.
And I do a major bump, not a minor bump,
because when we go in the reverse direction,
the old code will have to rebuild the
database due to the legacy hash files not
being around, so, again, I just prefer it
to be explicit.
We remove the `generate_fixtures` option here mostly
for simplicity, but in particular to facilitate
an upcoming commit to simplify the job of
`generate-fixtures` (and remove its `--force` option).
The command line option here for `test-backend`
was really calling `generate_fixtures --force`,
which we're about to rename `tools/rebuild-test-database`.
The `test-backend` tool is already smart about catching
up on migrations, so we generally don't need to tell it
to repair the database.
And if the database does get corrupted, you can just rebuild
it directly with `tools/rebuild-test-database`.
This eliminates the `use_force` flag in
`update_test_databases_if_required`, which was easy
to confuse with `rebuild_test_database`.
The other caller wasn't using `use_force`.
Somewhat confusingly, we have two different types
of digests related to databases. The migration digests
are pragmatic, since changes to migrations are a bit
more frequent for certain use cases and don't
necessitate a complete rebuild of the database.
Anyway, these are just more specific names.
We now have two functions related to digests
for processes:
is_digest_obsolete
write_digest_file
In most cases we now **wait** to write the
digest file until after we've successfully
run a process with its new inputs.
In one place, for database migrations, we
continue to write the digest optimistically.
We'll want to fix this, but it requires a
little more code cleanup.
Here is the typical sequence of events:

    NEVER RUN -
        is_digest_obsolete returns True
        quickly (we don't compute a hash)

        write_digest_file does a write (duh)

    AFTER NO CHANGES -
        is_digest_obsolete returns False
        after reading one file for the old
        hash and multiple files to compute
        the new hash

        most callers skip write_digest_file
        (no files are changed)

    AFTER SOME CHANGES -
        is_digest_obsolete returns True
        after doing full checks

        most callers call write_digest_file
        *after* running a process
I make these all functions for consistency,
and in particular I want to continue to avoid
`glob.glob` calls until we are actually
computing hashes.
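A sketch of the two-function shape (the digest-file location and
hashing details are illustrative):

import glob
import hashlib
import os

def compute_hash(globs: list) -> str:
    sha = hashlib.sha1()
    for pattern in globs:
        # glob.glob is deferred until we truly need a hash.
        for filename in sorted(glob.glob(pattern)):
            with open(filename, "rb") as f:
                sha.update(f.read())
    return sha.hexdigest()

def is_digest_obsolete(hash_name: str, globs: list) -> bool:
    last_hash_path = os.path.join("var", hash_name)
    if not os.path.exists(last_hash_path):
        # Never run: obsolete by definition, and we can say so
        # without globbing or hashing anything.
        return True
    with open(last_hash_path) as f:
        old_hash = f.read()
    return compute_hash(globs) != old_hash

def write_digest_file(hash_name: str, globs: list) -> None:
    with open(os.path.join("var", hash_name), "w") as f:
        f.write(compute_hash(globs))

Callers then read naturally: check is_digest_obsolete, run the
process, and only then call write_digest_file.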
This is mostly a prep to allow us to do
hashing in two separate places:
- check hashes
- update hashes
We would only update hashes **after** running
processes anew.
For `provision_inner` I considered using a
class to put the three path-related helpers
into a mini namespace, but it felt too heavy.
It wouldn't be completely implausible here
to extract something like a JSON config
file that has a list of globs for each
process that we do path-hashing for, but I
want to clean up other stuff first.
We now remove the `Type` and `_TYPE` suffixes,
as we will start treating this like a real
class with behavior, instead of a glorified
struct.
We pass in `platform_type`, so that we can
just derive some of our data from that,
where naming conventions apply.
And we use the name `migrations_status_path`,
instead of the name `migration_status`, which
had two different meanings before this change.
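A sketch of the shape this takes (the derivations are assumptions
based on the naming conventions described above):

import os

UUID_VAR_DIR = "var"  # Illustrative; Zulip derives this per checkout.

class Database:
    def __init__(self, platform: str, database_name: str) -> None:
        self.platform = platform  # "dev" or "test"
        self.database_name = database_name
        # With platform in hand, conventionally named values can be
        # derived rather than passed in.
        self.migrations_status_path = os.path.join(
            UUID_VAR_DIR, f"migrations_status_{platform}"
        )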
This is a pure refactor, and we just early-exit
in case the database doesn't exist (knowing that
that can be a bit of a lie now; see the comment
I added).
I remove `is_force` from `file_or_package_hash_updated`
and modernize its mypy annotations.
If `is_force` is `True`, we now just run the thing
we want to force-run, without having to call
`file_or_package_hash_updated` to expensively
and riskily return `True`.
Another nice outcome of this change is that if
`file_or_package_hash_updated` returns `True`,
you can know that the file or package has
indeed been updated.
For the case of `build_pygments_data` we also
skip an `os.path.exists` check when `is_force`
is `True`.
We will short-circuit more logic in the next
few commits, as well as cleaning up some of
the long, wrapped lines in the `if` statements.
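The caller-side pattern, sketched (the paths and helper wiring are
illustrative):

def maybe_build_pygments_data(is_force: bool) -> None:
    # With is_force hoisted out here, the expensive hash computation
    # runs only when we aren't forcing, and a True return reliably
    # means the inputs actually changed.
    if is_force or file_or_package_hash_updated(
        ["tools/setup/build_pygments_data"], "build_pygments_data"
    ):
        build_pygments_data()  # Illustrative helper.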
Generated by `pyupgrade --py3-plus --keep-percent-format` on all our
Python code except `zthumbor` and `zulip-ec2-configure-interfaces`,
followed by manual indentation fixes.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
1) Created a new class `DatabaseType` and accessed its objects inside
`template_database_status()` instead of sending five arguments with
default values.
2) Made `check_files` and `setting_name` local variables instead of
function parameters, since they had the same value (None) for every
call.
Fixes #13845.
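A sketch of the idea (field names and values are illustrative, not
the exact ones used):

from dataclasses import dataclass

@dataclass(frozen=True)
class DatabaseType:
    title: str
    database_name: str
    settings: str
    migration_status_path: str

DEV = DatabaseType(
    "dev", "zulip", "zproject.settings", "var/migration_status_dev"
)
TEST = DatabaseType(
    "test", "zulip_test", "zproject.test_settings",
    "var/migration_status_test"
)

template_database_status() can then take a single DatabaseType
instead of five defaulted parameters.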
Apparently, the arguments passed to template_database_status were
incorrect for the manual testing development database, in that we
didn't pass a status_dir when calling into that code from provision.
The result was that provisioning before running `test-backend` would
ignore changes to the list of check_files (etc.) made after rebasing,
and vice versa.
The cleanest fix is to compute status_dir from other values passed in;
I'm also going to open a follow-up issue for creating a better overall
interface here.
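Sketched, the fix looks something like this (the exact derivation of
status_dir is an assumption):

import os

UUID_VAR_DIR = "var"  # Illustrative stand-in for Zulip's var directory.

def template_database_status(database_name: str, migration_status: str) -> str:
    # Derive status_dir from values every caller already passes,
    # instead of accepting it as yet another defaulted parameter
    # that provision forgot to supply.
    status_dir = os.path.join(UUID_VAR_DIR, database_name + "_db_status")
    ...  # Digest checks elided.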
This should ensure that folks rebasing past this commit from an older
database model get their database rebuilt in a way that will
match the test_subs.py query count of 40.
For new user onboarding, it's important for it to be easy to verify
that Zulip's mobile push notifications work without jumping through
hoops or potentially making mistakes. For that reason, it makes sense
to toggle the notification defaults for new users to the more
aggressive mode (ignoring whether the user is currently actively
online); they can set the more subtle mode if they find that the
notifications are annoying.
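Assuming the relevant knob is the user's online-aware push setting,
the flip looks something like this (a sketch, not necessarily the
exact field involved):

from django.db import models

class UserProfile(models.Model):
    # Aggressive default: send mobile pushes even while the user is
    # active on another client, so new users can easily verify that
    # notifications work. Users who find this noisy can switch to
    # the subtler online-aware mode.
    enable_online_push_notifications = models.BooleanField(default=True)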