Previously, the send_custom_email code path leaked files in paths that
were not `.gitignored`, under templates/zerver/emails.
This became problematic when we added automated tests for this code
path, as it meant we leaked these files every time `test-backend` ran.
Fix this by ensuring all the files we generate live in a dedicated,
`.gitignored` subdirectory.
The purpose is to provide a way for (non-webapp) clients,
like the mobile and terminal apps, to tell whether the
server they're talking to is new enough to support a given
API feature -- in particular a way that
* is finer-grained than release numbers, so that for
features developed after e.g. 2.1.0 we can use them
immediately on servers deployed from master (like
chat.zulip.org and zulipchat.com) without waiting the
months until a 2.2 release;
* is reliable, unlike e.g. looking at the number of
commits since a release;
* doesn't lead to a growing bag of named feature flags
which the server has to go on sending forever.
Tweaked by tabbott to extend the documentation.
Closes #14618.
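For example, a (non-webapp) client might gate a feature on the
server's feature level roughly as follows. This is a minimal
sketch; it assumes the level is exposed in the unauthenticated
`/api/v1/server_settings` response under a field like
`zulip_feature_level`.

```python
import requests

def server_supports(realm_url: str, required_feature_level: int) -> bool:
    # Ask the server to describe itself; no authentication needed.
    response = requests.get(realm_url + "/api/v1/server_settings")
    response.raise_for_status()
    # Servers predating the feature-level mechanism won't report one;
    # treat them as level 0.
    feature_level = response.json().get("zulip_feature_level", 0)
    return feature_level >= required_feature_level

# Usage (placeholder threshold, not a real feature level):
# if server_supports("https://chat.zulip.org", 25):
#     ...use the newer API shape...
```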
We now have two functions related to digests
for processes:
is_digest_obsolete
write_digest_file
In most cases we now **wait** to write the
digest file until after we've successfully
run a process with its new inputs.
In one place, for database migrations, we
continue to write the digest optimistically.
We'll want to fix this, but it requires a
little more code cleanup.
Here is the typical sequence of events:
NEVER RUN -
is_digest_obsolete returns True
quickly (we don't compute a hash)
write_digest_file does a write (duh)
AFTER NO CHANGES -
is_digest_obsolete returns False
after reading one file for old
hash and multiple files to compute
hash
most callers skip write_digest_file
(no files are changed)
AFTER SOME CHANGES -
is_digest_obsolete returns True
after doing full checks
most callers call write_digest_file
*after* running a process
I make these all functions for consistency,
and in particular I want to continue to avoid
`glob.glob` calls until we are actually
computing hashes.
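A minimal, self-contained sketch of the pattern this enables; the
two function names match the ones above, but the bodies and the
helper are illustrative rather than the actual provision code:

```python
import glob
import hashlib
import os
from typing import Callable, List

def compute_hash(globs: List[str]) -> str:
    # Only here do we expand globs and read file contents.
    sha = hashlib.sha1()
    for pattern in globs:
        for path in sorted(glob.glob(pattern)):
            with open(path, "rb") as f:
                sha.update(f.read())
    return sha.hexdigest()

def is_digest_obsolete(digest_path: str, globs: List[str]) -> bool:
    if not os.path.exists(digest_path):
        # NEVER RUN: cheap early exit, no hashing at all.
        return True
    with open(digest_path) as f:
        old_hash = f.read().strip()
    return old_hash != compute_hash(globs)

def write_digest_file(digest_path: str, globs: List[str]) -> None:
    with open(digest_path, "w") as f:
        f.write(compute_hash(globs))

def maybe_run(digest_path: str, globs: List[str], func: Callable[[], None]) -> None:
    if not is_digest_obsolete(digest_path, globs):
        return  # AFTER NO CHANGES: skip the process and the write.
    func()  # run the process with its new inputs first...
    write_digest_file(digest_path, globs)  # ...then record success.
```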
This is mostly a prep to allow us to do
hashing in two separate places:
- check hashes
- update hashes
We would only update hashes **after** running
processes anew.
For `provision_inner` I considered using a
class to put the three path-related helpers
into a mini namespace, but it felt too heavy.
It wouldn't be completely implausible here
to extract something like a JSON config
file that has a list of globs for each
process that we do path-hashing for, but I
want to clean up other stuff first.
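If we did extract such a config, it might look roughly like the
following; this is purely hypothetical, shown as a Python literal,
and none of the step names or globs are meant to be the real
provision inputs:

```python
# Hypothetical: map each provision step we path-hash to the globs
# whose contents feed its digest.
PATH_HASH_CONFIG = {
    "build_pygments_data": ["tools/setup/*.json"],
    "migrations": ["*/migrations/*.py"],
}
```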
We now remove the `Type` and `_TYPE` suffixes,
as we will start treating this like a real
class with behavior, instead of a glorified
struct.
We pass in `platform_type`, so that we can
just derive some of our data from that,
where naming conventions apply.
And we use the name `migrations_status_path`,
instead of the name `migration_status`, which
had two different meanings before this change.
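A rough sketch of the direction this points in; the class shape and
path convention below are illustrative, and only `platform_type` and
`migrations_status_path` are names from this change:

```python
import os

class Database:
    # Illustrative: derive per-platform data from platform_type via a
    # naming convention instead of passing every path in separately.
    def __init__(self, platform_type: str, var_dir: str) -> None:
        self.platform_type = platform_type
        self.migrations_status_path = os.path.join(
            var_dir, f"migration_status_{platform_type}"
        )

    def migrations_status_exists(self) -> bool:
        # The sort of behavior a "real class" grows over time.
        return os.path.exists(self.migrations_status_path)
```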
This is a pure refactor, and we just early-exit
in case the database doesn't exist (knowing that
that can be a bit of a lie now; see the comment
I added).
Refactored code in actions.py and streams.py to move stream-related
functions into streams.py and remove the dependency on actions.py.
The validate_sender_can_write_to_stream function in actions.py was
renamed to access_stream_for_send_message and moved to streams.py.
This test was passing a string, not an Iterable[str]; a quirk in
the remove_alert_words implementation meant that processing each
character of the string individually happened to work.
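The underlying pitfall, for illustration: a `str` is itself an
`Iterable[str]` over its characters, so passing a bare string
satisfies the annotation without doing what the test intended. (The
quirk that let the real test pass is specific to the actual
remove_alert_words implementation; this sketch only shows the type
confusion.)

```python
from typing import Iterable, List

def remove_words(user_words: List[str], words_to_remove: Iterable[str]) -> List[str]:
    # Simplified stand-in: drop any word listed for removal.
    to_remove = set(words_to_remove)
    return [w for w in user_words if w not in to_remove]

# Intended call: pass a list of words.
remove_words(["alertword", "other"], ["alertword"])  # ["other"]

# Buggy call: the string is iterated character by character
# ("a", "l", "e", ...), so nothing matches and nothing is removed.
remove_words(["alertword", "other"], "alertword")    # ["alertword", "other"]
```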
Our OpenAPI data had descriptive response codes for endpoints with
multiple responses for the same status code, but that does not
comply with the OpenAPI specification. So change descriptive
response codes like "400_auth" and "400_anauth" to "400_0" and
"400_1" for all such endpoints. Also make the necessary changes in
openapi.py so that it can read the schema and generate examples in
such cases.
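Conceptually, the reading side only needs to strip the numeric
suffix to recover the real status code. A minimal sketch, not the
actual openapi.py code:

```python
def actual_status_code(response_key: str) -> str:
    # "400_0" and "400_1" both describe HTTP 400; a plain "200" key
    # passes through unchanged.
    return response_key.split("_", 1)[0]

assert actual_status_code("400_0") == "400"
assert actual_status_code("400_1") == "400"
assert actual_status_code("200") == "200"
```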
zulip.yaml is not in compliance with the OpenAPI specification.
Edit it so that it passes verification as an OpenAPI specification
file.
Fixes #14582.
We do limit the length to 100 in the frontend, but no such check
exists in the backend.
Add a check in the alert_words endpoint that a new alert word
is at most 100 characters long.
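A minimal sketch of the kind of backend check this adds; the exact
validation mechanism and error message in the endpoint may differ:

```python
MAX_ALERT_WORD_LENGTH = 100  # mirrors the existing frontend limit

def check_alert_word_length(word: str) -> None:
    # Reject overly long alert words before they are stored.
    if len(word) > MAX_ALERT_WORD_LENGTH:
        raise ValueError(
            f"alert word is longer than {MAX_ALERT_WORD_LENGTH} characters"
        )
```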
I remove `is_force` from `file_or_package_hash_updated`
and modernize its mypy annotations.
If `is_force` is `True`, we now just run the thing
we want to force-run, without having to call
`file_or_package_hash_updated` to expensively
and riskily return `True`.
Another nice outcome of this change is that if
`file_or_package_hash_updated` returns `True`,
you can know that the file or package has
indeed been updated.
For the case of `build_pygments_data` we also
skip an `os.path.exists` check when `is_force`
is `True`.
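The caller-side effect, roughly; this is a sketch of the before and
after shape using names from this commit, not the exact provision
code:

```python
# Before: force runs were funneled through the hash check, which
# would expensively compute hashes only to return True anyway.
#
#   if file_or_package_hash_updated(paths, hash_name, is_force):
#       build_pygments_data()

# After: callers short-circuit on is_force, so hashing only happens
# when we genuinely need to decide whether anything changed.
if is_force or file_or_package_hash_updated(paths, hash_name):
    build_pygments_data()
```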
We will short-circuit more logic in the next
few commits, and clean up some of the long,
wrapped lines in the `if` statements.