With the S3 file upload backend, we don't store uploads locally, so
the `uploads` directory in the backup will be empty, and more
importantly, LOCAL_UPLOADS_DIR will be None, which the previous code
crashed on.
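Roughly, the fix amounts to a guard like this (the helper and directory handling below are illustrative, not the actual backup code):

    import os
    import shutil

    # Hypothetical helper: with the S3 backend, LOCAL_UPLOADS_DIR is None,
    # so we skip copying uploads instead of crashing on a None path.
    def copy_uploads_to_backup(local_uploads_dir, backup_dir):
        if local_uploads_dir is None:
            # S3 backend: uploads live in S3, nothing to copy locally.
            return
        shutil.copytree(local_uploads_dir, os.path.join(backup_dir, "uploads"))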
N = self.parallel template databases are created, and these were previously
named 'zulip_test_template_<1, N>'. However, to support running multiple
instances of `test-backend`, a unique `random_id_range_start` was created for
each template database.
There was no problem before, because the templates would simply be reused and
thus did not require any cleanup. Now that unique database names are created
on every run of `test-backend`, these templates can accumulate on disk. To
prevent this, we clean up our templates at the end of every complete run of
the test suite, or upon a SIGINT.
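A minimal sketch of what this cleanup can look like (the helper names and the exact dropdb invocation are illustrative, not the real test runner code):

    import atexit
    import signal
    import subprocess
    import sys

    # Illustrative cleanup: drop the uniquely named template databases
    # created for this run, both on normal exit and on SIGINT.
    def cleanup_test_template_databases(random_id_range_start, num_templates):
        for i in range(num_templates):
            database_name = f"zulip_test_template_{random_id_range_start + i}"
            subprocess.call(["dropdb", "--if-exists", database_name])

    def install_cleanup_hooks(random_id_range_start, num_templates):
        atexit.register(
            cleanup_test_template_databases, random_id_range_start, num_templates
        )

        def on_sigint(signum, frame):
            sys.exit(1)  # sys.exit triggers the atexit handler above

        signal.signal(signal.SIGINT, on_sigint)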
Fixes: #12426
This validation is incomplete, in large part because of the long list
of TODOs in this code. But this test should provide a ton of support
for us in avoiding regressions as we work towards having complete API
documentation.
See https://github.com/zulip/zulip/issues/12521 for a bunch of
follow-up improvements.
We add the following behavior:
* If a stream has message_retention_days set to -1, archiving for it is
  disabled.
* If a stream has message_retention_days set to null, we use the realm's
  policy. If the realm has no policy, we don't archive messages for that
  stream.
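A sketch of the resolution order this implies (attribute access is simplified; the real retention code may structure it differently):

    # Illustrative resolution of the effective retention policy for a stream;
    # returning None means "don't archive messages for this stream".
    def effective_retention_days(stream, realm):
        if stream.message_retention_days == -1:
            return None  # archiving explicitly disabled for this stream
        if stream.message_retention_days is not None:
            return stream.message_retention_days
        # Fall back to the realm-wide policy (which may itself be None).
        return realm.message_retention_days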
UserMessages no longer need special handling; they can be archived by
move_models_with_message_key_to_archive and, like the other models with a
message key, automatically cleaned up via CASCADING=True.
We change the archiving scheme to support stream-based retention policies.
In the first step of the archiving process, we loop over
streams and archive their expired messages and related objects.
Then we separately archive all expired personal and huddle messages and
related objects. As the last step, we scan for redundant attachments
which can now be deleted.
To achieve this, we have to rewrite a significant portion of the
retention code and rework some of the database queries.
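At a high level, the new flow looks roughly like the sketch below (the callables passed in are placeholders for the real, differently named retention helpers):

    # Rough outline of the archiving pass described above; the real retention
    # code handles batching, transactions, and archive bookkeeping.
    def archive_expired_data(realm, streams, archive_stream_messages,
                             archive_personal_and_huddle_messages,
                             delete_expired_attachments):
        # Step 1: per-stream retention policies.
        for stream in streams:
            archive_stream_messages(realm, stream)
        # Step 2: personal and huddle messages, governed by the realm policy.
        archive_personal_and_huddle_messages(realm)
        # Step 3: attachments no longer referenced by any live message.
        delete_expired_attachments(realm)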
For the sake of simplicity, we neither archive nor delete cross-realm
messages, with the exception of cross-realm stream messages, which can be
processed in the same manner as ordinary stream messages. In the query for
archiving personal and huddle messages, we simply exclude those sent by
cross-realm bots.
We change the tests to adapt to these modifications.
Since we archive attachments and attachment_messages tied to the list of ids
of Messages we just archived (and thus belonging to the current realm), it's
unnecessary to check their realm in those queries. This could potentially
archive an attachment whose realm_id belongs to another realm, but that isn't
an issue as long as we make sure we never incorrectly delete the original
Attachment object; a realm_id check is included in
delete_expired_attachments() to ensure that.
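A simplified sketch of the kind of check this refers to, assuming the Attachment model's realm and messages relations (the real query is more involved):

    from zerver.models import Attachment

    # Simplified illustration: only delete Attachment rows belonging to this
    # realm that no longer have any live Message referencing them.
    def delete_expired_attachments(realm):
        Attachment.objects.filter(
            realm_id=realm.id,
            messages__isnull=True,
        ).delete()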
API changes:
* The behaviour of Date.toLocaleTimeString() reverts to its pre-8.0.0
  behaviour; this only affects automated tests. There are lots of other
  API changes, but we didn't use any of those.
* The internal sorting algorithm changed, which causes one of our own
  compare functions to miss coverage.
Due to additional nesting added in reactions.scss, night-mode styles ended
up with lower priority than the originally defined rules.
Fixes regressions introduced by changes in PR #12473
Our priority hierarchy is:
(1) Tornado and base services like memcached, redis, etc.
(2) Django and message sender queue workers.
(3) Everything else.
Ideally, we'd have something a bit more fine-grained (e.g. some queue
workers are potentially in the sending path, while others aren't), but
this should have a big impact on ensuring Tornado gets the resources
it needs during load spikes.
I think there's a good chance that some load spikes which would previously
have resulted in user-facing delivery delays will no longer have any
significant user-facing impact.
That we are working to fix the caveats is implied by the (beta) label.
More generally, for /help articles, explanations, apologies, etc. can go in
a section at the top, but the rest of the text should be a straightforward
description of the current state.
We're not sure this feature is the best solution to this category of
problem, in that use of this feature might cause spam to stick around
longer, vs features that encourage immediate deletion.
This provides a better entrypoint for developers to learn about
internationalization in Zulip without cluttering the article for
translators.
I also took the opportunity to add a proper for-developers
introduction, including a link to the very nice EdX guide on the
topic.
Profiling shows that using cache-loader saves ~6-7 seconds of the time taken
by webpack-dev-server on subsequent runs. The overhead this adds when nothing
is cached (i.e. on the first run) is around 1-2 seconds. We don't use
cache-loader for ts-loader, since the webpack docs say it will slow it down,
nor for file-loader, since it just copies files over and caching that would
just waste disk space.
This is the second merge of this commit. It fixes the issue with the
previous one by placing cache-loader after mini-css-loader: the latter just
extracts CSS, and caching its output would make file-loader not run, which
in turn breaks the development environment.
This makes it a lot more useful for understanding how our flag update
endpoints work.
With significant edits by tabbott to explain what these are.
Fixes #12092.
Previously, we didn't have validation to prevent editing certain flags
that don't make sense for a client to edit, like whether a user was
mentioned in a given message.
This isn't a security issue -- the user could only mess up their own
personal search results (etc.), but it does seem worth fixing to avoid
confusion for folks developing Zulip clients.
While we're at it, clearly document the situation in comments.
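A minimal sketch of the kind of validation added (the exact set of client-editable flags is defined by the endpoint, not this snippet):

    # Illustrative only: flags like "mentioned" or "has_alert_word" are
    # derived from message content, so a client should not be able to toggle
    # them directly.
    CLIENT_EDITABLE_FLAGS = {"read", "starred", "collapsed"}

    def validate_editable_flag(flag: str) -> None:
        if flag not in CLIENT_EDITABLE_FLAGS:
            raise ValueError(f"Invalid flag: '{flag}'")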
This adds a setting to control Zulip's default behavior of sorting inactive
streams to the bottom and graying them out. The previous logic is still the
default, "automatic", but this gives users more control. See the models.py
comment for details.
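As a rough illustration of the setting's shape (the constants and the "automatic" heuristic below are placeholders; models.py has the real definition):

    # Placeholder constants for the three policies; the real values live on
    # the UserProfile model.
    DEMOTE_STREAMS_AUTOMATIC = 1
    DEMOTE_STREAMS_ALWAYS = 2
    DEMOTE_STREAMS_NEVER = 3

    def should_demote_inactive_streams(setting: int, automatic_default: bool) -> bool:
        if setting == DEMOTE_STREAMS_ALWAYS:
            return True
        if setting == DEMOTE_STREAMS_NEVER:
            return False
        # "Automatic" keeps the previous behavior; the actual heuristic is
        # documented in the models.py comment.
        return automatic_default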
Fixes #11524.
We were apparently reusing the path for both the development and test
databases, which meant that we would not always correctly run
`generate_fixtures` when changes were required.
This was a recent regression introduced when we added this cache a few
days ago.
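One way to avoid this kind of confusion is to track the status separately per database, roughly along these lines (paths and helper names are illustrative, not the actual provisioning code):

    import os

    # Illustrative: give each database its own status file so the development
    # and test checks cannot clobber each other.
    def migration_status_path(database_name: str) -> str:
        return os.path.join("var", f"migration_status_{database_name}")

    def fixtures_need_regeneration(database_name: str, current_status: str) -> bool:
        path = migration_status_path(database_name)
        if not os.path.exists(path):
            return True
        with open(path) as f:
            return f.read() != current_status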