We had two duplicate functions for archiving zerver_attachment_messages
rows, both doing the same thing: archiving by message_id. One of them
also had a redundant INNER JOIN, so we get rid of that as well.
Since we loop over realms in the functions for archiving stream messages
and then personal+huddle messages, and we also want to split cleaning up
attachments by realm, it makes sense to do it all in a single loop.
Rename the notification property `enable_stream_sounds` to
`enable_stream_audible_notifications` to match the naming pattern of
the other notification properties.
Fixes part of #12304
We batch the queries that archive Messages, to limit the maximum number
of Message objects archived in a single query. This leads to the
archiving of other related objects being batched as well, because we
loop over chunks of archived messages and archive their related objects
per chunk.
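A minimal sketch of this chunking pattern, with the queries stubbed out; the helper names here are illustrative, not the real retention functions:

```python
# Illustrative sketch of per-chunk archiving; helper names are
# hypothetical stand-ins for the real retention code.
MESSAGE_BATCH_SIZE = 1000

def archive_message_chunk(realm_id: int, chunk_size: int) -> list:
    """Archive up to chunk_size expired messages; return their ids (stub)."""
    return []  # the real version runs a LIMITed INSERT ... SELECT query

def archive_related_objects(message_ids: list) -> None:
    """Archive related rows (Reactions, UserMessages, ...) for these ids (stub)."""

def archive_expired_messages(realm_id: int) -> None:
    while True:
        chunk_ids = archive_message_chunk(realm_id, MESSAGE_BATCH_SIZE)
        if not chunk_ids:
            break
        # Related objects are batched implicitly: each pass only touches
        # the ids archived in the current chunk.
        archive_related_objects(chunk_ids)
```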
N = self.parallel templates are created, and these templates were
previously named 'zulip_test_template_<1..N>'. However, to support
running multiple instances of `test-backend`, a unique
`random_id_range_start` is now chosen for each template database.
Previously this was not a problem, because the templates would simply
be reused and thus did not require any cleanup. Now that unique
database names are created on every `test-backend` run, these templates
can accumulate on disk. Instead, we clean up our templates at the end
of every complete run of the test suite, or upon a SIGINT.
Fixes: #12426
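A hedged sketch of the cleanup described above: drop every uniquely named template database on SIGINT. The database-name pattern follows the description; the psql invocation and function names are assumptions.

```python
# Drop the uniquely named template databases on SIGINT; names and the
# psql invocation are illustrative assumptions.
import signal
import subprocess
import sys

def drop_template_databases(random_id_range_start: int, num_templates: int) -> None:
    for i in range(num_templates):
        db_name = f"zulip_test_template_{random_id_range_start + i}"
        subprocess.call(["psql", "-c", f"DROP DATABASE IF EXISTS {db_name};"])

def install_cleanup_handler(random_id_range_start: int, num_templates: int) -> None:
    def on_sigint(signum, frame):
        drop_template_databases(random_id_range_start, num_templates)
        sys.exit(1)
    signal.signal(signal.SIGINT, on_sigint)
```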
This validation is incomplete, in large part because of the long list
of TODOs in this code. But this test should provide a ton of support
for us in avoiding regressions as we work towards having complete API
documentation.
See https://github.com/zulip/zulip/issues/12521 for a bunch of
follow-up improvements.
We add the following behavior:
If a stream has message_retention_days set to -1, archiving for it is
disabled.
If a stream has message_retention_days set to null, we use the realm's
policy. If the realm has no policy either, we don't archive for this
stream.
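A minimal sketch of that resolution logic, as a hypothetical helper; the -1 and null semantics follow the text above.

```python
# Resolve the effective retention policy for a stream; None means
# "don't archive". The helper name is hypothetical.
from typing import Optional

def effective_retention_days(stream_days: Optional[int],
                             realm_days: Optional[int]) -> Optional[int]:
    if stream_days == -1:
        return None  # archiving explicitly disabled for this stream
    if stream_days is not None:
        return stream_days
    # null on the stream falls back to the realm's policy; if the realm
    # has none either, we don't archive this stream's messages.
    return realm_days
```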
UserMessages no longer need special handling; they can be archived by
move_models_with_message_key_to_archive and automatically cleaned up
like the other models with a message key with CASCADING=True.
We change the archiving scheme to allow stream-based retention
policies. In the first step of the archiving process, we loop over
streams and archive their expired messages and related objects.
Then we separately archive all expired personal and huddle messages and
related objects. As the last step, we scan for redundant attachments
which can now be deleted.
To achieve this, we have to rewrite a significant portion of the
retention code and rework some of the database queries.
For the sake of simplicity, we neither archive nor delete cross-realm
messages, except for cross-realm stream messages, which can be
processed in the same manner as ordinary stream messages. In the query
for archiving personal and huddle messages, we simply exclude those
sent by cross-realm bots.
We change the tests to adapt to these modifications.
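An illustrative outline of the three-step scheme; the three archive_* helpers are hypothetical stand-ins for the reworked retention code, stubbed here so the sketch runs.

```python
# Outline of the three-step archiving scheme; the helpers are stubs.
def archive_expired_stream_messages(stream) -> None: ...  # stub

def archive_expired_personal_and_huddle_messages(realm) -> None: ...  # stub

def delete_redundant_attachments() -> None: ...  # stub

def archive_expired_data(streams: list, realms: list) -> None:
    # Step 1: per-stream policies for expired stream messages.
    for stream in streams:
        archive_expired_stream_messages(stream)
    # Step 2: expired personal and huddle messages, per realm,
    # excluding those sent by cross-realm bots.
    for realm in realms:
        archive_expired_personal_and_huddle_messages(realm)
    # Step 3: scan for attachments that can now be deleted.
    delete_redundant_attachments()
```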
Since we archive attachments and attachment_messages tied to a list of
ids of Messages that we just archived (and thus from the current
realm), it's unnecessary to check their realm in the queries. This
could potentially archive an attachment whose realm_id belongs to
another realm, but that isn't an issue as long as we make sure we don't
end up deleting the original Attachment object incorrectly - the
realm_id check in delete_expired_attachments() ensures that.
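A hedged sketch of that realm check; the ORM query shape is an assumption, not the exact implementation.

```python
# Only delete Attachment rows that belong to this realm and are no
# longer referenced by any live Message; query shape is assumed.
from zerver.models import Attachment  # import shape assumed

def delete_expired_attachments(realm) -> None:
    Attachment.objects.filter(
        messages__isnull=True,
        realm_id=realm.id,
    ).delete()
```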
This makes it a lot more useful for understanding how our flag update
endpoints work.
With significant edits by tabbott to explain what these are.
Fixes #12092.
Previously, we didn't have validation to prevent editing certain flags
that don't make sense for a client to edit, like whether a user was
mentioned in a given message.
This isn't a security issue -- the user could only mess up their own
personal search results (etc.), but it does seem worth fixing to avoid
confusion for folks developing Zulip clients.
While we're at it, clearly document the situation in comments.
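A minimal sketch of the kind of validation described: reject flags a client should not edit. The flag names are real message flags, but this helper itself is illustrative.

```python
# Reject edits to flags that only the server should set; the exact
# partition of flags here is an assumption for illustration.
EDITABLE_FLAGS = {"read", "starred", "collapsed"}
NON_EDITABLE_FLAGS = {"mentioned", "wildcard_mentioned", "has_alert_word"}

def check_flag_editable(flag: str) -> None:
    if flag in NON_EDITABLE_FLAGS:
        raise ValueError(f"Flag not editable via this API: {flag}")
    if flag not in EDITABLE_FLAGS:
        raise ValueError(f"Invalid flag: {flag}")
```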
This adds a setting to control Zulip's default behavior of sorting to
bottom and graying out inactive streams. The previous logic is still
the default "automatic", but this gives users more control. See the
models.py comment for details.
Fixes #11524.
We were apparently reusing the path for both the development and test
databases, which meant that we would not always correctly run
`generate_fixtures` when changes were required.
This was a recent regression introduced when we added this cache a few
days ago.
We add RETURNING to fetch relevant message and usermessage ids in
archiving queries and use them to make other queries faster and simpler.
A side-effect of this implementation is that with cross-realm messages,
the UserMessage of the recipient and the Message will not be deleted -
but cross-realm messages are rare, will still get correctly put in the
archive tables and so failing to delete should not be a problem for now.
They will be fully handled later.
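A hedged sketch of the RETURNING pattern: the INSERT that copies rows into the archive table also hands back the archived ids, which the follow-up queries can then use directly. The column lists are simplified from the real schema.

```python
# INSERT ... SELECT ... RETURNING in PostgreSQL: archive expired rows
# and collect their ids in one round trip. Columns are simplified.
from django.db import connection

def archive_expired_messages(check_date) -> list:
    query = """
        INSERT INTO zerver_archivedmessage (id, sender_id, content, pub_date)
        SELECT id, sender_id, content, pub_date
        FROM zerver_message
        WHERE zerver_message.pub_date < %s
        RETURNING id
    """
    with connection.cursor() as cursor:
        cursor.execute(query, [check_date])
        return [row[0] for row in cursor.fetchall()]
```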
zerver_archivedmessage is already INNER JOIN-ed earlier in the query,
so we check pub_date on it instead of joining zerver_message, which
would just redundantly join the corresponding rows.
The lxml parser appends html and body tags to the soup object, which
are not required. There are no other major parsing differences between
the two parsers, as long as the HTML input is perfectly formatted. The
lxml parser is much faster than html.parser, but that hardly matters in
our case.
https://www.crummy.com/software/BeautifulSoup/bs4/doc/#differences-between-parsers
In addition to the "+show-sender" option, we now add "+include-footers",
which disables stripping of the footer from the email body if this
token is included in the email address.
To enable a comfortable way of adding more optional tokens to the
address (like the current '+show-sender'), we change
decode_email_address to return a general dictionary containing the
options specified through these optional tokens in the To: address. For
now, we only have "+show-sender", but more can easily be added using
this change.
The RealmAuditLog object ID was stored in the event sent to the
deferred_work queue as a means to update the row's extra_data field.
The extra_data field then stores the location of the export.
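An illustrative shape for that queue event; the field names are assumptions based on the text above, while queue_json_publish is Zulip's existing queue helper.

```python
# Publish a deferred_work event carrying the RealmAuditLog row id;
# event field names are assumed for illustration.
from zerver.lib.queue import queue_json_publish

def queue_realm_export(user_profile_id: int, audit_log_id: int) -> None:
    event = {
        "type": "realm_export",
        "user_profile_id": user_profile_id,
        # The RealmAuditLog row id travels with the event so the worker
        # can later write the export's location into extra_data.
        "id": audit_log_id,
    }
    queue_json_publish("deferred_work", event)
```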
Instead of running `what_to_do_with_migrations` unconditionally, we
first hash and compare the files located in `*/migrations/*`. Only if
a migration file has changed (or the hash file does not exist yet) do we
call `what_to_do_with_migrations`.
It was discovered that the call to Django's `showmigrations.py` file was
causing roughly a 500ms increase in `test-backend`'s start up time.
However, this fix only saves about 100ms, apparently because a lot of
that work was importing Django dependencies we need for most tests
anyway.
Fixes: #12428.
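A hedged sketch of the hash-and-compare step described above; the hash-file path and glob pattern are illustrative.

```python
# Hash all migration files and compare against the last run's hash;
# only fall through to the slow check when something changed.
import glob
import hashlib
import os

HASH_FILE = "var/test_db_status/migrations_hash"  # path is illustrative

def compute_migrations_hash() -> str:
    sha = hashlib.sha1()
    for path in sorted(glob.glob("*/migrations/*.py")):
        with open(path, "rb") as f:
            sha.update(path.encode("utf-8"))
            sha.update(f.read())
    return sha.hexdigest()

def migrations_changed() -> bool:
    new_hash = compute_migrations_hash()
    if os.path.exists(HASH_FILE):
        with open(HASH_FILE) as f:
            if f.read().strip() == new_hash:
                # Hashes match: skip calling what_to_do_with_migrations.
                return False
    os.makedirs(os.path.dirname(HASH_FILE), exist_ok=True)
    with open(HASH_FILE, "w") as f:
        f.write(new_hash)
    return True
```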
The payload for when a build is cancelled was causing an error
because the build result code mapping was missing one of the
codes. This commit also fixes a minor typo in the result codes.
Ensure that the html is safe before using it. The html is considered
safe if it is an iframe with an http/https src, based on the
recommendations here: https://oembed.com/#section3
We directly embed the `iframe` html into the lightbox overlay.
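A minimal sketch of that safety check, assuming the oEmbed payload's html is parsed with BeautifulSoup; per the oEmbed spec we only trust html wrapped in an iframe with an http/https src.

```python
# Accept oEmbed html only if it is an iframe with an http/https src;
# the helper name is illustrative.
from bs4 import BeautifulSoup

def is_safe_oembed_html(html: str) -> bool:
    soup = BeautifulSoup(html, "html.parser")
    iframe = soup.find("iframe")
    if iframe is None or not iframe.get("src"):
        return False
    return iframe["src"].startswith(("http://", "https://"))
```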
We add general code that will archive models that are tied to a
specific Message (such as Reaction and SubMessage). The details needed
for each model are taken from the models_with_message_key list and used
to generate the queries that archive those database tables.
We put Reaction in that list in this commit, and add appropriate tests.
To archive other analogous models (for example SubMessage), one only
needs to add an appropriate entry to the models_with_message_key list.
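An illustrative entry shape for that list; the exact keys are assumptions, but the idea matches the commit: enough per-model detail to generate the archive queries.

```python
# One dictionary per archivable model with a message key; key names
# here are assumptions for illustration.
from zerver.models import ArchivedReaction, Reaction  # import shape assumed

models_with_message_key = [
    {
        "class": Reaction,
        "archive_class": ArchivedReaction,
        "table_name": "zerver_reaction",
        "archive_table_name": "zerver_archivedreaction",
    },
    # Archiving SubMessage (or similar) later only needs a new entry here.
]
```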