Namely, here we add the "plan_includes_wide_organization_logo" and
"upgrade_text_for_wide_organization_logo" to the page_params (which
is set in zerver/lib/events.py).
"plan_includes_wide_organization_logo" is True if the plan is not of
the Realm.LIMITED type. We need to add this extra boolean parameter
instead of just using "realm_plan_type" to make things a lot easier
to work with on the frontend side, especially considering that
handlebars won't allow checking for equality in its {{#if}} blocks.
When a realm's plan type is updated using "do_change_plan_type" we
notify active users of the realm. This way, certain plan features
can be enabled instantaneously for active users.
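A minimal sketch of the new boolean, assuming the Realm.LIMITED constant named above; the exact wiring in zerver/lib/events.py may differ:

```python
from zerver.models import Realm

def plan_page_params(realm: Realm) -> dict:
    # Boolean mirror of realm_plan_type, so Handlebars {{#if}} blocks
    # can test it directly without an equality check.
    return {
        "plan_includes_wide_organization_logo": realm.plan_type != Realm.LIMITED,
    }
```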
A function was written in `test_fixtures.py` to drop a test database
template if the corresponding database id doesn't have a file
associated with it. In addition, every file that is written is removed
after 60 minutes, so no leaked database template can survive longer
than one hour.
This follow-up work deals with the potential race conditions when
running `test-backend`, ensuring that all templates are properly
cleaned up.
Essentially rewritten by tabbott for cleanliness.
Fixes the remainder of #12426.
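A hedged sketch of the stale-template cleanup described above; the id-file directory layout and database naming are assumptions for illustration:

```python
import glob
import os
import subprocess
import time

TEMPLATE_ID_DIR = "/tmp/zulip-test-backend"  # hypothetical location

def drop_stale_test_templates(max_age: float = 60 * 60) -> None:
    # Any id file older than an hour marks a leaked template database.
    for path in glob.glob(os.path.join(TEMPLATE_ID_DIR, "*")):
        if time.time() - os.path.getmtime(path) > max_age:
            database_id = os.path.basename(path)
            subprocess.check_call(
                ["psql", "-c",
                 "DROP DATABASE IF EXISTS zulip_test_template_%s;" % (database_id,)]
            )
            os.remove(path)
```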
The ids that will be used for each particular run of the test suite are
written to a unique file. Each file is then used as a time reference
for when the suite was run.
This change sets up the ability for a complete clean up of potentially
leaked database templates.
Tweaked by tabbott to remove these files after successful database
cleanup.
When running the test-backend suite in serial mode, `destroy_test_db`
double appends the database id number to the template if passed an
argument for `number`. The comment here explains this behavior.
This fixes an issue that caused LDAP synchronization to fail for
avatars. The problem occurred due to the lack of a 'name' attribute
on the BytesIO object that we pass to the upload backend (which is
only used in the S3 backend for computing Content-Type).
Fixes #12411.
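A minimal sketch of the fix; the filename is arbitrary, since only its extension is consulted when inferring a Content-Type:

```python
from io import BytesIO

def wrap_ldap_avatar(data: bytes) -> BytesIO:
    # Give the in-memory file a 'name' attribute so the S3 upload
    # backend can compute a Content-Type from it.
    avatar_file = BytesIO(data)
    avatar_file.name = "avatar.png"
    return avatar_file
```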
Rather than relying on the CASCADING property of the ForeignKey to the
Message table to clean up these objects, we delete them in the same
query as we archive them - since it's guaranteed that any of these
objects that we archive will be deleted due to their Message being
deleted later.
We don't have this guarantee for Attachment objects, which is why we
can't apply this scheme to them.
To ensure the database retains a consistent state if archiving gets
interrupted, we process each Messages chunk together with related
objects in a single atomic transaction.
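A sketch of the per-chunk transaction, with hypothetical helper names standing in for the real retention queries:

```python
from django.db import transaction

def archive_chunk(realm, chunk_size: int = 1000) -> None:
    # If archiving is interrupted, the whole chunk rolls back, so the
    # database never holds a half-archived chunk.
    with transaction.atomic():
        message_ids = move_expired_messages_to_archive(realm, chunk_size)  # hypothetical
        move_related_objects_to_archive(message_ids)    # hypothetical
        delete_archived_related_objects(message_ids)    # hypothetical
```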
We had two duplicate functions for archiving zerver_attachment_messages
rows, doing the same thing - archiving by message_id. One of them had a
redundant INNER JOIN, so we get rid of that too.
Since we loop over realms in the functions for archiving stream messages
and then personal+huddle messages, and also want to split cleaning up
attachments by realm - it makes sense to do it all in one single loop.
Rename notification property `enable_stream_sounds` to
`enable_stream_audible_notifications` to match with other
notification property patterns.
Fixes part of #12304
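The rename likely reduces to a standard Django RenameField migration; a sketch, with the migration dependency left as a placeholder:

```python
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [("zerver", "XXXX_previous_migration")]  # placeholder

    operations = [
        migrations.RenameField(
            model_name="userprofile",
            old_name="enable_stream_sounds",
            new_name="enable_stream_audible_notifications",
        ),
    ]
```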
We batch queries that archive Messages, to limit the maximum amount of
Message objects archived in a single query. This leads to the archiving
of other related objects being batched as well, because we loop over
chunks of archived messages and archive their related objects per-chunk.
N = self.parallel templates are created, and these templates were
previously named 'zulip_test_template_<i>' for i in 1..N. However, to support
running multiple instances of `test-backend`, a unique
`random_id_range_start` was created for each template database.
There was no problem prior because the templates would simply be
used again and thus did not require any clean up. Now that there are
unique database names being created, every time `test-backend` is run
these templates can accumulate on disk. Instead, we clean up our
templates at the end of every complete run of the test suite, or upon a
SIGINT.
Fixes: #12426
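A sketch of hooking the cleanup into both exit paths; the cleanup callable is a hypothetical stand-in for the real helper:

```python
import atexit
import signal
import sys

def install_template_cleanup(cleanup) -> None:
    # atexit runs on normal interpreter exit; sys.exit() in the SIGINT
    # handler routes Ctrl-C through the same atexit hook.
    atexit.register(cleanup)
    signal.signal(signal.SIGINT, lambda signum, frame: sys.exit(1))
```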
This validation is incomplete, in large part because of the long list
of TODOs in this code. But this test should provide a ton of support
for us in avoiding regressions as we work towards having complete API
documentation.
See https://github.com/zulip/zulip/issues/12521 for a bunch of
follow-up improvements.
We add the following behavior:
If stream has message_retention_days set to -1, archiving for it is
disabled.
If stream has message_retention_days set to null, use the realm's
policy. If the realm has no policy, we don't archive for this stream.
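A sketch of the lookup these rules imply (the helper is hypothetical; the real logic is embedded in the archiving queries):

```python
from typing import Optional

def effective_retention_days(stream, realm) -> Optional[int]:
    if stream.message_retention_days == -1:
        return None  # archiving explicitly disabled for this stream
    if stream.message_retention_days is not None:
        return stream.message_retention_days
    return realm.message_retention_days  # None means no realm policy either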
UserMessages no longer need special handling, they can be archived by
move_models_with_message_key_to_archive and automatically cleaned up
like the other models with a message key with CASCADING=True.
We change the archiving scheme to allow having stream based retention
policies. In the first step of the archiving process, we loop over
streams and archive their expired messages and related objects.
Then we separately archive all expired personal and huddle messages and
related objects. As the last step, we scan for redundant attachments
which can now be deleted.
To achieve this, we have to rewrite a significant portion of the
retention code and rework some of the database queries.
For the sake of simplicity, we neither archive nor delete cross-realm
messages, except cross-realm stream messages, which can be processed
in the same manner as ordinary stream messages.
In the query for archiving personal and huddle messages we simply
exclude those sent by cross-realm bots.
We change the tests to adapt to these modifications.
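The phase ordering, as a sketch with illustrative helper names:

```python
from zerver.models import Realm

def archive_messages() -> None:
    for realm in Realm.objects.all():
        archive_stream_messages(realm)               # hypothetical helper
        archive_personal_and_huddle_messages(realm)  # hypothetical helper
    # Only once messages are archived can we tell which attachments
    # have become redundant.
    delete_redundant_attachments()                   # hypothetical helper
```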
Since we archive attachments and attachment_messages tied to a list of
ids of Messages that we just archived (so from the current realm), it's
unnecessary to check their realm in the queries. This could potentially
cause archiving of an attachment with realm_id of another realm, but
this isn't an issue, as long as we make sure we don't end up deleting
the original Attachment object incorrectly; a realm_id check is
included in delete_expired_attachments() to ensure that.
Previously, we didn't have validation to prevent editing certain flags
that don't make sense for a client to edit, like whether a user was
mentioned in a given message.
This isn't a security issue -- the user could only mess up their own
personal search results (etc.), but it does seem worth fixing to avoid
confusion for folks developing Zulip clients.
While we're at it, clearly document the situation in comments.
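A sketch of the validation; the exact split between client-editable and server-derived flags here is an assumption:

```python
from zerver.lib.exceptions import JsonableError  # module path may differ

EDITABLE_FLAGS = {"read", "starred", "collapsed"}  # assumption
DERIVED_FLAGS = {"mentioned", "wildcard_mentioned", "has_alert_word"}  # assumption

def validate_flag_update(flag: str) -> None:
    if flag not in EDITABLE_FLAGS:
        raise JsonableError("Invalid flag: '%s'" % (flag,))
```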
This adds a setting to control Zulip's default behavior of sorting to
bottom and graying out inactive streams. The previous logic is still
the default "automatic", but this gives users more control. See the
models.py comment for details.
Fixes #11524.
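A sketch of the setting's likely shape (constant names and values are assumptions):

```python
from django.db import models

DEMOTE_STREAMS_AUTOMATIC = 1  # previous behavior, still the default
DEMOTE_STREAMS_ALWAYS = 2
DEMOTE_STREAMS_NEVER = 3

demote_inactive_streams = models.PositiveSmallIntegerField(
    default=DEMOTE_STREAMS_AUTOMATIC
)
```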
We were apparently reusing the path for both the development and test
databases, which meant that we would not always correctly run
`generate_fixtures` when changes were required.
This was a recent regression introduced when we added this cache a few
days ago.
We add RETURNING to fetch relevant message and usermessage ids in
archiving queries and use them to make other queries faster.
A side-effect of this implementation is that with cross-realm messages,
the UserMessage of the recipient and the Message will not be deleted -
but cross-realm messages are rare, will still get correctly put in the
archive tables and so failing to delete should not be a problem for now.
They will be fully handled later.
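A sketch of the RETURNING pattern (column lists elided; the real SQL lives in zerver/lib/retention.py):

```python
from django.db import connection

query = """
    INSERT INTO zerver_archivedmessage (id, ...)
    SELECT id, ... FROM zerver_message WHERE ...
    RETURNING id
"""
with connection.cursor() as cursor:
    cursor.execute(query)
    archived_message_ids = [row[0] for row in cursor.fetchall()]
```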
zerver_archivedmessage is already INNER JOINed earlier in the query, so
we check the pub_date in it, instead of joining zerver_message, which
would just redundantly join the same rows.
The lxml parser appends html and body tags to the soup object, which
are not required. There are no other major parsing differences between
the two parsers, as long as the HTML input is perfectly formatted.
The lxml parser is much faster than html.parser, but that hardly
matters in our case.
https://www.crummy.com/software/BeautifulSoup/bs4/doc/#differences-between-parsers
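The difference is easy to see on a well-formed fragment:

```python
from bs4 import BeautifulSoup

str(BeautifulSoup("<p>Hi</p>", "html.parser"))  # '<p>Hi</p>'
str(BeautifulSoup("<p>Hi</p>", "lxml"))         # '<html><body><p>Hi</p></body></html>'
```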
In addition to the "+show-sender" option, we now add "+include-footers"
which disables stripping of the footer from the email body if this token
is included in the email address.
To enable a comfortable way of adding more optional tokens in the
address (like current '+show-sender') we change decode_email_address to
return a general dictionary containing options specified through adding
these optional tokens in the To: address. For now, we only have
"+show-sender", but more can be easily added using this change.
Instead of running `what_to_do_with_migrations` unconditionally, we
first hash and compare the files located in `*/migrations/*`. Only if
a migration file has changed (or the hash file does not exist yet) do we
call `what_to_do_with_migrations`.
It was discovered that the call to Django's `showmigrations.py` file was
causing roughly a 500ms increase in `test-backend`'s start up time.
However, this fix only saves about 100ms, apparently because a lot of
that work was importing Django dependencies we need for most tests
anyway.
Fixes: #12428.
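A sketch of the hash-and-compare step (paths and hash choice are illustrative):

```python
import glob
import hashlib

def migrations_hash() -> str:
    digest = hashlib.sha1()
    for path in sorted(glob.glob("*/migrations/*.py")):
        with open(path, "rb") as f:
            digest.update(f.read())
    return digest.hexdigest()

# Only call what_to_do_with_migrations() if this digest differs from
# the one recorded on the previous run (or no record exists yet).
```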
Ensure that the html is safe before using it. The html is considered
safe if it is an iframe with an http/https src, based on the
recommendations here: https://oembed.com/#section3
We directly embed the `iframe` html into the lightbox overlay.
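A sketch of the safety check, assuming BeautifulSoup is used for the parsing:

```python
from bs4 import BeautifulSoup

def is_safe_oembed_html(html: str) -> bool:
    fragment = BeautifulSoup(html, "html.parser")
    tags = fragment.find_all()
    # Accept only a lone iframe with an http/https src, per
    # https://oembed.com/#section3.
    if len(tags) != 1 or tags[0].name != "iframe":
        return False
    return tags[0].get("src", "").startswith(("http://", "https://"))
```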
We add general code that will archive models that are tied to a specific
Message (such as Reactions and SubMessages). Certain details of the
model are grabbed from a list models_with_message_key, and then used to
create queries that will archive these database tables.
We put Reaction in that list in this commit, and add appropriate tests.
To have archiving of other analogical models (for example SubMessage),
one only needs to make an appropriate entry in the
models_with_message_key list.
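A sketch of one entry in that list; the dictionary keys are assumptions about which details the archiving queries need:

```python
from zerver.models import ArchivedReaction, Reaction

models_with_message_key = [
    {
        "class": Reaction,
        "archive_class": ArchivedReaction,
        "table_name": "zerver_reaction",
        "archive_table_name": "zerver_archivedreaction",
    },
    # SubMessage and similar models only need another entry here.
]
```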
Sometimes it's useful to run two copies of test-backend at the same
time. The problem with doing so is that we need to make sure no two
threads are using the same test database ID.
Previously, this worked only if at most one of those copies was
running in the single-threaded mode, because we used a random database
ID for the single-threaded code path, but the same IDs counting from 0
for the parallel code path.
Fix this, mostly, by generating a random start for the range of IDs
used by the process, and then counting off database IDs starting from
there (both in the parallel and non-parallel modes).
There's still a very low probability race, see the TODO.
Additionally, there appear to be some other races with running two
copies of test-backend at the same time not related to the database.
See https://github.com/zulip/zulip/issues/12426 for a follow-up issue
that's sorta created by this.
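A sketch of the id allocation (the range bound is an assumption):

```python
import random
from typing import List

def allocate_database_ids(num_processes: int) -> List[int]:
    # One random starting point per test-backend run; both the
    # parallel and serial modes count off ids from here.
    start = random.randint(1, 10000000) * 100
    return [start + i for i in range(num_processes)]
```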
The test-backend parallel test runner system doesn't actually use the
zulip_test database; instead, it creates its own databases off the
zulip_test_template database.
We were accidentally running `tools/generate_fixtures` even when there
are no changes, because this function is shared with the
tools/lib/test_server.py codebase, which needs us to do the work of
creating a test database for it off the zulip_test_template database.
Fixing this saves about 1.5s / 4s of the runtime of a single test.
Previously, if you exported a Zulip organization and then re-imported
it, we'd end up renumbering the user IDs and all direct foreign key
references to them in the database, but not the data-user-id
references in mentions. Fix this by parsing the message content and
doing that renumbering.
(Because we import raw markdown, not HTML, from third-party tools,
these changes won't affect data import from slack etc.)
Fixes the high-priority part of #11293.
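A hedged sketch of the renumbering, assuming mentions carry the user id in the @**Name|id** raw-markdown syntax:

```python
import re
from typing import Dict

def fix_mention_ids(content: str, user_id_map: Dict[int, int]) -> str:
    def replace(match):
        old_id = int(match.group(2))
        new_id = user_id_map.get(old_id, old_id)
        return "@**%s|%d**" % (match.group(1), new_id)

    return re.sub(r"@\*\*([^|*]+)\|(\d+)\*\*", replace, content)
```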
Modifies the dict with the user info to include the key `bot_owner_id`
so it can be displayed in the user info popover.
Tests concerned with changing bot owner have been modified to expect
two events, because updating the bot info fires two events --
updating the `realm_bot` and the `realm_user`, since the key
`bot_owner_id` is a part of the realm user info.
A unique path was created using the `LOCAL_UPLOADS_DIR` backend, similar
to the code used in `LocalUploadBackend`. The exported tarball was
copied to the directory, and an nginx url was created to serve the file
publicly.
Tweaked by tabbott to output an actual URL.
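A sketch of the flow, with the directory layout and URL path as assumptions (nginx must be configured to serve the location):

```python
import os
import shutil

from django.conf import settings

def export_tarball_url(realm, tarball_path: str) -> str:
    export_dir = os.path.join(settings.LOCAL_UPLOADS_DIR, "exports")
    os.makedirs(export_dir, exist_ok=True)
    shutil.copy(tarball_path, export_dir)
    return realm.uri + "/exports/" + os.path.basename(tarball_path)
```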
This cleans up the pattern for how we check which user is logged in
during Zulip's backend unit tests to be much more readable (replacing
the arcane session code that does this check).
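A sketch of the readable replacement, as a method on the test base class, relying on Django keeping the logged-in user's id in the session under '_auth_user_id':

```python
def assert_logged_in_user_id(self, user_id) -> None:
    # Replaces direct pokes at the session internals in individual tests.
    self.assertEqual(self.client.session.get("_auth_user_id"), str(user_id))
```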
We split archive_messages code into two functions: moving to archive and
cleanup. This allows cleaning up the tests - they can call
these functions directly instead of copying several lines of
archive_messages here and there in multiple tests.
This is probably a good idea for the production use case, since then
there's some consistency of behavior, and if we extend logging, one
knows exactly which realms were or were not executed before a logged
failure.
This fixes the nondeterministic test failures we've been seeing in CI:
if you use `-id` in that order_by, it happens consistently.
The upload option will no longer be limited to strictly S3 uploads. This
commit serves as a preliminary step for supporting LOCAL_UPLOADS_DIR as
part of the public only export feature.
log_and_report and its helper functions were mostly old code no longer
well adapted to how the email mirror works currently, as well as having
no test coverage. We rewrite this part of the email mirror to report
errors in a similar manner, and add tests for it. We're able to get rid
of the clunky and now useless debug_info dictionary in process_message,
as log_and_report only needs the recipient email in its third argument.
The only place in which process_stream_message used debug_info was to
set the 'stream' key, which would only be used if ZulipEmailForwardError
was raised after this line in the code - which is impossible, because after
that line only send_zulip (which doesn't raise this exception) and
logger.info get called, then process_stream_message successfully returns
and then process_message successfully returns as well. So this
debug_info code wasn't doing anything. We remove it.
Clients won't have access to user email addresses, and thus won't be
able to compute gravatars.
The tests for this are a bit messy, in large part because our tests
for get_events call subsections of it, rather than the main function.
This is a very old commit for #106, which has been on hiatus for a few
years. It was significantly modified by tabbott to:
* Improve coding style and variable names
* Update mypy annotations style
* Clean up the testing logic
* Update for API changes elsewhere in our system
But the actual runtime code is essentially unmodified from the
original work by Kirill.
It contains basic support for archiving Messages, UserMessages, and
Attachments with a nice test suite. It's still not usable in
production (e.g. it will probably break Reactions, SubMessages, etc.),
but upcoming commits will address that.
This is handy for code that needs to do something with the sent
message. We need it for a retention policy code path, but it seems
likely we'll use it a lot down the line.
This lets us handle directly in our tooling the user experience that
we document for exporting a realm with member consent (before, it
required unpleasant manual work).
We may be successfully able to get the page once, to get the content type, but
the server or network may go down and cause problems when fetching the page for
parsing its meta tags.
Currently, we only show previews for URLs which are HTML pages, which could
contain other media. We don't show previews for links to non-HTML pages, like
pdf documents or audio/video files. To verify that the URL posted is an HTML
page, we verify the content-type of the page, either using server headers or by
sniffing the content.
Closes #8358
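A header-only sketch of the content-type check (the commit also sniffs content when the header is missing):

```python
import requests

def is_html_page(url: str) -> bool:
    try:
        response = requests.head(url, allow_redirects=True, timeout=5)
    except requests.RequestException:
        return False
    return "text/html" in response.headers.get("content-type", "")
```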
`youtube.com/playlist?list=<list-id>` incorrectly matches the regex since the
change in 8afda1c1bb. The regex was modified to
match URLs of the form `youtu.be/<id>` and this playlist URL incorrectly matches
with the `<id>` set to `playlist`.
This commit avoids this match by verifying that the ID is not `playlist`.
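An illustrative guard; the real regex covers more URL shapes, but the playlist check is the point:

```python
import re
from typing import Optional

def youtube_video_id(url: str) -> Optional[str]:
    match = re.search(r"youtu(?:\.be/|be\.com/)([\w-]+)", url)
    if match is None or match.group(1) == "playlist":
        return None  # playlist pages don't identify a single video
    return match.group(1)
```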
This renames Subscription.in_home_view field to is_muted, for greater
clarity as to what it does just from seeing the setting name, without
having to look it up.
Also disabled an obsolete test_migrations test.
Fixes #10042.
Digest emails were disabled for soft deactivated users, since UserMessage
objects are created for such users lazily when they return.
We now compute the message list for gathering hot conversations by looking at
all the messages sent to the streams where the user is subscribed, while they
were subscribed.
Fixes #6297
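A sketch of the new gathering step (field names reflect the Zulip schema of this era; the "while they were subscribed" windowing is elided):

```python
from datetime import timedelta

from django.utils.timezone import now
from zerver.models import Message, Recipient, Subscription

def candidate_digest_messages(user_profile, cutoff_days: int = 1):
    cutoff = now() - timedelta(days=cutoff_days)
    stream_recipient_ids = Subscription.objects.filter(
        user_profile=user_profile,
        active=True,
        recipient__type=Recipient.STREAM,
    ).values_list("recipient_id", flat=True)
    return Message.objects.filter(
        recipient_id__in=stream_recipient_ids, pub_date__gt=cutoff
    )
```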
If the text part of an email message didn't specify the charset in the
Content-Type header, the text content wouldn't be found. We fix this by
assuming the us-ascii charset in those cases, as specified by RFC 6657:
https://tools.ietf.org/html/rfc6657
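A sketch of the fallback with the stdlib email API:

```python
from email.message import Message

def text_content(part: Message) -> str:
    # get_content_charset() returns None when the Content-Type header
    # omits the charset parameter; RFC 6657 makes us-ascii the default.
    charset = part.get_content_charset() or "us-ascii"
    return part.get_payload(decode=True).decode(charset, errors="ignore")
```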
This commit migrates the Subscription's notification fields from a
BooleanField to a NullBooleanField where a value of None means to
inherit the value from user's profile.
Also includes a migration to set the corresponding settings to None
if they match the user profile's values. This migration helps us in
getting rid of the weird "Apply to all" widget that we offered on
subscription settings page.
The mobile apps can't handle None appearing as the stream-level
notification settings, so for backwards-compatibility we arrange to
only send True/False to the mobile apps by applying those defaults
server-side. We introduce a notification_settings_null value within a
client_capabilities structure that newer versions of the mobile apps
can use to request the new model.
This mobile compatibility code is pretty effectively tested by the
existing test_events tests for the subscriptions subsystem.
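The inheritance rule, as a sketch; the same fallback is applied server-side for clients that don't declare notification_settings_null in client_capabilities:

```python
from typing import Optional

def effective_notification_setting(
    subscription_value: Optional[bool], profile_value: bool
) -> bool:
    return profile_value if subscription_value is None else subscription_value
```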
Currently there's no way to tell the difference between "a server admin
deactivated a realm due to it being spammy" vs "a realm admin deactivated
the realm".
Due to my misreading the code and a sloppy search, I thought in
8218bf101c that
all_stream_subscription_logs didn't filter for streams.
While changing this, we'll switch to using `.modified_stream_id` for
potentially better performance.
This makes the implementation of `get_realm` consistent with its
declared return type of `Realm` rather than `Optional[Realm]`.
Fixes #12263.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
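A sketch of the consistent shape; whether the real change raises Realm.DoesNotExist or handles lookup failure differently isn't spelled out in this message:

```python
from zerver.models import Realm

def get_realm(string_id: str) -> Realm:
    # No Optional: a missing realm surfaces as Realm.DoesNotExist
    # instead of a None return.
    return Realm.objects.get(string_id=string_id)
```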
This commit replaces the `create_stream_by_admins_only` setting with a
new `create_stream_policy` setting, which mirrors the structure of
the existing `invite_to_stream_policy`.
This is important preparation for migrating the waiting period feature
to be its own independent setting.
Fixes #12236.
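A sketch of the policy setting's likely shape (constant names and values are assumptions):

```python
from django.db import models

CREATE_STREAM_POLICY_MEMBERS = 1
CREATE_STREAM_POLICY_ADMINS = 2
CREATE_STREAM_POLICY_WAITING_PERIOD = 3

create_stream_policy = models.PositiveSmallIntegerField(
    default=CREATE_STREAM_POLICY_MEMBERS
)
```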