Previous cleanups (mostly the removals of Python __future__ imports)
were done in a way that introduced leading newlines. Delete leading
newlines from all files, except static/assets/zulip-emoji/NOTICE,
which is a verbatim copy of the Apache 2.0 license.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Now we can also include extra keyword arguments to specify
modifications in how the example code should be generated
in the generate_code_example template tag.
E.g. generate_code_example(curl, exclude=["param1", "param2"])
This commit extends api_code_examples.py to support automatically
generating cURL examples from the OpenAPI documentation. This way
work won't have to be repeated and we can also drastically reduce
the chance of introducing faulty cURL examples (via an automated
test which can now be easily created).
This commit progresses our efforts to reduce pending_endpoints
as well as to migrate away from templates/zerver/api/fixtures
and towards our OpenAPI documentation.
Similar to commit d62b75fc.
The current code looks like it's trying to redirect /integrations/doc/email
to /integrations when EMAIL_GATEWAY_PATTERN is not set.
I think it doesn't currently do this. The test for that pathway has a bug:
self.get_doc('integrations/doc-html/email', subdomain='zulip') needs a
leading slash, and putting the slash back in results in the test failing.
This redirection is not really desired behavior -- better is to
unconditionally show that the email integration exists, and just point the
user to https://zulip.readthedocs.io/en/latest/production/email-gateway.html
(this is done in a child commit).
This gives us access to typing_extensions.Deque, which was not added
to typing until 3.5.4.
(PROVISION_VERSION is not bumped because the transitive dependency set
in dev.txt hasn’t changed.)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
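For illustration, a minimal sketch of what this unlocks (names hypothetical):

```python
from collections import deque

from typing_extensions import Deque  # typing.Deque only exists on Python >= 3.5.4

recent_message_ids: Deque[int] = deque(maxlen=100)
recent_message_ids.append(42)
```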
Our implementation requires at least 1 space after the
'#' to not break existing linkifiers like '#123', etc.
that generally follow the convention we show in linkifier
examples.
- [valid] : # Hello
- [invalid]: #Hello
For the frontend, we have taken the code from v0.7.0 of
upstream marked and made minor changes to avoid having
to refactor a significant part of our marked code.
For the backend, we merely have to change the regex to
require spaces after '#', and add hashheader to our
list of blockparsers.
Fixes #11418.
Fixes #11209.
This requires changing how zadd is used in rate_limiter.py:
In redis-py >= 3.0 the pairs to ZADD need to be passed as a dictionary,
not as *args or **kwargs, as described at
https://pypi.org/project/redis/3.2.1/ in the section
"Upgrading from redis-py 2.X to 3.0".
The rate_limiter change has to be in one commit with the redis upgrade,
because the dict format is not supported before redis-py 3.0.
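A minimal sketch of the API change, with a hypothetical key name:

```python
import redis

client = redis.StrictRedis()

# redis-py 2.x accepted score/member pairs as *args or **kwargs:
#   client.zadd("ratelimit:api", "member", 1.0)
# redis-py >= 3.0 requires a single mapping of member -> score:
client.zadd("ratelimit:api", {"member": 1.0})
```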
As a result of dropping support for trusty, we can remove our old
pattern of putting `if False` before importing the typing module,
which was essential for Python 3.4 support, but not required and maybe
harmful on newer versions.
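The removed pattern looked like this:

```python
# Old: hide the import from runtime on Python 3.4 (which had no stdlib
# typing module), while keeping it visible to mypy:
if False:
    from typing import Any, Dict

# New: a plain import works on all supported versions:
from typing import Any, Dict
```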
cron_file_helper
check_rabbitmq_consumers
hash_reqs
check_zephyr_mirror
check_personal_zephyr_mirrors
check_cron_file
zulip_tools
check_postgres_replication_lag
api_test_helpers
purge-old-deployments
setup_venv
node_cache
clean_venv_cache
clean_node_cache
clean_emoji_cache
pg_backup_and_purge
restore-backup
generate_secrets
zulip-ec2-configure-interfaces
diagnose
check_user_zephyr_mirror_liveness
This feature is intended to cover all of our ways of exporting a
realm, not just the initial "public export" feature, so we should name
things appropriately for that goal.
Additionally, we don't want to include data exports in page_params;
the original implementation was actually buggy and would have.
When a person creates a new realm, they'll likely want to create a
bunch of initial streams at once. When doing so, it could be annoying
to have to mark all of the new stream notification messages as read.
Thus to make this process smoother, we should automatically mark
the messages generated by the Notification Bot in the notifications
(announcements) stream, as well as in the newly created stream itself
as read by the stream creator.
Fixes #12765.
While it's true `datetime` is implicit via `pytz`, it makes sense
that mypy should now complain about the semantics of calling our
return type `pytz.datetime.tzinfo`, when such a type doesn't
actually exist.
Investigation into #12876, a mysterious bug where users were seeing
messages reappear as unread, determined that the root cause was
missing headers to disable client-side caching for Zulip's REST API
endpoints.
This manifested, in particular, for `GET /messages`, which is
essentially the only API GET endpoint used by the webapp at all. When
using the `Ctrl+Shift+T` feature of browsers to restore a recently
closed tab (and potentially other code paths), the browser would
return from its disk cache a cached copy of the GET /messages results.
Because we include message flags on messages fetched from the server,
this in particular meant that those tabs would get a stale version of
the unread flag for the batches of the most recent ~1200 messages that
Zulip fetches upon opening a new browser tab.
The issue took some care to reproduce as well, in large part because
the arguments to those initial GET /messages requests will vary as one
reads messages (because the `pointer` moves forward) and then enters
the "All messages" view; the disk cache is only used for GET requests
with the exact same URL parameters.
We will probably still want to merge the events error-handling changes
we had previously proposed for this, but the conclusion of this being
a straightforward case of missing cache-control headers is much more
satisfying than the "badly behaving Chrome" theory discussed in the
issue thread.
Fixes #12876.
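A sketch of the kind of fix described, assuming a Django middleware approach (the actual patch may differ):

```python
from django.utils.cache import add_never_cache_headers

class APINoCacheMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        # Disable client-side caching for REST API responses, so a
        # restored tab cannot serve stale message flags from disk cache.
        if request.path.startswith(("/api/", "/json/")):
            add_never_cache_headers(response)
        return response
```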
Django’s default FileSystemFinder disallows STATICFILES_DIRS from
containing STATIC_ROOT (by raising an ImproperlyConfigured exception),
because STATIC_ROOT is supposed to be the result of collecting all the
static files in the project, not one of the potentially many sources
of static files.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
In the rare case that Zulip receives an email with only an HTML
format, we originally (code dating to 2013) shelled out to
html2markdown/python-html2text in order to convert the HTML into
markdown.
We long since added html2text as a reasonably managed Python
dependency of Zulip; we should just use it here.
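Usage is a couple of lines; a minimal sketch:

```python
import html2text

converter = html2text.HTML2Text()
# Convert an HTML-only email body into markdown.
markdown_body = converter.handle("<p>See the <b>attached</b> report.</p>")
```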
This change serves to declutter webhook-errors.log, which is
filled with too many UnexpectedWebhookEventType exceptions.
Keeping UnexpectedWebhookEventType in zerver/lib/webhooks/common.py
led to a cyclic import when we tried to import the exception in
zerver/decorators.py, so this commit also moves this exception to
another appropriate module. Note that our webhooks still import
this exception via zerver/lib/webhooks/common.py.
Changed the requirements for UserProfile in order to allow use of
the formataddr function in send_mail.py.
Converted send_email to use formataddr in conjunction with the commit
that strengthened requirements for full_name, such that they can now be
used in the to field of emails.
Fixes #4676.
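For reference, formataddr from the standard library builds the To: value from a (name, address) pair:

```python
from email.utils import formataddr

# Produces 'Full Name <user@example.com>'. Names with special characters
# are quoted/encoded, which is why the full_name requirements had to be
# strengthened first.
to_address = formataddr(("Full Name", "user@example.com"))
```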
This changes the requirements for UserProfile to disallow some
additional characters, with the overall goal of being able to use
formataddr in send_mail.py.
We don't need to be particularly careful in the database migration,
because user full_names are not required to be unique.
We were seeing errors when publishing typical events in the form of
`Dict[str, Any]`, since the expected type is a `Union`. So we instead
change the only non-dictionary call to pass a dict instead of a `str`.
Per the import line:
`from unittest import loader, runner # type: ignore # Mypy cannot pick
these up.`
Because `TextTestResult` inherits from `runner.TextTestResult`, mypy
doesn't see `self` as having an attribute `stream`, so we ignore these
instead of cluttering with `casts` or `isinstances`.
Apparently, a subtle mismatch between the filename/URL formats for our
upload codebases meant that importing Slack avatars into systems using
S3_UPLOAD_BACKEND would end up with the avatars having the wrong URLs.
In commit 7c71e98, we added a special exception for the
/users/me/subscriptions endpoint in the automatic validation test.
By adding some extra documentation, we now remove this extra code,
as well as the endpoint from the list of pending endpoints.
The previous iteration did not properly handle languages with a
different word order than English.
Discovered via warning output in `manage.py makemessages`.
If a url doesn't have a scheme, browsers would treat it as a relative
url and open something like: https://chat.zulip.org/google.com instead.
This PR fixes the issue on the backend; the frontend implementation
remains out of sync and the user sending the message wouldn't see
any linkification for urls without a scheme.
Fixes #12791.
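One simple way to implement the backend side, sketched (not necessarily the exact patch):

```python
from urllib.parse import urlparse

def normalize_link(url: str) -> str:
    # A bare "google.com" would otherwise be rendered as a relative link;
    # default to http:// so the browser treats it as absolute.
    if not urlparse(url).scheme:
        return "http://" + url
    return url
```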
The test_docs change is because Django runs test cases with DEBUG =
False, which ordinarily means it doesn’t serve /static during tests.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
There’s no reason to monkey-patch something that we were already
subclassing.
Removing the PRODUCTION conditional causes us to generate
staticfiles.json in the right place to begin with so we don’t need to
move it later. It also allows Django to find staticfiles.json if
running the dev server with PIPELINE_ENABLED.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This fixes a user-visible bug, where users signing up for realms with
restricted email visibility get reminder emails 1 week later, whether or not
they created an account.
Make the sender name go in-line with the message body only if the
HTML starts with a <p> tag, since it won't look good if the message
starts with a code snippet, ul, etc. If the message starts with a <p>
tag, we can safely assume that it can go in-line with the sender name.
It was discovered that errors such as:
`OSError: [Errno 16] Device or resource busy`
potentially arise when running in serial mode, or with
explicit test cases passed to `test-backend`.
In the unlikely event that someone edited the properties of a system
bot and then saved the result, we were still caching the old version
indefinitely in the get_system_bot cache.
This led to a confusing case where a newly installed Zulip server
didn't have is_api_super_user properly set on its EMAIL_GATEWAY_BOT in
memcached.
Co-authored-by: Mateusz Mandera <mateusz.mandera@protonmail.com>
Previously we sent "" for stream_name where we should have sent None, which
made this function harder to understand. The "" value was never used.
This also reorders the arguments to match the order of the arguments in
the two callers.
This commit adds a new setting to the user's notification settings that
will change the behaviour of the unread count in the title bar and
desktop application.
When enabled, the title bar will show the count of unread private messages
and mentions. When disabled, the title bar will act as before, showing
the total number of unread messages.
Fixes #1736.
Modified by punchagan to:
* Replace URLs with titles only if the inline url embed previews are turned on
* Add a test for youtube titles replacing URLs
The titles for the videos are fetched asynchronously after the message has been
sent via the code that fetches metadata for open graph previews. So, the URLs
are replaced with titles only if the inline embed url previews feature is
enabled.
Ideally, YouTube previews should be shown only if inline url previews are
enabled, but this feature is in beta, while YouTube previews are pretty stable.
Once this feature is out of beta, YouTube previews should be shown only if the
url previews feature is turned on.
YouTube preview image is calculated as soon as the message is sent, while the
title needs to be fetched using a network request. This means that the URL is
replaced only after the data has been fetched from the request, and happens a
couple of seconds after the message has been rendered.
Closes #7549
Messages with links embedded in blockquotes turn out to be replies to
messages with links, more often than not. Showing previews for links in
replies seems like clutter, and it seems reasonable to turn off previews for
such links.
We create a path structure in the form:
`/var/<uuid>/test-backend/run_1234567/worker_N/`
A settings attribute, `TEST_WORKER_DIR`, was created as a worker's
directory for a given `test-backend` run's file storage. The
appropriate path is created in `setup_test_environment`, while each
worker's subdirectory is created within `init_worker`.
This allows a test class to write to `settings.TEST_WORKER_DIR`,
populating the appropriate directory for a given worker. It also
provides a long-term approach for cleaning up filesystem access
in the backend unit tests.
This will make it easier to have access to the stream creator.
The indirection also isn't really adding anything, especially given that the
announce message is inlined just above.
Modified by punchagan to:
* Add a separate markdown test for de-duplicating inline previews
* Check the number of unique URLs to see if the per-message limit is crossed
* Use a set for processed URLs instead of a list
Fixes #8379.
The previous code for the validator test was fairly messy due to
checking for both formats of the openapi url, one with
<variable_name> and the other with {variable_name}. To eliminate
this, we have standardized the format and restricted it to
{variable_name} as per the official format at:
https://swagger.io/docs/specification/describing-parameters.
Add new custom profile field type, External account.
The external account field links a user's social media
profile to their account, e.g. GitHub, Twitter, etc.
Fixes part of #12302
We can simply archive cross-realm personal messages according to the
retention policy of the recipient's realm. It requires adding another
message-archiving query for this case however.
What remains is to figure out how to treat cross-realm huddle messages.
This reverts commit 8f15884c7d. Using the
WITH ( ) ... DELETE method leads to a small performance drop, while
probably not offering many positives, so it seems appropriate to go to
the simpler case of just letting things get cleaned up by CASCADE.
The previous version of the code changed in this commit caused Django
to fetch stream.realm from the database for every stream, leading to
redundant, identical queries. Each stream's realm is already known, so
we use that information.
In addition to the test which checks to see if each endpoint in
code (urls.py) is documented in the openapi documentation (and with
the right arguments), we now also have a test to see if every
endpoint in the openapi documentation is a legitimate endpoint
also existing in code.
We do this by piggy-backing on the work done by the former test and
using set operations. This avoids the need for an extra loop and
makes the test faster and easier to read.
By importing a few view modules in the validation test itself we
can remove a few endpoints which were marked as buggy. What was
happening was that the view functions weren't imported and hence
the arguments map was not filled. Thus the test complained that
there was documentation for request parameters that seemed to be
missing in the code. Also, for the events register endpoint, we
have renamed one of the documented request parameters from
"stream" to "topic" (the API itself was not modified though).
We add a new "documentation_pending" attribute to req variables
so that any arguments not currently documented but should be
documented can be properly accounted for.
The conditional block containing the tarball upload logic for both S3
and local uploads was deconstructed and moved to the more appropriate
location within `zerver/lib/upload.py`.
This change is preliminary refactoring in order to improve the test
mocking strategy related to `test_realm_export.py`.
What this allows is the ability to simply mock a return value from
`do_export_realm`. We can then use that value as a dummy url to
ensure a file has been served and can be retrieved.
Duplicate handling when INSERTing is switched from the "LEFT JOIN
... id IS NULL" approach to "ON CONFLICT (id) DO NOTHING", since we
now have postgresql 9.5. The ON CONFLICT approach is more natural, as
well as potentially faster.
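The two duplicate-handling styles, sketched with hypothetical column lists:

```python
from django.db import connection

def archive_message_chunk(chunk_ids):
    # Old: INSERT ... SELECT ... LEFT JOIN archive_table ON ...
    #      WHERE archive_table.id IS NULL
    # New: let postgresql (>= 9.5) skip rows that are already archived.
    with connection.cursor() as cursor:
        cursor.execute(
            """
            INSERT INTO zerver_archivedmessage (id, content)
            SELECT id, content FROM zerver_message WHERE id = ANY(%s)
            ON CONFLICT (id) DO NOTHING
            """,
            [chunk_ids],
        )
```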
We don’t need a hacked copy anymore. We run the installed version out
of node_modules in development, and a Webpack-bundled version of that
in production.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This will allow us to mark a REQ variable as intentionally
undocumented. With this, we can remove some of the endpoints marked
as "buggy" even though they're not actually buggy, we just needed to
specify certain parameters as intentionally undocumented (e.g. the
stream_id for the /users/me/subscriptions/muted_topics endpoint.)
Any REQ variable with intentionally_undocumented set to True
will not be added to the arguments_map data structure.
For some of the other "buggy" endpoints, we would want to mark the
entire endpoint as intentionally undocumented via the urls.py
file.
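A hedged sketch of the usage (the exact REQ signature lives in zerver/lib/request.py and may differ):

```python
from zerver.lib.request import REQ, has_request_variables
from zerver.lib.validator import check_int

@has_request_variables
def update_muted_topics(request, user_profile,
                        stream_id=REQ(validator=check_int,
                                      intentionally_undocumented=True)):
    ...
```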
As of commit cff40c557b (#9300), these
files are no longer served directly to the browser. Disentangle them
from the static asset pipeline so we can refactor it without worrying
about them.
This has the side effect of eliminating the accidental duplication of
translation data via hash-naming in our release tarballs.
This reverts commit b546391f0b (#1148).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We had gotten to the right documented content, but the previous
chaining of object creation/deletion wasn't quite right.
Fix this in a way that reduces the amount by which tests are dependent
on what other tests are doing.
This is a dramatic redesign of the look and feel of our missed-message
emails, designed to decrease the feeling of clutter and just provide
the content users care about in a clear, visible fashion.
This cleans up the reply_warning feature in favor of a more coherent
explanation of whether or not one can reply.
(Also, critically, it now advertises the ability to enable
missed-message email replies with some administrative configuration
work.)
We reuse the link regexes we use elsewhere in markdown
for parsing links in topic names and add a button to open
them in new tabs similar to our behavior with linkifiers
in topic names.
Fixes #12391.
When archiving Messages, we stop relying on LEFT JOIN ... IS NULL to
avoid duplicates when INSERTing. Instead we use ON CONFLICT DO UPDATE
(added in postgresql 9.5): in case of archiving a Message that
already has a corresponding archived object (this happens if a Message
gets archived, restored and then archived again), we re-assign the
existing ArchivedMessage to the new transaction.
This also allows us to fix test_archiving_messages_second_time, which
was temporarily disabled a few commits before.
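The conflict clause, sketched (column names assumed):

```python
# When re-archiving, re-point the existing ArchivedMessage at the new
# transaction instead of skipping or erroring:
ON_CONFLICT_CLAUSE = """
ON CONFLICT (id) DO UPDATE
SET archive_transaction_id = EXCLUDED.archive_transaction_id
"""
```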
We combine run_message_batch_query and run_archiving_in_chunks
functions, which makes the code simpler and more readable - we get rid
of hacky generator usage, for example.
In the process, move_expired_messages_* functions are adjusted, and now
they archive Messages as well as their related objects.
Appropriate adjustments in reaction to this are made in the main
archiving functions which call move_expired_messages_* (they no longer
need to call move_related_objects_to_archive).
Instead of having a bunch of custom code in the function, we make it use
run_message_batch_query and run_archiving_in_chunks to do the necessary
operations in a consistent way, using the same codepaths as the rest of
the archiving system.
This breaks test_archiving_messages_second_time temporarily, but we will
fix it and re-enable the test in the next commits, where we'll address
various other issues with re-archiving of messages.
We also remove the @transaction.atomic wrapper, because atomicity is
handled by the logic inside run_archiving_in_chunks.
We add a new model, ArchiveTransaction, to tie archived objects together
in a coherent way, according to the batches in which they are archived.
This enables making a better system for restoring from archive, and it
seems just more sensible to tie the archived objects together this
way, rather than the somewhat vague setting of archive_timestamp on
each object using timezone_now().
For storing HTTP headers as a function of fixture name, previously
we required that the fixture_to_headers method should reside in a
separate module called headers.py.
However, since in many cases this method only takes a few lines, we
decided to move this function into the view.py file of the
integration instead of requiring a whole new file called headers.py.
This commit introduces this small change in the system architecture,
migrates the GitHub integration, and updates the docs accordingly.
In the GitHub integration we established that for many integrations,
we can directly map the fixture filename to the set of required
headers, and by following a simple naming convention we can greatly
ease the logic involved in the required fixture_to_headers method.
So to prevent the need for duplicating the logic used by the GitHub
integration, we created a method called `get_http_headers_from_filename`
which will take the name of the HTTP header (key) and then return a
corresponding method (in a decorator-like fashion) which could then be
equated to fixture_to_headers in headers.py.
The GitHub integration was modified to use this method and the docs
were updated to suggest using this when possible.
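A sketch of the helper under the naming convention described (details may differ from the actual implementation):

```python
def get_http_headers_from_filename(http_header_key):
    # Convention: a fixture named "event_type__some_description.json"
    # encodes the event type in its filename prefix.
    def fixture_to_headers(filename):
        if "__" in filename:
            return {http_header_key: filename.split("__")[0]}
        return {}
    return fixture_to_headers

# In the integration's view.py (header key assumed):
fixture_to_headers = get_http_headers_from_filename("HTTP_X_GITHUB_EVENT")
```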
When parsing custom HTTP headers in the integrations dev panel, the
fixture HTTP headers system, and send_webhook_fixture_message, we now
use a single source of logic: standardize_headers, which takes care
of converting a dictionary of input headers into the standard form
that Django expects.
Using this system, we can now associate any fixture of any integration
with a particular set of HTTP headers. A helper method called
determine_http_headers was introduced, and the test suite was upgraded
to use determine_http_headers.
Comments and documentation significantly edited by tabbott.
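A sketch of what that normalization involves (Django exposes custom headers in META uppercased, with dashes replaced by underscores and an HTTP_ prefix):

```python
def standardize_headers(input_headers):
    canonical_headers = {}
    for raw_name, value in (input_headers or {}).items():
        name = raw_name.upper().replace("-", "_")
        # Content-Type and Content-Length are special-cased by Django
        # and do not receive the HTTP_ prefix.
        if name not in ("CONTENT_TYPE", "CONTENT_LENGTH") and not name.startswith("HTTP_"):
            name = "HTTP_" + name
        canonical_headers[name] = value
    return canonical_headers
```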
This function is an alternative to get_admin_users that we use in all
places where we explicitly want only human administrative users (not
administrative bots). The following commits will rename
get_admin_users for better clarity.
Our recently-added code for rewriting user IDs on data import didn't
correctly handle wildcard mentions and mentions generated by very old
versions of Zulip (pre data-user-id).
The previous query ended up doing an awkward join that did not
guarantee use of the Recipient index on zerver_message, turning a very
fast query into something that could take much longer for a single
stream than the rest of the import combined.
We also document support for user IDs in the pm-with narrow operator.
Edited by tabbott to document on /api rather than in the /help page.
Fixes part of #9474.
Namely, here we add the "plan_includes_wide_organization_logo" and
"upgrade_text_for_wide_organization_logo" to the page_params (which
is set in zerver/lib/events.py).
"plan_includes_wide_organization_logo" is True if the plan is not of
the Realm.LIMITED type. We need to add this extra boolean parameter
instead of just using "realm_plan_type" to make things a lot easier
to work with on the frontend side, especially considering that
handlebars won't allow checking for equality in its {{#if}} blocks.
When a realm's plan type is updated using "do_change_plan_type" we
notify active users of the realm. This way certain plan features
could be enabled instantaneously for active users.
A function was written in `test_fixtures.py` to drop a test database
template if the corresponding database id doesn't belong to a file.
In addition, every file that is written is removed after 60 minutes,
meaning no database template can exist longer than one hour.
This follow-up work was added to deal with potential race conditions
when running `test-backend`, ensuring that all templates are properly
dealt with.
Essentially rewritten by tabbott for cleanliness.
Fixes the remainder of #12426.
The ids that will be used for each particular run of the test suite are
written to a unique file. Each file will then be used as a time
reference of when the suite was run.
This change sets up the ability for a complete clean up of potentially
leaked database templates.
Tweaked by tabbott to remove these files after successful database
cleanup.
When running the test-backend suite in serial mode, `destroy_test_db`
double appends the database id number to the template if passed an
argument for `number`. The comment here explains this behavior.
This fixes an issue that caused LDAP synchronization to fail for
avatars. The problem occurred due to the lack of a 'name' attribute
on the BytesIO object that we pass to the upload backend (which is
only used in the S3 backend for computing Content-Type).
Fixes #12411.
Rather than relying on the CASCADING property of the ForeignKey to the
Message table to clean up these objects, we delete them in the same
query as we archive them - since it's guaranteed that any of these
objects that we archive will be deleted due to their Message being
deleted later.
We don't have this guarantee for Attachment objects, which is why we
can't apply this scheme to them.
To ensure the database retains a consistent state if archiving gets
interrupted, we process each Messages chunk together with related
objects in a single atomic transaction.
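A sketch of the shape of this logic (helper names hypothetical):

```python
from django.db import transaction

def archive_chunk(chunk_ids):
    # Archive one chunk of Messages together with their related objects,
    # so an interrupted run cannot leave the archive half-written.
    with transaction.atomic():
        move_messages_to_archive(chunk_ids)
        move_related_objects_to_archive(chunk_ids)
        delete_archived_rows(chunk_ids)
```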
We had two duplicate functions for archiving zerver_attachment_messages
rows, doing the same thing - archiving by message_id. One of them had a
redundant INNER JOIN, so we get rid of that too.
Since we loop over realms in the functions for archiving stream messages
and then personal+huddle messages, and also want to split cleaning up
attachments by realm - it makes sense to do it all in one single loop.
Rename notification property `enable_stream_sounds` to
`enable_stream_audible_notifications` to match with other
notification property patterns.
Fixes part of #12304
We batch queries that archive Messages, to limit the maximum amount of
Message objects archived in a single query. This leads to the archiving
of other related objects being batched as well, because we loop over
chunks of archived messages and archive their related objects per-chunk.
N = self.parallel templates are created, and these templates were
previously named 'zulip_test_template_<1, N>'. However, to support
running multiple instances of `test-backend`, a unique
`random_id_range_start` was created for each template database.
There was no problem prior because the templates would simply be
used again and thus did not require any clean up. Now that there are
unique database names being created, every time `test-backend` is run
these templates can accumulate on disk. Instead, we clean up our
templates at the end of every complete run of the test suite, or upon a
SIGINT.
Fixes: #12426
This validation is incomplete, in large part because of the long list
of TODOs in this code. But this test should provide a ton of support
for us in avoiding regressions as we work towards having complete API
documentation.
See https://github.com/zulip/zulip/issues/12521 for a bunch of
follow-up improvements.
We add the following behavior:
If stream has message_retention_days set to -1, archiving for it is
disabled.
If stream has message_retention_days set to null, use the realm's
policy. If the realm has no policy, we don't archive for this stream.
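The resulting policy lookup, sketched:

```python
def effective_retention_days(realm, stream):
    # -1 disables archiving for this stream outright.
    if stream.message_retention_days == -1:
        return None
    # An explicit per-stream policy wins; otherwise fall back to the
    # realm's policy, which may itself be unset (no archiving).
    if stream.message_retention_days is not None:
        return stream.message_retention_days
    return realm.message_retention_days
```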
UserMessages no longer need special handling, they can be archived by
move_models_with_message_key_to_archive and automatically cleaned up
like the other models with a message key with CASCADING=True.
We change the archiving scheme to allow having stream based retention
policies. In the first step of the archiving process, we loop over
streams and archive their expired messages and related objects.
Then we separately archive all expired personal and huddle messages and
related objects. As the last step, we scan for redundant attachments
which can now be deleted.
To achieve this, we have to rewrite a significant portion of the
retention code and rework some of the database queries.
For the sake of simplicity, we neither archive nor delete cross-realm
messages, except cross-realm stream messages – in their case they can
be processed in the same manner as ordinary stream messages.
In the query for archiving personal and huddle messages we simply
exclude those sent by cross-realm bots.
We change the tests to adapt to these modifications.
Since we archive attachments and attachment_messages tied to a list of
ids of Messages that we just archived (so from the current realm), it's
unnecessary to check their realm in the queries. This could potentially
cause archiving of an attachment with realm_id of another realm, but
this isn't an issue, as long as we make sure we don't end up deleting
the original Attachment object incorrectly - but realm_id check is
included in delete_expired_attachments() to ensure that.
Previously, we didn't have validation to prevent editing certain flags
that don't make sense for a client to edit, like whether a user was
mentioned in a given message.
This isn't a security issue -- the user could only mess up their own
personal search results (etc.), but it does seem worth fixing to avoid
confusion for folks developing Zulip clients.
While we're at it, clearly document the situation in comments.
This adds a setting to control Zulip's default behavior of sorting to
bottom and graying out inactive streams. The previous logic is still
the default "automatic", but this gives users more control. See the
models.py comment for details.
Fixes #11524.
We were apparently reusing the path for both the development and test
databases, which meant that we would not always correctly run
`generate_fixtures` when changes were required.
This was a recent regression introduced when we added this cache a few
days ago.
We add RETURNING to fetch relevant message and usermessage ids in
archiving queries and use them to make other queries faster and simpler.
A side-effect of this implementation is that with cross-realm messages,
the UserMessage of the recipient and the Message will not be deleted -
but cross-realm messages are rare, will still get correctly put in the
archive tables and so failing to delete should not be a problem for now.
They will be fully handled later.
zerver_archivedmessage is already INNER JOIN-ed earlier in the query, so
we check the pub_date in it, instead of joining zerver_message, which
would just redundantly join the analogous rows.
The lxml parser appends html and body tags to the soup object, which
are not required. There are no other major parsing differences between
the two parsers, as long as the HTML input is perfectly formatted.
The lxml parser is much faster than html.parser, but that hardly
matters in our case.
https://www.crummy.com/software/BeautifulSoup/bs4/doc/#differences-between-parsers
In addition to the "+show-sender" option, we now add "+include-footers"
which disables stripping of the footer from the email body if this token
is included in the email address.
To enable a comfortable way of adding more optional tokens in the
address (like current '+show-sender') we change decode_email_address to
return a general dictionary containing options specified through adding
these optional tokens in the To: address. For now, we only have
"+show-sender", but more can be easily added using this change.
Instead of running `what_to_do_with_migrations` unconditionally, we
first hash and compare the files located in `*/migrations/*`. Only if
a migration file has changed (or the hash file does not exist yet) do we
call `what_to_do_with_migrations`.
It was discovered that the call to Django's `showmigrations.py` file was
causing roughly a 500ms increase in `test-backend`'s start up time.
However, this fix only saves about 100ms, apparently because a lot of
that work was importing Django dependencies we need for most tests
anyway.
Fixes: #12428.
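The check, sketched:

```python
import glob
import hashlib

def migrations_hash() -> str:
    sha = hashlib.sha1()
    for path in sorted(glob.glob("*/migrations/*.py")):
        with open(path, "rb") as f:
            sha.update(f.read())
    return sha.hexdigest()

# Only when this digest differs from the cached one (or no cache file
# exists yet) do we pay the cost of what_to_do_with_migrations().
```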
Ensure that the html is safe before using it. The html is considered
safe if it is an iframe with an http/https src, based on the
recommendations here: https://oembed.com/#section3
We directly embed the `iframe` html into the lightbox overlay.
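A sketch of the safety check (helper name hypothetical):

```python
from urllib.parse import urlparse

from bs4 import BeautifulSoup

def is_safe_oembed_html(html: str) -> bool:
    # Per https://oembed.com/#section3, only trust markup that is a
    # single <iframe> with an http/https src.
    fragment = BeautifulSoup(html, "html.parser")
    top_level = fragment.find_all(True, recursive=False)
    if len(top_level) != 1 or top_level[0].name != "iframe":
        return False
    return urlparse(top_level[0].get("src", "")).scheme in ("http", "https")
```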
We add general code that will archive models that are tied to a specific
Message (such as Reactions and SubMessages). Certain details of the
model are grabbed from a list models_with_message_key, and then used to
create queries that will archive these database tables.
We put Reaction in that list in this commit, and add appropriate tests.
To have archiving of other analogical models (for example SubMessage),
one only needs to make an appropriate entry in the
models_with_message_key list.
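A sketch of the kind of entry involved (field names assumed):

```python
from zerver.models import ArchivedReaction, Reaction

models_with_message_key = [
    {
        "class": Reaction,
        "archive_class": ArchivedReaction,
        "table_name": "zerver_reaction",
        "archive_table_name": "zerver_archivedreaction",
    },
    # Archiving SubMessage would be one more entry here.
]
```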