The approach taken here is basically to use user IDs in operators that
support them when sending the request for fetching the messages
(see comments in the code for more details).
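For illustration, a narrow built from user IDs instead of email addresses
might look roughly like this (a hedged sketch; the operator names, IDs, and
request parameters here are assumptions, not the actual client code):

    import json

    user_id = 13   # hypothetical recipient user ID
    sender_id = 7  # hypothetical sender user ID
    narrow = [
        {"operator": "pm-with", "operand": [user_id]},  # ID-based operand
        {"operator": "sender", "operand": sender_id},
    ]
    request_params = {
        "anchor": "newest",
        "num_before": 50,
        "num_after": 0,
        "narrow": json.dumps(narrow),
    }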
Modified by punchagan to:
* Replace URLs with titles only if inline URL embed previews are turned on
* Add a test for YouTube titles replacing URLs
The titles for the videos are fetched asynchronously, after the message has
been sent, via the code that fetches metadata for Open Graph previews. So, the
URLs are replaced with titles only if the inline URL embed previews feature is
enabled.
Ideally, YouTube previews should be shown only if inline URL previews are
enabled, but that feature is in beta, while YouTube previews are pretty stable.
Once the feature is out of beta, YouTube previews should be shown only if the
URL previews feature is turned on.
The YouTube preview image is computed as soon as the message is sent, while
the title has to be fetched with a network request. This means the URL is
replaced only after that data has arrived, a couple of seconds after the
message has been rendered.
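As a rough illustration of that asymmetry (the helper names below are
hypothetical, not the actual Zulip functions):

    def youtube_image_url(video_id: str) -> str:
        # The thumbnail is derivable from the video ID alone, so it is
        # available at render time, with no network request.
        return f"https://i.ytimg.com/vi/{video_id}/default.jpg"

    def replace_url_with_title(rendered_content: str, url: str, title: str) -> str:
        # Runs later, once the open graph metadata request has returned.
        return rendered_content.replace(f">{url}<", f">{title}<")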
Closes #7549
Messages with links embedded in blockquotes turn out to be replies to
messages with links, more often than not. Showing previews for links in
replies seems like clutter, and it seems reasonable to turn off previews for
such links.
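For example, a reply like the following quotes the original link, and
rendering a preview for it again would mostly duplicate what the parent
message already shows (illustrative message content only):

    > Have a look at https://example.com/design-doc
    Thanks, reading it now!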
We create a path structure in the form:
`/var/<uuid>/test-backend/run_1234567/worker_N/`
A settings attribute, `TEST_WORKER_DIR`, was created as a worker's
directory for a given `test-backend` run's file storage. The
appropriate path is created in `setup_test_environment`, while each
worker's subdirectory is created within `init_worker`.
This allows a test class to write to `settings.TEST_WORKER_DIR`,
populating the appropriate directory of a given worker. It also
provides the long-term approach to cleaning up filesystem access
in the backend unit tests.
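A minimal sketch of how such a path might be assembled (the function and
variable names here are assumptions, not the actual implementation):

    import os

    def worker_path(root: str, run_id: int, worker_number: int) -> str:
        # e.g. /var/<uuid>/test-backend/run_1234567/worker_3,
        # where `root` is the /var/<uuid> prefix chosen for this run.
        return os.path.join(root, "test-backend", f"run_{run_id}", f"worker_{worker_number}")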
We implicitly assume settings.NOTIFICATION_BOT is not None just a few lines
above, in
    sender = get_system_bot(settings.NOTIFICATION_BOT)
    notifications.append(
        internal_prep_private_message(
            realm=user_profile.realm,
            sender=sender,
            ...
This will make it easier to have access to the stream creator.
The indirection also isn't really adding anything, especially given that the
announce message is inlined just above.
Modified by punchagan to:
* Add a separate markdown test for de-duplicating inline previews
* Check the number of unique URLs to see if the per-message limit is crossed
* Use a set for processed URLs instead of a list (see the sketch below)
Fixes #8379.
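A simplified sketch of the de-duplication idea (the constant and helper names
are assumptions, not the actual markdown processor code):

    INLINE_URL_EMBED_PREVIEW_LIMIT = 10  # assumed per-message limit

    def urls_to_preview(found_urls: list) -> set:
        unique_urls = set(found_urls)
        # Compare the number of *unique* URLs against the per-message limit,
        # so a repeated link neither counts twice nor gets two previews.
        if len(unique_urls) > INLINE_URL_EMBED_PREVIEW_LIMIT:
            return set()
        return unique_urls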
Extract some logical segments of test_openapi_arguments into
individual (helper) functions, e.g. the conversion of regex-style
URLs to the OpenAPI URL format and its testing.
The previous code for the validator test was fairly messy due to
checking for both formats of the OpenAPI URL, one with
<variable_name> and the other with {variable_name}. To eliminate
this, we have standardized the format and restricted it to
{variable_name}, as per the official format at:
https://swagger.io/docs/specification/describing-parameters.
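A small sketch of the conversion that was extracted (the helper name is an
assumption, not the actual test code):

    import re

    def regex_to_openapi_url(pattern: str) -> str:
        # Turn a Django URL regex such as r'^users/(?P<user_id>[0-9]+)$'
        # into the standardized OpenAPI form '/users/{user_id}'.
        path = pattern.strip("^$")
        path = re.sub(r"\(\?P<(\w+)>[^)]+\)", r"{\1}", path)
        return "/" + path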
These updates are added as a direct result of the new strategy related
to the following refactorings:
* Having `do_export_realm` return the value of the tarball path.
See 6e187e974a4e6282d3616312bdfa19d0d2a949d1.
* Moving the upload logic for s3 and local tarball storage out of
`export_realm_wrapper` and into `upload.py`.
See f1041e1fb6cb60f2c53b294695245e4c86a4d40b.
Add a new custom profile field type, External account.
An external account field links a user's social media
profile (e.g. GitHub, Twitter, etc.) to their account.
Fixes part of #12302
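A hedged illustration of how such a field might be represented (the exact
keys and values are assumptions based on the description above):

    external_account_field = {
        "name": "GitHub profile",
        "type": "EXTERNAL_ACCOUNT",           # the new field type
        "field_data": {"subtype": "github"},  # which external service the field links to
    }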
Rename URL type custom profile field in populate db to avoid confusion
with the "GitHub profile" custom external account profile field we'll
be adding shortly.
We can simply archive cross-realm personal messages according to the
retention policy of the recipient's realm. It requires adding another
message-archiving query for this case, however.
What remains is to figure out how to treat cross-realm huddle messages.
This reverts commit 8f15884c7d. Using the
WITH ( ) ... DELETE method leads to a small performance drop, while
probably not offering many positives, so it seems appropriate to go to
the simpler case of just letting things get cleaned up by CASCADE.
The previous version of the code changed in this commit caused Django to
fetch stream.realm from the database for every stream, leading to
redundant, identical queries. Each stream's realm is already known, so
we use that information instead.
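A generic Django-style sketch of the pattern being fixed (model and helper
names are illustrative, not the exact Zulip code):

    def describe_streams(streams, realm):
        # Using the realm the caller already has avoids lazily loading
        # `stream.realm` and issuing one identical query per stream.
        return [f"{stream.name} ({realm.string_id})" for stream in streams]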
In addition to the test which checks to see if each endpoint in
code (urls.py) is documented in the OpenAPI documentation (and with
the right arguments), we now also have a test to see if every
endpoint in the OpenAPI documentation is a legitimate endpoint
also existing in code.
We do this by piggy-backing on the work done by the former test and
using set operations. This method avoids the need for an extra loop
and uses set operations for additional speed and ease of reading.
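Condensed, the reverse check amounts to a set difference (the variable names
and sample values here are illustrative, not the actual test code):

    code_endpoints = {"/users", "/messages", "/streams"}  # collected from urls.py by the forward test
    openapi_endpoints = {"/users", "/messages"}           # paths found in the OpenAPI documentation
    nonexistent = openapi_endpoints - code_endpoints      # the new reverse check
    assert not nonexistent, f"Documented but missing from code: {sorted(nonexistent)}"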
The main things targeted by the refactor are the usage of comments and
moving the top-level variables to the scope of the class.
The movement of variables was to facilitate allowing us to perform
a reverse mapping test from OpenAPI URLs -> Code defined URLs.
By importing a few view modules in the validation test itself we
can remove a few endpoints which were marked as buggy. What was
happening was that the view functions weren't imported and hence
the arguments map was not filled. Thus the test complained that
there was documentation for request parameters that seemed to be
missing in the code. Also, for the events register endpoint, we
have renamed one of the documented request parameters from
"stream" to "topic" (the API itself was not modified though).
We add a new "documentation_pending" attribute to REQ variables
so that any arguments that are not currently documented, but should
be documented, can be properly accounted for.
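A hedged sketch of what this might look like at a view's signature (the exact
keyword spelling on REQ is an assumption based on the description above):

    from zerver.lib.request import REQ, has_request_variables

    @has_request_variables
    def example_view(request, user_profile,
                     internal_flag: bool=REQ(default=False,
                                             documentation_pending=True)):
        # Parameters flagged this way are known to lack documentation and are
        # accounted for by the OpenAPI test until docs are written for them.
        ...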
DELETing from archive tables and ALTERing ArchivedMessage needs to be
split into separate transactions.
zerver_archivedattachment_messages needs to be cleared out before
zerver_archivedattachment.
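As a hedged sketch of those ordering and transaction constraints (illustrative
migration scaffolding, not the actual migration):

    from django.db import migrations

    class Migration(migrations.Migration):
        # The ALTERs on ArchivedMessage live in a separate migration, so that
        # the DELETEs below run in their own transaction.
        dependencies = [("zerver", "XXXX_previous_migration")]  # placeholder dependency
        operations = [
            # The m2m table must be emptied before its parent table.
            migrations.RunSQL("DELETE FROM zerver_archivedattachment_messages;"),
            migrations.RunSQL("DELETE FROM zerver_archivedattachment;"),
        ]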
The conditional block containing the tarball upload logic for both S3
and local uploads was deconstructed and moved to the more appropriate
location within `zerver/lib/upload.py`.
This change is preliminary refactoring in order to improve the test
mocking strategy related to `test_realm_export.py`.
What this allows is the ability to simply mock a return value from
`do_export_realm`. We can then use that value as a dummy URL to
ensure a file has been served and can be retrieved.
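A hedged sketch of the kind of test this enables (the patch target and dummy
path are assumptions, not the actual test code):

    from unittest import mock

    with mock.patch("zerver.lib.export.do_export_realm",
                    return_value="/tmp/dummy-export.tar.gz"):
        # Exercise the export endpoint/management command without building a
        # real tarball; the returned path stands in for the uploaded file.
        ...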