In the validation test, we now use a different message for when there
is an endpoint in pending_endpoints with some documentation already.
This change is a bit hackish, but it's okay since we'll be removing it
once we've resolved all pending endpoints (which is bound to happen).
If a url doesn't have a scheme, browsers would treat it as a relative
url and open something like: https://chat.zulip.org/google.com instead.
This PR fixes the issue on the backend; the frontend implementation
remains out of sync and the user sending the message wouldn't see
any linkification for urls without a scheme.
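A minimal sketch of the backend fix, assuming a hypothetical helper name
(the real change lives in our markdown processor):
```
from urllib.parse import urlparse

def normalize_link_url(url):
    # Hypothetical helper: if the matched url has no scheme, prepend one
    # so browsers don't resolve it as a path relative to the current page.
    if not urlparse(url).scheme:
        return "https://" + url
    return url

assert normalize_link_url("google.com") == "https://google.com"
assert normalize_link_url("http://google.com") == "http://google.com"
```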
Fixes #12791.
The test_docs change is because Django runs test cases with DEBUG =
False, which ordinarily means it doesn’t serve /static during tests.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Make the sender name go in-line with the message body only if
the html starts with a <p> tag, since it won't look good
if the message starts with a code snippet, ul, etc.
If the message starts with a <p> tag, we can safely assume that
it can go in-line with the sender name.
As of commit 8c199fd44c (#12667) this
file is no longer generated. Handlebars compile errors are raised as
webpack errors.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The decorator running at import time was causing directory
creation in the project's root.
One could imagine linting for this, but it seems unlikely that similar
code will be added in the future; the problem one would be trying to
solve is already addressed by default in the framework now.
In the unlikely event that someone edited the properties of a system
bot and then saved the result, we were still caching the old version
indefinitely in the get_system_bot cache.
This led to a confusing case where a newly installed Zulip server
didn't have is_api_super_user properly set on its EMAIL_GATEWAY_BOT in
memcached.
Co-authored-by: Mateusz Mandera <mateusz.mandera@protonmail.com>
This commit adds a new setting to the user's notification settings that
will change the behaviour of the unread count in the title bar and
desktop application.
When enabled, the title bar will show the count of unread private messages
and mentions. When disabled, the title bar will act as before, showing
the total number of unread messages.
Fixes #1736.
Modified by punchagan to:
* Replace URLs with titles only if the inline url embed previews are turned on
* Add a test for youtube titles replacing URLs
The titles for the videos are fetched asynchronously, after the message has
been sent, via the code that fetches metadata for open graph previews. So, the URLs
are replaced with titles only if the inline embed url previews feature is
enabled.
Ideally, YouTube previews should be shown only if inline url previews are
enabled, but this feature is in beta, while YouTube previews are pretty stable.
Once this feature is out of beta, YouTube previews should be shown only if the
url previews feature is turned on.
YouTube preview image is calculated as soon as the message is sent, while the
title needs to be fetched using a network request. This means that the URL is
replaced only after the data has been fetched from the request, and happens a
couple of seconds after the message has been rendered.
Closes #7549
Messages with links embedded in blockquotes turn out to be replies to
messages with links, more often than not. Showing previews for links in
replies seems like clutter, and it seems reasonable to turn off previews for
such links.
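A sketch of the traversal idea, with assumed names (the real implementation
hooks into our markdown processor):
```
from xml.etree.ElementTree import fromstring

def links_eligible_for_preview(root):
    # Collect hrefs, skipping any <a> nested inside a <blockquote>,
    # so links quoted in replies don't produce previews.
    urls = []

    def walk(element, in_blockquote):
        in_blockquote = in_blockquote or element.tag == "blockquote"
        if element.tag == "a" and not in_blockquote:
            urls.append(element.get("href"))
        for child in element:
            walk(child, in_blockquote)

    walk(root, False)
    return urls

assert links_eligible_for_preview(fromstring(
    "<div><blockquote><a href='https://a.test'/></blockquote>"
    "<a href='https://b.test'/></div>")) == ["https://b.test"]
```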
Modified by punchagan to:
* Add a separate markdown test for de-duplicating inline previews
* Check the number of unique URLs to see if the per-message limit is crossed
* Use a set for processed URLs instead of a list
Fixes #8379.
Extract some logical segments of test_openapi_arguments into
individual (helper) functions, e.g. the regex to OpenAPI URL
format conversion and its testing.
The previous code for the validator test was fairly messy due to
checking for both formats of the openapi url, one with
<variable_name> and the other with {variable_name}. To eliminate
this, we have standardized the format and restricted it to
{variable_name} as per the official format at:
https://swagger.io/docs/specification/describing-parameters.
These updates are added as a direct result of the new strategy related
to the following refactorings:
* Having `do_export_realm` return the value of the tarball path.
See 6e187e974a4e6282d3616312bdfa19d0d2a949d1.
* Moving the upload logic for s3 and local tarball storage out of
`export_realm_wrapper` and into `upload.py`.
See f1041e1fb6cb60f2c53b294695245e4c86a4d40b.
Add new custom profile field type, External account.
External account field links user's social media
profile with account. e.g. GitHub, Twitter, etc.
Fixes part of #12302
Rename URL type custom profile field in populate db to avoid confusion
with the "GitHub profile" custom external account profile field we'll
be adding shortly.
We can simply archive cross-realm personal messages according to the
retention policy of the recipient's realm. It requires adding another
message-archiving query for this case however.
What remains is to figure out how to treat cross-realm huddle messages.
In addition to the test which checks to see if each endpoint in
code (urls.py) is documented in the openapi documentation (and with
the right arguments), we now also have a test to see if every
endpoint in the openapi documentation is a legitimate endpoint
also existing in code.
We do this by piggy-backing on the work done by the former test and
using set operations. This method avoids the need for an extra loop
and uses set operations for additional speed and ease of reading.
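For illustration, a sketch of the set arithmetic involved (endpoint
names here are made up):
```
openapi_endpoints = {"/messages", "/users", "/not_a_real_endpoint"}
code_endpoints = {"/messages", "/users", "/events"}

# Documented endpoints that don't exist in code fail the new test:
assert openapi_endpoints - code_endpoints == {"/not_a_real_endpoint"}
# The former test covers the other direction:
assert code_endpoints - openapi_endpoints == {"/events"}
```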
The main things targeted by the refactor are the usage of comments and
moving the top-level variables to the scope of the class.
The movement of variables was to facilitate allowing us to perform
a reverse mapping test from OpenAPI URLs -> Code defined URLs.
By importing a few view modules in the validation test itself we
can remove a few endpoints which were marked as buggy. What was
happening was that the view functions weren't imported and hence
the arguments map was not filled. Thus the test complained that
there was documentation for request parameters that seemed to be
missing in the code. Also, for the events register endpoint, we
have renamed one of the documented request parameters from
"stream" to "topic" (the API itself was not modified though).
We add a new "documentation_pending" attribute to req variables
so that any arguments that should be documented but currently are not
can be properly accounted for.
The conditional block containing the tarball upload logic for both S3
and local uploads was deconstructed and moved to the more appropriate
location within `zerver/lib/upload.py`.
This reverts commit f476ec7fac (#10312)
and replaces it with a proper fix using Jinja2 raw blocks.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We don’t need a hacked copy anymore. We run the installed version out
of node_modules in development, and a Webpack-bundled version of that
in production.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
In each url of urls.py, if we want to mark an endpoint as being
intentionally undocumented, then in the kwargs instead of directly
mapping like 'METHOD': 'zerver.views.package.foo', we can provide
a tag called 'intentionally_undocumented' and map like:
'METHOD': ('zerver.views.package.foo', {'intentionally_undocumented'}).
If an endpoint is marked as intentionally undocumented, but we find
some OpenAPI documentation for it then we'll throw an error, in which
case either the 'intentionally_undocumented' tag should be removed or
the faulty documentation should be removed.
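A sketch of what such an entry could look like (the URL pattern here is a
placeholder, and the view path is the example from above):
```
from django.conf.urls import url

from zerver.lib.rest import rest_dispatch

urlpatterns = [
    url(r'^example$', rest_dispatch,
        {'GET': ('zerver.views.package.foo',
                 {'intentionally_undocumented'})}),
]
```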
This will allow us to mark a REQ variable as intentionally
undocumented. With this, we can remove some of the endpoints marked
as "buggy" even though they're not actually buggy, we just needed to
specify certain parameters as intentionally undocumented (e.g. the
stream_id for the /users/me/subscriptions/muted_topics endpoint.)
Any REQ variable with intentionally_undocumented set to True
will not be added to the arguments_map data structure.
For some of the other "buggy" endpoints, we would want to mark the
entire endpoint as being undocumented intentionally via the urls.py
file.
This is a dramatic redesign of the look and feel of our missed-message
emails, designed to decrease the feeling of clutter and just provide
the content users care about in a clear, visible fashion.
This cleans up the reply_warning feature in favor of a more coherent
explanation of whether or not one can reply.
(Also, critically, it now advertises the ability to enable
missed-message email replies with some administrative configuration
work.)
In 93914d8cd8, we intended to change our
markdown processor to add support for multi-line /me messages.
However, we neglected to change the backend processor, resulting in
the change only taking effect for the user sending the message :(.
We fix this by changing the backend processor too.
Fixes #12450.
We reuse the link regexes we use elsewhere in markdown
for parsing links in topic names and add a button to open
them in new tabs, similar to our behavior with linkifiers
in topic names.
Fixes #12391.
When archiving Messages, we stop relying on LEFT JOIN ... IS NULL to
avoid duplicates when INSERTing. Instead we use ON CONFLICT DO UPDATE
(added in PostgreSQL 9.5): in case of archiving a Message that already
has a corresponding archived object (this happens if a Message gets
archived, restored, and then archived again), we re-assign the existing
ArchivedMessage to the new transaction.
This also allows us to fix test_archiving_messages_second_time, which
was temporarily disabled a few commits before.
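A simplified sketch of the new query shape (table and column names are
assumptions based on the models mentioned above; the real query moves
more fields):
```
from django.db import connection

query = """
    INSERT INTO zerver_archivedmessage (id, archive_transaction_id)
    SELECT zerver_message.id, %(archive_transaction_id)s
    FROM zerver_message
    WHERE zerver_message.id IN %(message_ids)s
    ON CONFLICT (id) DO UPDATE
        SET archive_transaction_id = EXCLUDED.archive_transaction_id
"""
with connection.cursor() as cursor:
    cursor.execute(query, {"archive_transaction_id": 1,
                           "message_ids": (1, 2, 3)})
```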
Instead of having a bunch of custom code in the function, we make it use
run_message_batch_query and run_archiving_in_chunks to do the necessary
operations in a consistent way, using the same codepaths as the rest of
the archiving system.
This breaks test_archiving_messages_second_time temporarily, but we will
fix it and re-enable the test in the next commits, where we'll address
various other issues with re-archiving of messages.
We also remove the @transaction.atomic wrapper, because atomicity is
handled by the logic inside run_archiving_in_chunks.
For storing HTTP headers as a function of fixture name, previously
we required that the fixture_to_headers method reside in a separate
module called headers.py.
However, since in many cases this method only takes a few lines, we
decided to move it into the integration's view.py file instead of
requiring a whole new file called headers.py.
This commit introduces this small change to the system architecture,
migrates the GitHub integration, and updates the docs accordingly.
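For instance, a sketch of the kind of small method that now lives in the
integration's view.py (the fixture naming convention shown is an
assumption):
```
def fixture_to_headers(fixture_name):
    # Derive the GitHub event header from a fixture named like
    # "<event>__<description>", e.g. "issues__opened" -> "issues".
    return {"HTTP_X_GITHUB_EVENT": fixture_name.split("__")[0]}
```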
The markup output changed, but the rendering is the same, so we
modified the expected output in the tests.
There is a regression introduced in one of the new versions of KaTeX,
which produces a warning in our node tests:
```
No character metrics for ' ' in style 'Main-Bold'
```
but the rendering is correct so we can ignore it.
Tracking issue: KaTeX/KaTeX#1994
Fixes #12472.
When parsing custom HTTP headers in the integrations dev panel, in the
fixtures system, and in send_webhook_fixture_message, we now use a
single source of logic: standardize_headers, which takes care of
converting a dictionary of input headers into the standard form that
Django expects.
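A sketch of what standardize_headers does, under assumptions about its
exact behavior:
```
def standardize_headers(input_headers):
    # Convert arbitrary header names into the HTTP_* form Django uses
    # internally; Django leaves Content-Type and Content-Length
    # unprefixed.
    standardized = {}
    for name, value in (input_headers or {}).items():
        key = name.upper().replace("-", "_")
        if key not in ("CONTENT_TYPE", "CONTENT_LENGTH") and \
                not key.startswith("HTTP_"):
            key = "HTTP_" + key
        standardized[key] = value
    return standardized

assert standardize_headers({"X-Event-Key": "push"}) == \
    {"HTTP_X_EVENT_KEY": "push"}
```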
Previously, our GitHub authentication backend just used the user's
primary email address associated with GitHub, which was a reasonable
default, but quite annoying for users who have several email addresses
associated with their GitHub account.
We fix this by adding a new screen where users can select which of
their (verified) GitHub email addresses to use for authentication.
This is implemented using the "partial" feature of the
python-social-auth pipeline system.
Each email is displayed as a button. Clicking on that button chooses
the email. The email value is stored in a hidden input above the
button. The `primary_email` is displayed on top followed by
`verified_non_primary_emails`. Backend name is also passed as
`backend` to the template, which in our case is GitHub.
Fixes #9876.
Using this system, we can now associate any fixture of any integration
with a particular set of HTTP headers. A helper method called
determine_http_headers was introduced, and the test suite was upgraded
to use determine_http_headers.
Comments and documentation significantly edited by tabbott.
This function is an alternative to get_admin_users that we use in all
places where we explicitly want only human administrative users (not
administrative bots). The following commits will rename
get_admin_users for better clarity.
We also document support for user IDs in the pm-with narrow operator.
Edited by tabbott to document on /api rather than in the /help page.
Fixes part of #9474.
Namely, here we add the "plan_includes_wide_organization_logo" and
"upgrade_text_for_wide_organization_logo" to the page_params (which
is set in zerver/lib/events.py).
"plan_includes_wide_organization_logo" is True if the plan is not of
the Realm.LIMITED type. We need to add this extra boolean parameter
instead of just using "realm_plan_type" to make things a lot easier
to work with on the frontend side, especially considering that
handlebars won't allow checking for equality in its {{#if}} blocks.
When a realm's plan type is updated using "do_change_plan_type" we
notify active users of the realm. This way certain plan features
could be enabled instantaneously for active users.
This fixes an issue that caused LDAP synchronization to fail for
avatars. The problem occurred due to the lack of a 'name' attribute
on the BytesIO object that we pass to the upload backend (which is
only used in the S3 backend for computing Content-Type).
Fixes #12411.
To ensure the database retains a consistent state if archiving gets
interrupted, we process each Messages chunk together with related
objects in a single atomic transaction.
Rename notification property `enable_stream_sounds` to
`enable_stream_audible_notifications` to match with other
notification property patterns.
Fixes part of #12304
We batch queries that archive Messages, to limit the maximum amount of
Message objects archived in a single query. This leads to the archiving
of other related objects being batched as well, because we loop over
chunks of archived messages and archive their related objects per-chunk.
This validation is incomplete, in large part because of the long list
of TODOs in this code. But this test should provide a ton of support
for us in avoiding regressions as we work towards having complete API
documentation.
See https://github.com/zulip/zulip/issues/12521 for a bunch of
follow-up improvements.
We add the following behavior:
If stream has message_retention_days set to -1, archiving for it is
disabled.
If stream has message_retention_days set to null, use the realm's
policy. If the realm has no policy, we don't archive for this stream.
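A sketch of the resolution order, with assumed attribute names:
```
def effective_retention_days(stream, realm):
    # -1 on the stream disables archiving outright; null defers to the
    # realm's policy, which may itself be unset (no archiving).
    if stream.message_retention_days == -1:
        return None
    if stream.message_retention_days is not None:
        return stream.message_retention_days
    return realm.message_retention_days
```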
We change the archiving scheme to allow having stream based retention
policies. In the first step of the archiving process, we loop over
streams and archive their expired messages and related objects.
Then we separately archive all expired personal and huddle messages and
related objects. As the last step, we scan for redundant attachments
which can now be deleted.
To achieve this, we have to rewrite a significant portion of the
retention code and rework some of the database queries.
For the sake of simplicity, we neither archive nor delete cross-realm
messages, except cross-realm stream messages – in their case they can
be processed in the same manner as ordinary stream messages.
In the query for archiving personal and huddle messages we simply
exclude those sent by cross-realm bots.
We change the tests to adapt to these modifications.
Previously, we didn't have validation to prevent editing certain flags
that don't make sense for a client to edit, like whether a user was
mentioned in a given message.
This isn't a security issue -- the user could only mess up their own
personal search results (etc.), but it does seem worth fixing to avoid
confusion for folks developing Zulip clients.
While we're at it, clearly document the situation in comments.
This adds a setting to control Zulip's default behavior of sorting to
bottom and graying out inactive streams. The previous logic is still
the default "automatic", but this gives users more control. See the
models.py comment for details.
Fixes #11524.
We add RETURNING to fetch relevant message and usermessage ids in
archiving queries and use them to make other queries faster and simpler.
A side-effect of this implementation is that with cross-realm messages,
the UserMessage of the recipient and the Message will not be deleted -
but cross-realm messages are rare, will still get correctly put in the
archive tables and so failing to delete should not be a problem for now.
They will be fully handled later.
In addition to the "+show-sender" option, we now add "+include-footers"
which disables stripping of the footer from the email body if this token
is included in the email address.
To enable a comfortable way of adding more optional tokens in the
address (like current '+show-sender') we change decode_email_address to
return a general dictionary containing options specified through adding
these optional tokens in the To: address. For now, we only have
"+show-sender", but more can be easily added using this change.
The RealmAuditLog object ID was stored in the event sent to the
deferred_work queue as a means to update the row's extra_data field.
The extra_data field then stores the location of the export.
Ensure that the html is safe before using it. The html is considered
safe if it is in an iframe with a http/https src, based on the
recommendations here:
https://oembed.com/#section3
We directly embed the `iframe` html into the lightbox overlay.
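A sketch of the check, assuming BeautifulSoup for parsing (the real code
may differ in details):
```
from urllib.parse import urlparse

from bs4 import BeautifulSoup

def is_safe_oembed_html(html):
    # Only trust html consisting of an iframe with an http/https src,
    # per the oEmbed security recommendations linked above.
    iframe = BeautifulSoup(html, "html.parser").find("iframe")
    if iframe is None or not iframe.get("src"):
        return False
    return urlparse(iframe["src"]).scheme in ("http", "https")
```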
We add general code that will archive models that are tied to a specific
Message (such as Reactions and SubMessages). Certain details of the
model are grabbed from a list models_with_message_key, and then used to
create queries that will archive these database tables.
We put Reaction in that list in this commit, and add appropriate tests.
To have archiving of other analogous models (for example SubMessage),
one only needs to make an appropriate entry in the
models_with_message_key list.
Previously, if you exported a Zulip organization and then re-imported
it, we'd end up renumbering the user IDs and all direct foreign key
references to them in the database, but not the data-user-id
references in mentions. Fix this by parsing the message content and
doing that renumbering.
(Because we import raw markdown, not HTML, from third-party tools,
these changes won't affect data import from slack etc.)
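A sketch of the renumbering step (function name and regex are
illustrative):
```
import re

def remap_mention_user_ids(rendered_content, user_id_map):
    # Rewrite data-user-id="<old>" attributes using the old-ID -> new-ID
    # mapping built while importing users.
    def substitute(match):
        old_id = int(match.group(1))
        return 'data-user-id="%d"' % user_id_map.get(old_id, old_id)

    return re.sub(r'data-user-id="(\d+)"', substitute, rendered_content)

assert remap_mention_user_ids('<span data-user-id="7">', {7: 42}) == \
    '<span data-user-id="42">'
```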
Fixes the high-priority part of #11293.
Modifies the dict with the user info to include the key `bot_owner_id`
so it can be displayed in the user info popover.
Tests concerned with changing bot owner have been modified to have
number of events=2 because while updating the bot info, two events
are fired -- updating the `realm_bot` and `realm_user` since the
key `bot_owner_id` is a part of realm user info.
Since positional arguments are interpreted differently by different
backends in Django's authentication backend system, it’s safer to
disallow them.
This had been the motivation for previously declaring the parameters
with default values when we were on Python 2, but that was not super
effective because Python has no rule against positional default
arguments and that convention for our authentication backends was
solely enforced by code review.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The `queue_data` variable is an intermediate step that's unnecessary.
Instead, the values from the queue event are assigned directly.
Also, the `worker` variable is not worth an assignment as it is only
referenced a single time per test case.
A FileNotFound error was set as the side-effect of the do_export_realm
mock and the DeferredWorker was made to consume the event explicitly.
Previously, the mock of do_export_realm was producing spammy output
as a result of a FileNotFound error coming from the queue processing of
`do_write_stats_file_for_realm_export`.
A unique path was created using the `LOCAL_UPLOADS_DIR` backend, similar
to the code used in `LocalUploadBackend`. The exported tarball was
copied to the directory, and an nginx url was created to serve the file
publicly.
Tweaked by tabbott to output an actual URL.
This cleans up the pattern for how we check which user is logged in
during Zulip's backend unit tests to be much more readable (replacing
the arcane session code that does this check).
test_retention.py had various issues - we opt for keeping its essence
(what should the tests do and verify), but rewriting a lot of it in
order to have more clarity in what's happening there.
We split archive_messages code into two functions: moving to archive and
cleanup. This allows cleaning up the tests - they can call
these functions directly instead of copying several lines of
archive_messages here and there in multiple tests.
test_cross_realm_messages_archiving_two_realm_expired doesn't run the
code path patched in commit 3d1aa98b2ea344fba7fbb2373a37d4cf30f53e08,
so it can still fail. We apply the analogous change in the test as
in the cited commit.
This is probably a good idea for the production use case, since then
there's some consistency of behavior, and if we extend logging, one
knows exactly which realms were or were not executed before a logged
failure.
This fixes the nondeterministic test failures we've been seeing in CI:
if you use `-id` in that order_by, it happens consistently.
Sending a PM from hamlet (consented) to othello is a case
of sending a message from a consented user to a non-consented
user. This results in the generation of more than one message
file during realm export. To handle this case, _export_realm
is updated.
The upload option will no longer be limited to strictly S3 uploads. This
commit serves as a preliminary step for supporting LOCAL_UPLOADS_DIR as
part of the public only export feature.
We've been seeing nondeterministic failures in this test suite in CI
that we can't reproduce locally; these print statements should help
track them down.
This is the only function in TestEmailMirrorLibrary, so we rename this
class to the more appropriate TestGetMissedMessageToken, clean it up a
bit, and add some extra checks to finally get email_mirror.py to 100%
test coverage.
log_and_report and its helper functions were mostly old code no longer
well adapted to how the email mirror works currently, as well as having
no test coverage. We rewrite this part of the email mirror to report
errors in a similar manner, and add tests for it. We're able to get rid
of the clunky and now useless debug_info dictionary in process_message,
as log_and_report only needs the recipient email in its third argument.
Mostly rewritten by Tim Abbott to ensure it correctly implements the
desired security model.
Administrators should have access to users' real email address so that
they can contact users out-of-band.
Clients won't have access to user email addresses, and thus won't be
able to compute gravatars.
The tests for this are a bit messy, in large part because our tests
for get_events call subsections of it, rather than the main function.
This provides a clean warning and 40x error, rather than a 500, for
this corner case which is very likely user error.
The test here is awkward because we have to work around
https://github.com/zulip/zulip/issues/12362.
The `LocalUploadBackend` returns a relative URL, while the `S3UploadBackend`
returns an absolute URL. This commit switches to using `urljoin` to obtain the
absolute URL, instead of simply joining strings.
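The behavior that makes urljoin work for both backends (hostnames here
are made up):
```
from urllib.parse import urljoin

base = "https://zulip.example.com"
# A relative path (LocalUploadBackend) is resolved against the base:
assert urljoin(base, "/user_uploads/export.tar.gz") == \
    "https://zulip.example.com/user_uploads/export.tar.gz"
# An absolute URL (S3UploadBackend) passes through unchanged:
assert urljoin(base, "https://s3.amazonaws.com/x/export.tar.gz") == \
    "https://s3.amazonaws.com/x/export.tar.gz"
```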
This commit also adds a small functionality change where the results of
each webhook fixture message sent is now displayed to the user.
With a small tweak by tabbott to fix a styling bug.
Fixes #12122.
Note: If you're going to send fixtures which are not JSON or of the
text/plain content type, make sure you set the correct content type
in the custom headers.
E.g. For the wordpress fixtures the "Content-Type" should be set to
"application/x-www-form-urlencoded".
This is a very old commit for #106, which has been on hiatus for a few
years. It was significantly modified by tabbott to:
* Improve coding style and variable names
* Update mypy annotations style
* Clean up the testing logic
* Update for API changes elsewhere in our system
But the actual runtime code is essentially unmodified from the
original work by Kirill.
It contains basic support for archiving Messages, UserMessages, and
Attachments with a nice test suite. It's still not usable in
production (e.g. it will probably break Reactions, SubMessages, etc.),
but upcoming commits will address that.
This commit introduces a simple field where the user can now specify custom
HTTP headers. This commit does not introduce an improved system for storing
HTTP headers as fixtures - such a change would modify both the existing unit
tests as well as this devtool.
This commit adds a new developer tool: The "integrations dev panel"
which will serve as a replacement for the send_webhook_fixture_message
management command as a way to test integrations with much greater ease.
This lets us handle directly in our tooling the user experience that
we document for exporting a realm with member consent (before, it
required unpleasant manual work).
We may be successfully able to get the page once, to get the content type, but
the server or network may go down and cause problems when fetching the page for
parsing its meta tags.
Currently, we only show previews for URLs which are HTML pages, which could
contain other media. We don't show previews for links to non-HTML pages, like
pdf documents or audio/video files. To verify that the URL posted is an HTML
page, we verify the content-type of the page, either using server headers or by
sniffing the content.
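A sketch of the verification using requests (simplified; as noted above,
the real code also falls back to sniffing the body when headers are
missing):
```
import requests

def is_html_page(url):
    # Check the server-declared content type before attempting to
    # fetch and parse the page's meta tags.
    try:
        response = requests.get(url, stream=True, timeout=5)
    except requests.RequestException:
        return False
    return response.headers.get("content-type", "").startswith("text/html")
```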
Closes #8358
We had some excessively tight rules about what characters were
allowed, which in particular prevented using `?foo=bar&baz=quux`
structures in the realm filters URLs.
Fixes #12239.
`youtube.com/playlist?list=<list-id>` incorrectly matches the regex since the
change in 8afda1c1bb. The regex was modified to
match URLs of the form `youtu.be/<id>`, and this playlist URL incorrectly
matches with the `<id>` set to `playlist`.
This commit avoids this match by verifying that the ID is not `playlist`.
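An illustrative, simplified version of the guard (the real pattern is
more involved, and this lookahead is slightly stricter than an exact
ID comparison):
```
import re

yt_id = re.compile(r'(?:youtube\.com/|youtu\.be/)(?!playlist)([\w-]+)')

assert yt_id.search('https://www.youtube.com/playlist?list=PLabc') is None
assert yt_id.search('https://youtu.be/dQw4w9WgXcQ').group(1) == 'dQw4w9WgXcQ'
```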
This renames Subscription.in_home_view field to is_muted, for greater
clarity as to what it does just from seeing the setting name, without
having to look it up.
Also disabled an obsolete test_migrations test.
Fixes #10042.