webpack optimizes JSON modules using JSON.parse("{…}"), which is
faster than the normal JavaScript parser.
Update the backend to use emoji_codes.json too instead of the three
separate JSON files.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This is required for the upcoming type behavior of the "anchor"
parameter.
This change is the minimal work required to have our OpenAPI code not
fail when checking a union-type value of this form. We'll likely want
to do something nicer in the future, but that would require more
extensive infrastructure for parsing OpenAPI data than is worth it with
our current approach (we may want to switch to using a library).
Previously, we didn't track opening and closing fences separately,
which led to bugs like not parsing a list that was immediately after
a quoted fence; we treated each ``` as a new fence.
This commit rewrites the function to maintain a stack of currently
open fences. If any of the parent fences is a code fence, we do not
insert a new line before a list.
We also add some test cases specifically to test this behavior with
complexly nested lists.
Fixes #13745.
In 3892a8afd8, we restructured the
system for managing uploaded files to a much cleaner model where we
just do parsing inside bugdown.
That new model had potentially buggy handling of cases around both
relative URLs and URLs starting with `realm.host`.
We address this by further rewriting the handling of attachments to
avoid regular expressions entirely, instead relying on urllib for
parsing, and having bugdown output `path_id` values, so that there's
no need for any conversions between formats outside bugdown.
The check_attachment_reference_change function for processing message
updates is significantly simplified in the process.
The new check on the hostname has the side effect of requiring us to
fix some previously weird/buggy test data.
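A rough sketch of the urllib-based direction (the helper name and exact
rules here are illustrative, not the real bugdown code):

    from typing import Optional
    from urllib.parse import urlparse

    def url_to_path_id(url: str, realm_host: str) -> Optional[str]:
        # Accept relative URLs and URLs on the realm's own host; anything
        # else is not one of our uploads.  No regular expressions involved.
        parsed = urlparse(url)
        if parsed.netloc and parsed.netloc != realm_host:
            return None
        prefix = "/user_uploads/"
        if parsed.path.startswith(prefix):
            return parsed.path[len(prefix):]
        return None

    # url_to_path_id("/user_uploads/2/ab/file.png", "chat.example.com")
    #     -> "2/ab/file.png"
    # url_to_path_id("https://other.example.com/user_uploads/x.png",
    #                "chat.example.com") -> None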
Co-authored-by: Anders Kaseorg <anders@zulipchat.com>
Co-authored-by: Rohitt Vashishtha <aero31aero@gmail.com>
This closes an open redirect vulnerability, one case of which was
found by Graham Bleaney and Ibrahim Mohamed using Pysa.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Previously, we would naively set has_attachment just by searching
the whole message for strings like `/user_uploads/...`. We now
avoid running do_claim_attachments on messages that obviously do
not contain an attachment, which we previously ran it on; for
example, attachments in code blocks or attachments that otherwise
do not match our link syntax.
The new implementation runs that check on only the urls that
bugdown determines should be rendered. We also refactor some
Attachment tests in test_messages to test this change.
The new method is:
1. Create a list of potential_attachment_urls in Bugdown while rendering.
2. Loop over this list in do_claim_attachments for the actual claiming.
For saving:
3. If we claimed an attachment, set message.has_attachment to True.
For updating:
3. If claimed_attachment != message.has_attachment: update has_attachment.
We do not modify the logic for 'unclaiming' attachments when editing.
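A sketch of the shape of that flow (the helper below is hypothetical;
`claim` stands in for the real do_claim_attachments machinery):

    from typing import Callable, List

    def claim_rendered_attachments(
        potential_attachment_path_ids: List[str],
        claim: Callable[[str], bool],
    ) -> bool:
        # Step 2: loop only over the path_ids bugdown actually rendered.
        claimed = False
        for path_id in potential_attachment_path_ids:
            claimed = claim(path_id) or claimed
        return claimed

    # Step 3 when saving:   if claimed: message.has_attachment = True
    # Step 3 when updating: if claimed != message.has_attachment:
    #                           message.has_attachment = claimed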
add_a, add_oembed_data and add_embed are only called by
InlineInterestingLinksProcessor, and this commit allows
these methods to access the self.markdown object.
This commit has the side effect that we now also allow mixed lists,
but they have different syntax from the CommonMark implementation
and our marked output. For example, without the closing li tags:
Input      Bugdown      Marked
-------------------------------------------
           <ul>
- Hello    <li>Hello    <ul><li>Hello</ul>
+ World    <li>World    <ul><li>World
+ Again    <li>Again    <li>Again</ul>
* And      <li>And      <ul><li>And
* Again    <li>Again    <li>Again</ul>
           </ul>
The bugdown render is in line with what a user in #13447 requests.
Fixes #13477.
We used to specify the securityScheme for each REST operation separately.
This is unnecessary, as the securityScheme can be specified at the root
level and is automatically applied to all operations. This also prevents
us from accidentally not specifying the securityScheme for some operations,
as was the case for the /users/me/subscriptions PATCH endpoint. The
root-level securityScheme can also be overridden at the operation level
when necessary.
swagger.io/docs/specification/authentication/#security
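Sketched as a Python dict (key names follow the OpenAPI 3 spec; the scheme
name and the override example are illustrative, not necessarily what our
zulip.yaml uses):

    openapi_spec = {
        # Root level: applies to every operation automatically.
        "security": [{"basicAuth": []}],
        "components": {
            "securitySchemes": {
                "basicAuth": {"type": "http", "scheme": "basic"},
            },
        },
        "paths": {
            "/users/me/subscriptions": {
                "patch": {
                    # No per-operation security block needed any more.
                },
            },
            "/some/unauthenticated/endpoint": {
                "get": {
                    # Operation-level override, when necessary.
                    "security": [],
                },
            },
        },
    }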
We also switch the underlying extract_mention_text method to use
a regular for loop, as well as make the related methods return
tuples of (names, is_wildcard). This abstraction is hidden from the
MentionData callers behind mention_data.message_has_wildcards().
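A rough sketch of the new shape (names and the wildcard set here are
illustrative):

    from typing import Set, Tuple

    WILDCARDS = {"all", "everyone", "stream"}

    def extract_mention_text(matched_names: Set[str]) -> Tuple[Set[str], bool]:
        # A plain for loop that separates real names from wildcard mentions.
        names: Set[str] = set()
        is_wildcard = False
        for name in matched_names:
            if name in WILDCARDS:
                is_wildcard = True
            else:
                names.add(name)
        return names, is_wildcard

    # MentionData then exposes only mention_data.message_has_wildcards(),
    # so callers never see the tuple directly.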
Concerns #13430.
new_name and description params should be valid JSON
strings. The format of these params is marked as
json so that the curl example generator can convert
them into JSON strings.
This brings us in line, and also allows us to style these more like
unordered lists, which is visually more appealing.
On the backend, we now use the default list blockprocessor + sane list
extension of python-markdown to get proper list markup; on the
frontend, we mostly return to upstream's code as they have followed
CommonMark on this issue.
Using <ol> here necessarily removes the behaviour of not renumbering
on lists written like 3, 4, 7; hopefully users will be OK with the
change.
Fixes #12822.
Apparently, due to poor naming of the outer capture group we use to
separate the actual match from the surrounding whitespace (etc.) that
determines whether the syntax is a possible linkifier start/end, if
you created a linkifier using "name" as the capture group, we'd try to
compile a pattern with two capture groups called "name", which would
500, preventing anyone from accessing the organization.
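The failure mode is reproducible with plain `re` (the surrounding pattern
below is illustrative; the point is the duplicate group name):

    import re

    linkifier_pattern = r"#(?P<name>[0-9]+)"  # user-defined group called "name"
    # The old code also named its outer whitespace-separating group "name":
    combined = r"(?P<name>\s|^)" + linkifier_pattern
    try:
        re.compile(combined)
    except re.error as e:
        print(e)  # redefinition of group name 'name' as group 2; was group 1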
This commit wraps up the major work that we held back when upgrading
py-markdown 2.6.11 to 3.0.1. Since we were making our custom changes
to the link syntax, at the time we stuck to using the old method of
parsing links. This lays the groundwork for further changes to our
link and image link handling, and brings us on par with upstream.
Also, we now better document the ways in which our link handling is
different from upstream.
Our new curl example generation logic was broken, in that it hardcoded
localhost:9991 (without an HTTP method or anything) as the API URL.
It requires a bit of plumbing to make this possible.
Now we can also include extra keyword arguments to specify
modifications in how the example code should be generated
in the generate_code_example template tag.
E.g. generate_code_example(curl, exclude=["param1", "param2"])
This commit extends api_code_examples.py to support automatically
generating cURL examples from the OpenAPI documentation. This way
work won't have to be repeated and we can also drastically reduce
the chance of introducing faulty cURL examples (via an automated
test which can now be easily created).
This gives us access to typing_extensions.Deque, which was not added
to typing until 3.5.4.
(PROVISION_VERSION is not bumped because the transitive dependency set
in dev.txt hasn’t changed.)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Our implementation requires at least 1 space after the
'#' so as not to break existing linkifiers like '#123', etc.,
that generally follow the convention we show in linkifier
examples.
- [valid] : # Hello
- [valid] : # Hello
- [invalid]: #Hello
For the frontend, we have taken the code from v0.7.0 of
upstream marked and made minor changes to avoid having
to refactor a significant part of our marked code.
For the backend, we merely have to change the regex to
force require spaces after #, and add hashheader to our
list of blockparsers.
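A sketch of the difference (illustrative regexes, not the exact upstream or
bugdown patterns):

    import re

    # Upstream-style: '#Hello' and '# Hello' both start a heading.
    upstream_header = re.compile(r"^(?P<level>#{1,6})\s*(?P<header>.*?)#*$")
    # Our variant: require at least one space after the #s, so that
    # linkifier-style references like '#123' keep working.
    zulip_header = re.compile(r"^(?P<level>#{1,6})\s+(?P<header>.*?)#*$")

    for text in ["# Hello", "#Hello", "#123"]:
        print(text, bool(upstream_header.match(text)), bool(zulip_header.match(text)))
    # '# Hello' matches both; '#Hello' and '#123' match only the upstream one.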
Fixes #11418.
Django’s default FileSystemFinder disallows STATICFILES_DIRS from
containing STATIC_ROOT (by raising an ImproperlyConfigured exception),
because STATIC_ROOT is supposed to be the result of collecting all the
static files in the project, not one of the potentially many sources
of static files.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
If a url doesn't have a scheme, browsers would treat it as a relative
url and open something like: https://chat.zulip.org/google.com instead.
This PR fixes the issue on the backend; the frontend implementation
remains out of sync and the user sending the message wouldn't see
any linkification for urls without a scheme.
Fixes #12791.
Modified by punchagan to:
* Replace URLs with titles only if the inline url embed previews are turned on
* Add a test for youtube titles replacing URLs
The titles for the videos are fetched asynchronously after the message has been
sent via the code that fetches metadata for open graph previews. So, the URLs
are replaced with titles only if the inline embed url previews feature is
enabled.
Ideally, YouTube previews should be shown only if inline url previews are
enabled, but this feature is in beta, while YouTube previews are pretty stable.
Once this feature is out of beta, YouTube previews should be shown only if the
url previews feature is turned on.
The YouTube preview image is calculated as soon as the message is sent, while the
title needs to be fetched using a network request. This means that the URL is
replaced only after the data has been fetched from the request, and happens a
couple of seconds after the message has been rendered.
Closes #7549
Messages with links embedded in blockquotes turn out to be replies to
messages with links, more often than not. Showing previews for links in
replies seems like clutter, and it seems reasonable to turn off previews for
such links.
Modified by punchagan to:
* Add a separate markdown test for de-duplicating inline previews
* Check the number of unique URLs to see if the per-message limit is crossed
* Use a set for processed URLs instead of a list (see the sketch below)
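A minimal sketch of that logic (the limit constant and the behavior when
the limit is exceeded are illustrative assumptions):

    from typing import Iterable, List

    INLINE_URL_EMBED_PREVIEW_LIMIT = 5  # illustrative, not the real setting

    def urls_to_preview(rendered_urls: Iterable[str]) -> List[str]:
        processed = set()      # set, not list: de-duplicates repeated links
        unique_urls = []
        for url in rendered_urls:
            if url not in processed:
                processed.add(url)
                unique_urls.append(url)
        if len(unique_urls) > INLINE_URL_EMBED_PREVIEW_LIMIT:
            return []          # too many unique links; skip previews entirely
        return unique_urls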
Fixes #8379.
We reuse the link regexes we use elsewhere in markdown
for parsing links in topic names and add a button to open
them in new tabs, similar to our behavior with linkifiers
in topic names.
Fixes #12391.
Ensure that the html is safe before using it. The html is considered safe
only if it is in an iframe with an http/https src, based on the recommendations here:
https://oembed.com/#section3
We directly embed the `iframe` html into the lightbox overlay.
`youtube.com/playlist?list=<list-id>` incorrectly matches the regex since the
change in 8afda1c1bb. The regex was modified to
match URLs of the form `youtu.be/<id>` and this playlist URL incorrectly matches
with the `<id>` set to `playlist`.
This commit avoids this match by verifying that the extracted ID is not the
literal string `playlist`.
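A sketch of the check (the regex below is a simplified stand-in for the
real bugdown pattern):

    import re
    from typing import Optional

    YOUTUBE_RE = re.compile(
        r"^(?:https?://)?(?:www\.)?"
        r"(?:youtu\.be/|youtube\.com/)"
        r"(?:watch\?v=)?"
        r"(?P<id>[0-9A-Za-z_-]+)"
    )

    def youtube_id(url: str) -> Optional[str]:
        m = YOUTUBE_RE.match(url)
        if m is None:
            return None
        if m.group("id") == "playlist":
            # youtube.com/playlist?list=... would otherwise match, with the
            # "id" group capturing the literal word "playlist".
            return None
        return m.group("id")

    # youtube_id("https://youtu.be/dQw4w9WgXcQ")              -> "dQw4w9WgXcQ"
    # youtube_id("https://www.youtube.com/playlist?list=abc") -> None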
When an emoji is nested inside another inline tag - like em or strong -
it was getting double processed because of the way the inlinePattern
TreeProcessor runs (it runs recursively). With this fix, we set the
inner text of the emoji span as an AtomicString, preventing us from
double processing the emoji's text.
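A sketch of the fix (the span's attributes here are illustrative;
AtomicString is the relevant piece):

    import xml.etree.ElementTree as etree

    from markdown.util import AtomicString

    def make_emoji_span(codepoint: str, display_string: str) -> etree.Element:
        span = etree.Element("span")
        span.set("class", "emoji emoji-%s" % (codepoint,))
        span.set("title", display_string.strip(":"))
        # AtomicString tells the recursive inline-pattern treeprocessor not
        # to run inline patterns over this text again.
        span.text = AtomicString(display_string)
        return span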
Fixes #11621
Test Plan:
* Add test case for **😄**, verify it passes.
* Go into local dev server and send "**😄**" to self and verify the DOM
does not have double <span> tags for the emoji.
* Run zerver.tests.test_push_notifications and verify the markdown test case matches
the text_content field properly
This fixes an issue where a hanging unordered list was not
rendering inside a blockquote; the problem was that we were not
adding an empty line (to satisfy the markdown) for a hanging
unordered list if it is inside a blockquote. Both blockquotes
and code blocks are fenced, but we only want to avoid rendering
the list if it's in a code block, not in a blockquote.
Fixes: #11916.
This is in response to a support ticket where the user had a closed left
sidebar, had added an organization, and then couldn't figure out how to
switch organizations. They had googled and found "The desktop app makes it
easy to switch between different organizations" in our help docs, which was
not sufficiently helpful.
Some urls which end with image file extensions (e.g. .jpg) may link to
html pages. This adds handling for linx.li, wikipedia.org and
pasteboard.co. If it is possible, we redirect to the actual image url;
otherwise, we do not attempt to render it as an image.
Fixes #10438.
This reverts commit ff90c0101c but keeps
the test cases added for reference.
This was reverted because it was not a clean solution and it created
other realm filter bugs involving dashes (etc.).
This fixes an issue where an invalid emoji name prevents subsequent
emojis from rendering.
This reverts the code change in
8842349629, while still passing the
tests added in that commit (it seems the original commit had
misdiagnosed an ordering bug and thus introduced this issue).
Fixes: #11770.
This commit leverages the Aho-Corasick algorithm to build a set of user_ids
that have their alert_words present in the message. It runs in time linear
in the length of the input message, as opposed to the number of
alert_words. This is after building an ahocorasick Automaton, which takes
O(number of alert_words in the entire realm) and is usually cached.
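A sketch of the idea using the pyahocorasick package (function names and
the exact payload stored per word are illustrative; the real code also has
to respect word boundaries around each match):

    from typing import Dict, List, Set

    import ahocorasick  # the pyahocorasick package

    def build_alert_word_automaton(
        alert_words_by_user: Dict[int, List[str]]
    ) -> "ahocorasick.Automaton":
        # O(total number of alert_words in the realm); cache the result.
        automaton = ahocorasick.Automaton()
        for user_id, words in alert_words_by_user.items():
            for word in words:
                word = word.lower()
                if automaton.exists(word):
                    automaton.get(word).add(user_id)
                else:
                    automaton.add_word(word, {user_id})
        automaton.make_automaton()
        return automaton

    def users_with_alert_words(
        automaton: "ahocorasick.Automaton", content: str
    ) -> Set[int]:
        # Single pass, linear in the length of the message content.
        user_ids: Set[int] = set()
        for _end_index, watching_user_ids in automaton.iter(content.lower()):
            user_ids |= watching_user_ids
        return user_ids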
This fixes an issue where blank lines between blocks were causing
auto-numbering of a list to stop at the blank line, resulting
in two separate numbered lists instead of one.
Edited significantly by tabbott to explain the tricky details in the
comments.
Fixes: #11651.
This allows us to have some features using bugdown rendering where
inline image previews will not be rendered (which would be problematic
for e.g. stream descriptions).
This change should help people discover silent mentions as a
part of Zulip syntax, while distinguishing them from regular
mentions.
This changes the border-radius to 6px for the tabbed display, which is not
in line with the current Zulip style for border-radius (4px). However 6px
really looks a lot better for this (possibly because it's a bigger box than
most of our other boxes?)
Earlier, our realm filters didn't render for languages that do not
use spaces (e.g. Japanese), since we used to check for the presence
of an actual space character. This commit replaces that logic with
a complex scheme to detect word boundaries.
Also, we convert the RealmFilterPattern to subclass InlineProcessor
and make use of the new no-op feature in py-markdown 3.0.1 where we
can tell py-markdown that our pattern didn't find a match despite
the initial regex getting matched.
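A rough sketch of that pattern (the boundary check and the link target are
placeholders for the real, more complex scheme):

    import xml.etree.ElementTree as etree

    from markdown.inlinepatterns import InlineProcessor

    class RealmFilterPatternSketch(InlineProcessor):
        def handleMatch(self, m, data):
            before = data[: m.start()]
            after = data[m.end():]
            boundary_before = not before or not before[-1].isalnum()
            boundary_after = not after or not after[0].isalnum()
            if not (boundary_before and boundary_after):
                # py-markdown 3.0+ no-op: report that we didn't match after
                # all, even though the regex itself did.
                return None, None, None
            el = etree.Element("a")
            el.set("href", "https://example.com/%s" % (m.group(0),))
            el.text = m.group(0)
            return el, m.start(), m.end()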
Fixes #9883.
Since we are building our parser from scratch now:
1. We have control over which processor goes at what priority number.
Thus, we have also shifted the deprecated `.add()` calls to use the
new `.register()` calls with explicit priorities, but maintaining
the original order that the old method generated.
2. We do not have to remove the processors added by py-markdown that
we do not use in Zulip; we explicitly add only the processors we
do require.
3. We can cluster the building of each type of parser in one place,
and in the order they need to be so that when we register them,
there is no need to sort the list. This also makes for a huge
improvement in the readability of the code, as all the components
of each type are registered in the same function.
These are significant performance improvements, because we save on
calls to `str.startswith` in `.add()`, all the resources taken to
generate the default to-be-removed processors and the time taken to
sort the list of processors.
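A small sketch of the registration pattern described above (the processors
and priority numbers are illustrative, not the real Bugdown set):

    import markdown
    from markdown.extensions import Extension
    from markdown.inlinepatterns import SimpleTagInlineProcessor

    class MiniExtensionSketch(Extension):
        def extendMarkdown(self, md: markdown.Markdown) -> None:
            # Explicit priorities: higher numbers run earlier, so there is
            # no need to sort or to rely on deprecated .add() placement.
            md.inlinePatterns.register(
                SimpleTagInlineProcessor(r"(\*\*)(.+?)\*\*", "strong"), "strong", 175
            )
            md.inlinePatterns.register(
                SimpleTagInlineProcessor(r"(\*)(.+?)\*", "em"), "em", 174
            )

    md = markdown.Markdown(extensions=[MiniExtensionSketch()])
    print(md.convert("**bold** and *italic*"))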
Following are the profiling results for the changes made. Here, we
build 10 engines one after the other and note the time taken to build
each of them. 1st pass represents the state after this commit and 2nd
pass represents the state after some regex modifications in the commits
that follow by Steve Howell. All times are in microseconds.
| nth Engine | Old Time | 1st Pass | 2nd Pass |
| ---------- | -------- | -------- | -------- |
| 1          | 92117.0  | 81775.0  | 76710.0  |
| 2          | 1254.0   | 558.0    | 341.0    |
| 3          | 1170.0   | 472.0    | 305.0    |
| 4          | 1155.0   | 519.0    | 301.0    |
| 5          | 1170.0   | 546.0    | 326.0    |
| 6          | 1271.0   | 609.0    | 416.0    |
| 7          | 1125.0   | 459.0    | 299.0    |
| 8          | 1146.0   | 476.0    | 390.0    |
| 9          | 1274.0   | 446.0    | 301.0    |
| 10         | 1135.0   | 451.0    | 297.0    |
We avoid re-computing the regex string here, and we
also avoid re-compiling the regex itself.
I decided to put the "one_time" decorator in the
bugdown file itself, just to reduce friction for
folks reading the "buyer beware" comments.
Unfortunately, we can't use this for the
get_web_link_regex() function due to testing concerns,
so that continues to use an inelegant cache-with-global-var
scheme.
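A minimal sketch of what such a decorator looks like (the real one in
bugdown may differ in details):

    from typing import Callable, TypeVar

    ReturnT = TypeVar("ReturnT")

    def one_time(method: Callable[[], ReturnT]) -> Callable[[], ReturnT]:
        # Buyer beware: the wrapped function is called exactly once and the
        # result is cached forever, with no invalidation.  That is why it
        # isn't used for get_web_link_regex(), which has the testing
        # concerns mentioned above.
        cached = []

        def wrapper() -> ReturnT:
            if not cached:
                cached.append(method())
            return cached[0]

        return wrapper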
We use early-exit to flatten the code.
I also tweaked the comments a bit based on some recent
profile findings. (e.g. reading the file isn't actually
a big bottleneck, it's more the regex itself)
On the backend, we extend the BlockQuoteProcessor's clean function,
which just removes '>' from the start of each line, to convert each
mention to the silent mention syntax before UserMentionPattern is
invoked.
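A sketch of that extension (the mention regex here is simplified compared
to the real bugdown one):

    import re

    from markdown.blockprocessors import BlockQuoteProcessor

    MENTION_RE = re.compile(r"@\*\*([^*]+)\*\*")

    class SilencingBlockQuoteProcessor(BlockQuoteProcessor):
        def clean(self, line: str) -> str:
            # Upstream's clean() just strips the leading '>'; additionally
            # rewrite @**name** to the silent form @_**name**, so that
            # UserMentionPattern later renders quoted mentions silently.
            line = super().clean(line)
            return MENTION_RE.sub(lambda m: "@_**%s**" % (m.group(1),), line)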
The frontend, however, has an edge case where if you are mentioned in
some message and you quote it while having mentioned yourself above
the quoted message, you wouldn't see the red highlight till we get the
final rendered message from the backend.
This is such a subtle glitch that it's likely not worth worrying about.
Fixes #8025.
These mentions look like regular mentions except they do not
trigger any notification for the person mentioned. These are
primarily to be used when you make a bot take an action and
the bot mentions you, or when you quote a message that mentions
you.
Fixes #11221.