Django’s default FileSystemFinder disallows STATICFILES_DIRS from
containing STATIC_ROOT (by raising an ImproperlyConfigured exception),
because STATIC_ROOT is supposed to be the result of collecting all the
static files in the project, not one of the potentially many sources
of static files.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
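As an illustration of the constraint (the paths here are hypothetical, not Zulip's actual layout), a settings module that keeps the two settings disjoint looks like this:

```python
# settings.py -- hypothetical paths, only to illustrate the constraint.
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

# Sources that collectstatic (and FileSystemFinder) read from.
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, "static"),
]

# Destination that collectstatic writes to; if this directory were listed
# in STATICFILES_DIRS, FileSystemFinder would raise ImproperlyConfigured.
STATIC_ROOT = os.path.join(BASE_DIR, "prod-static")
```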
If a URL doesn't have a scheme, browsers treat it as a relative
URL and open something like https://chat.zulip.org/google.com instead.
This PR fixes the issue on the backend; the frontend implementation
remains out of sync, so the user sending the message won't see
any linkification for URLs without a scheme.
Fixes #12791.
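A minimal sketch of the backend idea, with an illustrative helper name and a simplified scheme regex (not the actual Zulip linkification code):

```python
import re

# Hypothetical helper: give scheme-less URLs an explicit scheme so the
# generated href is absolute instead of relative to the current page.
SCHEME_RE = re.compile(r"^[a-zA-Z][a-zA-Z0-9+.-]*://")

def href_for(url: str) -> str:
    if SCHEME_RE.match(url):
        return url
    return "http://" + url

assert href_for("google.com") == "http://google.com"
assert href_for("https://chat.zulip.org") == "https://chat.zulip.org"
```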
Modified by punchagan to:
* Replace URLs with titles only if the inline url embed previews are turned on
* Add a test for youtube titles replacing URLs
The titles for the videos are fetched asynchronously after the message has been
sent via the code that fetches metadata for open graph previews. So, the URLs
are replaced with titles only if the inline embed url previews feature is
enabled.
Ideally, YouTube previews should be shown only if inline url previews are
enabled, but this feature is in beta, while YouTube previews are pretty stable.
Once this feature is out of beta, YouTube previews should be shown only if the
url previews feature is turned on.
The YouTube preview image is calculated as soon as the message is sent, while the
title needs to be fetched using a network request. This means that the URL is
replaced only after the data has been fetched from the request, and happens a
couple of seconds after the message has been rendered.
Closes #7549
Messages with links embedded in blockquotes turn out to be replies to
messages with links, more often than not. Showing previews for links in
replies seems like clutter, and it seems reasonable to turn off previews for
such links.
Modified by punchagan to:
* Add a separate markdown test for de-duplicating inline previews
* Check the number of unique URLs to see if the per-message limit is crossed
* Use a set for processed URLs instead of a list (see the sketch below)
Fixes #8379.
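A rough sketch of the de-duplication logic described above (the limit constant, function name, and the behavior when the limit is crossed are illustrative, not Zulip's exact values):

```python
from typing import List, Set

MAX_UNIQUE_URLS_PER_MESSAGE = 10  # illustrative limit, not the actual value

def urls_to_preview(found_urls: List[str]) -> List[str]:
    processed_urls: Set[str] = set()
    unique_urls = []
    for url in found_urls:
        if url in processed_urls:
            continue  # de-duplicate: at most one preview per unique URL
        processed_urls.add(url)
        unique_urls.append(url)
    if len(processed_urls) > MAX_UNIQUE_URLS_PER_MESSAGE:
        return []  # per-message limit crossed: skip previews entirely
    return unique_urls
```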
We reuse the link regexes we use elsewhere in markdown
for parsing links in topic names and add a button to open
them in new tabs similar to our behavior with linkifiers
in topic names.
Fixes #12391.
Ensure that the HTML is safe before using it. The HTML is considered safe if it
is an iframe with an http/https src, based on the recommendations here:
https://oembed.com/#section3
We directly embed the `iframe` html into the lightbox overlay.
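A hedged sketch of such a check using BeautifulSoup (not necessarily the exact implementation):

```python
from urllib.parse import urlparse

from bs4 import BeautifulSoup

def is_safe_oembed_html(html: str) -> bool:
    # Accept the provider's html only if it is a single <iframe> whose
    # src uses http or https, per https://oembed.com/#section3.
    fragment = BeautifulSoup(html, "html.parser")
    tags = fragment.find_all(True)
    if len(tags) != 1 or tags[0].name != "iframe":
        return False
    src = tags[0].get("src", "")
    return urlparse(src).scheme in ("http", "https")
```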
`youtube.com/playlist?list=<list-id>` incorrectly matches the regex since the
change in 8afda1c1bb. The regex was modified to
match URLs of the form `youtu.be/<id>` and this playlist URL incorrectly matches
with the `<id>` set to `playlist`.
This commit avoids this match by verifying that the ID is not playlist.
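An illustrative regex in the spirit of the broken one, plus the extra check (the real pattern in bugdown is more involved):

```python
import re

# Illustrative regex: it happily matches youtube.com/playlist?list=...
# with the captured "id" set to "playlist", so the extracted id has to
# be checked explicitly.
YOUTUBE_RE = re.compile(
    r"https?://(?:www\.)?(?:youtube\.com/|youtu\.be/)"
    r"(?:watch\?v=)?([0-9A-Za-z_-]+)"
)

def youtube_id(url):
    match = YOUTUBE_RE.search(url)
    if match is None or match.group(1) == "playlist":
        return None
    return match.group(1)

assert youtube_id("https://youtu.be/dQw4w9WgXcQ") == "dQw4w9WgXcQ"
assert youtube_id("https://www.youtube.com/playlist?list=PL123") is None
```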
When an emoji is nested inside another inline tag - like em or strong -
it was getting double processed because of the way the inlinePattern
TreeProcessor runs (it runs recursively). With this fix, we set the
inner text of the emoji span as an AtomicString, preventing us from
double processing the emoji's text.
Fixes #11621
Test Plan:
* Add test case for **😄**, verify it passes.
* Go into local dev server and send "**😄**" to self and verify the DOM
does not have double <span> tags for the emoji.
* Run zerver.tests.test_push_notifications and verify the markdown test case matches
the text_content field properly
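A minimal sketch of the AtomicString approach (the helper name is illustrative; the real emoji-rendering code builds a richer span):

```python
from xml.etree.ElementTree import Element

from markdown.util import AtomicString

def make_emoji_span(emoji_name: str, text: str) -> Element:
    span = Element("span")
    span.set("class", "emoji emoji-{}".format(emoji_name))
    # AtomicString tells Python-Markdown's inline tree processor not to
    # run inline patterns over this text again on recursive passes, so
    # the emoji is not processed a second time inside em/strong.
    span.text = AtomicString(text)
    return span
```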
This fixes an issue where a hanging unordered list was not
rendering inside a blockquote; the problem was that we were not
adding the empty line (which Markdown needs) before a hanging
unordered list when it appears in a blockquote. Both blockquotes
and code blocks are fenced, but we want to avoid rendering the
list when it's in a code block, while still rendering it in a
blockquote. A sketch of that distinction follows.
Fixes: #11916.
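A hedged sketch of that distinction (simplified regexes and function name, not the actual preprocessor):

```python
import re

CODE_FENCE_RE = re.compile(r"^ {0,3}(?:`{3,}|~{3,})")
BLOCKQUOTE_RE = re.compile(r"^ {0,3}>")
LIST_ITEM_RE = re.compile(r"^(?: {0,3}>)* {0,3}[*+-] ")

def insert_blank_lines(lines):
    output = []
    in_code_block = False
    for i, line in enumerate(lines):
        if CODE_FENCE_RE.match(line):
            in_code_block = not in_code_block
        hanging_list_start = (
            not in_code_block              # leave fenced code blocks alone
            and i > 0
            and LIST_ITEM_RE.match(line)   # this line starts a list item...
            and lines[i - 1].strip()
            and not LIST_ITEM_RE.match(lines[i - 1])  # ...hanging off text
        )
        if hanging_list_start:
            # Add the empty line Markdown needs, keeping it inside the
            # blockquote when the list itself is quoted.
            output.append("> " if BLOCKQUOTE_RE.match(line) else "")
        output.append(line)
    return output
```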
Some URLs that end with image file extensions (e.g. .jpg) may link to
HTML pages. This adds handling for linx.li, wikipedia.org and
pasteboard.co. Where possible, we redirect to the actual image URL;
otherwise we do not attempt to render it as an image.
Fixes #10438.
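The general shape of this handling, with hypothetical per-host rules (the real rewrite rules for these sites differ):

```python
from typing import Optional
from urllib.parse import urlparse

def actual_image_url(url: str) -> Optional[str]:
    parsed = urlparse(url)
    host = parsed.netloc.lower()
    if host.endswith("wikipedia.org") and parsed.path.startswith("/wiki/File:"):
        # Hypothetical rule: "File:" pages are HTML, not images, and we
        # cannot cheaply derive the real image URL, so skip the preview.
        return None
    if host == "linx.li":
        # Hypothetical rewrite to the direct image resource.
        return "https://linx.li/s" + parsed.path
    return url  # assume other URLs already point at the image itself
```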
This reverts commit ff90c0101c but keeps
the test cases added for reference.
This was reverted because it was not a clean solution, and it created
other realm filter bugs involving dashes (etc.).
This fixes an issue where an invalid emoji name prevents the following
emojis from rendering.
This reverts the code change in
8842349629, while still passing the
tests added in that commit (it seems the original commit had
misdiagnosed an ordering bug and thus introduced this issue).
Fixes: #11770.
This commit leverages the Aho-Corasick algorithm to build the set of user_ids
that have their alert_words present in the message. It runs in time linear in
the length of the input message, as opposed to the number of
alert_words. This is after building an Aho-Corasick automaton, which runs
in O(number of alert_words in the entire realm) and is usually cached.
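A hedged sketch of the approach using the pyahocorasick library (the real code also checks word boundaries around each match):

```python
import ahocorasick

def build_automaton(alert_words_by_user):
    # Built once per realm and cached; cost is proportional to the total
    # number of alert words in the realm.
    automaton = ahocorasick.Automaton()
    for user_id, words in alert_words_by_user.items():
        for word in words:
            key = word.lower()
            # Several users can share the same alert word.
            user_ids = automaton.get(key, set())
            user_ids.add(user_id)
            automaton.add_word(key, user_ids)
    automaton.make_automaton()
    return automaton

def users_with_alert_words(automaton, message_content):
    # Scanning is linear in the length of the message, independent of the
    # number of alert words registered in the automaton.
    user_ids = set()
    for _end_index, matched_user_ids in automaton.iter(message_content.lower()):
        user_ids |= matched_user_ids
    return user_ids
```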
This fixes an issue where blank lines between blocks were causing
auto-numbering of lists to stop at the blank line, resulting
in two separate numbered lists instead of one.
Edited significantly by tabbott to explain the tricky details in the
comments.
Fixes: #11651.
This allows us to have some features using bugdown rendering where
inline image previews will not be rendered (which would be problematic
for e.g. stream descriptions).
This change should help people discover silent mentions as part of
Zulip syntax, while visually distinguishing them from regular mentions.
Earlier, our realm filters didn't render for languages that do not
use spaces (e.g. Japanese), since we used to check for the presence
of an actual space character. This commit replaces that logic with
a complex scheme to detect word boundaries.
Also, we convert the RealmFilterPattern to subclass InlineProcessor
and make use of the new no-op feature in py-markdown 3.0.1 where we
can tell py-markdown that our pattern didn't find a match despite
the initial regex getting matched.
Fixes #9883.
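A self-contained illustration of the no-op feature (the boundary rule and link element here are simplified stand-ins for the real realm-filter logic):

```python
from xml.etree.ElementTree import Element

from markdown.inlinepatterns import InlineProcessor

class IllustrativeLinkPattern(InlineProcessor):
    def handleMatch(self, m, data):
        before = data[: m.start()]
        # Hypothetical word-boundary rule: decline the match when the
        # preceding character is an ASCII letter or digit; characters
        # from unspaced scripts such as Japanese do not block the match.
        if before and before[-1].isascii() and before[-1].isalnum():
            # The no-op return: "no match here", keep scanning.
            return None, None, None
        el = Element("a")
        el.set("href", m.group(0))
        el.text = m.group(0)
        return el, m.start(), m.end()
```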
Since we are building our parser from scratch now:
1. We have control over which processor goes at what priority number.
Thus, we have also shifted the deprecated `.add()` calls to use the
new `.register()` calls with explicit priorities, while maintaining
the original order that the old method generated (see the sketch
below).
2. We do not have to remove the processors added by py-markdown that
we do not use in Zulip; we explicitly add only the processors we
do require.
3. We can cluster the building of each type of parser in one place,
and in the order they need to be so that when we register them,
there is no need to sort the list. This also makes for a huge
improvement in the readability of the code, as all the components
of each type are registered in the same function.
These are significant performance improvements, because we save on
calls to `str.startswith` in `.add()`, all the resources taken to
generate the default to-be-removed processors, and the time taken to
sort the list of processors.
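For reference, the Registry style of registration looks like this (the preprocessor and priority here are illustrative):

```python
import markdown
from markdown.preprocessors import Preprocessor

class ExamplePreprocessor(Preprocessor):
    def run(self, lines):
        return lines  # no-op, just to show registration

md = markdown.Markdown()
# Explicit numeric priority; higher priorities run earlier, so no list
# sorting or '>name'/'<name' positional strings are needed.
md.preprocessors.register(ExamplePreprocessor(md), "example", 25)
```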
Following are the profiling results for the changes made. Here, we
build 10 engines one after the other and note the time taken to build
each of them. 1st pass represents the state after this commit and 2nd
pass represents the state after some regex modifications in Steve
Howell's follow-up commits. All times are in microseconds.
| nth Engine | Old Time | 1st Pass | 2nd Pass |
| ---------- | -------- | -------- | -------- |
| 1 | 92117.0 | 81775.0 | 76710.0 |
| 2 | 1254.0 | 558.0 | 341.0 |
| 3 | 1170.0 | 472.0 | 305.0 |
| 4 | 1155.0 | 519.0 | 301.0 |
| 5 | 1170.0 | 546.0 | 326.0 |
| 6 | 1271.0 | 609.0 | 416.0 |
| 7 | 1125.0 | 459.0 | 299.0 |
| 8 | 1146.0 | 476.0 | 390.0 |
| 9 | 1274.0 | 446.0 | 301.0 |
| 10 | 1135.0 | 451.0 | 297.0 |
We avoid re-computing the regex string here, and we
also avoid re-compiling the regex itself.
I decided to put the "one_time" decorator in the
bugdown file itself, just to reduce friction for
folks reading the "buyer beware" comments.
Unfortunately, we can't use this for the
get_web_link_regex() function due to testing concerns,
so that continues to do an inelegant cache-with-global-var
scheme.
We use early-exit to flatten the code.
I also tweaked the comments a bit based on some recent
profile findings. (e.g. reading the file isn't actually
a big bottleneck, it's more the regex itself)
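A sketch of what such a one_time decorator can look like (not necessarily the exact implementation):

```python
def one_time(method):
    # The wrapped function is executed once; every later call replays the
    # cached value.  Only appropriate for values that never change for the
    # lifetime of the process (hence the "buyer beware" comments).
    val = None
    called = False

    def cache_wrapper():
        nonlocal val, called
        if not called:
            val = method()
            called = True
        return val

    return cache_wrapper
```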
On the backend, we extend the BlockQuoteProcessor's clean function, which
just removes '>' from the start of each line, to also convert each mention
to the silent mention syntax before UserMentionPattern is invoked (see the
sketch below).
The frontend, however, has an edge case where if you are mentioned in
some message and you quote it while having mentioned yourself above
the quoted message, you wouldn't see the red highlight till we get the
final rendered message from the backend.
This is such a subtle glitch that it's likely not worth worrying about.
Fixes #8025.
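A hedged sketch of the clean() extension, with a simplified mention regex (the real bugdown mention handling is more careful):

```python
import re

from markdown.blockprocessors import BlockQuoteProcessor

# Simplified mention regex; the real syntax handling is richer.
MENTION_RE = re.compile(r"@(\*\*[^*]+\*\*)")

class SilencingBlockQuoteProcessor(BlockQuoteProcessor):
    def clean(self, line):
        # First strip the leading '>' as usual, then rewrite @**name**
        # mentions to the silent @_**name** form so that quoting a
        # message never re-notifies the people mentioned in it.
        line = super().clean(line)
        return MENTION_RE.sub(r"@_\1", line)
```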
These mentions look like regular mentions except they do not
trigger any notification for the person mentioned. These are
primarily to be used when you make a bot take an action and
the bot mentions you, or when you quote a message that mentions
you.
Fixes #11221.
This is a major upgrade, and requires some significant compatibility
work:
* Migrating the pattern-removal logic to use the Registry feature.
* Handling the removal of positional arguments in markdown extensions.
* Handling the removal of safe mode.
This setting splits away part of the responsibility from THUMBOR_URL.
From now on, this setting will be responsible for controlling whether
we thumbnail images or not, by asking bugdown to render image links
to hit our /thumbnail endpoint. This is irrespective of what
THUMBOR_URL is set to, though ideally THUMBOR_URL should be set
to point to a running Thumbor instance.
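A rough sketch of the resulting behavior, assuming the new setting is named THUMBNAIL_IMAGES and a /thumbnail URL shape like the one below (both are assumptions, not confirmed by this commit message):

```python
from urllib.parse import quote

THUMBNAIL_IMAGES = True  # assumed setting name and value, for illustration

def rendered_image_src(url: str) -> str:
    if not THUMBNAIL_IMAGES:
        # Thumbnailing disabled: let the browser load the original image.
        return url
    # Assumed /thumbnail endpoint shape; whether Thumbor actually serves
    # the request is governed separately by THUMBOR_URL.
    return "/thumbnail?url={}&size=thumbnail".format(quote(url, safe=""))
```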
This commit changes the return type of get_possible_mentions_info to a
list instead of a dict, thus disposing of the hacky logic of storing
users with duplicate full names under name|id keys, which obfuscated the
code.
The other functions continue to use the dicts as before; however, there
are minor variable changes where needed, in accordance with the updated
definition of get_possible_mentions_info.