revoke_invites_generated_by_user should send an invites_changed event if it
actually revokes some invitations. This is called in the user
deactivation codepath.
Event of type "realm_user", op "remove", emitted by do_deactivate_user
should remove the user id from subscriptions in the state. We weren't
catching this bug, because test_do_deactivate_bot uses a newly created
bot, so no stream subscriptions are affected. The bug shows up if
deactivating e.g. cordelia - thus we want to have two tests instead,
one for testing bot deactivation and one for user deactivation.
We now use recipient_id % 24 for new stream colors
when users have already used all 24 of our canned
colors.
This fix doesn't address the scenario that somebody
dislikes one of our current canned colors, so if a
user continually changes canned color N to some other
color for new streams, their new streams will continue
to include color N (and the user will still need to
change them).
This fix doesn't address the fact that it can be expensive
during bulk-add situations to query for all the colors
that users have already used up.
See https://chat.zulip.org/#narrow/stream/3-backend/topic/assigning.20stream.20colors
for more discussion.
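A minimal sketch of the resulting picker (the palette literal and helper
name here are stand-ins, not the exact Zulip code):

```python
from typing import Set

STREAM_ASSIGNMENT_COLORS = [f"#color{i:02d}" for i in range(24)]  # stand-in palette

def pick_stream_color(used_colors: Set[str], recipient_id: int) -> str:
    # Prefer any canned color the user hasn't used yet.
    available = [c for c in STREAM_ASSIGNMENT_COLORS if c not in used_colors]
    if available:
        return available[0]
    # All 24 canned colors are taken; fall back to a deterministic
    # choice keyed on the new stream's recipient_id.
    return STREAM_ASSIGNMENT_COLORS[recipient_id % 24]
```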
The limit here is purely to prevent breakage in case of a pathological
number of images in a single message; 5 images is entirely possible in
a reasonable message, and causes user confusion when they are not
expanded.
Increase the limit to 10 per message.
django.utils.translation.ugettext is a deprecated alias of
django.utils.translation.gettext as of Django 3.0, and will be removed
in Django 4.0.
Commit e7ed907cf6 (#18174) fixed this
before, but new instances have been added.
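The fix is a mechanical rename at each import site, e.g.:

```python
# Before (deprecated alias since Django 3.0, removed in 4.0):
from django.utils.translation import ugettext as _

# After:
from django.utils.translation import gettext as _
```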
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes “DeprecationWarning: 'jinja2.Markup' is deprecated and will be
removed in Jinja 3.1. Import 'markupsafe.Markup' instead.”
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The subscriber list was not updating without a refresh on
reactivating a user, because the subscription data with the
client was not updated on reactivation.
This commit adds code to send peer_add subscription events
on reactivating the user.
We do not send peer_remove events on deactivating the user,
but the subscriber list is still live-updated, because the
clients already have the data for the streams the deactivated
user is subscribed to, and they update that data and the UI
themselves on receiving the event for the user's deactivation;
this is not possible when reactivating the user.
Fixes #20383.
Leaving old invitations valid, potentially for a very long time, is
clearly unexpected and undesired behavior under normal circumstances. A
user shouldn't be able to e.g. generate a multiuse invite link, get
banned from the organization by being deactivated and then just re-join
using the link they've created for themselves.
do_revoke_user_invite and do_revoke_multi_use_invite were using objects
after their deletion to pass the argument to notify_invites_changed. We
should avoid that. The function was only using the .realm attribute of
the received objects, so it's simpler to make it just take realm as its
argument.
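A sketch of the shape of the fix (simplified signature; illustrative,
not the exact function body):

```python
# Before: the object was used after .delete() just to reach .realm:
#     prereg_user.delete()
#     notify_invites_changed(prereg_user)  # reads a deleted object
#
# After: capture the realm first and pass it directly.
def do_revoke_user_invite(prereg_user) -> None:
    realm = prereg_user.realm
    prereg_user.delete()
    notify_invites_changed(realm)
```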
Under the unicodedata distributed with Python 3.6, some Emoji are
classified as `Cn`, and not `So`:
```
$ unicode 1f929 --long
U+1F929 GRINNING FACE WITH STAR EYES
UTF-8: f0 9f a4 a9 UTF-16BE: d83edd29 Decimal: &#129321; Octal: \0374451
🤩
Category: So (Symbol, Other); East Asian width: W (wide)
Unicode block: 1F900..1F9FF; Supplemental Symbols and Pictographs
Bidi: ON (Other Neutrals)
$ python3.6 -c 'import unicodedata; print(unicodedata.category("\U0001f929"))'
Cn
$ python3.7 -c 'import unicodedata; print(unicodedata.category("\U0001f929"))'
So
```
Drop `Cn` from the list of excluded Unicode character classes, and
replace it with an explicit list of the 66 non-characters, which are
invariant.
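For reference, those 66 noncharacters can be generated rather than
hard-coded (a sketch, not necessarily how the list is written in the
actual code):

```python
# The 66 Unicode noncharacters are invariant across Unicode versions:
# U+FDD0..U+FDEF, plus the last two code points of each of the 17 planes.
non_character_codepoints = frozenset(
    list(range(0xFDD0, 0xFDF0))
    + [plane * 0x10000 + low for plane in range(17) for low in (0xFFFE, 0xFFFF)]
)
assert len(non_character_codepoints) == 66
```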
Co-authored-by: Shlok Patel <shlokcpatel2001@gmail.com>
An explanatory note on the changes in zulip.yaml and
curl_param_value_generators is warranted here. In our automated
tests for our curl examples, the test for the API endpoint that
changes the posting permissions of a stream comes before our
existing curl test for adding message reactions.
Since there is an extra notification message due to the change in
posting permissions, the message IDs used in tests that come after
need to be incremented by 1.
This is a part of #20289.
Otherwise the dummy user can be created with an invalid email domain -
e.g. in a development environment with the domain
"@http://localhost:9991". get_fake_email_domain exists exactly for
handling these kinds of scenarios.
Stop using `access_user_group_by_id` in notifications codepaths, as it
is meant to be used to check for _write_ access, not read
access (which is not limited). In the notification codepaths, there
are no ACLs to apply, and the ID is known-good; just load it
directly. The `for_mention` flag is removed, as it was not used in the
mention codepaths at all, only the notification ones.
This replaces the temporary (and testless) fix in
24b1439e93 with a more permanent
fix.
Instead of checking if the user is a bot just before
sending the notifications, we now just don't enqueue
notifications for bots. This is done by sending a list
of bot IDs to the event_queue code, just like other
lists which are used for creating NotificationData objects.
Credit @andersk for the test code in `test_notification_data.py`.
This diff looks slightly noisy, but the main chunk of
code that we moved here has the same logic as before,
and it just gets realm_id from MentionBackend now, instead
of having our markdown processor have to supply it.
We basically want MentionData to be the gatekeeper of
mention data, and then we delegate backend tasks to
MentionBackend.
Soon we will add a cache to MentionBackend, which will
justify this change a bit more.
It's slightly annoying to plumb Optional[MentionBackend]
down the stack, but it's a one-time change.
I tried to make the cache code relatively unobtrusive
for the single-message use case.
We should be able to eliminate redundant stream queries
using similar techniques.
I considered caching at the level of rendering the message
itself, but this involves nearly as much plumbing, and
you have to account for the fact that several users on
your realm may have distinct default languages (French,
Spanish, Russian, etc.), so you would not eliminate as
many query hops. Also, if multiple streams were involved,
users would get slightly different messages based on
their prior subscriptions.
When our handlers specifically reference self.md.zulip_db_data,
we now use an explicit type.
We probably want a more robust solution here, such as a semgrep
rule.
We now serialize still_url as None for non-animated emojis,
instead of omitting the field. The webapp does proper checks
for falsiness here. The mobile app does not yet use the field
(to my knowledge).
We bump the API version here. More discussion here:
https://chat.zulip.org/#narrow/stream/378-api-design/topic/still_url/near/1302573
Appending to bytes in a loop leads to a quadratic slowdown since
Python doesn’t optimize this for bytes like it does for str.
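The standard fix pattern (illustrative):

```python
chunks = [b"alpha", b"beta", b"gamma"]

# Quadratic: each += copies the entire accumulated buffer.
out = b""
for chunk in chunks:
    out += chunk

# Linear: collect the parts and join once at the end.
out = b"".join(chunks)
```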
Signed-off-by: Anders Kaseorg <anders@zulip.com>
While accepting an invitation from a user, there was no condition in
place to check if the user sending the invitation was now
deactivated.
Skip sending notifications about newly-joined users to users who are
now deactivated.
Fixes #18569.
We don't have to go to the database to get the Recipient
fields for `user_profile.recipient`.
See also 85ed6f332a from a little
over a year ago--it's very similar.
The bug here probably didn't come up too much in
practice, but if we were adding a user to multiple
streams when they already had used all N available
colors, all the new streams would be assigned the same
color, since the size of used_colors would stay at N,
thwarting our little modulo-len hackery.
It's not a terrible bug, since users can obviously
customize their stream colors as they see fit.
Usually when we are adding a user to multiple streams,
the users are fairly new, and thus don't have many
existing streams, so I have never heard this bug
reported in the field.
Anyway, assigning the colors in bulk seems to make more
sense, and I added some tests.
For the situations where all the colors have already
been used, I didn't put a ton of thought into exactly
which repeated colors we want to choose; instead, I
just ensure they're different modulo 24. It's possible
that we should just have more than 24 canned colors, or
we should just assign the same default color every time
and let users change it themselves (once they've gone
beyond the 24, to be clear). Or maybe we can just do
something smarter here. I don't have enough time for a
deep dive on this issue.
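A sketch of the bulk assignment (palette literal and names are
illustrative, not the exact implementation):

```python
from typing import Dict, List, Set

STREAM_ASSIGNMENT_COLORS = [f"#color{i:02d}" for i in range(24)]  # stand-in palette

def pick_colors(used_colors: Set[str], recipient_ids: List[int]) -> Dict[int, str]:
    color_map: Dict[int, str] = {}
    # Consume unused canned colors first, so multiple new streams in one
    # bulk-add don't all land on the same color.
    available = [c for c in STREAM_ASSIGNMENT_COLORS if c not in used_colors]
    for i, recipient_id in enumerate(sorted(recipient_ids)):
        if i < len(available):
            color_map[recipient_id] = available[i]
        else:
            # Palette exhausted: key repeats on recipient_id so that the
            # assigned colors at least differ modulo 24.
            color_map[recipient_id] = STREAM_ASSIGNMENT_COLORS[
                recipient_id % len(STREAM_ASSIGNMENT_COLORS)
            ]
    return color_map
```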
Part of our codepath for subscribing users involves
fetching the users' existing subscriptions to make sure
we can do things like properly report to the clients
that the users were already subscribed. This codepath
used to be coupled to code that helped users maintain
unique stream colors.
Suppose you are creating a new stream, and you are
importing users from an older stream with 15k
subscribers, and each of your users is subscribed to
about 20 streams.
The prior code, instead of filtering on recipient_id,
would literally look at every subscription for every
user, which was kind of crazy if you didn't understand
the pick-stream-color complications.
Before this commit, we would fetch 300k rows with 15
columns each (granted, all but one of the columns are
bool/int). That's a total of 4.5 million tiny objects
that we had to glom into Django ORM objects and slice
and dice.
After this commit, we would fetch exactly zero rows
for the are-they-already-subscribed logic.
Yes, ZERO.
If we were to mistakenly try to re-add the same 15k
subscribers to the new stream (under the new code), we
will now fetch 15k Sub rows instead of 300k.
It is worth looking at the prior commit. We go through
great pains to ensure that users get new stream colors
when we invite them to a stream, and we still fetch a
bunch of data for that. Instead of 4.5 million cells,
it's more like 600k cells (2 columns per row), and it's
less than that insofar as some users may only
have 24 distinct colors among their many streams.
It's a lot of work.
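In Django ORM terms, the recipient_id filtering described above is
roughly the following (sketch; `Subscription` and `stream.recipient_id`
are Zulip's models, while `user_ids` and `stream` are assumed to be in
scope):

```python
# Before: every subscription of every user being subscribed.
all_subs = Subscription.objects.filter(user_profile_id__in=user_ids)

# After: only subscriptions to the stream actually being modified.
existing_subs = Subscription.objects.filter(
    user_profile_id__in=user_ids,
    recipient_id=stream.recipient_id,
)
```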
This commit sets us up for the next commit, which will
save us a very expensive query.
If you are adding 15k users to a stream, and each user
has about 20 existing streams, then we need to retrieve
300k rows from the database to figure out which stream
colors they already have. We don't need all the extra
fields from Subscription, so now we get just the two
values we need for making a color map.
In the next commit we'll eliminate the other use case
for the big query, and I will explain in greater
depth how splitting out the color-picking code can
be a huge win. It is possible that some product decisions
could make this codepath easier. We could also do some
engineering specific to stream colors, such as caching
which colors users have already used.
This does cost us an extra round trip to the database.
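Roughly, the slimmed-down color-map query looks like this (sketch, not
the exact code):

```python
from collections import defaultdict
from typing import Dict, Set

# Fetch only the two columns needed to build the per-user color map.
used_colors_by_user: Dict[int, Set[str]] = defaultdict(set)
rows = Subscription.objects.filter(user_profile_id__in=user_ids).values_list(
    "user_profile_id", "color"
)
for user_profile_id, color in rows:
    used_colors_by_user[user_profile_id].add(color)
```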
Having the `wildcard_mentions_notify` setting turned on does
not necessarily mean that the user will receive notification
for that message. There is more nuance to this, as explained
in the updated comment.
We recently ran into a payload in production that didn't contain
an event type at all. A payload where we can't figure out the event
type is quite rare. Instead of letting these payloads run amok, we
should raise a more informative exception for such unusual payloads.
If we encounter too many of these, then we can choose to conduct a
deeper investigation on a case-by-case basis.
With some changes by Tim Abbott.
We avoid repeating the same calculations over and
over again for the same stream.
This helps, but the real bottleneck in this function
is that send_event usually takes at least a millisecond,
and that adds up quickly if you're doing something
like subscribing 5k users to a new stream.
GIF files can be `.GIF`, and also we determine the file format by
inspecting the image data, so there's no reason to have this
assertion.
(The code for serving still images does not rely on the file being a
GIF.)
We have kept process_new_human_user out of
the atomic block because it involves many
different operations and also sends events.
We tried enclosing the events in on_commit, but that
would need many changes in the tests, so we have
skipped it for now.
Updates testing helpers in `event_schema.py` for `do_update_message` so
that all stream message fields are present in any edits / updates to
stream messages. Adds verification tests of events returned from private
message edits and from stream message content-only and topic-only edits.
Updates the `update_message` event type to always include a `stream_id`
field when the message being edited is a stream message. This change
aligns with the current definition of the `/get-events` endpoint
in the OpenAPI documentation.
It is better to press on than to stop halfway through due to a user
whose email no longer works. The exception is already logged, which
is sufficient here, as this is generally run interactively.
We now complain if a test author sends a stream message
that does not result in the sender getting a
UserMessage row for the message.
This is basically 100% equivalent to complaining that
the author failed to subscribe the sender to the stream
as part of the test setup, as far as I can tell, so the
AssertionError instructs the author to subscribe the
sender to the stream.
We exempt bots from this check, although it is
plausible we should only exempt the system bots like
the notification bot.
I considered auto-subscribing the sender to the stream,
but that can be a little more expensive than the
current check, and we generally want test setup to be
explicit.
If there is some legitimate way that a subscribed human
sender can't get a UserMessage, then we probably want
an explicit test for that, or we may want to change the
backend to just write a UserMessage row in that
hypothetical situation.
For most tests, including almost all the ones fixed
here, the author just wants their test setup to
realistically reflect normal operation, and often devs
may not realize that Cordelia is not subscribed to
Denmark or not realize that Hamlet is not subscribed to
Scotland.
Some of us don't remember our Shakespeare from high
school, and our stream subscriptions don't even
necessarily reflect which countries the Bard placed his
characters in.
There may also be some legitimate use case where an
author wants to simulate sending a message to an
unsubscribed stream, but for those edge cases, they can
always set allow_unsubscribed_sender to True.
While races here are unlikely, it is most correct to enforce this
invariant at the database layer, and having a database-level
constraint makes the models file a bit more readable.
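For illustration, a database-level constraint in Django has this shape
(the model, fields, and condition below are placeholders, not the
actual invariant from this change):

```python
from django.db import models

class Booking(models.Model):  # hypothetical model
    starts_at = models.DateTimeField()
    ends_at = models.DateTimeField()

    class Meta:
        constraints = [
            # Enforced by the database itself, not just application code.
            models.CheckConstraint(
                check=models.Q(ends_at__gte=models.F("starts_at")),
                name="booking_ends_after_start",
            )
        ]
```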
These are not considered to be "personal"
info, even if you upload them, so we
don't export them.
Generally the only folks who upload
these are admins, who can easily get
them in other ways. In fact, anybody
can get these via the app.
We now ensure that all message ids are sorted BEFORE
we split them into batches.
We now do a few extra "slim" queries to get message
ids up front.
But, now, when we divide them into batches, we no
longer run 2 or 3 different complicated queries in
a loop. We just basically hydrate our message ids,
so `write_message_partials` should be easy to reason
about.
This change also means that for tiny realms with
< 1000 messages you will always have just one
json file, since we aggregate the ids from the
queries before batching.
This accomplishes a few things:
* It extracts `chunkify` rather than having us
clumsily track chunking-related stuff in a
big loop that is doing other stuff.
* It makes it so that all message ids
in message-000001.json < message-000002.json.
* It makes it easier for us to customize
the messages we send to a single user
(coming soon).
BTW we probably have a slicker version of chunkify
somewhere in our codebase, but I couldn't remember
where.
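For reference, a minimal chunkify is just (sketch):

```python
from typing import List, TypeVar

T = TypeVar("T")

def chunkify(items: List[T], chunk_size: int) -> List[List[T]]:
    # Split items into consecutive chunks of at most chunk_size elements.
    assert chunk_size > 0
    return [items[i : i + chunk_size] for i in range(0, len(items), chunk_size)]
```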
Now all file writes go through our three
helper functions, and we consistently
write a single log message after the file
gets written.
I killed off write_message_exports, since
all but one of its callers can call
write_table_data, which automatically
sorts data. In particular, our Message
and UserMessage data will now be sorted
by ids.
This probably just postpones the list creation until
Django builds the "IN" query, but semantically it's
good to work in sets where we don't have any
meaningful ordering of the list that gets used.
The immediate benefit of this is stronger mypy
checks (avoiding the ugly union caused by message
files).
The subsequent commit will add sorting.
We have test coverage on all these lines insofar
as if you comment out the lines, tests will
explode (i.e. more than superficial line
coverage).
The distinction here wasn't super meaningful
due to the way we order our "elif" statements,
but we want to reserve "normal_parent" for the
majority of use cases, where you simply tell
the Config what the "foreign_key" is.
For realm-wide exports, there is no reason to query
inefficiently against a list of modified users.
We move the Config out of the common child configs.
Even though Django usually treats foo__in
and foo_id__in identically for filters where
foo is a ForeignKey type, we want to insist
on somewhat more consistent syntax, because
we have the odd combo of type and type_id
in Recipient, where type_id is kinda like a
foreign key, but not a ForeignKey.
So we assert for now that all our include_rows
values end in "_id__in".
Zulip shows two guides on how to reply: the first by
the welcome bot, and the second via the intro_reply hotspot.
To simplify and avoid redundancy, the intro_reply hotspot is
removed.
Fixes #20482.
In many of our stream notification messages, we make use of the
same silent user mention syntax, the template for which was always
hardcoded. This commit adds a helper function that all relevant
callers can call to get the right syntax when mentioning users.
Thanks to Tim Abbott for this suggestion!
We don't have automated test coverage on this yet,
but below are the results from manual testing.
Note that we include the realm icon and logo even
though they were not created by Cordelia.
./manage.py export_single_user cordelia@zulip.com
$ (cd /tmp/zulip-export-4v3mo802/ && find .)
.
./emoji
./emoji/2
./emoji/2/emoji
./emoji/2/emoji/images
./emoji/2/emoji/images/3.jpg
./emoji/records.json
./messages-000001.json
./realm_icons
./realm_icons/2
./realm_icons/2/night_logo.original
./realm_icons/2/night_logo.png
./realm_icons/2/icon.png
./realm_icons/2/icon.original
./realm_icons/records.json
./avatars
./avatars/2
./avatars/2/c5125af0447f4d66ce34c1b32eac75ac27ebe0e7.original
./avatars/2/c5125af0447f4d66ce34c1b32eac75ac27ebe0e7.png
./avatars/records.json
./uploads
./uploads/2
./uploads/2/68
./uploads/2/68/xyEkC5dTIp8m42_6HJ3kBfdt
./uploads/2/68/xyEkC5dTIp8m42_6HJ3kBfdt/denver.jpg
./uploads/2/96
./uploads/2/96/ol5WE6RTUntvuPDSpJUrYTim
./uploads/2/96/ol5WE6RTUntvuPDSpJUrYTim/denver.jpg
./uploads/records.json
./user.json
There are tactical reasons to remove this assertion.
Basically, the reason it's safe to remove is that it's
been around a long time and we would have seen this
operationally. Also, the check to make sure that the
S3 filename thingy matches the avatar hash is a much
stronger check.
We will soon restore a stronger version of this check
that applies to all of our asset types (emojis/avatars/etc.).
This makes it easier to read the calling code and see
the big picture of how the four asset types are
organized.
I also handle uploads first, to be similar to the local
code.
This code is well tested--you can modify any of the callers
to pass in a wrong value of `object_key` and get a failing
test.
"help" command occurs in the command list in
initial pms or when bot doesn't understand the message. It doesn't
occur when the bot is respoding to the "help" command itself.
This commit adds code to check, based on the realm settings, whether a
user is allowed to use a wildcard mention in a large stream while
editing a message.
Previously this was only checked while sending a message; thus a user
was easily able to use a wildcard mention by first sending a normal
message and then adding the wildcard mention by editing it.
1. The initial welcome message now contains less detail.
2. The bot now responds to these commands: "apps", "edit profile",
"dark mode", "light mode", "streams", "topics", "message formatting",
"keyboard shortcuts" and "help" - the bot still responds if there are
slight variations in these commands.
3. Tests have been added to check that the bot responds to the advertised
commands (with variations) and gives a negative message if it doesn't
understand the message.
With substantial tweaks by tabbott.
Fixes #19900.
django-scim2 doesn't order the rows when fetching them in response to a
query using the filter syntax. We ensure that ORDER BY id is always
appended to the SQL queries.
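Conceptually, the change pins a deterministic ordering on the generated
SQL (hypothetical helper name; the real change hooks into django-scim2's
filter-query plumbing):

```python
def with_stable_order(sql: str) -> str:
    # Append a deterministic ORDER BY to a SCIM filter query.
    assert "ORDER BY" not in sql
    return f"{sql} ORDER BY id"
```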
We add the following tables to the user export:
AlertWord
CustomProfileFieldValue
RealmAuditLog
Service
UserActivity
UserActivityInterval
UserCount
UserGroup
UserHotspot
UserPresence
UserTopic
Except for UserCount, we achieve this by sharing
code with the realm export via
add_user_profile_child_configs.
UserCount is handled slightly differently than realm
exports due to which key we trigger off.
It's possible that RealmAuditLog is incomplete for
single users, since we may also want rows where they
are the acting_user. This commit finds rows where
they are the modified_user. For non-admins I believe
it's rarely the case that they are the actor, and
they will tend to be the modified user if the two
fields are different at all. For admins it's
arguable we want to see both changes they enacted
as well as changes that affected them.
Special characters, including `\r`, `\n`, and more esoteric codepoints
like non-characters, can negatively affect rendering and UI behaviour.
Check for, and prevent making new messages with, characters in the
Unicode categories of `Cc` (control characters), `Cs` (surrogates),
and `Cn` (unassigned, non-characters).
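A check of this shape (sketch; the actual validator differs in details,
e.g. it must still allow "\n" in message bodies even though newline is
category Cc):

```python
import unicodedata
from typing import Optional

DISALLOWED_CATEGORIES = ("Cc", "Cs", "Cn")

def find_disallowed_character(text: str) -> Optional[int]:
    # Return the index of the first control/surrogate/unassigned
    # character, or None if the string is clean.
    for position, char in enumerate(text):
        if char != "\n" and unicodedata.category(char) in DISALLOWED_CATEGORIES:
            return position
    return None
```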
Fixes #20128.
This commit replaces "dark mode" and "light mode" with "dark theme"
and "light theme" in the message returned and shown in a little
popup in the UI, when color scheme settings are changed through
slash commands.
Since spectators can't access personal profile settings or view
other users' profiles, we don't send realm custom profile field
data or users' profile data to spectators.
Fixes #20301.
Enable spectator access for the test `zulip` realm in the development
setup.
Add option in `do_create_realm` to configure
`enable_spectator_access` field of `Realm`.
If null is a potential value of the data type for a return value or
parameter in the API endpoint, then it is rendered as an option.
This currently relies on the 'nullable' setting in the OpenAPI spec
that was removed in the 3.1.0 release. If/when the OpenAPI version
is updated, then how the `data_type` for parameters and return values
is rendered will need to be reworked.
Fixes #20264.
RabbitMQ clients have a setting called prefetch[1], which controls how
many un-acknowledged events the server forwards to the local queue in
the client. The default is 0; this means that when clients first
connect, the server must send them every message in the queue.
This itself may cause unbounded memory usage in the client, but also
has other detrimental effects. While the client is attempting to
process the head of the queue, it may be unable to read from the TCP
socket at the rate that the server is sending to it -- filling the TCP
buffers, and causing the server's writes to block. If the server
blocks for more than 30 seconds, it times out the send, and closes the
connection with:
```
closing AMQP connection <0.30902.126> (127.0.0.1:53870 -> 127.0.0.1:5672):
{writer,send_failed,{error,timeout}}
```
This is https://github.com/pika/pika/issues/753#issuecomment-318119222.
Set a prefetch limit of 100 messages, or the batch size, to better
handle queues which start with large numbers of outstanding events.
Setting prefetch=1 causes significant performance degradation in the
no-op queue worker, to 30% of the prefetch=0 performance. Setting
prefetch=100 achieves 90% of the prefetch=0 performance, and higher
values offer only minor gains above that. For batch workers, their
performance is not notably degraded by prefetch equal to their batch
size, and they cannot function on smaller prefetches than their batch
size.
We also set a 100-count prefetch on Tornado workers, as they are
potentially susceptible to the same effect.
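With pika, this is a one-line channel setting (the 100 here matches the
discussion above):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# Limit un-acknowledged deliveries pushed to this consumer.
channel.basic_qos(prefetch_count=100)
```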
[1] https://www.rabbitmq.com/confirms.html#channel-qos-prefetch
Race conditions in stream unsubscription may lead to multiple
back-to-back SUBSCRIPTION_DEACTIVATED RealmAuditLog entries for the
same stream. The current logic constructs duplicate UserMessage
entries for such, which then later fail to insert.
Keep a set of message-ids that have been prep'd to be inserted, so
that we don't duplicate them if there is a duplicated
SUBSCRIPTION_DEACTIVATED row. This also renames the `message` local
variable, which otherwise overrode the `message` argument of a
different type.
Previously, our codebase contained links to various versions of the
Django docs, e.g.
https://docs.djangoproject.com/en/1.8/ref/request-response/#django.http.HttpRequest and
https://docs.djangoproject.com/en/2.2/ref/settings/#std:setting-SERVER_EMAIL.
Opening a link to a doc with an outdated Django version would show the
warning "This document is for an insecure version of Django that is no
longer supported. Please upgrade to a newer release!".
Most of these links are inside comments.
Following the replacement of these links in our docs, this commit uses
a search with the regex "docs.djangoproject.com/en/([0-9].[0-9]*)/"
and replaces all matches with "docs.djangoproject.com/en/3.2/".
All the new links in this commit have been generated by the above
replace and each link has then been manually checked to ensure that
(1) the page still exists and has not been moved to a new location
(and it has been found that no page has been moved like this), (2)
that the anchor that we're linking to has not been changed (and it has
been found that no anchor has been changed like this).
One comment where we mentioned a Django version in the text before
linking to a page for that version has also been changed; the comment
mentioned the specific version when a change happened, and that history
is no longer relevant to us.
For realm exports, the following changes have been made:
- `./manage.py export --upload` now deletes both the `.tar.gz` archive and the unpacked dir
- `./manage.py export` only deletes the unpacked dir
Additionally, we have removed the `--delete-after-upload` option, since
that behavior is now the default.
Fixes #20081.
If the realm is web_public, spectators can now view the avatars of other
users.
There is a special exception we had to introduce in the REST model to
allow `/avatar`-type URLs for `anonymous` access, because they
don't have the /api/v1 prefix.
Fixes #19838.
This commit updates the error message returned when the maximum
invite limit for the day is reached. We update the error returned by
the API to only mention that the limit was reached, and add the
suggestion to use a multiuse link or to contact support in the message
shown in the webapp.
We create a RealmUserDefault object for internal realms just
for consistency. The migration code does this, but adding the
corresponding code when creating a new internal realm was missed.
Not proxying these requests through camo is a security concern.
Furthermore, on the desktop client, any embed image which is hosted on
a server with an expired or otherwise invalid certificate will trigger
a blocking modal window with no clear source and a confusing error
message; see zulip/zulip-desktop#1119.
Rewrite all `message_embed_image` URLs through camo, if it is enabled.
Supporting URL percent-encoded bytes is possible using `%%20`, but this
is not necessarily very understandable to end-users, even those that
understand percent encoding.
Allow `%20` in linkifier URL format strings, and transform them into
`%%20` in the pattern just before they are applied in markdown
translation. Care must be taken here, such that already-escaped `%`s
are not escaped an extra time.
We do this before rendering, and not before storage, as
a simplification; the JS-side linkifier at present only understands
`%(foo)s` and thus needs no changes, and to avoid an un-escaping pass
before showing in the admin UI.
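A sketch of that escaping pass (illustrative, not the exact
implementation):

```python
def escape_percent(url_format: str) -> str:
    # Turn bare "%" into "%%", while leaving existing "%%" escapes and
    # "%(name)s" placeholders untouched.
    result = []
    i = 0
    while i < len(url_format):
        char = url_format[i]
        if char == "%":
            nxt = url_format[i + 1] if i + 1 < len(url_format) else ""
            if nxt == "%":  # already escaped: keep both characters
                result.append("%%")
                i += 2
                continue
            if nxt == "(":  # "%(foo)s" placeholder: leave alone
                result.append("%")
                i += 1
                continue
            result.append("%%")  # bare "%" (e.g. from "%20"): escape it
            i += 1
            continue
        result.append(char)
        i += 1
    return "".join(result)

assert escape_percent("https://example.com/%(name)s/a%20b") == (
    "https://example.com/%(name)s/a%%20b"
)
```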
og:image is supposed to be an absolute URL, but some sites incorrectly
provide a relative URL. In this case, it makes more sense to
interpret it relative to the full page URL after redirects, rather
than relative to just the domain part of the page URL before
redirects.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Removes the `/day` and `/night` options from the typeahead menu while
still allowing the commands to be used. Typing `/day` and `/night`
will now suggest `/light` and `/dark`, respectively. Also changes the
`Dark mode` and `Light mode` popups that appear after using the
corresponding command.
Fixes #18318.
This makes logging more consistent between FCM and APNs codepaths, and
makes clear which user-ids are for local users, and which are opaque
integers namespaced from some remote zulip server.
Being able to determine how many distinct users are getting push
notifications per remote host is useful, as is the distribution of
device counts. This parallels the log line in
handle_push_notification for push notifications from local realms,
handled via the event queue.
It is confusing to have the plan type constants not be namespaced
by the thing they represent. We already have a namespacing
convention in place for constants, so we should use it for
Realm.plan_type as well.
`rendered_content` in historical messages may be empty; examining the
history of them may thus require diff'ing two empty strings, which
itself produces an empty string.
Use `lxml.html.fragment_fromstring` to be able to successfully parse
these, rather than returning a 500.
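For example (sketch):

```python
import lxml.html

# Diffing two empty historical rendered_content strings yields "";
# create_parent wraps the empty fragment in a synthetic <div> instead
# of raising a parse error.
diffed_html = ""
fragment = lxml.html.fragment_fromstring(diffed_html, create_parent=True)
assert fragment.tag == "div" and len(fragment) == 0
```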
Part of #19559.
As detailed in the comments, the default behavior is undesirable for us
because we can't really predict all possibilities of exceptions that may
be raised, and thus putting str(e) in the HTTP response is potentially
insecure, as it may leak unexpected sensitive information that was
in the exception.
As a hypothetical example, a KeyError resulting from some buggy
some_dict[secret_string] call would leak information, though of course
we aim to never write code like that.