This refactoring doesn't change behavior, but it sets us up
to more easily handle a register setting for `client_gravatar`,
which will allow clients to tell us they're going to compute
their own gravatar URLs.
The `client_gravatar` flag already exists in our code, but it
is only used for Django views (users/messages), not for
Zulip events.
The main change is to move the call to `set_sender_avatar` into
`finalize_payload`, which adds the boolean `client_gravatar`
parameter to that function. We then update the various callers
to supply that flag.
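Here is a minimal sketch of the reshaped methods, assuming an
`avatar_url` field and the standard gravatar URL scheme; the real
`MessageDict` internals differ in detail:

    import hashlib
    from typing import Any, Dict

    def gravatar_url_for(email: str) -> str:
        # Standard gravatar scheme; a stand-in for Zulip's real avatar logic.
        digest = hashlib.md5(email.lower().encode('utf-8')).hexdigest()
        return 'https://secure.gravatar.com/avatar/%s?d=identicon' % (digest,)

    class MessageDict:
        @staticmethod
        def set_sender_avatar(obj: Dict[str, Any], client_gravatar: bool) -> None:
            if client_gravatar:
                # The client computes its own gravatar URLs, so we skip
                # the work and shrink the payload.
                obj['avatar_url'] = None
            else:
                obj['avatar_url'] = gravatar_url_for(obj['sender_email'])

        @staticmethod
        def finalize_payload(obj: Dict[str, Any], apply_markdown: bool,
                             client_gravatar: bool) -> None:
            # The avatar decision now happens at finalize time, so every
            # caller supplies client_gravatar explicitly.
            MessageDict.set_sender_avatar(obj, client_gravatar)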
One small performance benefit of this change is that we now
lazily compute the client message payloads in
`event_queue.process_message_event`, which will improve
performance when all interested clients have the same value of
`apply_markdown`. But the change here is really preparing us
for the additional boolean parameter, which will cause us to
have four variations of the payload.
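A sketch of the lazy computation, with `make_payload_cache` as an
illustrative stand-in for the bookkeeping inside
`process_message_event`:

    import copy
    from typing import Any, Callable, Dict, Tuple

    Payload = Dict[str, Any]

    def make_payload_cache(
        wide_dict: Payload,
        finalize: Callable[[Payload, bool, bool], None],
    ) -> Callable[[bool, bool], Payload]:
        # At most four variants -- keyed on (apply_markdown,
        # client_gravatar) -- and each is computed only if some
        # connected client actually needs it.
        cache: Dict[Tuple[bool, bool], Payload] = {}

        def get_payload(apply_markdown: bool, client_gravatar: bool) -> Payload:
            key = (apply_markdown, client_gravatar)
            if key not in cache:
                obj = copy.deepcopy(wide_dict)
                finalize(obj, apply_markdown, client_gravatar)
                cache[key] = obj
            return cache[key]

        return get_payload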
tsearch_extras returns search offsets in bytes, but our highlight
function treated them as character offsets. Added a check to subtract
the extra bytes when the tsearch search backend is being used.
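A minimal sketch of the adjustment, assuming UTF-8 text; the real
highlight code is structured differently:

    def byte_offset_to_char_offset(text: str, byte_offset: int) -> int:
        # tsearch_extras counts bytes; Python strings count characters.
        # Decode the prefix that fits in byte_offset bytes and count its
        # characters, ignoring any partial character at the boundary.
        prefix = text.encode('utf-8')[:byte_offset]
        return len(prefix.decode('utf-8', errors='ignore'))

For ASCII-only text the two offsets agree; the adjustment only
matters once multibyte characters appear before the match.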
Fixes #4084.
Fixes #7021.
Do you call get_recipient(Recipient.STREAM, stream_id) or
get_recipient(stream_id, Recipient.STREAM)? I could never
remember, and it was not very type safe, since both parameters
are integers.
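One way to remove the ambiguity is thin wrappers that bake in the
type (a hedged sketch; `Recipient` below is a stand-in for the real
Django model, and the helper name is illustrative):

    from typing import Tuple

    class Recipient:
        PERSONAL = 1
        STREAM = 2

    def get_recipient(recipient_type: int, type_id: int) -> Tuple[int, int]:
        # Stand-in for the real database lookup.
        return (recipient_type, type_id)

    def get_stream_recipient(stream_id: int) -> Tuple[int, int]:
        # Callers can no longer swap the two integers by accident.
        return get_recipient(Recipient.STREAM, stream_id)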
In do_send_messages, we only produce one dictionary for
the event queues, instead of different flavors for text
vs. html. This prevents two unnecessary queries to the
database.
It also means we only put one dictionary on the "message"
event queue instead of two, albeit a wider one that has
some values that won't be sent to the actual clients.
This wider dictionary from MessageDict.wide_dict is also
used for the `feedback_messages` queue and service bot
queues. Since the extra fields are possibly useful down
the road, and they'll just be ignored for now, we don't
bother to remove them. Also, those queue processors won't
have access to `content_type`, which they shouldn't need.
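A sketch of the idea: the wide dict is a superset, and client-facing
payloads are derived from it by dropping the extra fields (the field
names below are purely illustrative, not the exact set Zulip strips):

    from typing import Any, Dict

    # Illustrative wide-dict-only fields; the real set differs.
    WIDE_DICT_ONLY_FIELDS = frozenset([
        'sender_is_mirror_dummy',
        'recipient_type',
        'recipient_type_id',
    ])

    def client_payload_from_wide_dict(wide_dict: Dict[str, Any]) -> Dict[str, Any]:
        # Queue processors keep the wide dict; clients get the slimmer view.
        return {k: v for k, v in wide_dict.items()
                if k not in WIDE_DICT_ONLY_FIELDS}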
Fixes #6947
Before this change, we populated two cache entries for each
message that we sent. The entries were largely redundant,
with the only difference being whether we sent the content
as raw markdown or as the rendered HTML.
This commit makes it so we only have one cache entry per
message, and it includes both content and rendered_content.
One legacy source of confusion here is that `content`
changes meaning when you're on the front end. Here is the
situation going forward:
database:
    content = raw
    rendered_content = rendered

cache entry:
    content = raw
    rendered_content = rendered

payload for the frontend:
    content = raw (for apply_markdown=False)
    content = rendered (for apply_markdown=True)
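A sketch of the payload step under that scheme (field names match the
table above; the exact code in `MessageDict` may differ):

    from typing import Any, Dict

    def finalize_content(obj: Dict[str, Any], apply_markdown: bool) -> None:
        # The single cache entry carries both flavors; the client
        # payload keeps exactly one, always under the name `content`.
        if apply_markdown:
            obj['content'] = obj['rendered_content']
        del obj['rendered_content']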
Clients fetching messages can now specify that they are able
to compute their own avatars, and if they set `client_gravatar` to
True in the request (with our normal encoding scheme), then the
backend will not compute avatar URLs, and the payload will be smaller.
The fix starts with get_messages_backend. The flag gets
passed down through these functions:
* MessageDict.post_process_dicts.
* MessageDict.set_sender_avatar.
We also fix up the callers for post_process_dicts to explicitly
pass in the client_gravatar path, but for now they all just hard
code the value to False.
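A hedged sketch of the view entry point, using Zulip's usual
REQ/validator conventions (the surrounding signature and body are
abbreviated):

    from zerver.lib.request import REQ, has_request_variables
    from zerver.lib.validator import check_bool

    @has_request_variables
    def get_messages_backend(request, user_profile,
                             client_gravatar=REQ(validator=check_bool,
                                                 default=False)):
        # ... build the query and fetch rows, then pass the flag down:
        # MessageDict.post_process_dicts(messages, client_gravatar)
        pass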
Adding this field to the Stream model will prevent us from having
to look at realm data for several types of stream operations, which
can otherwise mean extra database lookups or bloated cached data.
Going forward, we'll set stream.is_zephyr to True whenever the
realm's string id is "zephyr".
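A sketch of the model change and the setting rule (abbreviated; the
real Stream model has many more fields):

    from django.db import models

    class Stream(models.Model):
        # Denormalized flag so stream operations need not fetch the realm.
        is_zephyr = models.BooleanField(default=False)

    # Whenever a stream is created (or its realm changes):
    #     stream.is_zephyr = (realm.string_id == 'zephyr')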
Instead of using the `unified_reactions` mapping, start using the
`name_to_codepoint` mapping for converting emoji names to
codepoints. We were using the `unified_reactions` mapping
because, prior to the emoji web PR, the `name_to_codepoint`
mapping was generated from emoji_map.json, which contained old
codepoints, while reactions required the new codepoints
to display them using sprite sheets.
This commit completely switches us over to using a
dedicated model called MutedTopic to track which topics
a user has muted.
This includes the necessary migrations to create the
table and populate it from legacy data in UserProfile.
A subsequent commit will actually remove the old field
in UserProfile.
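A sketch of the dedicated model, with the fields the description
implies (the real model's exact fields and constraints may differ):

    from django.db import models

    class MutedTopic(models.Model):
        user_profile = models.ForeignKey('UserProfile', on_delete=models.CASCADE)
        stream = models.ForeignKey('Stream', on_delete=models.CASCADE)
        recipient = models.ForeignKey('Recipient', on_delete=models.CASCADE)
        topic_name = models.CharField(max_length=60)

        class Meta:
            # One row per muted topic per user.
            unique_together = ('user_profile', 'stream', 'topic_name')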
This is mostly pure code extraction.
It also removes some dead code in update_muted_topic, where
we were updating muted_topics spuriously before calling
do_update_muted_topic.
For filters like has:link, where the web app doesn't necessarily
want to guess whether incoming messages meet the criteria of the
filter, the server is asked for the rows that match the filter.
Usually these queries are search queries, which have fields for
content_matches and subject_matches. Our logic was handling those
correctly.
Non-search queries were throwing an exception related to tuple
unpacking. Now we recognize when those fields are absent and
do the proper thing.
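A sketch of the defensive unpacking (names are illustrative, not
Zulip's exact internals):

    from typing import Any, Dict, Tuple

    def parse_row(row: Tuple[Any, ...], is_search: bool) -> Dict[str, Any]:
        if is_search:
            # Search queries return two extra highlight columns.
            message_id, flags, subject_matches, content_matches = row
            return dict(message_id=message_id, flags=flags,
                        match_subject=subject_matches,
                        match_content=content_matches)
        # Plain narrows like has:link return only the core columns;
        # unpacking four values here is what used to raise.
        message_id, flags = row
        return dict(message_id=message_id, flags=flags)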
There are probably situations where the web app should stop hitting
this endpoint and just use its own filters. We are making the most
defensive fix first.
Fixes #6118
Before this change, server searches for both
`is:mentioned` and `is:alerted` would return all messages
where the user is specifically mentioned (but not
@all/wildcard mentions).
Now we follow the JS semantics:

    is:mentioned -- all mentions, including wildcards
    is:alerted -- has an alert word

Here is one relevant JS snippet:

    } else if (operand === 'mentioned') {
        return message.mentioned;
    } else if (operand === 'alerted') {
        return message.alerted;

And here you see that `mentioned` is OR'ed over both mention flags:

    message.mentioned = convert_flag('mentioned') || convert_flag('wildcard_mentioned');

The `alerted` flag on the JS side is a simple mapping:

    message.alerted = convert_flag('has_alert_word');
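The matching server-side conditions might look like this (a hedged
sketch using SQLAlchemy, which Zulip's narrow-building code is built
on; the mask values below are illustrative, the real ones come from
UserMessage.flags):

    from sqlalchemy.sql import column

    # Illustrative flag bits.
    MENTIONED = 8
    WILDCARD_MENTIONED = 16
    HAS_ALERT_WORD = 512

    def mentioned_cond():
        # is:mentioned -- either flavor of mention counts.
        return column('flags').op('&')(MENTIONED | WILDCARD_MENTIONED) != 0

    def alerted_cond():
        # is:alerted -- only the alert-word flag counts.
        return column('flags').op('&')(HAS_ALERT_WORD) != 0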
Fixes #5020
We apparently were using the default of num_before=1, not
num_before=0, which meant that if the very last randomly generated
message was one by cordelia mentioning lunch,
test_get_messages_with_search would fail because there were actually 3
matches.
We recently changed the populate_db data set to include more variable
message content, which happened to include the possibility of the word
"lunch" appearing in the test messages. This caused occasional
failures of the search tests that looked for messages containing
"lunch" starting at the beginning of time, not the beginning of the
test.
This completes the major endpoint migrations to eliminate legacy API
endpoints from Zulip.
There are a few other things that will happen naturally, so I believe
this fixes #611.
The example_user() function is specifically designed for
aaron, hamlet, cordelia, and friends, and it allows a concise
way of using their built-in user profiles. Eventually, the
widespread use of example_user() should help us with refactorings
such as moving the test users out of the "zulip.com" realm
and deprecating get_user_profile_by_email.
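Typical usage in a test (a small illustration; example_user is the
real helper on ZulipTestCase, and 'King Hamlet' is the built-in
profile's full name):

    from zerver.lib.test_classes import ZulipTestCase

    class ExampleUserTest(ZulipTestCase):
        def test_full_name(self) -> None:
            # Concise access to a built-in profile, with no direct
            # get_user_profile_by_email call.
            hamlet = self.example_user('hamlet')
            self.assertEqual(hamlet.full_name, 'King Hamlet')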
This is a better solution to the problem of how _pg_re_escape should
handle the null character. There's really no good reason to have a
null character in a stream name.
tsearch-based full text search doesn't match text inside link tags,
but PGroonga-based full text search can.
Without this change, highlighting text inside a link tag generates
broken HTML.
Due to a PGroonga regression, there is a difference in search
results between Travis and the development environment, which makes
one of our tests fail. This commit makes sure that the test passes
for both strings until the PGroonga bug is resolved.
As with much rarely-exercised code that only gets tests later, it
turns out that the logic for handling null characters in our Zephyr
postgres query escaping never worked, in multiple ways. First, it
always changed the second character of s, not the current one being
inspected; and second, the value it replaced it with was not the
correct postgres escape of the null byte. We fix this and add tests.
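A sketch close to the shape of the fix (the separate "better
solution" commit above then sidesteps NUL entirely by disallowing it
in stream names):

    import re

    def _pg_re_escape(pattern: str) -> str:
        s = list(re.escape(pattern))
        for i, c in enumerate(s):
            if c == '\u0000':
                # Fix the character at the current index (the old code
                # clobbered s[i + 1]) and use an escape postgres accepts.
                s[i] = '\\u0000'
        return ''.join(s)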
This completes the effort to get zerver/views/messages.py to 100%
test coverage.
Fixes #1006.
This arguably regresses the Zephyr experience, in that we no longer
consider 'foo.d.d.d.d.d' to be something that gets narrowed in with
the rest, but that's a pretty rare use case anyway.
In practice, using that many '.d's only happens a few times a
year.
Finishes the refactoring started in c1bbd8d. The goal of the refactoring is
to change the argument to get_realm from a Realm.domain to a
Realm.string_id. The steps were:
* Add a new function, get_realm_by_string_id.
* Change all calls to get_realm to use get_realm_by_string_id instead.
* Remove get_realm.
* (This commit) Rename get_realm_by_string_id to get_realm.
Part of a larger migration to remove the Realm.domain field entirely.
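The end state, sketched (the real function in zerver/models.py is
also wrapped in caching):

    from typing import Optional

    from zerver.models import Realm  # for illustration; get_realm itself lives here

    def get_realm(string_id: str) -> Optional[Realm]:
        # Lookups are now keyed on string_id, not domain.
        try:
            return Realm.objects.get(string_id=string_id)
        except Realm.DoesNotExist:
            return None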
This pulls message-related code from models.py into a new
module called message.py, and it starts to break some bugdown
dependencies. All the methods here are basically related to
serializing Message objects as dictionaries for caches and
events.
    extract_message_dict
    stringify_message_dict
    message_to_dict
    message_to_dict_json
    MessageDict.to_dict_uncached
    MessageDict.to_dict_uncached_helper
    MessageDict.build_dict_from_raw_db_row
    MessageDict.build_message_dict
This fix also removes a circular dependency related
to get_avatar_url.
Also, there was kind of a latent bug in Message.need_to_render_content
where it was depending on other calls to Message to import bugdown
and set it globally in the namespace. We really need to just
eliminate the function, since it's so small and used by code that
may be doing very sketchy things, but for now I just fix it. (The
bug would possibly be exposed by moving build_message_dict out to the
library.)
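A hedged sketch of the defensive fix, importing bugdown locally
instead of relying on an earlier Message call having put it in scope
(the real helper's signature may differ):

    from typing import Optional

    def need_to_render_content(rendered_content: Optional[str],
                               rendered_content_version: int) -> bool:
        # The local import avoids both the circular dependency and
        # the reliance on a global set elsewhere.
        import zerver.lib.bugdown as bugdown
        return (rendered_content is None
                or rendered_content_version < bugdown.version)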
This adds support for using PGroonga to back the Zulip full-text
search feature. Built-in PostgreSQL full text search doesn't
support languages that don't put spaces between terms, such as
Japanese and Chinese; PGroonga supports all languages, including
Japanese and Chinese.
Developers will need to re-provision when rebasing past this patch for
the tests to pass, since provision is what installs the PGroonga
package and extension.
PGroonga is enabled by default in development but not in production;
the hope is that after the PGroonga support is tested further, we can
enable it by default.
Fixes #615.
[docs and tests tweaked by tabbott]
This new helper combines two old helpers, one of which was misnamed
and the other of which was always called right after the first, so
it made sense to merge them.
Fixes: #1386
This commit adds these two tests:
    test_use_first_unread_anchor_with_some_unread_messages
    test_use_first_unread_anchor_with_no_unread_messages
The new tests add coverage to the conditional logic in
get_old_messages_backend() that looks at first_unread_result
when use_first_unread_anchor is set to True.
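A condensed sketch of one of the new tests (the real tests also
inspect the generated queries; the parameter spelling here is
illustrative):

    from zerver.lib.test_classes import ZulipTestCase

    class GetOldMessagesTest(ZulipTestCase):
        def test_use_first_unread_anchor_with_no_unread_messages(self) -> None:
            self.login(self.example_user('hamlet').email)
            # With nothing unread, the backend must still pick a sane anchor.
            result = self.client_get('/json/messages', dict(
                use_first_unread_anchor='true',
                anchor=0,
                num_before=0,
                num_after=10,
            ))
            self.assert_json_success(result)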