This fixes an issue where if you make #announce (the default
announcement stream) announce-only, then creating a new stream will
throw an exception (because notification-bot can't send there).
Fixes #9636.
These two slash commands now use zcommand to talk to
the server, so we have no Message overhead, and if you're
on a stream, you no longer spam people by accident.
The commands now also give reasonable messages
if you are already in the mode you ask for.
It should be noted that by moving these commands out of
widget.py, they are no longer behind the ALLOW_SUB_MESSAGES
setting guard.
This adds a /ping command that will be useful for users
to see what the round-trip time to the Zulip server is (including
only a tiny bit of actual server time to basically return a
200).
It also introduces the "/zcommand" endpoint and the zcommand.js
module.
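As a rough illustration (not the actual client code; the endpoint
path and auth details here are assumptions), a client could time the
round trip like this:

    import time
    import requests  # any HTTP client works; requests is just for the sketch

    def measure_ping_ms(base_url: str, email: str, api_key: str) -> float:
        # Time one round trip to the (assumed) /json/zcommand endpoint.
        start = time.time()
        response = requests.post(
            base_url + "/json/zcommand",
            data={"command": "ping"},
            auth=(email, api_key),
        )
        response.raise_for_status()
        return (time.time() - start) * 1000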
This removes a check on invite_only that should have been a check on
history_public_to_subscribers. In addition to fixing a bug for zephyr
realms, it also makes "more topics" work correctly for realms using
the new settings for stream history being public to subscribers.
This commit adds a new field history_public_to_subscribers to the
Stream model, which serves a similar function to the old
settings.PRIVATE_STREAM_HISTORY_FOR_SUBSCRIBERS; we still use that
setting as the default value for new streams to avoid breaking
backwards-compatibility for those users before we are ready with an
actual UI for users to choose directly.
This also comes with a migration to set the value of the new field for
existing streams with an algorithm matching that used at runtime.
With significant changes by Tim Abbott.
This is an initial part of our efforts on #9232.
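A minimal sketch of the defaulting rule described above (the helper
name and exact conditions are assumptions; the real code also has to
handle the migration for existing streams):

    def default_history_public_to_subscribers(
            invite_only: bool,
            private_stream_history_for_subscribers: bool) -> bool:
        # Public streams keep history visible to subscribers; private
        # streams fall back to the legacy server-wide setting until
        # there is a real UI for choosing this per stream.
        if not invite_only:
            return True
        return private_stream_history_for_subscribers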
The removed code path was only needed due to buggy setup code in the
test_cross_realm_scenarios test. We address that with a less buggy
workaround, and which lets us remove unnecessary complexity from this
important validation function.
Thanks to Umair Waheed for some preliminary work on this.
Fixes #7561.
Add a realm setting to set the time limit for message deleting.
Set the default value of message_content_delete_limit_seconds
to 600 seconds (10 minutes).
Thanks to Shubham Dhama for rebasing and reworking this. Some final
edits also done by Tim Abbott.
Fixes #7344.
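A hedged sketch of the check this setting implies (function and
variable names here are illustrative):

    from datetime import datetime, timedelta, timezone

    DEFAULT_MESSAGE_CONTENT_DELETE_LIMIT_SECONDS = 600  # 10 minutes

    def within_delete_limit(sent_at: datetime, limit_seconds: int) -> bool:
        # A sender may delete message content only within the realm's
        # configured window after sending.
        age = datetime.now(timezone.utc) - sent_at
        return age <= timedelta(seconds=limit_seconds)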
We should still short-circuit the iteration in
`add_missing_messages` if the unsubscription was the last
thing to happen to the user before soft deactivation.
Applies the logic to allow community members to edit topics
of others' messages if this setting is True. Otherwise,
only administrators can update the topic of others' messages.
This logic includes a 24-hour time limit for community topic editing.
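Roughly, the permission check works like the sketch below (names
are assumptions, not the exact implementation):

    from datetime import datetime, timedelta, timezone

    COMMUNITY_TOPIC_EDIT_WINDOW = timedelta(hours=24)

    def can_edit_others_topic(is_realm_admin: bool,
                              allow_community_topic_editing: bool,
                              message_sent_at: datetime) -> bool:
        # Administrators can always edit the topic of others' messages;
        # other members may do so only if the realm setting is enabled
        # and the message is less than 24 hours old.
        if is_realm_admin:
            return True
        if not allow_community_topic_editing:
            return False
        age = datetime.now(timezone.utc) - message_sent_at
        return age < COMMUNITY_TOPIC_EDIT_WINDOW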
Apparently, we did essentially all the work to support showing full
topic history to newly subscribed users from a data flow perspective,
but didn't actually enable this feature by having the topic history
endpoint grant access to historical topics. This fixes that gap.
I'm not altogether happy with how the code and tests read for this
feature; the code itself has more duplication than I'd like, and the
tests do too, but it works.
In order to get test coverage on topic name checks, we
do them in Addressee, so that we don't hit an assertion
first. The assertion in question is in Addressee.topic(),
and it was added partly to appease mypy.
This is responsible for:
1.) Handling all incoming requests at the messages endpoint that
have the defer param set. This is similar to send_message_backend,
except that instead of really sending a message it schedules one to
be sent later on.
2.) Doing some preliminary checks, such as validating the timestamp
for scheduling a message, preventing scheduling a message in the
past, and ensuring the correct format of the message to be scheduled.
3.) Extracting the time of scheduled delivery from the message.
4.) Adding tests for the newly introduced function.
5.) timezone: Add get_timezone() to obtain a tz object from a string.
This helps in obtaining a timezone (tz) object from a timezone
specified as a string. This string needs to be a pytz-defined
timezone string, which we use to specify users' local timezones.
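A rough sketch of the helpers described in items 2, 3, and 5 (only
get_timezone() is named above; the rest is illustrative):

    import datetime
    import pytz

    def get_timezone(tz_name: str) -> datetime.tzinfo:
        # pytz raises UnknownTimeZoneError for an unrecognized name.
        return pytz.timezone(tz_name)

    def check_deliver_at(deliver_at: datetime.datetime,
                         tz_name: str) -> datetime.datetime:
        # Interpret the requested delivery time in the user's local
        # timezone and reject scheduling a message in the past.
        local_tz = get_timezone(tz_name)
        deliver_at = local_tz.localize(deliver_at)
        if deliver_at < datetime.datetime.now(local_tz):
            raise ValueError("Scheduled delivery time must be in the future.")
        return deliver_at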
This test would produce a bunch of log messages with tracebacks,
complaining that `welcome-bot@zulip.com` tried to send cross-realm PMs
and can't. The issue is that the test overrides
`settings.CROSS_REALM_BOT_EMAILS`, and hasn't kept up with additions
to the normal value for that setting. Update it so that welcome-bot
is permitted as usual.
We don't have our linter checking test files due to ultra-long strings
that are often present in test output that we verify. But it's worth
at least cleaning out all the ultra-long def lines.
This refactoring doesn't change behavior, but it sets us up
to more easily handle a register setting for `client_gravatar`,
which will allow clients to tell us they're going to compute
their own gravatar URLs.
The `client_gravatar` flag already exists in our code, but it
is only used for Django views (users/messages), not for
Zulip events.
The main change is to move the call to `set_sender_avatar` into
`finalize_payload`, which adds the boolean `client_gravatar`
parameter to that function. And then we update various callers
to supply that flag.
One small performance benefit of this change is that we now
lazily compute the client message payloads in
`event_queue.process_message_event`, so this will improve
performance if all interested clients have the same value of
`apply_markdown`. But the change here is really preparing us
for the additional boolean parameter, which will cause us to
have four variations of the payload.
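A toy sketch of the shape this takes (field names such as
avatar_source are assumptions; the real logic lives in MessageDict):

    def finalize_payload(wide_dict: dict, apply_markdown: bool,
                         client_gravatar: bool) -> dict:
        # Reduce a "wide" message dict to what one class of clients sees.
        obj = dict(wide_dict)
        rendered = obj.pop("rendered_content")
        if apply_markdown:
            obj["content"] = rendered
        if client_gravatar and obj.get("avatar_source") == "gravatar":
            # The client said it will compute gravatar URLs itself.
            obj["avatar_url"] = None
        return obj

With both booleans in play, callers end up with at most four
distinct payload variants per message, as noted above.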
The original PR to allow generic bots to be mentioned had
some merge issues that we detected about a week after the
fact. This commit restores the logic from the original PR.
The reason we didn't detect this bug earlier is that the
merge issues didn't break any existing behavior. Instead,
they made it so that only UserMessage rows got written for
bots, but no events were being sent. The part of the commit
that got lost is restored here, so now events get sent as
well.
Thanks to @derAnfaenger for reporting this and being patient
as we tracked it down.
Fixes #7140.
We were using Google's diff-match-patch library to diff HTML. The
problem with that approach is that it is a text differ, not an HTML
differ and so it ends up messing up the HTML tags. `lxml` is a safer
option.
Fixes: #7219.
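For illustration, lxml's HTML-aware diffing keeps the markup well
formed:

    from lxml.html.diff import htmldiff

    before = "<p>Hello <b>world</b></p>"
    after = "<p>Hello <b>there</b>, world</p>"

    # htmldiff wraps insertions and deletions in <ins>/<del> tags while
    # keeping the surrounding HTML structure intact, unlike a plain
    # text differ operating on the raw markup.
    print(htmldiff(before, after))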
Do you call get_recipient(Recipient.STREAM, stream_id) or
get_recipient(stream_id, Recipient.STREAM)? I could never
remember, and it was not very type safe, since both parameters
are integers.
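One way out (a sketch; the helper names are assumptions) is to
replace the two-int call with purpose-named wrappers so the
arguments cannot be swapped:

    # Assumes the existing Recipient model with (type, type_id) columns.
    def get_stream_recipient(stream_id: int) -> Recipient:
        return Recipient.objects.get(type=Recipient.STREAM, type_id=stream_id)

    def get_personal_recipient(user_profile_id: int) -> Recipient:
        return Recipient.objects.get(type=Recipient.PERSONAL,
                                     type_id=user_profile_id)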
Almost all callers to do_create_user were trying to
create active users, except for one test. The
active=False codepath was kind of broken (things
like sending welcome messages had sort of undefined
behavior there), so instead of trying to maintain it,
we just update the one test (`test_people`) to flip the
`is_active` flag manually.
Fixes #7197.
The tighter interface prevents the need to specify
Recipient.PERSONAL (which can often be inaccurate in the
huddle case, anyway), and it prevents tests from confusingly
specifying a "subject" field for PMs.
Having send_stream_message() avoids the need to supply
Recipient.STREAM as a parameter, and it also uses the more
modern name of `topic_name` for topics. Under the hood, it
avoids some annoying steps for re-formatting the recipients,
since we just have a single stream name.
When possible, we want to use direct APIs for sending
stream messages.
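In the test suite this could look roughly like the following (a
sketch; the signature of the underlying send_message helper is
assumed):

    def send_stream_message(self, sender_email: str, stream_name: str,
                            content: str = "test content",
                            topic_name: str = "test") -> int:
        # Callers no longer pass Recipient.STREAM or a "subject" field;
        # the single stream name is all the recipient info needed.
        return self.send_message(sender_email, stream_name, Recipient.STREAM,
                                 content, subject=topic_name)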
This changes the codepath slightly, by not using
forwarded_user_profile, but it doesn't impact the number
of queries, and it's a simple check.
We also remove a couple "subject" references here.
This change allows normal bots to get UserMessage rows when
they are mentioned on a stream, even if they are not actually
subscribed to the stream.
Fixes #7140.
In do_send_messages, we only produce one dictionary for
the event queues, instead of different flavors for text
vs. html. This prevents two unnecessary queries to the
database.
It also means we only put one dictionary on the "message"
event queue instead of two, albeit a wider one that has
some values that won't be sent to the actual clients.
This wider dictionary from MessageDict.wide_dict is also
used for the `feedback_messages` queue and service bot
queues. Since the extra fields are possibly useful down
the road, and they'll just be ignored for now, we don't
bother to remove them. Also, those queue processors won't
have access to `content_type`, which they shouldn't need.
Fixes #6947.
Before this change, we populated two cache entries for each
message that we sent. The entries were largely redundant,
with the only difference being whether we sent the content
as raw markdown or as the rendered HTML.
This commit makes it so we only have one cache entry per
message, and it includes both content and rendered_content.
One legacy source of confusion here is that `content`
changes meaning when you're on the front end. Here is the
situation going forward:
database:
    content = raw
    rendered_content = rendered
cache entry:
    content = raw
    rendered_content = rendered
payload for the frontend:
    content = raw (for apply_markdown=False)
    content = rendered (for apply_markdown=True)
Every time we updated a UserProfile object, we were calling
delete_display_recipient_cache(), which churns the cache and
does an extra database hop to find subscriptions. This was
due to saying `updated_fields` instead of `update_fields`.
This made us prone to cache churn for fields like UserProfile.pointer
that are fairly volatile.
Now we use the helper function changed(). To prevent the
opposite problem, we use all the fields that could invalidate
the cache.
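A minimal sketch of what such a changed() check can look like in a
post_save flush handler (illustrative, not the exact helper):

    def changed(update_fields, interesting_fields) -> bool:
        # update_fields is None when save() was called without the
        # kwarg, so we conservatively assume anything may have changed.
        if update_fields is None:
            return True
        return bool(set(update_fields) & set(interesting_fields))

The cache is then only flushed when a field that actually feeds the
cached data appears in update_fields.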
Clients fetching messages can now specify that they are able
to compute their avatar, and if they set client_gravatar to
True in the request (w/our normal encoding scheme), then the
backend will not compute it, and the payload will be smaller.
The fix starts with get_messages_backend. The flag gets
passed down through these functions:
* MessageDict.post_process_dicts.
* MessageDict.set_sender_avatar.
We also fix up the callers for post_process_dicts to explicitly
pass in the client_gravatar path, but for now they all just hard
code the value to False.
Message.get_raw_db_rows is moved to MessageDict, since its
implementation details are highly coupled to other methods
in MessageDict.
And then sew_messages_and_reactions comes along for the
ride.
We eventually want to move Reaction.get_raw_db_rows to there
as well.
We're about to have multiple post-processing stages for building
message dictionaries. Rather than having individual "hydration"
methods remove intermediate values, we just wait until the end.
This decouples the hydration steps. The potential problem
here is that we may have a field like sender_is_mirror_dummy
that isn't part of the final payload, but we need it for
calculating display recipients and avatars. We don't want to
delete it too early from the objects.
This removes sender names from the message cache, since
they aren't guaranteed to be valid, and they're inexpensive
to add.
This commit will make the message cache entries smaller
by removing sender__full_name and sender__short_name
fields.
Then we add in the sender fields to the message payloads
by doing a query against the unique sender ids of the
messages we are processing.
This change leads to 2 extra database hops for most of
our message-related codepaths. The reason there are 2 hops
instead of 1 is that we basically re-calculate way too
much data to get a no-markdown dictionary.
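Schematically, the extra hydration step looks something like this
(names are illustrative):

    def hydrate_sender_info(message_rows, fetch_users_by_id):
        # One query for the unique sender ids of this batch of
        # messages, then stamp the name fields onto each message dict
        # instead of storing them in the per-message cache.
        sender_ids = {row["sender_id"] for row in message_rows}
        senders = fetch_users_by_id(sender_ids)  # e.g. a values() query keyed by id
        for row in message_rows:
            sender = senders[row["sender_id"]]
            row["sender_full_name"] = sender["full_name"]
            row["sender_short_name"] = sender["short_name"]
        return message_rows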
Introducing MessageDict.post_process_dicts() will give us
the ability to do the following:
* use less memory in the cache for repeated data
* prevent cache invalidation
* format data according to different client needs
The first use of this function is pretty inconsequential, but
it sets us up for more consequential changes.
In this commit we defer the MessageDict.hydrate_recipient_info
step until after we pull data out of the cache. This impacts
cache size as follows:
* streams - negligibly bigger
* PMs/huddles - slimmer due to not needing to repeat
sender data like email/full_name
Again, the main point of this change is to start setting up
the infrastructure to do post-processing.
This is a first step to eventually slimming the message cache,
but there are still some moving parts there to be worked through.
The more immediate benefit of extracting this function is that
we can put tests on it. Also, it isolates some functionality
that may go away as our clients get smarter.
This commit prepares us to introduce a StreamLite class. For
these tests, we don't care about the actual contents of the
Stream, just that the right stream is there.
Postgres doesn't like them, we don't have an obvious way to escape
them, and they tend to be sent by buggy tools where it'd be better for
the user to get an error.
This fixes a 500 we were getting occasionally.
The dictionary result for get_user_info_for_message_updates()
now has a `mention_user_ids` field that is a set of user ids
who were mentioned in a message.
Previously, the bot domain was calculated correctly in most
circumstances, but if you were using the root domain, it would be
e.g. ".chat.zulip.org", not "chat.zulip.org". We fix this, with
perhaps more use of setting REALMS_HAVE_SUBDOMAINS than would be ideal
if we weren't about to set that True unconditionally.
Usually a small minority of users are eligible to receive missed
message emails or mobile notifications.
We now filter users first before hitting UserPresence to find idle
users. We also simply check for the existence of recent activity
rather than borrowing the more complicated data structures that we
use for the buddy list.