The previous iteration still had the failure mode of not actually
testing anything, because it didn't trigger the data export code path
(and in fact was getting an HTTP 401 authentication denied error).
This test was broken due to using an empty `RealmAuditLog`
table. We fix this by mocking the creation of an export,
thus creating an entry, similar to what we do in our other
tests.
This feature is intended to cover all of our ways of exporting a
realm, not just the initial "public export" feature, so we should name
things appropriately for that goal.
Additionally, we don't want to include data exports in page_params;
the original implementation was actually buggy and would have.
This replaces the two custom Google authentication backends originally
written in 2012 with the shared python-social-auth codebase that we
already use for the GitHub authentication backend. These are:
* GoogleMobileOauth2Backend, the ancient code path for mobile
authentication last used by the EOL original Zulip Android app.
* The `finish_google_oauth2` code path in zerver/views/auth.py, which
was the webapp (and modern mobile app) Google authentication code
path.
This change doesn't fix any known bugs; its main benefit is that we
get to remove hundreds of lines of security-sensitive semi-duplicated
code, replacing it with a widely trusted, high quality third-party
library.
This commit adds a new setting to the user's notification settings that
will change the behaviour of the unread count in the title bar and
desktop application.
When enabled, the title bar will show the count of unread private messages
and mentions. When disabled, the title bar will act as before, showing
the total number of unread messages.
Fixes #1736.
Namely, here we add the "plan_includes_wide_organization_logo" and
"upgrade_text_for_wide_organization_logo" to the page_params (which
is set in zerver/lib/events.py).
"plan_includes_wide_organization_logo" is True if the plan is not of
the Realm.LIMITED type. We need to add this extra boolean parameter
instead of just using "realm_plan_type" to make things a lot easier
to work with on the frontend side, especially considering that
handlebars won't allow checking for equality in its {{#if}} blocks.
When a realm's plan type is updated using "do_change_plan_type", we
notify active users of the realm. This way, certain plan features can
be enabled instantaneously for active users.
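For illustration, a minimal sketch of how the boolean might be derived
for page_params; the plan-type constant values below are assumptions,
not taken from the commit:

```python
class Realm:
    # Assumed stand-in values for the plan-type constants.
    SELF_HOSTED = 1
    LIMITED = 2
    STANDARD = 3

def plan_includes_wide_organization_logo(plan_type: int) -> bool:
    # Handlebars {{#if}} blocks can't test equality, so the server
    # precomputes a boolean for page_params instead of making the
    # frontend compare realm_plan_type values.
    return plan_type != Realm.LIMITED
```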
This adds a setting to control Zulip's default behavior of sorting to
bottom and graying out inactive streams. The previous logic is still
the default "automatic", but this gives users more control. See the
models.py comment for details.
Fixes #11524.
Modifies the dict with the user info to include the key `bot_owner_id`
so it can be displayed in the user info popover.
Tests concerned with changing the bot owner have been modified to
expect two events, because updating the bot info fires two events --
one updating `realm_bot` and one updating `realm_user`, since the key
`bot_owner_id` is part of the realm user info.
This renames Subscription.in_home_view field to is_muted, for greater
clarity as to what it does just from seeing the setting name, without
having to look it up.
Also disabled an obsolete test_migrations test.
Fixes #10042.
This commit migrates the Subscription's notification fields from a
BooleanField to a NullBooleanField where a value of None means to
inherit the value from user's profile.
Also includes a migration to set the corresponding settings to None
if they match the user profile's values. This migration helps us get
rid of the weird "Apply to all" widget that we offered on the
subscription settings page.
The mobile apps can't handle None appearing as the stream-level
notification settings, so for backwards-compatibility we arrange to
only send True/False to the mobile apps by applying those defaults
server-side. We introduce a notification_settings_null value within a
client_capabilities structure that newer versions of the mobile apps
can use to request the new model.
This mobile compatibility code is pretty effectively tested by the
existing test_events tests for the subscriptions subsystem.
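A rough, self-contained sketch of the backwards-compatibility fallback
described above; the function name and signature are illustrative, not
the actual implementation:

```python
from typing import Optional

def stream_notification_setting_for_client(
    stream_value: Optional[bool],
    user_profile_default: bool,
    notification_settings_null: bool,
) -> Optional[bool]:
    # Newer mobile clients advertise notification_settings_null in
    # client_capabilities and understand None as "inherit from the
    # user's profile"; for older clients we apply the user-level
    # default server-side so they only ever see True/False.
    if stream_value is None and not notification_settings_null:
        return user_profile_default
    return stream_value
```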
This commit replaces the `create_stream_by_admins_only` setting with a
new `create_stream_policy` setting, which mirrors the structure of the
existing `invite_to_stream_policy`.
This is important preparation for migrating the waiting period feature
to be its own independent setting.
Fixes #12236.
This commit creates a new organization setting that determines whether
a user can invite other users to streams. Previously this was linked
to the waiting period threshold, but this was both not documented and
overly limiting.
With significant tweaks by tabbott to change the database model to not
involve two threshold fields, edit the tests, etc.
This requires follow-up work to make the create stream policy setting
work how this code implies it should.
Fixes #12042.
An endpoint was created in zerver/views. Basic rate-limiting was
implemented using RealmAuditLog. The idea here is to simply log each
export event as a realm_exported event. The number of events
occurring in the time delta is checked to ensure that the weekly
limit is not exceeded.
The event is published to the 'deferred_work' queue processor to
prevent the export process from being killed after 60s.
Upon completion of the export the realm admin(s) are notified.
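A simplified, self-contained sketch of the rate-limit check; the real
code counts realm_exported entries in RealmAuditLog, and the limit of
5 per week here is an assumption for illustration only:

```python
from datetime import datetime, timedelta
from typing import List

def export_rate_limited(previous_export_times: List[datetime],
                        now: datetime,
                        limit: int = 5,
                        period: timedelta = timedelta(days=7)) -> bool:
    # previous_export_times would come from RealmAuditLog rows with
    # event_type realm_exported for this realm.
    recent_exports = [t for t in previous_export_times if now - t < period]
    return len(recent_exports) >= limit
```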
This is important because upcoming features will include slightly more
complex logic in post_process_state that we'd ideally like to be
included in what this suite tests.
This requires a few related changes:
* A small change to post_process_state to sort the realm_users objects
by user_id to ensure those data structures are stable.
* Improvements to the logic for checking if the initial state has
changed to use match_states for better output.
This adds experimental support in /register for sending key
statistical data on the last 1000 private messages that the user is a
participant in. Because it's experimental, we require developers to
request it explicitly in production (we don't use this data yet in
the webapp, and it likely carries some perf cost).
We expect this to be extremely helpful in initializing the mobile app
user experience for showing recent private message conversations.
See the code comments, but this has been heavily optimized to be very
efficient and do all the filtering work at the database layer so that
we minimize network transit with the database.
Fixes #11944.
There were several problems with the old format:
* The listed sender was not necessarily the message's sender; it was the
person who did the deletion (which could be an organization administrator)
* It didn't include the ID of the sender, just the email address.
* It didn't include the recipient ID, instead having a semi-malformed
recipient_type_id under the weird name recipient_user_ids.
Since nothing was relying on the old behavior, we can just fix the
event structure.
This field is primarily intended to support avoiding displaying the
"more topics" feature in new organizations and streams, where we might
know that all messages in the stream are already available in the
browser.
Based on original work by Roman Godov, and significantly modified by
tabbott.
The second migration involved here could be expensive on Zulip Cloud,
but is unlikely to be an issue on other servers.
The test_events system was in several tests using get_realm to fetch a
realm object, rather than accessing self.user_profile.realm. This
created subtle problems where we were neither directly editing nor
refreshing the `realm` object associated with our UserProfile object
from the database after running the `do_*` methods.
The payoff is that we can update the previously confused
`do_change_icon_source` test to actually change the state and have the
correct result.
Guest users will just get an empty list of default streams; we also
hide the "Default streams" organization view from the guest users UI.
This is for consistency with not providing guest users the full list
of streams in an organization.
Fixing this involves fixing the backend to handle unchanged field
submissions of the Zoom credentials without trying to re-validate the
credentials (for performance) as well as to fetch the already-sent
secret.
This change should help people discover silent mentions as a part of
Zulip syntax, while distinguishing them from regular mentions.
Add all the stop words to page_params, reading from the
`zulip_english.stop` database, with caching to avoid loading the file
on every page load.
Part of #10592.
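A hedged sketch of the caching read; the path and parsing below are
illustrative, since the real data comes from the postgres
`zulip_english.stop` stop-word list:

```python
from functools import lru_cache
from typing import List

@lru_cache(maxsize=None)
def read_stop_words(path: str = "zulip_english.stop") -> List[str]:
    # Parse the stop-word file once and cache the result, so page
    # loads after the first don't reread it.
    with open(path) as f:
        return [line.strip() for line in f
                if line.strip() and not line.startswith("#")]
```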
This causes changing the email_address_visibility field to actually
modify what user_profile.email values are generated for users, both on
user creation and afterwards as email addresses are edited.
The overall feature isn't yet complete, but this brings us pretty close.
This commit does the following three things:
1. Update stream model to accommodate rendered description.
2. Render and save the stream rendered description on update.
3. Render and save stream descriptions on creation.
Further, the stream's rendered description is also sent whenever the
stream's description is being sent.
This is preparatory work for eliminating the use of the
non-authoritative marked.js markdown parser for stream descriptions.
You can now pass in an info field with a value
like "out to lunch" to the /users/me/status,
and the server will include that in its outbound
events.
The semantics here are that both "away" and
"status_text" have to have defined values in order
to cause changes. You can omit the keys or
pass in None when values don't change.
The way you clear info is to pass the empty
string.
We also change page_params to have a dictionary
called "user_status" instead of a set of user
ids. This requires a few small changes on the
frontend. (We will add "status_text" support in
subsequent commits; the changes here just keep
the "away" feature working correctly.)
We now have a single function that handles both away
and not-away.
This refactoring sets us up to piggyback "info" more
easily onto status updates.
The only thing that changes here is that we don't
delete database rows any more when users revoke
their away status. Instead we just set the status
to NORMAL.
When I was initially writing the tests to solve issue #10131 in the
relevant PR, I used 2 schema checkers as I modified the code to send
the rendered_value only when required.
When I was using just 1 schema checker shared between two code paths,
we needed _allow_only_listed_keys. But after shifting to 2 schema
checkers for the two different cases, we no longer needed that flag,
and it's better to remove it for a stronger check.
This adds the feature of sending a notification to the stream, via the
notification bot, when a stream is renamed. user_profile is also passed
to do_rename_stream so that the notification can name the user who
renamed the stream. The notification is sent to the stream using
internal_send_stream_message in do_rename_stream.
Fixes #11034.
This makes it possible to include our standard markdown formatting in
one's custom profile fields, allowing for links, emphasis, emoji, etc.
Fixes #10131.
Also, add a new notification sound, "ding". It comes from
https://freesound.org, where the original Zulip notification sound comes
from as well. In the future, new sounds can be added by adding audio
files to the `static/audio/notification_sounds` directory.
Tweaked significantly by tabbott:
* Avoided removing static/audio/zulip.ogg, because that file is
checked for by old versions of the desktop app.
* Added a views check for the sound being valid + tests.
* Added additional tests.
* Restructured the test_events test to be cleaner.
* Removed check_bool_or_string.
* Increased max length of notification_sound.
* Provide available_notification_sounds in events data set if global
notifications settings are requested.
Fixes #8051.
This is preparatory work for settings controlling who can see user
emails; it includes the API-level support for editing it, but no code
to actually enforce the policy.
This is the first step of letting users use Zulip markdown in their
SHORT_TEXT and LONG_TEXT custom profile fields, so that they can
include emphasis, links, etc.
This doesn't include any frontend logic yet, however.
This is a preparatory refactor for supporting hosting different Tornado
processes on different servers; to look up which Tornado server we
should be sending the event to, we'll need the realm object.
This supports guest users in the user-info-form-modal as well as in the
role section of the admin-user-table.
With some fixes by Tim Abbott and Shubham Dhama.
Bots are not allowed to use the same name as
other users in the realm (either bot or human).
This is kind of a big commit, but I wanted to
combine the post/patch (aka add/edit) checks
into one commit, since it's a change in policy
that affects both codepaths.
A lot of the noise is in tests. We had good
coverage on the previous code, including some places
like event testing where we were expediently
not bothering to use different names for
different bots in some longer tests. And then
of course I test some new scenarios that are relevant
with the new policy.
There are two new functions:
check_bot_name_available:
very simple Django query
check_change_bot_full_name:
this diverges from the 3-line
check_change_full_name, where the latter
is still used for the "humans" use case
And then we just call those in appropriate places.
Note that there is still a loophole here
where you can get two bots with the same
name if you reactivate a bot named Fred
that was inactive when the second bot named
Fred was created. Also, we don't attempt
to fix historical data. So this commit
shouldn't be considered any kind of lockdown,
it's just meant to keep people from
inadvertently creating two bots of the same
name where they don't intend to. For more
context, we are continuing to allow two
human users in the same realm to have the
same full name, and our code should generally
be tolerant of that possibility. (A good
example is our new mention syntax, which disambiguates
same-named people using ids.)
It's also worth noting that our web app client
doesn't try to scrub full_name from its payload in
situations where the user has actually only modified other
fields in the "Edit bot" UI. Starting here
we just handle this on the server, since it's
easy to fix there, and even if we fixed it in the web
app, there's no guarantee that other clients won't be
just as brute force. It wasn't exactly broken before,
but we'd needlessly write rows to audit tables.
Fixes #10509
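A pure-Python stand-in for the uniqueness check; the real
check_bot_name_available is a Django query over the active users and
bots in the realm, and the error message here is illustrative:

```python
from typing import Iterable

def check_bot_name_available(existing_full_names: Iterable[str],
                             full_name: str) -> None:
    # Case-insensitive comparison against every active user and bot in
    # the realm; refuse the name if it is already taken.
    normalized = full_name.strip().lower()
    if normalized in {name.strip().lower() for name in existing_full_names}:
        raise ValueError("Name is already in use!")
```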
In user type custom fields, the field value is a list of user ids. We
weren't converting that list to a JSON object in the update event
payload. This throws an error in the frontend, because we store a
stringified representation of the custom field value: after the update
event is received, the field value type changes from string to array,
which throws a JSON parsing error.
Following recent test flakes that were traced down to this not having
been called (causing `receiver_is_off_zulip` to depend on test
ordering), it makes sense to centralize this.
I think it should always have been in ZulipTestCase; it appears the
reason it wasn't from the beginning was that originally only
test_events.py interacted with it, and do_test there still needs to
call this directly (because it can be called multiple times within a
single test). And then we did the wrong thing as we expanded the use of
Tornado event_queue code in tests to more of the codebase.
This prevents these unit tests from accidentally leaking data outside
their boundaries.
Verified using a test that fails after test_events without this change.
Now reading API keys from a user is done with the get_api_key wrapper
method, rather than directly fetching it from the user object.
Also, every place where an action should be done for each API key is now
using get_all_api_keys. For the moment, this method returns a
single-item list containing the specified user's API key.
This commit is the first step towards allowing users to have multiple
API keys.
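The wrappers themselves are tiny; a sketch with a minimal stand-in for
the model (the real UserProfile is a Django model with an api_key
column):

```python
from typing import List

class UserProfile:
    # Minimal stand-in for the Django model's api_key column.
    def __init__(self, api_key: str) -> None:
        self.api_key = api_key

def get_api_key(user_profile: UserProfile) -> str:
    return user_profile.api_key

def get_all_api_keys(user_profile: UserProfile) -> List[str]:
    # Single-item list for now; callers written against this wrapper
    # won't need to change when users can have several keys.
    return [user_profile.api_key]
```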
When the last user (only in the case of an admin) unsubscribes from a
private stream, the stream page doesn't get updated, because we delete
the private stream as soon as the last user unsubscribes from it.
As a result, `sub` is undefined in the frontend, because the stream is
deleted before the unsubscribe-user-from-stream event is received.
Fix this by changing the order of events sent to the frontend: the
`subscription: remove` event should be sent before the `stream: delete`
event from the backend.
We were getting event-handling exceptions in JS in production if a new
user was created and then went and set a custom profile field, because
there was no `.profile_data` on their user object. We were able to
trace the issue down to the fact that our events didn't include that
field when creating a new user.
Fixes #7665
For invitation events, an 'invites_changed' event without any real
payload is sent to all the realm admins and to the user.
The event is handled by reloading the list to view recent changes.
Commit tweaked by shubhamdhama:
* Send an `invite_changed` event when a user accepts an invite.
Also, added the test for the same.
* No need to delete the invite list in the frontend; the current logic
properly handles the case when the invite data changes.
* Extracted the common logic for sending an event into
`notify_invites_changed`.
This is all the plumbing that makes it possible to enable the
stream_email_notifications setting via the Zulip API. The flag still
doesn't do anything yet, but this is a nice checkpoint along the way
to implementing this feature.
Custom profile field values are stored in a different structure than
other profile fields in events, so the generic way of updating fields
wasn't updating custom profile fields in the `apply_event` function.
Fix this by adding a check for custom fields in `apply_event`.
This also adds the appropriate test_events test to verify this code path.
Fixes part of #9875.
For some reason in my original version I was sending both
content and data to the client for submessage events,
where data === JSON.parse(content). There's no reason
to not just let the client parse it, since the client
already does it for data that comes on the original
message, and since we might eventually have non-JSON
payloads.
The server still continues to validate that the payload
is JSON, and the client will blueslip if the server
regresses and sends bad JSON for some reason.
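A sketch of the server-side validation that remains; the function name
and error message are illustrative, not the actual implementation:

```python
import json

def validate_submessage_content(content: str) -> None:
    # The server only verifies that the payload parses as JSON;
    # clients do the actual JSON.parse, as they already do for data
    # on the original message.
    try:
        json.loads(content)
    except ValueError:
        raise ValueError("Invalid json for submessage")
```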
This should make it easier for us to iterate on a less-dense Zulip.
We create two classes on body, less_dense_mode and more_dense_mode, so
that it's easy as we refactor to separate the two concepts from things
like colors that are independent.
We send add events on upload, update events when sending a message
referencing it, and delete events on removal.
This should make it possible to do real-time sync for the attachments
UI.
Based in part on work by Aastha Gupta.
We only use this data in a rarely-used settings screen, and it can be
large after years of posting screenshots.
So optimize the performance of / by just loading these data when we
actually visit the page.
This saves about 300ms of runtime for loading the home view for my
user account on chat.zulip.org.
Fixes #8853.
In certain cases, the browser is not able to look up the message.
Include the recipient data for the message in the delete_message event,
so the browser doesn't need to look up those attributes itself.
This is basically a simple fix, where we consistently set
`flags` to an empty array when we pass it around. The history
here is that we had kind of a nasty bug from setting it to
`None`, which only showed up in the somewhat obscure circumstance
of somebody subscribing to all stream events in our API.
Fixes #7921
Add a function in user_groups.py for getting the member ids
of a group.
Update the view to enforce checks for modifying user groups:
only admins and user group members can modify user groups.
This commit migrates realm emoji to be addressed by their `id` rather
than their name. This fixes a long standing issue which was causing
an error on uploading an emoji with the same name as a deactivated realm
emoji.
Fixes: #6977.
Adds a realm_bot delete event. On bot ownership change, an add event is
sent to the new bot owner (if not an admin) and a delete event to the
previous bot owner (if not an admin). For admins, an update event is sent.
This fixes an issue where the user's own avatar was being sent down
the wire as None. We could have fixed it, as in #8265, by adding code
in the webapp and mobile apps to compute medium-size gravatar URLs as
well, but that would be messy, and there's little benefit to that
complexity (saving at most 2 URLs from the payload).
Fixes #8253.
We'll replace this primarily with per-realm quotas (plus the simple
per-file limit of settings.MAX_FILE_UPLOAD_SIZE, 25 MiB by default).
We do want per-user quotas too, but they'll need some more management
apparatus around them so an admin has a practical way to set them
differently for different users. And the error handling in this
existing code is rather confused. Just clear this feature out
entirely for now; then we'll build the per-realm version more cleanly,
and then we can later add back per-user quotas modelled after that.
The migration to actually remove the field is in a subsequent commit.
Based in part on work by Vishnu Ks (hackerkid).
Adds a check for newline that was present on the backend, but missing in
the frontend markdown implementation. Updating messages now uses the
is_me_message flag received from the server instead of its own partial
test. Similarly, rendering previews uses the markdown code.
Fixes #6493.
This is the first step for allowing users
to edit a bot's service entries, namely the
outgoing webhook configuration entries. The
chosen data structures allow for a future
with multiple services per bot; right now,
only one service per bot is supported.
Previously, we weren't doing a proper left join in
user_groups_in_realm_serialized, resulting in empty user groups being
excluded from the query. We want to leave decisions about excluding
empty user groups to the UI layer, so we include these here.
This fixes a bug where, when a user is unsubscribed from a stream,
they might have unread messages on that stream leak. While it might
seem to be a minor problem, it can cause significant problems for
computing the `unread_msgs` data structures, since it means we need to
add an extra filter for whether the user is still subscribed, either
in the backend or in the UI.
Fixes #7095.
This endpoint will allow us to add/delete emoji reactions whose emoji
got renamed during various emoji infra changes. This was also a
required change for realm emoji migration.
This commit was tweaked significantly by tabbott for greater clarity
(with no changes to the actual logic).
This fixes a regression in ae5ba7f4fd,
where Zulip would 500 if the newly added system bots didn't exist on
the server.
This also fixes a moderate size performance problem where we'd fetch 5
users from memcached or the database in a loop.
These are new:
new-user-bot
emailgateway
Our cross-realm bots are hard coded to have email addresses
in the `zulip.com` domain, and they're not part of ordinary
realms.
These have always been cross-realm, but new enforcement in the
frontend code of all messages having been sent by a known user means
that it's important to add these properly.
This change affects realm_users and realm_non_active_users.
Note that we still send full avatar urls in realm_user/add
events, so apply_events has to do something mildly hacky to
turn the avatar_url to None in that case.
Fixing the event is probably not worth the trouble, as single
urls are not bandwidth hogs; we only need this optimization
for bulk data.
This change affects these values:
* page_params.avatar_url
* page_params.avatar_url_medium
It requires passing the client_gravatar flag through this
codepath:
* home_real
* do_events_register
* fetch_initial_state_data
* avatar_url
This commit allows clients to register client_gravatar=True, and
then we recognize that flag for message events. If the flag is
True, we will not calculate gravatar URLs and let the clients do
it themselves. (Clients can calculate gravatar URLs based on
emails with just a little bit of code.)
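The "little bit of code" is roughly the following; the query parameter
on the URL is illustrative:

```python
import hashlib

def gravatar_url_for_email(email: str) -> str:
    # Standard Gravatar construction: md5 of the lowercased,
    # whitespace-stripped email address.
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return f"https://secure.gravatar.com/avatar/{digest}?d=identicon"
```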
This refactoring doesn't change behavior, but it sets us up
to more easily handle a register setting for `client_gravatar`,
which will allow clients to tell us they're going to compute
their own gravatar URLs.
The `client_gravatar` flag already exists in our code, but it
is only used for Django views (users/messages) but not for
Zulip events.
The main change is to move the call to `set_sender_avatar` into
`finalize_payload`, which adds the boolean `client_gravatar`
parameter to that function. And then we update various callers
to supply that flag.
One small performance benefit of this change is that we now
lazily compute the client message payloads in
`event_queue.process_message_event`, so this will improve
performance if all interested clients have the same value of
`apply_markdown`. But the change here is really preparing us
for the additional boolean parameter, which will cause us to
have four variations of the payload.
Do you call get_recipient(Recipient.STREAM, stream_id) or
get_recipient(stream_id, Recipient.STREAM)? I could never
remember, and it was not very type safe, since both parameters
are integers.
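A toy illustration of the fix: a one-argument wrapper whose parameter
can't be swapped. The types below are stand-ins for the Django
Recipient model, not the real definitions:

```python
from enum import Enum
from typing import NamedTuple

class RecipientType(Enum):
    PERSONAL = 1
    STREAM = 2

class Recipient(NamedTuple):
    type: RecipientType
    type_id: int

def get_stream_recipient(stream_id: int) -> Recipient:
    # One integer argument with one meaning, instead of two integers
    # whose order is easy to mix up.
    return Recipient(RecipientType.STREAM, stream_id)
```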
We mostly introduce these functions (as part of a big
code sweep):
send_stream_message
send_personal_message
send_huddle_message
In two cases, where we want to specifically manipulate
queue ids, we now call check_send_message directly. (The
above three functions deliberately don't support kwargs
to ensure simple code and better type safety.)
In do_send_messages, we only produce one dictionary for
the event queues, instead of different flavors for text
vs. html. This prevents two unnecessary queries to the
database.
It also means we only put one dictionary on the "message"
event queue instead of two, albeit a wider one that has
some values that won't be sent to the actual clients.
This wider dictionary from MessageDict.wide_dict is also
used for the `feedback_messages` queue and service bot
queues. Since the extra fields are possibly useful down
the road, and they'll just be ignored for now, we don't
bother to remove them. Also, those queue processors won't
have access to `content_type`, which they shouldn't need.
Fixes #6947
We make a few things cleaner for populating `realm_users`
in `do_event_register` and `apply_events`:
* We have a `raw_users` intermediate dictionary that
makes event updates O(1) and cleaner to read.
* We extract an `is_me` section for all updates that
apply to the current user.
* For `update` events, we do a more surgical copying
of fields from the event into our dict. This
prevents us from mutating fields in the event,
which was sketchy (at least in test mode). In
particular, this allowed us to remove some ugly
`del` code related to avatars.
* We introduce local vars `was_admin` and `now_admin`.
The cleanup had two test implications:
* We no longer need to normalize `realm_users`, since
`apply_events` now sees `raw_users` instead. Since
`raw_users` is a dict, there is no need to normalize
it, unlike lists with possibly random order.
* We updated the schema for avatar updates to include
the two fields that we used to hackily delete from
an event.
This new test solves the problem that, when we
made changes to the page-load codepath in the past,
it was hard to identify what new code caused
more database queries. Now you can see query
counts broken out by event type.
This requires a small, harmless change to extract
an `always_want` function in `lib/events.py`.
This field would get overwritten with an improper value when
we looped over multiple clients, due to not making full copies
of the message dictionary. This failure would be somewhat
random depending on how clients were ordered in the loop.
The only consumers of this field were the mobile app and the
apply-events-to-unread-counts logic. Both of these will now
use `flags` instead.
The `is_mentioned` flag in message events was buggy. We now
look directly at flags.
We will kill off `is_mentioned` in a subsequent commit.
We also remove some debugging code in the test that was failing
before this fix. The test would only fail when `is_mentioned`
was wrong, which never happened when you ran a single test, and
which would happen randomly when you ran multiple tests.
It's fairly difficult to debug tests that use
EventsRegisterTest.do_test, and when they fail on
Travis, it's particularly challenging. Now we make
the main diff less noisy, and we also include
the events that were applied.
The logic to apply events to page_params['unread_msgs'] was
complicated due to the aggregated data structures that we pass
down to the client.
Now we defer the aggregation logic until after we apply the
events. This leads to some simplifications in that codepath,
as well as some performance enhancements.
The intermediate data structure has sets and dictionaries that
generally are keyed by message_id, so most message-related
updates are O(1) in nature.
Also, by waiting to compute the counts until the end, it's a
bit less messy to try to keep track of increments/decrements.
Instead, we just update the dictionaries and sets during the
event-apply phase.
This change also fixes some corner cases:
* We now respect mutes when updating counts.
* For message updates, instead of bluntly updating
the whole topic bucket, we update individual
message ids.
Unfortunately, this change doesn't seem to address the pesky
test that fails sporadically on Travis, related to mention
updates. It will change the symptom, slightly, though.
We now do push notifications and missed message emails
for offline users who are subscribed to the stream for
a message that has been edited, but we short circuit
the offline-notification logic for any user who presumably
would have already received a notification on the original
message.
This effectively boils down to sending notifications to newly
mentioned users. The motivating use case here is that you
forget to mention somebody in a message, and then you edit
the message to mention the person. If they are offline, they
will now get pushed notifications and missed message emails,
with some minor caveats.
We try to mostly use the same techniques here as the
send-message code path, and we share common code with the
send-message path once we get to the Tornado layer and call
maybe_enqueue_notifications.
The major places where we differ are in a function called
maybe_enqueue_notifications_for_message_update, and the top
of that function short circuits a bunch of cases where we
can mostly assume that the original message had an offline
notification.
We can expect a couple changes in the future:
* Requirements may change here, and it might make sense
to send offline notifications on the update side even
in circumstances where the original message had a
notification.
* We may track more notifications in a DB model, which
may simplify our short-circuit logic.
In the view/action layer, we already had two separate codepaths
for send-message and update-message, but this mostly echoes
what the send-message path does in terms of collecting data
about recipients.
This class encapsulates the mapping of stream ids to
recipient ids, and it is optimized for bulk use and
repeated use (i.e. it remembers values it already fetched).
This particular commit barely improves the performance
of gather_subscriptions_helper, but it sets us up for
further optimizations.
Long term, we may try to denormalize stream_id on to the
Subscriber table or otherwise modify the database so we
don't have to jump through hoops to do this kind of mapping.
This commit will help enable those changes, because we
isolate the mapping to this one new class.
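A self-contained sketch of the caching behaviour; the real class issues
one bulk database query for the ids it hasn't seen yet, and the method
names here are illustrative:

```python
from typing import Callable, Dict, Iterable, List

class StreamRecipientMap:
    def __init__(self) -> None:
        self._recipient_ids: Dict[int, int] = {}

    def populate_for(self, stream_ids: Iterable[int],
                     bulk_fetch: Callable[[List[int]], Dict[int, int]]) -> None:
        # Only fetch ids we haven't already cached; repeated callers
        # reuse earlier results instead of requerying.
        missing = [sid for sid in stream_ids if sid not in self._recipient_ids]
        if missing:
            self._recipient_ids.update(bulk_fetch(missing))

    def recipient_id_for(self, stream_id: int) -> int:
        return self._recipient_ids[stream_id]
```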
This commit completely switches us over to using a
dedicated model called MutedTopic to track which topics
a user has muted.
This includes the necessary migrations to create the
table and populate it from legacy data in UserProfile.
A subsequent commit will actually remove the old field
in UserProfile.
This function optimizes marking streams and topics as read,
by using UserMessage.where_unread(), which uses a partial
index on the "read" flag.
This also simplifies the code path for ordinary message
flag updates.
In order to keep 100% line coverage, I simplified the
logging in update_message_flags, so now all requests
will show the "actually" format.
This is an interim step toward creating dedicated endpoints
for marking streams/topics as read, so we do error checking
with asserts for flag/operation rather than introducing a
temporary translation string.
This is the first part of a larger migration to convert Zulip's
reactions storage to something based on the codepoint, not the emoji
name that the user typed in, so that we don't need to worry about
changes in the names we're using breaking the emoji storage.
We were exiting this function in certain cases before updating
mentions. This bug was always there, but it was flaky in terms
of database setup whether the tests would fail, so now the
relevant test sends three consecutive messages.
We also avoid putting duplicate message ids in mentions.
The "all" option for 'message/flags' was dangerous, as it could
apply to any of our flags. The only flag it made sense for, the
"read" flag, now has a dedicated endpoint.
We are adding a new list of unread message ids grouped by
conversation to the queue registration result. This will allow
clients to show accurate unread badges without needing to load an
unbounded number of historic messages.
Jason started this commit, and then Steve Howell finished it.
We only identify conversations using stream_id/user_id info;
we may need a subsequent version that includes things like
stream names and user emails/names for API clients that don't
have data structures to map ids -> attributes.
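A small sketch of the grouping; the row shape here is illustrative,
while the real code works from UserMessage/Message data:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def group_unread_stream_messages(
    rows: List[Tuple[int, int, str]],  # (message_id, stream_id, topic)
) -> Dict[Tuple[int, str], List[int]]:
    # Group unread message ids per (stream_id, topic) conversation so
    # clients can show accurate badges without fetching old messages.
    unread_by_conversation: Dict[Tuple[int, str], List[int]] = defaultdict(list)
    for message_id, stream_id, topic in rows:
        unread_by_conversation[(stream_id, topic)].append(message_id)
    return dict(unread_by_conversation)
```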
I pushed a bunch of commits that attempted to introduce
the concept of `client_message_id` into our server, as
part of cleaning up our codepaths related to messages you
sent (both for the locally echoed case and for the host
case).
When we deployed this, we had some strange failures involving
double-echoed messages and issues advancing the pointer that appeared
related to #5779. We didn't get to the bottom of exactly why the PR
caused havoc, but I decided there was a cleaner approach, anyway.
We are deprecating local_id/local_message_id on the Python server.
Instead of the server knowing about the client's implementation of
local id, with the message id = 9999.01 scheme, we just send the
server an opaque id to send back to us.
This commit changes the name from local_id -> client_message_id,
but it doesn't change the actual values passed yet.
The goal for client_key in future commits will be to:
* Have it for all messages, not just locally rendered messages
* Not have it overlap with server-side message ids.
The history behind local_id having numbers like 9999.01 is that
they are actually interim message ids and the numerical value is
used for rendering the message list when we do client-side rendering.
Use bool_change if the user_display setting property_type is bool, so that no additional code needs to be added to test_events for new boolean user display settings.
This system hasn't been in active use for several years, and had some
problems with its design. So it makes sense to just remove it to
declutter the codebase.
Fixes #5655.
Realm.notifications_stream is not a boolean, Text, or integer field, and
thus doesn't fit into the do_set_realm_property framework. Added a
function to update it in actions.py. Altered the view, realm.py, to
accept a stream id. Also, the notifications stream can be disabled by
sending a negative id.
This makes it possible for Zulip administrators to delete messages.
This is primarily intended for use in deleting early test messages,
but it can solve other problems as well.
Later we'll want to play with the permissions model for this, but for
now, the goal is just to integrate the feature.
Note that it saves the deleted messages for some time using the same
approach as Zulip's message retention policy feature.
Fixes #135.
Previously, each notification preference setting had a dedicated test
and setter. Now, all are handled through a modular function using the
property_types framework.
This fixes most cases where we were assigning a user to
the var email and then calling get_user_profile_by_email with
that var.
(This was fixed mostly with a script.)
The example_user() function is specifically designed for
AARON, hamlet, cordelia, and friends, and it allows a concise
way of using their built-in user profiles. Eventually, the
widespread use of example_user() should help us with refactorings
such as moving the tests users out of the "zulip.com" realm
and deprecating get_user_profile_by_email.
This new feature makes it possible to request a different set of
initial data from the event_types an API client is subscribing to.
Primarily useful for mobile apps, where bandwidth constraints might
mean one wants to subscribe to events for a broader set of data than
is initially fetched, and plan to fetch the current state in future
requests.
- Add aggregated info to the real-time updated presence status.
- Update the `presence events` test case, adding aggregated
information to the presence event.
- Add a test case for updating the presence status of a user who
sends state from multiple clients.
Fixes #4282.
This is basically just using the new check_dict_only everywhere, with
a few exceptions:
* New self.check_events_dict automatically adds the id field to avoid
duplicating it ~80 times.
* Set log=False for many of the testing action functions to remove the
timestamp field from their returned event dictionaries, since it's
not needed and is the result of a deprecated log_event function.
Wasn't sure if the subscription_field list in do_test_subscribe_events
could contain optional arguments, so I left that call using check_dict,
along with a TODO.
Fixes: #1370.
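A minimal, self-contained sketch of the kind of strict validator this
describes; check_dict_only and check_events_dict below are simplified
stand-ins for the real validators, which return an error string or None:

```python
from typing import Any, Callable, List, Optional, Tuple

Validator = Callable[[str, Any], Optional[str]]

def check_int(var_name: str, val: Any) -> Optional[str]:
    return None if isinstance(val, int) else f"{var_name} is not an int"

def check_dict_only(keys: List[Tuple[str, Validator]]) -> Validator:
    def validator(var_name: str, val: Any) -> Optional[str]:
        if not isinstance(val, dict):
            return f"{var_name} is not a dict"
        # Reject any keys that were not explicitly listed; this is
        # what makes the schema checks strict.
        extra = set(val) - {key for key, _ in keys}
        if extra:
            return f"{var_name} has unexpected keys: {sorted(extra)}"
        for key, sub_validator in keys:
            if key not in val:
                return f"{key} is missing from {var_name}"
            error = sub_validator(f"{var_name}['{key}']", val[key])
            if error is not None:
                return error
        return None
    return validator

def check_events_dict(required_keys: List[Tuple[str, Validator]]) -> Validator:
    # Every event dict carries an integer `id`, so add it here once
    # instead of repeating it in every schema (~80 call sites).
    return check_dict_only(required_keys + [("id", check_int)])
```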
This removes individual tests for realm properties and replaces them
with a generic do_set_realm_property_test function to test each
property in the Realm.property_types attribute.
Addresses part of #3854.
This replaces individual tests for realm properties with a generic
do_test_realm_update_api function to test each property in the
Realm.property_types attribute.
Addresses part of #3854.
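A rough illustration of the generic pattern described in these two
commits; the property names and types below are examples, since the
real map is Realm.property_types on the model:

```python
from typing import Any, Callable, Dict, List, Type

# Stand-in for Realm.property_types; the real map lives on the Realm model.
property_types: Dict[str, Type[Any]] = {
    "invite_required": bool,
    "description": str,
    "message_retention_days": int,
}

def test_values_for(property_type: Type[Any]) -> List[Any]:
    # Pick a couple of values of the right type so every property gets
    # exercised by the same generic test body.
    if property_type is bool:
        return [True, False]
    if property_type is int:
        return [10, 20]
    return ["first value", "second value"]

def run_realm_property_tests(realm: Any,
                             do_set_realm_property: Callable[[Any, str, Any], None]) -> None:
    for name, kind in property_types.items():
        for value in test_values_for(kind):
            do_set_realm_property(realm, name, value)
            assert getattr(realm, name) == value
```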
This commit adds the backend support for a new style of tutorial which
allows for highlighting of multiple areas of the page with hotspots that
disappear when clicked by the user.
- Add message retention period field to organization settings form.
- Add css for retention period field.
- Add a converter to a non-negative int or to None.
- Add retention period setting processing to back-end.
- Fix tests.
Modified by tabbott to hide the setting, since it doesn't work yet.
The goal of merging this setting code now is to avoid unnecessary
merge conflicts in the future.
Part of #106.
This fixes 2 issues:
* Being added to an invite_only stream did not correctly update the
"streams" key of the initial state.
* Once that's resolved, subscribe_to_stream when called on a
nonexistent stream would both send a "create" event (from
create_stream_if_needed) and an "occupy" event (from
bulk_add_subscriptions).
The second event should just be suppressed in that case, and this
implements that suppression.
We previously didn't apply the default language event change
correctly.
Not super important as a bug, since we require the user to reload the
browser for their changes to take effect, but this will save time if
we ever change that.
zerver/lib/actions: removed do_set_realm_* functions and added
do_set_realm_property, which takes in a realm object and the name and
value of an attribute to update on that realm.
zerver/tests/test_events.py: refactored realm tests with
do_set_realm_property.
Kept the do_set_realm_authentication_methods and
do_set_realm_message_editing functions because their function
signatures are different.
Addresses part of issue #3854.
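A hedged sketch of the shape of this helper; the real version also
validates the value's type against Realm.property_types and sends a
realm/update event to the realm's active users:

```python
from typing import Any

def do_set_realm_property(realm: Any, name: str, value: Any) -> None:
    # `realm` is the Realm model instance and `name` must be one of
    # the keys of Realm.property_types.
    setattr(realm, name, value)
    realm.save(update_fields=[name])
```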
This adds an organization description field to the Realm model, as well as
an input field to the organization settings template. Added three tests.
Set the max length of the field to 100 characters.
Fixes #3962.
This currently only supports this in emoji reactions, not in actual
emoji in message bodies, but it's a great start for people who want a
text-only view.
Tweaked to update the text by tabbott.
Fixes #3169.
Fix an administration page JavaScript TypeError caused by accessing an
undefined variable in static/js/bot_data.js.
Reactivating a bot was not updating the state in `bot_data`.
Sending an event on reactivating a bot fixes this issue.
Fixes: #2840
Modify the `bot_list` to hold all the bots owned by a user
irrespective of whether the bot is active or inactive. Also
include the `is_active` field in `active_bot_dict_fields` to
distinguish between inactive and active bots.
Our client code will now receive avatar_url in
page_params.people_list during page load, so it will be
able to use more current urls for old messages (the client
already had some logic for that and was just missing the
data).
We also add avatar_url to the realm_user/add event.
When we change the avatar, we make sure to always send a
realm_user/update event (even for bots).
We also needed to add avatar_version and
avatar_source to our active users cache.