We are adding a new list of unread message ids grouped by
conversation to the queue registration result. This will allow
clients to show accurate unread badges without needing to load an
unbounded number of historical messages.
Jason started this commit, and then Steve Howell finished it.
We only identify conversations using stream_id/user_id info;
we may need a subsequent version that includes things like
stream names and user emails/names for API clients that don't
have data structures to map ids -> attributes.
In anticipation of having all unread message ids available to the
web app in page_params (via a separate effort), we are simplifying
the /topics endpoint to no longer return unread counts.
Instead we have a list of tiny dictionaries with these fields:
name - name of the topic
max_id - max message id for the topic (aka most recent)
The items in the list are ordered most-recent-topic-first.
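For illustration, a payload from the simplified /topics endpoint might look
roughly like this (topic names and message ids here are made up):

    # Hypothetical example of the simplified /topics payload: one tiny dict
    # per topic, ordered most-recent-topic-first by max_id.
    topics = [
        {"name": "release planning", "max_id": 1048},
        {"name": "lunch", "max_id": 1042},
        {"name": "design review", "max_id": 997},
    ]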
In most cases, we do have the data for which other user was
responsible for subscribing the target user to new streams.
The main case where we don't is when the user is created and gets the
default streams.
ScheduledJob was written for much more generality than it ended up being
used for. Currently it is used by send_future_email, and nothing
else. Tailoring the model to emails in particular will make it easier to do
things like selectively clear emails when people unsubscribe from particular
email types, or seamlessly handle using the same email on multiple realms.
Both the queue processor and ScheduledJob emails need to sometimes pass a
to_user_id and sometimes pass a to_email, and it's more convenient to just
have one function that they can call that can handle either.
Also removes the now redundant send_email_to_user.
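A minimal sketch of the idea (the signature and helper below are assumptions,
not the actual send_email interface):

    # Sketch only: one entry point that accepts either a user id or an email
    # address, so both callers can use the same function.
    def send_email(template_prefix, to_user_id=None, to_email=None, context=None):
        assert (to_user_id is None) != (to_email is None), \
            "pass exactly one of to_user_id / to_email"
        if to_user_id is not None:
            to_email = resolve_email(to_user_id)
        # ...build the message from template_prefix/context and deliver it...
        return to_email

    def resolve_email(user_id):
        # Stand-in for the real UserProfile lookup, just for this sketch.
        return "user%d@example.com" % (user_id,)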
This new setting controls whether or not users are allowed to see the
edit history in a Zulip organization. It controls access through 2
key mechanisms:
* For long-ago edited messages, get_messages removes the edit history
content from messages it sends to clients.
* For newly edited messages, clients are responsible for checking the
setting and not saving the edit history data. Since the webapp was
the only client displaying it before this change, this just required
some changes in message_events.js.
Significantly modified by tabbott to fix some logic bugs and add a
test.
I pushed a bunch of commits that attempted to introduce
the concept of `client_message_id` into our server, as
part of cleaning up our codepaths related to messages you
sent (both for the locally echoed case and for the host
case).
When we deployed this, we had some strange failures involving
double-echoed messages and issues advancing the pointer that appeared
related to #5779. We didn't get to the bottom of exactly why the PR
caused havoc, but I decided there was a cleaner approach, anyway.
We are deprecating local_id/local_message_id on the Python server.
Instead of the server knowing about the client's implementation of
local id, with the message id = 9999.01 scheme, we just send the
server an opaque id to send back to us.
This commit changes the name from local_id -> client_message_id,
but it doesn't change the actual values passed yet.
The goal for client_message_id in future commits will be to:
* Have it for all messages, not just locally rendered messages
* Not have it overlap with server-side message ids.
The history behind local_id having numbers like 9999.01 is that
they are actually interim message ids and the numerical value is
used for rendering the message list when we do client-side rendering.
This system hasn't been in active use for several years, and had some
problems with its design. So it makes sense to just remove it to declutter
the codebase.
Fixes #5655.
This prevents users from accidentally sending a confirmation link
specific to their account to their Zulip administrator if they reply
to the invitation, invitation reminder, account confirmation, or new
email confirmation emails.
Instead of removing emoji from the database, just mark them as
deactivated so that they can't be used further but can be rendered
properly in reactions and messages.
Fixes: #4750.
No change in behavior.
Also makes the first step towards converting all uses of
settings.ZULIP_ADMINISTRATOR and settings.NOREPLY_EMAIL_ADDRESS to
FromAddress.*.
Once everything is converted, it will be easier to ensure that future
development doesn't break backwards compatibility with the old style of
settings emails.
This will allow for customized senders for emails, e.g. 'Zulip Digest' for
digest emails and 'Zulip Missed Messages' for missed message emails.
Also:
* Converts the sender name to always be "Zulip", if the from_email used to
be settings.NOREPLY_EMAIL_ADDRESS or settings.ZULIP_ADMINISTRATOR.
* Changes the default value of settings.NOREPLY_EMAIL_ADDRESS in the
prod_setting_template to no longer have a display name. The only use of
that display name was in the email pathway.
I think it makes sense to wrest the email sending from confirmation, now
that we have a clean email-sending interface in send_email. A few other
reasons:
* send_confirmation is get_link_for_object followed by send_email, but those
two functions have no arguments in common.
* Sending email through confirmation obfuscates the context dict, and is a
relatively complicated piece of the codebase anyone trying to deal with
the email system has to understand.
* The three emails previously being sent through confirmation don't have
that much in common, other than that they happen to have a confirmation
link in them.
The .split('/')[-1] in registration.py is a hack, but a hack used in several
places in the codebase, so maybe one day get_link_for_object will also
return the confirmation_key.
Server settings should just be added to the context in build_email, so that
the individual email pathways (and later, the email testing framework)
don't have to worry about it.
Realm.notifications_stream is not a boolean, Text or integer field, and
thus doesn't fit into the do_set_realm_property framework. Added a function
to update it in actions.py. Altered the view, realm.py, to accept a
stream id. Also, the notifications stream can be disabled by sending a
negative id.
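Roughly, the new setter could look like this (a simplified sketch, not the
actual implementation; event sending is elided):

    # Sketch only: treat a negative stream id as "disabled" and persist the
    # change; the real function also notifies clients of the update.
    def do_set_realm_notifications_stream(realm, stream, stream_id):
        realm.notifications_stream = None if stream_id < 0 else stream
        realm.save(update_fields=["notifications_stream"])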
When the last user on a private stream is removed, the stream is no
longer possible to administer, and thus should be marked as
deactivated, so that default streams entries are removed and it no
longer appears in the UI as a non-administerable broken stream.
If you deactivated a default stream, we would correctly remove it from
the list of default streams in the organization. However, we did not
call `send_event`, so browsers would still display it as a default
stream until the next reload.
This fixes that issue by calling do_remove_default_stream instead of
doing the database query directly.
This makes it possible for Zulip administrators to delete messages.
This is primarily intended for use in deleting early test messages,
but it can solve other problems as well.
Later we'll want to play with the permissions model for this, but for
now, the goal is just to integrate the feature.
Note that it saves the deleted messages for some time using the same
approach as Zulip's message retention policy feature.
Fixes #135.
Previously, each notification preference setting had a dedicated test
and setter. Now, all are handled through a modular function using the
property_types framework.
We now pre-populate the streams in DEFAULT_NEW_REALM_STREAMS
(social/general/zulip, unless somebody changes settings.py) with
welcome messages. This makes the streams appear to be active
right away, and it also gives the Zulip realm less of a
blank-slate feeling when you create it.
This change only affects the normal web-based create-realm flow.
It doesn't impact the management commands for creating realms
or setting default streams.
This makes the new user experience in an active community like
chat.zulip.org substantially nicer, since the new user will have the
same level of initial messages to populate topics (etc.) as an
existing user who is caught up.
Without this, there was an undue level of fading-for-inactivity in the
default streams.
Relies on the fact that all the email template names now follow the same
pattern.
Note that there was some template_prefix-like computation being done in
send_confirmation (conditioned on obj.realm.is_zephyr_mirror_realm); that
computation is now being done in the callers.
This increase in the list of needed fields carries a small performance
cost, but it should be very small, and this change is needed to
support outgoing webhooks without additional database queries.
Also puts them into a processing queue, though the queue processor
does nothing.
Rewritten by tabbott to avoid unnecessary database queries in
do_send_messages.
This is a better solution to the problem of how _pg_re_escape should
handle the null character. There's really no good reason to have a
null character in a stream name.
The new function takes a full UserProfile object for the sender,
which allows us to avoid O(N) calls when creating the stream to
find the user profile of the notification bot. (The calls were
already cached, so this won't necessarily be a huge performance
win.)
We also don't have to worry about sending a blank subject any more.
The new, more direct interface for prepping internal stream
messages circumvents the bug-prone extract_recipients() method,
which has the pitfall that it will try to parse a stream name
as JSON. It also takes a UserProfile object for the sender, so
it's a bit more type-safe.
- Add file_name field to `RealmEmoji` model and migration.
- Add emoji upload support to the upload backends.
- Add uploaded file processing to emoji views.
- Use the emoji source url as the basis for the display url.
- Change emoji form for image uploading.
- Fix back-end tests.
- Fix front-end tests.
- Add tests for emoji uploading.
Fixes #1134
This moves the avatar_ fields in page_params to come from
register_ret. Unlike many fields, changing this had a bit of
complexity, because the avatar update events didn't actually contain
some of the details required for moving these into register_ret to
work correctly without races.
We fix that as part of this change.
Modified significantly by tabbott.
The function internal_prep_message is kind of awkward to
call, so I'm moving most of its implementation to
_internal_prep_message() for upcoming refactorings.
- Add aggregated info to real-time updated presence status.
- Update `presence events` test case by adding aggregated
information to the presence event.
- Add test case for updating presence status for a user who
sends state from multiple clients.
Fixes #4282.
The previous logic was that anyone with a link to a file could send it
to other users, but only the owner could make a file realm-public.
This had some confusing corner cases.
The new logic is much simpler:
* Only the file's owner/uploader can include a file in a message for
the first time.
* Anyone with access to read a file can share it with others by
including it in messages they send.
* Once a file has been sent to a public stream, any user in the realm
can access it.
This replaces individual tests for realm properties with a generic
do_test_realm_update_api function to test each property in the
Realm.property_types attribute.
Addresses part of #3854.
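A sketch of the pattern (the helper's exact signature is an assumption, and
the usual imports and test-class setup are omitted):

    # Run one generic check per entry in Realm.property_types instead of a
    # dedicated test per realm property.
    def test_realm_properties(self):
        for property_name in Realm.property_types:
            self.do_test_realm_update_api(property_name)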
Users editing messages or updating message flags are either already
recorded or not interesting from an audit perspective, and so there's
no need to use log_event with them.
This commit adds the backend support for a new style of tutorial which
allows for highlighting of multiple areas of the page with hotspots that
disappear when clicked by the user.
Modify `bot_owner_user_ids()` to return the user_ids of only
admins and bot owners instead of all the current active users.
This was causing a traceback on the frontend.
Fixes: #3391.
- Add message retention period field to organization settings form.
- Add css for retention period field.
- Add a converter to a non-negative int or to None.
- Add retention period setting processing to back-end.
- Fix tests.
Modified by tabbott to hide the setting, since it doesn't work yet.
The goal of merging this setting code now is to avoid unnecessary
merge conflicts in the future.
Part of #106.
This fixes 2 issues:
* Being added to an invite_only stream did not correctly update the
"streams" key of the initial state.
* Once that's resolved, subscribe_to_stream when called on a
nonexistent stream would both send a "create" event (from
create_stream_if_needed) and an "occupy" event (from
bulk_add_subscriptions).
The second event should just be suppressed in that case, and this
implements that suppression.
This makes it possible for us to do some convenient validation for
developers, checking whether the correct types are passed for each
realm property.
zerver/lib/actions: removed do_set_realm_* functions and added
do_set_realm_property, which takes in a realm object and the name and
value of an attribute to update on that realm.
zerver/tests/test_events.py: refactored realm tests with
do_set_realm_property.
Kept the do_set_realm_authentication_methods and
do_set_realm_message_editing functions because their function
signatures are different.
Addresses part of issue #3854.
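For reference, a simplified sketch of the new setter (not the exact
implementation; event dispatch is elided):

    # Validate the value's type against Realm.property_types, then update the
    # named attribute on the realm.
    def do_set_realm_property(realm, name, value):
        property_type = Realm.property_types[name]
        assert isinstance(value, property_type), (
            'Cannot update %s: %s is not an instance of %s'
            % (name, value, property_type))
        setattr(realm, name, value)
        realm.save(update_fields=[name])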
This makes get_stream match get_realm, get_user_profile_by_email,
etc., in interface, and is more convenient for mypy annotations
because `get_stream` now doesn't return an Optional[Stream].
We use the same strategy Zulip already uses for starred messages,
namely, creating a new UserMessage row with the "historical" flag set
(which basically means Zulip can ignore this row for most purposes
that use UserMessage rows). The historical flag is ignored, however,
in determining which users' browsers to notify about new reactions,
and thus the user will get to see the reaction appear when they click
a message (and any reactions other users later add, as well!).
There's still something of a race here, in that if some users react to
a message while the user is looking at the unsubscribed stream but
before the user reacts to that message, those reactions will not be
displayed to that user (so counts will be a bit lower, or something).
This race feels small enough to ignore for now.
Fixes #3345.
This adds an organization description field to the Realm model, as well as
an input field to the organization settings template. Added three tests.
Set the max length of the field to 100 characters.
Fixes #3962.
validate_user_access_to_subscribers_helper never uses
stream_dict['realm__domain']. I imagine it was there originally to do the
is_zephyr_mirror_realm check.
Previously we used the topic "Realm.domain" for new user signups, but topic
"Realm.string_id" for the realm creation. This changes the user signup
messages to be on the same topic thread as the realm creation.
All current calls to do_activate_user just use the default value of
timezone.now(). Having a date_joined other than timezone.now() raises an
interesting RealmAuditLog question (namely, which time should be used),
which we don't have to answer if we remove the argument.
This currently only supports this in emoji reactions, not in actual
emoji in message bodies, but it's a great start for people who want a
text-only view.
Tweaked to update the text by tabbott.
Fixes #3169.
Standardizing the Zulip codebase to use UTC everywhere. Note that unlike
many recent commits in this line, this change does result in a change in
behavior.
When you pass a naive datetime to the Django ORM, it uses settings.TIME_ZONE
for the time zone. In the development environment, both settings.TIME_ZONE
and datetime.now() use 'America/New_York', so there is no change in behavior
there. (fromtimestamp with no tz argument uses the same timezone as
datetime.now)
We are soon going to change settings.TIME_ZONE to UTC, so we need to remove
naive datetimes from queries to the ORM.
Fix a JavaScript TypeError on the administration page caused by
accessing an undefined variable in static/js/bot_data.js.
Reactivating a bot was not updating the state in `bot_data`.
Sending an event on reactivating a bot fixes this issue.
Fixes: #2840
Change `from django.utils.timezone import now` to
`from django.utils import timezone`.
This is both because now() is ambiguous (could be datetime.datetime.now),
and more importantly to make it easier to write a lint rule against
datetime.datetime.now().
Modify the `bot_list` to hold all the bots owned by a user
irrespective of whether the bot is active or inactive. Also
include the `is_active` field in `active_bot_dict_fields` to
distinguish between inactive and active bots.
This adds to Zulip support for a user changing their own email
address.
It's backed by a huge amount of work by Steve Howell on making email
changes actually work from a UI perspective.
Fixes #734.
Our client code will now receive avatar_url in
page_params.people_list during page load, so it will be
able to use more current urls for old messages (the client
already had some logic for that and was just missing the
data).
We also add avatar_url to the realm_user/add event.
When we change the avatar, we make sure to always send a
realm_user/update event (even for bots).
We also needed to add avatar_version and
avatar_source to our active users cache.
This makes life a lot easier for people inviting users to a new Zulip
organization, since they can give some form of context now.
Modified by tabbott to clean up CSS, backend code flow, and improve
the formatting of the emails.
Fixes: #1409.
There's a new option, `include_subscribers`, that controls whether the
API sends down subscriber data for the various streams you are
subscribed to.
This has significant performance savings for large realms with naive
clients, and saves a bunch of bandwidth as well.
This is important for, in the future, being able to display who edited
the topic of a message if that wasn't the person who originally sent
the message.
This is a fairly risky, invasive change that speeds up
stream deactivation by no longer sending subscription/remove
events for individual subscribers to all of the clients who
care about a stream. Instead, we let the client handle the
stream deactivation on a coarser level.
The back end changes here are pretty straightforward.
On the front end we handle stream deactivations by removing the
stream (as needed) from the streams sidebar and/or the stream
settings page. We also remove the stream from the internal data
structures.
There may be some edge cases where live updates don't handle
everything, such as if you are about to compose a message to a
stream that has been deactivated. These should be rare, as admins
generally deactivate streams that have been dormant, and they
should be recoverable either by getting proper error handling when
you try to send to the stream or via reload.
This fix prevents stream deactivation from being basically
unusable for medium to large sites. Instead of calling
bulk_remove_subscriptions one at a time for every individual
member of the realm, we call it once for all the users that
care about the stream. This change makes a huge difference, but
the feature is still a bit clunky, and we should only temporarily
revert to this fix if future, more-invasive fixes have flaws.
Fixes #3631.
In some cases here we simplify things by calling avatar_url()
instead of get_avatar_url(), when we have a user_profile record
handy. For other cases we pass in an extra avatar_version
parameter to get_avatar_url(), including from avatar_url().
We have a field called user_profile.avatar_version that will
track avatar versions and be used tactically in avatar urls
to get browsers to refresh their caches (in future commits).
This commit bumps the avatar version when we update avatars.
We do this in do_change_avatar_fields(), which was
do_change_avatar_source() before this change.
Adarsh did the initial work here, and Steve Howell (showell) also
made changes.
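A sketch of the version bump described above (simplified; event sending is
elided):

    def do_change_avatar_fields(user_profile, avatar_source):
        user_profile.avatar_source = avatar_source
        user_profile.avatar_version += 1
        user_profile.save(update_fields=["avatar_source", "avatar_version"])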
I dug into why we never did this before, and it turns out we did, but
using `$.trim()` (which removes leading whitespace as well!). This
change removes that `$.trim()` usage.
Fixes #3294.
This commit adds html versions of the invite and signup mails and renames
the existing .txt files to the preferred file extensions '.subject', '.html'
and '.txt'. The html versions of the mails are being sent along with the
text-only versions by the 'send_confirmation' function.
This fixes #3134.
This moves do_events_register, fetch_initial_state_data and friends to
a new file.
Modified significantly by tabbott for correctness and to remove unused
imports.
Fixes #3635.
Having `restricted_to_domain` set to True if there are no more aliases
left means the user is either confused or forgot to set it to False. It
should be set to False automatically when the last alias is deleted.
This fixes a regression introduced by our migration to track
subscribers for all public streams, where users who were added to
an invite-only stream would receive a mark_subscribed event
for a stream their browser didn't know existed, causing an exception.
To fix this, we now send a stream create event to the browser just
before the user receives the notification that they were added to the
invite-only stream.
This old helper has for years been used only by populate_db, and got
buggy (as of a recent refactoring). So we just call do_send_messages
directly instead.
Fixes the provisioning error we currently get in Travis CI.
This is a pretty minor change, but it makes it clear that we
have user_id in all the relevant states/events, so we might as
well use that for the check, since email is mutable and
slightly more difficult to reason about.
This changes bugdown to use the realm passed in by the caller (if any)
for rendering, fixing a problem where bots such as the notification
bot would have their messages rendered using the admin realm's
settings, not the settings of the realm their messages are being sent
into.
Also adds a test for the notification bot case.
Fixes #3215.
A lot of care has been taken to ensure we're using the realm that the
message is being sent into, not the realm of the sender, to correctly
handle the logic for cross-realm bot users such as the notifications
bot.
In order to correctly handle messages sent by cross-realm bots, we
need to specify the realm that the messages are being sent into in the
send message code path. The commit and its successors convert that
code path to include the realm the message is being sent to explicitly.
Finishes the refactoring started in c1bbd8d. The goal of the refactoring is
to change the argument to get_realm from a Realm.domain to a
Realm.string_id. The steps were
* Add a new function, get_realm_by_string_id.
* Change all calls to get_realm to use get_realm_by_string_id instead.
* Remove get_realm.
* (This commit) Rename get_realm_by_string_id to get_realm.
Part of a larger migration to remove the Realm.domain field entirely.
Removes the dependence on postmonkey, which is a wrapper around
MailChimp API v1.3. MailChimp recommends using their v3.0 API directly,
rather than through a wrapper library.
While this may not have created a clear user-facing bug, it seems less
confusing for do_invite_users to only create PreregistrationUser
objects for users who actually received an email invitation.
This adds support for only allowing normal users with account age
equal or greater than a "waiting period" threshold to create streams;
this is useful for open organizations that want new members to
understand the community before creating streams.
If the create_stream_by_admins_only setting is set to True, only admin
users are able to create streams. Now normal users whose account age is
greater than or equal to the waiting period threshold can also create
streams.
Account age is defined as the number of days since the user created
their account.
Fixes: #2308.
Tweaked by tabbott to clean up the actual can_create_streams logic and
the tests.
This includes making the default stream description setting into a
dict. That is an API change; we'll discuss it in the changelog but it
seems small enough to be OK.
With some small tweaks by tabbott to remove unnecessary backwards
compatibility code for the settings.
Fixes #2427.
This change adds support for displaying inline open graph previews for
links posted into Zulip.
It is designed to interact correctly with message editing.
This adds the new settings.INLINE_URL_EMBED_PREVIEW setting to control
whether this feature is enabled.
By default, this setting is currently disabled, so that we can burn it
in for a bit before it impacts users more broadly.
Eventually, we may want to make this manageable via a (set of?)
per-realm settings. E.g. I can imagine a realm wanting to be able to
enable/disable it for certain URLs.
This commit adds support for removing reactions via DELETE requests to
the /reactions endpoint with parameters emoji_name and message_id.
The reaction is deleted from the database and a reaction event is sent
out with 'op' set to 'remove'.
Tests are added to check:
1. Removing a reaction that does not exist fails
2. When removing a reaction, the event payload and users are correct
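A hedged usage example (the server URL, credentials, and message id are
placeholders, and the exact URL prefix may differ):

    # Illustrative only, using the requests library.
    import requests

    resp = requests.delete(
        "https://zulip.example.com/api/v1/reactions",
        auth=("user@example.com", "API_KEY"),
        params={"message_id": 42, "emoji_name": "doge"},
    )
    print(resp.status_code, resp.json())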
This commit adds the following:
1. A reaction model that consists of a user, a message and an emoji that
are unique together (a user cannot react to a particular message more
than once with the same emoji)
2. A reaction event that looks like:
    {
        'type': 'reaction',
        'op': 'add',
        'message_id': 3,
        'emoji_name': 'doge',
        'user': {
            'user_id': 1,
            'email': 'hamlet@zulip.com',
            'full_name': 'King Hamlet'
        }
    }
3. A new API endpoint, /reactions, that accepts POST requests to add a
reaction to a message
4. A migration to add the new model to the database
5. Tests that check that
(a) Invalid requests cannot be made
(b) The reaction event body contains all the info
(c) The reaction event is sent to the appropriate users
(d) Reacting more than once fails
It is still missing important features like removing emoji and
fetching them alongside messages.
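A rough sketch of the model in (1) (field names and types beyond what the
commit describes are assumptions):

    from django.db import models

    class Reaction(models.Model):
        user_profile = models.ForeignKey("zerver.UserProfile", on_delete=models.CASCADE)
        message = models.ForeignKey("zerver.Message", on_delete=models.CASCADE)
        emoji_name = models.TextField()

        class Meta(object):
            # A user cannot react to a given message more than once with the
            # same emoji.
            unique_together = ("user_profile", "message", "emoji_name")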
Refactor list_to_streams and create_streams_if_needed to take a list
of dictionaries, instead of a list of stream names. This is
preparation for being able to pass additional arguments into the
stream creation process.
An important note: This removes a set of validation code from the
start of add_subscriptions_backend; doing so is correct because
list_to_streams has that same validation code already.
[with some tweaks by tabbott for clarity]
Previously, if a new message arrived between when a user is subscribed
to the default streams and when the user's initial messages are
queried, we would try to create two UserMessage rows for the same
Message, resulting in an IntegrityError crash. We fix this and add a
test for that race condition.
send_event() expects a list of user ids (ints) except for the special case
of messages. This commit:
1. Fixes this in the call to send_event() in do_send_typing_notification()
2. Renames the variables in do_send_typing_notification() to better reflect
their content (for example, recipient_ids instead of recipients).
3. Renames the id field in the dicts sent in the typing event body (sender,
recipients) to user_id.
4. Adds assertions to the tests to verify that the tornado event user ids
are the same as the recipients in the event body.
5. Adds assertions to the tests to verify that the tornado event user
ids and the recipient user ids (in the event body) are the same as the
expected user ids (obtained from the emails using
get_user_profile_by_email)
6. Changes all assertTrues to assertEquals in the tests
This fixes #2151.
If a stream is public, we now send notifications to all realm users
if the name or description of the stream changes. For private
streams, the behavior remains the same.
We do this by introducing a method called
can_access_stream_user_ids().
(showell helped with this fix)
Fixes #2195
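A hedged sketch of that helper (simplified; the subscriber and active-user
lookups are hypothetical stand-ins):

    def can_access_stream_user_ids(stream):
        # Public streams are visible to the whole realm; private streams only
        # to their subscribers.
        if stream.invite_only:
            return set(subscriber_ids_for_stream(stream))
        return set(active_user_ids_in_realm(stream.realm_id))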
Previously, we set restrict_to_domain and invite_required differently
depending on whether we were setting up a community or a corporate
realm. Setting restrict_to_domain requires validation on the domain of the
user's email, which is messy in the web realm creation flow, since we
validate the user's email before knowing whether the user intends to set up
a corporate or community realm. The simplest solution is to have the realm
creation flow impose as few restrictions as possible (community defaults),
and then worry about restrict_to_domain etc. after the user is already in.
We set the test suite to explicitly use the old defaults, since several of
the tests depend on the old defaults.
This commit adds a database migration.
We now send dictionaries for cross-realm bots. This led to the
following changes:
* Create get_cross_realm_dicts() in actions.py.
* Rename the page_params field to cross_realm_bots.
* Fix some back end tests.
* Add cross_realm_dict to people.js.
* Call people.add for cross-realm bots (if they are not already part of the realm).
* Remove hack to add in feedback@zulip.com on the client side.
* Add people.is_cross_realm_email() and use it in compose.js.
* Remove util.string_in_list_case_insensitive().
In preparation for a change to do_create_realm where we will use the
database default for restricted_to_domain rather than computing it within
do_create_realm, and due to which do_create_realm will no longer know
whether we are creating an open realm or not.
Does a database migration to rename Realm.subdomain to
Realm.string_id, and makes Realm.subdomain a property. Eventually,
Realm.string_id will replace Realm.domain as the handle by which we
retrieve Realm objects.
We now simply exclude all cross-realm bots from the set of emails
under consideration, and then if the remaining emails are all in
the same realm, we're good.
This fix changes two behaviors:
* You can no longer send a PM to an ordinary user in another realm
by piggy-backing a cross-realm bot on to the message. (This was
basically a bug, but it would never manifest under current
configurations.)
* You will be able to send PMs to multiple cross-realm bots at once.
(This was an arbitrary restriction. We don't really care about this
scenario much yet, and it fell out of the new implementation.)
This is a preliminary step towards eliminating the realm.domain field
in favor of realm.subdomain. Includes a database migration to create
these for existing realms.
When we added data on never_subscribed streams to what
populate_subscribers is called on, we failed to add the corresponding
data on subscribers to email_dict, the mapping of user IDs to emails
for the subscribers.
Because in the Zephyr world, stream names can be a secret, and also
Zephyr mirroring tends to involve many thousands of streams, we
shouldn't send this data.
POST to /typing creates a typing event
Required parameters are 'op' ('start' or 'stop') and 'to' (recipient
emails). If there are multiple recipients, the 'to' parameter
should be a JSON string of the list of recipient emails.
The event created looks like:
    {
        'type': 'typing',
        'op': 'start',
        'sender': 'hamlet@zulip.com',
        'recipients': [{
            'id': 1,
            'email': 'othello@zulip.com'
        }]
    }
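A hedged usage example (URL prefix and credentials are placeholders):

    import json
    import requests

    requests.post(
        "https://zulip.example.com/api/v1/typing",
        auth=("hamlet@zulip.com", "API_KEY"),
        data={"op": "start",
              "to": json.dumps(["othello@zulip.com", "iago@zulip.com"])},
    )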
We now send peer_remove events to folks who have never subscribed
to the streams (except for private streams and zephyr).
We also use logic that is more similar to how
bulk_add_subscriptions() works.
There are two reasons for this change. First, we want to be
consistent with notify_subscriptions_added(), which doesn't
handle "peer" events. Second, we want to fix this code in a
subsequent commit not to do one user at a time, which is
inefficient.
merge_vars is the user metadata we store in MailChimp. This commit changes
what we store from realm.domain to realm.id, since realm.domain is being
deprecated, and changes the OPTIN_TIME to a nicer format.
This changes `from django.utils.importlib import import_module` to
`from importlib import import_module`, as `django.utils.importlib` will
be removed in django version 1.9.
Adds a new field org_type to Realm. Defaults for restricted_to_domain
and invite_required are now controlled by org_type at time of realm
creation (see zerver.lib.actions.do_create_realm), rather than at the
database level. Note that the backend defaults are all
org_type=corporate, since that matches the current assumptions in the
codebase, whereas the frontend default is org_type=community, since if
a user isn't sure they probably want community.
Since we will likely in the future enable/disable various
administrative features based on whether an organization is corporate
or community, we discuss those issues in the realm creation form.
Before we actually implement any such features, we'll want to make
sure users understand what type of organization they are a member of.
Choice of org_type (via radio button) has been added to the realm
creation flow and the realm creation management command, and the
open-realm option removed.
The database defaults have not been changed, which allows our testing code
to work unchanged.
[includes some HTML/CSS work by Brock Whittaker to make it look nice]
This pulls message-related code from models.py into a new
module called message.py, and it starts to break some bugdown
dependencies. All the methods here are basically related to
serializing Message objects as dictionaries for caches and
events.
extract_message_dict
stringify_message_dict
message_to_dict
message_to_dict_json
MessageDict.to_dict_uncached
MessageDict.to_dict_uncached_helper
MessageDict.build_dict_from_raw_db_row
MessageDict.build_message_dict
This fix also removes a circular dependency related
to get_avatar_url.
Also, there was kind of a latent bug in Message.need_to_render_content
where it was depending on other calls to Message to import bugdown
and set it globally in the namespace. We really need to just
eliminate the function, since it's so small and used by code that
may be doing very sketchy things, but for now I just fix it. (The
bug would possibly be exposed by moving build_message_dict out to the
library.)
I move these three functions to lib/cache.py:
to_dict_cache_key_id
to_dict_cache_key
flush_message
This will prepare us for a more significant refactoring that
eventually breaks down some circular dependencies with
Message and bugdown.
This adds support for running a Zulip production server with each
realm on its own unique subdomain, e.g. https://realm_name.example.com.
This patch includes a ton of important features:
* Configuring the Zulip session middleware to issue cookies correctly
for the subdomains case.
* Throwing an error if the user tries to visit an invalid subdomain.
* Runs a portion of the Casper tests with REALMS_HAVE_SUBDOMAINS
enabled to test the subdomain signup process.
* Updating our integrations documentation to refer to the current subdomain.
* Enforces that users can only login to the subdomain of their realm
(but does not restrict the API; that will be tightened in a future commit).
Note that toggling settings.REALMS_HAVE_SUBDOMAINS on a live server is
not supported without manual intervention (the main problem will be
adding "subdomain" values for all the existing realms).
[substantially modified by tabbott as part of merging]
We can now rely on UserProfile.last_reminder being time zone
aware, or even if it isn't, it's a self-correcting problem the
first time a reminder is sent. (It's a non-problem to be off
by a few timezones if somebody still has an old value there, because
they will still be outside the 1-minute nag window even with the
timezone disparity.)
We no longer use all the alert words for all the users in the
entire realm when we look for alert words in a newly sent/edited
message. Now we limit the search to only all the alert words
for all the users who will get UserMessage records. This will
hopefully make a big difference for big realms where most messages
are only sent to a small subset of users.
The bugdown parser no longer has a concept of which users need which
alert words, since it can't really do anything actionable with that info
from a rendering standpoint.
Instead, our calling code passes in a set of search words to the parser.
The parser returns the list of words it finds in the message.
Then the model method builds up the list of user ids that should be
flagged as having alert words in the message.
This refactoring is a little more involved than I'd like, but there are
still some circular dependency issues with rendering code, so I need to
pass in the rather complicated realm_alert_words data structure all the way
from the action through the model to the renderer.
This change shouldn't change the overall behavior of the system, except
that it does remove some duplicate regex checks that were occurring when
multiple users may have had the same alert word.
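A sketch of the flow described above (hypothetical names; the real parser
does proper word matching rather than the naive substring check used here):

    def alerted_user_ids(content, realm_alert_words):
        # realm_alert_words maps user_id -> list of that user's alert words,
        # limited to users who will get UserMessage rows.
        content_lower = content.lower()
        found = {word.lower()
                 for words in realm_alert_words.values()
                 for word in words
                 if word.lower() in content_lower}
        return {user_id
                for user_id, words in realm_alert_words.items()
                if any(word.lower() in found for word in words)}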
We now use render_incoming_message() to render all incoming
new messages (sends/edits), so that they will get the same treatment.
This change also establishes do_send_messages() as the code
path to get new messages rendered. It removes some
logic from check_message() that only happened on certain code paths
for sending messages, and which would only detect failures by
expensively rendering messages, so it wasn't much of a guard.
This change also helps to phase out maybe_render_content(), which
deepens the call stack without providing much clarity to the reader,
since its behavior is so variable.
Finally, this sets up to fix a flaw in the way we compute which
users have alert words in their messages (in a subsequent commit).
We now raise an exception in bugdown.do_convert() if rendering
fails, to avoid silent failures, and then calling code can convert
the exception to a JsonableError.
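The calling-side pattern looks roughly like this (the exception class name
and error text here are assumptions):

    def render_or_error(content):
        try:
            return bugdown.do_convert(content)
        except BugdownRenderingException:
            raise JsonableError("Unable to render message")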
This sends an event when a new avatar is uploaded that refreshes the
avatar for all browser clients without the need to reload the browser.
Fixes: #1359.
Most directly useful for the migration to zulipchat.com.
Creates a new field in UserProfile to store the tos_version, as well as two
new settings TOS_VERSION and FIRST_TIME_TOS_TEMPLATE. We check for a version
mismatch between what the user has signed and the current
settings.TOS_VERSION whenever the user hits the home page, and redirect them
if needed.
Note that accounts_accept_terms.html and
zerver.views.accounts_accept_terms were unused before this commit
(they date from c327446537)
Adds a new field, default language, to the zerver_realm model.
This realm level default language will be used as default language
for newly created users. Realm level default language can be
changed from the administration page.
Fixes #1372.
This function is only called in cases where user_profile isn't None,
and the code reads better if we just check that first rather than
checking it on every line that accesses user_profile.
This allows the frontend to fetch data on the subscribers list (etc.)
for streams where the user has never been subscribed, making it
possible to implement UI showing details like subscribe counts on the
subscriptions page.
This is likely a performance regression for very large teams with
large numbers of streams; we'll want to do some testing to determine
the impact (and thus whether we should make this feature only fully
enabled for larger realms).
There were a bunch of authorization and well-formedness checks in
zerver.lib.actions.do_update_message that I moved to
zerver.views.messages.update_message_backend.
Reason: by convention, functions in actions.py complete their actions;
error checking should be done outside the file when possible.
Fixes: #1150.
This is controlled through the admin tab and a new field in the Realms table.
Notes:
* The admin tab setting takes a value in minutes, whereas the backend stores it
in seconds.
* This setting is unused when allow_message_editing is false.
* There is some generosity in how the limit is enforced. For instance, if the
user sees the hovering edit button, we ensure they have at least 5 seconds to
click it, and if the user gets to the message edit form, we ensure they have
at least 10 seconds to make the edit, by relaxing the limit.
* This commit also includes a countdown timer in the message edit form.
Resolves #903.
This is controlled through the admin tab and a new field in the Realms
table. This mirrors the behavior of the old hardcoded setting
feature_flags.disable_message_editing. Partially resolves #903.
Originally this cache was used to transmit data from Django to Tornado
(and also for general message caching purposes), but now nothing
actually reads from this cache, so we can eliminate it.
The functions truncate_content, truncate_body and truncate_topic
are only meant to be used on text strings. So change their
parameter types from AnyStr to text_type.
Change choices of UserProfile.avatar_sources and UserProfile.tutorial_status
from str literals to unicode literals. This is done because these fields
are CharFields, which are of type `six.text_type`. So the set of values
which they can take should also be of the type `six.text_type`.
Also fix clashing annotations.
str_utils.py has functions for converting strings from one type to
another. It also has a TypeVar called NonBinaryStr, which is like AnyStr
except that it doesn't allow bytes.
[Substantially revised by tabbott]
This probably still has some bugs in it, but having mostly complete
annotations for models.py will help a lot for the annotations folks
are adding to other files.
In function bulk_add_subscriptions, some variables were named
`stream_name` but their type is Stream, not a string. Rename
those variables to `stream`.
Long ago, there was work on an experimental integration model where
every user in a realm would have administrative control over all bots,
with the goal of simplifying the process of setting up communally
administered bots for smaller teams. While that new model was never
fully implemented (and thus never setup as an option), an error in
that original implementation meant that the data on all bots in a
realm, including their API keys, was sent to the browsers of users via
the `realm_bots` variable in `page_params`. The data wasn't displayed
in the UI for non-admin users, but was available via e.g. the
javascript console.
This commit updates this behavior to only send sensitive bot data like
API keys to the owner of the bot (and realm admins).
We may in the future implement a model simplifying communally
administered integrations, but if we do that, those bots should be
limited in their capabilities (e.g. only able to send webhook
messages).
This bug has been present since Zulip was released as open source.
The old code for this lookup was unnecessarily complicated because we
were working around Guardian, where the `is_realm_admin` check was
extremely expensive.
This commit adds the capability to keep track of and remove uploaded
files. Unclaimed attachments are files that have been uploaded to the
server but are not referenced in any messages. A management command to
remove old unclaimed files after a week is also included.
Tests for getting the files referenced in messages are also included.
Previously we needed to use a specified password when activating a
formerly mirror dummy user, in order for that user to be able to
(re)set their password and login. Now that we have our own password
reset form, this is no longer required.
As documented in https://github.com/zulip/zulip/issues/441, Guardian
has quite poor performance, and in fact almost 50% of the time spent
running the Zulip backend test suite on my laptop was inside Guardian.
As part of this migration, we also clean up the old API_SUPER_USERS
variable used to mark EMAIL_GATEWAY_BOT as an API super user; now that
permission is managed entirely via the database.
When rebasing past this commit, developers will need to do a
`manage.py migrate` in order to apply the migration changes before the
server will run again.
We can't yet remove Guardian from INSTALLED_APPS, requirements.txt,
etc. in this release, because otherwise the reverse migration won't
work.
Fixes #441.
Replaced calls to ifilterfalse with list comprehensions because
ifilterfalse is not part of Python 3. Also changed some lists to sets
for faster lookup.
Refer to #256.
Add a function email_allowed_for_realm that checks whether a user with
given email is allowed to join a given realm (either because the email
has the right domain, or because the realm is open), and use it
whenever deciding whether to allow adding a user to a realm.
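A simplified sketch of the check (ignores realm aliases):

    def email_allowed_for_realm(email, realm):
        if not realm.restricted_to_domain:
            return True  # open realm: any email is allowed
        return email.split("@")[-1].lower() == realm.domain.lower()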
This commit is not intended to change any behavior, except in one case
where the Zulip realm's domain was not being converted to lowercase.
While I believe this actually produced correct output since users are
always subscribed to streams within their realm, this code was
definitely wrong.
Discovered using the mypy type-checking tool.
Previously:
* It wouldn't raise an exception if the stream didn't exist
* It didn't correctly handle being passed a stream name
that differed in case from the stream name in the database.
Just doing the database query is more readable, and has about the same
performance as before in the case where active user dicts for the
realm are in cache (and is substantially better in the rare case that
this isn't in the cache).
Thanks to @dbiollo for the perf investigation and suggestion!
get_realm is better in two key ways:
* It uses memcached to fetch the data from the cache and thus is faster.
* It does a case-insensitive query and thus is more safe.
Previously we only did this when new human users were created via the
login process, which meant the management command to create a user did
not add the user to default streams (for example) and any future code
that might want to register a new Zulip user (such as the LDAP
integration) would need to import views/__init__.py in order to
properly set this up.
The do_send_missedmessage_events_reply_in_zulip function in the email
mirror didn't support EMAIL_GATEWAY_PATTERN that wasn't of the form
%s@example.com (which resulted in replies to missed message emails failing
to be parsed).
django commit 596564e80808 stores the user id in the session as a
string, which broke our code that extracts the user id and compares
it to the id of a UserProfile object.
(imported from commit 99defd7fea96553550fa19e0b2f3e91a1baac123)
Fixes
File "/srv/zulip/zerver/lib/actions.py", line 605, in recipient_for_emails
if not (normalized_emails & admin_realm_admin_emails or normalized_emails & settings.CROSS_REALM_BOT_EMAILS):
TypeError: unsupported operand type(s) for &: 'set' and 'list'
(imported from commit f39a95dad7b3207e9188fc03926cd116061ef3f3)
Include new field on Realm to control whether e-mail invitations are required
separately from whether the e-mail domain must match.
Allow control of these fields from admin panel.
Update logic in registration page to use these fields.
(imported from commit edc7f0a4c43b57361d9349e258ad4f217b426f88)
Truthfully, the actual way to do this is going to be a bit
more involved and also involves changing Realm.NOTIFICATION_STREAM_NAME,
probably on a realm-by-realm basis.
(imported from commit b6a05849d215e07ee6716d116ff5e2c819d5b4be)
This way if two browsers are disagreeing about your active status, the
active one wins. The active browser continues to update your timestamp,
and the idle browser's changes are discarded until the timestamp on your
active status expires.
(imported from commit dc29e013d045c4b72793097f611ba6802c58e57a)
We still don't show this in the frontend, aside from our usual "Not
delivered" message that we also show when you send to a non-existent
user.
Addresses #2349
(imported from commit 2f348b15a4d539987ddbcccbbf40e2be87c1f92d)
In a test run with a hand-constructed query, this sped up the query time from
280ms to 50ms.
(imported from commit 8cbe199ca50a487491d13d6d6ef940ea668c1038)
Adds APIs to edit a bot's default_to_stream, default_events_register_stream
and default_all_public_streams.
(imported from commit c848a94b7932311143dad770c901d6688c936b6d)
Support setting default_to_stream, default_events_register_stream, and
default_all_public_streams in the bot creation API.
(imported from commit bef484dd8be9f8aacd65a959594075aea8bdf271)
This allows bot owners to configure which streams messages are delivered
to without needing to change webhook URLs or configuration files.
(imported from commit 32a0c26657c145b001cd8cb3ce0a0364d48902ce)
Before saving a Message object, call update_calculated_fields()
to set the has_attachment/has_image/has_link fields.
Note that the pre_save hook we added here does not get called
if you call bulk_create, hence the explicit call to
update_calculated_fields() in do_send_messages().
(imported from commit 1d60ae5908ef186aa5ff1e39277dbb2b765e60d4)
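A sketch of that wiring (simplified; how the real hook is registered may
differ):

    from django.db.models.signals import pre_save
    from django.dispatch import receiver

    @receiver(pre_save, sender=Message)
    def pre_save_message(sender, instance, **kwargs):
        instance.update_calculated_fields()

    def do_send_messages_sketch(messages):
        # bulk_create does not fire pre_save, so call the helper explicitly.
        for message in messages:
            message.update_calculated_fields()
        Message.objects.bulk_create(messages)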
A stream is vacant when it has no subscribers and occupied when it has at least
one subscriber.
We have a slightly odd model where stream creation is conflated with
subscription creation. Streams are created by attempting to subscribe to a
stream that doesn't exist. We also hide streams with no subscribers from users
to make it seem like they've gone away. However, we can't actually remove those
streams because we want to preserve history.
This commit moves us towards a separation of these two concepts. By sending
events for stream creation, occupation, vacancy, and deletion, we allow clients
to directly observe the global state of streams rather than indirectly observing
subscription information. A more complete solution would involve adding a view
for explicitly creating streams without subscribing to them.
This commit does not handle the intricacies of invite-only streams. We
currently simply do not send these events for invite-only streams.
(imported from commit 5430e5a5eecefafcdba4f5d4f9aa665556fcc559)
We now allow the list of recipients to be sent as a
comma-delimited string with optional JSON encoding.
(imported from commit e928b037bbd258348eb5b2ecca486d0bb77f593e)
Have the server send down the stream's id for removal
events, and have the client use that id to look up the
stream in its internal data structures. This sets the
stage for eventually just sending the stream id (and not
the stream name) down to clients, once all our clients
are ready to use the stream id.
(imported from commit 922516c98fb79ffad8ae7da0396646663ca54fd0)
Our overall guideline is the type names for events are singular, and the list of
events of that type are plural. 'subscriptions' was not following this guideline
and (potentially as a result) had a bug where it was impossible for clients to explicitly
subscribe to subscription change events properly.
(imported from commit 7b3162141fd673746e0489199966c29ea32ee876)
This change also makes it so that the test_rename_stream()
test exercises the code path. We need to subscribe the user
to the stream in order to generate events.
(imported from commit 77f965efbf5a766eb8de23486e303fa135b2e638)
Instead of having home() set page_params.realm_name directly from
the user_profile object, have fetch_initial_state_data() set it.
This is more consistent with how we treat other data, and it protects
us against a race condition where realm name updates arrive during
the DB fetching.
(imported from commit 545e3bd73f150438126e3f941e9bebc7aa1d0614)
In particular, make the stream history inaccessible and free up the
name to be re-used.
(imported from commit 6063b7a484ed0ba0279a17d2b3e9a92b5ef1f762)
This is a slight behavior change, as we now flush user_profile
caches for bots as well as humans.
(imported from commit 24c72c44d851ee4c66a67a4728cd6c548faeedcd)
Stream name and description updates were being sent to all of the
active users on a realm. They are now only sent to users who would have
information about that stream.
(imported from commit 2621ee8029f7356bf44ec493d7b5361bd546a8f5)
This is a lot simpler and eliminates a possible failure mode in the
data transfer path.
(imported from commit 19308d2715bbd12dc9385234f1d9156f91bdfae0)
To mutate the state for removing subscribers, the previous
code was essentially adding in event['subscriptions'] to
state['unsubscribed'], but that was a naive approach, since
the event object only has the name of the subscriber, and not
the full subscription info.
We instead effectively copy records over from state['subscriptions']
to state['unsubscribed'], and we also do surgery on the subscribers
that made me need to add the user_profile parameter to apply_events().
With the code apparently working now, I was able to remove the
match_except() test helper and use a more thorough matcher in
the test on do_remove_subscription().
Part of fixing the "remove" case was cleaning up the "add" case,
since they aren't quite symmetric opposites of each other, although
under this refactoring they now share the new name() helper.
(imported from commit 0deab67d0c7b08b3ad962493efae3762a835fd29)
Because full_name and is_admin changes go through many similar,
generic codepaths, it is almost more work at this point to keep
is_admin out of page_params as it is to just put it in. So
I put it in. This should pave the path for showing admins in
the GUI.
This commit actually started with my adding a test
that calling apply_events() with the notification you get
when calling do_change_is_admin() updates
state['realm_users'] to be similar to what you would get
out of fetch_initial_state_data(). We didn't have test coverage
there before. Making that test pass forced my hand to
either add is_admin to page_params or to special-case
apply_events() not to update page_params with is_admin. I
chose the former approach.
(imported from commit 1e49d59c66540014284529c29d5007224be6a0c6)
A description was added to the streams and it is now displayed on the
subscriptions page. It cannot be set in the UI yet.
(imported from commit 81d08b65eee42dba87cd99dd5bd30106c4eb6c6a)
If a name change event arrived during the call to
fetch_initial_state_data(), we would call apply_events() to
update the data structure that eventually becomes page_params.
Our update code, instead of surgically updating the fields, was
just overwriting the record, which led to is_bot being taken
out of the record when only full_name was in the event.
Apart from fixing the "update" case to do the right thing,
this commit also does a bit of cleanup on the code handling
"realm_user" events to make it more generic in how it does
add/remove/update. If we could standardize our events a bit
more, this could eventually lead to DRYing up some of the
apply_events() code.
(imported from commit 772e2fcd6a5605ccb6e8d1bc499b5f336934cf3c)
Added a default_desktop_notifications boolean to userprofile with a UI
in Zulip Labs. This flag is used to default the notification flag on new
subscriptions.
(imported from commit a25223cc5ecf09980cf877991e25034bb3fd4046)
Google Groups won't let you add an email address that has a '+' in it
as a member of a group, so we allow '.' as well.
This commit also fixes a current issue with email mirroring, where it
doesn't work if you have a + in the stream name.
This fixes Trac #2102.
(imported from commit 9a7a5f5d16087f6f74fb5308e170a6f04387599e)
Cross-realm private messages are only used to respond to support
requests. So now cross-realm private messages are only allowed if
exactly two realms are in the private message and one of them is
zulip.com.
(imported from commit f01a2824e214682acb22a6995714a9d1b0d0c66f)
This does result in a few more rabbitmq events to be processed (though
a negligible number compared to what we already do), but it saves a
database query from inside Tornado whenever we occasionally have a
cache miss looking up the UserProfile, which is far more important.
(imported from commit a553a00a3004ba27bfb54ffbc3e9c9b170ebae4d)
When we edit a message, send out UserMessage flags to the recipients.
This sets the stage for making sure that changes related to
user-specific alert words or mentions get sent out to users.
(imported from commit bce1de19acef44b5e106352f261203352ece02b9)
Previously, we were processing in Tornado mirroring dummy users (and
deactivated users) as recipients -- resulting in bugs where the missed
message hooks would fire for these nonexistent users.
Our Tornado real-time delivery system only needs the list of active
users both for delivery and for presence information updates.
(imported from commit b81143f106a4d0eefa4b838e7c074b2963259746)
Before this is deployed to prod, we need to manually frob our database
to set the is_mirror_dummy=True bit for all existing mirror users.
(imported from commit 39f1938cef091cf1d7d97307f76b137fe1d92b6c)
In plaintext e-mails these will be simple links.
In HTML e-mails these become a <link> and <img>, which some web mail
clients may inline.
(imported from commit b1242dfd917008a019981eb2224c1c7f5f84739f)
Since we changed the initial subscription data to include
user_profile_ids rather than emails, we need to preserve that when
adding in events generated during the page load.
(imported from commit 4f4071b8ba30e57c6f64c9e7b54c1cc754e8f010)
This will allow us to substantially decrease the server-side work that
we do to support our Mirroring systems (since the personal mirrors can
request only messages that user sent) and also is what we need to
support a single-stream Zulip widget that we embed in webpages.
(imported from commit 055f2e9a523920719815181f8fdb44d3384e4a34)
This replaces the AppleDeviceToken table with a generic
PushDeviceToken with a `kind` field to make it easier to add functionality
like per-device/per-stream settings that share code between Android and
iOS devices.
The schema must continue to work on prod with the old table name, so we
add the new table in parallel and can drop the old table once this code
hits prod and any necessary data is copied.
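A rough sketch of the new model (field details here are illustrative,
not the exact schema):
    from django.db import models

    class PushDeviceToken(models.Model):
        APNS = 1
        GCM = 2
        KIND_CHOICES = ((APNS, 'apns'), (GCM, 'gcm'))
        # 'kind' distinguishes iOS (APNs) tokens from Android ones so
        # both platforms can share one table and one code path.
        kind = models.PositiveSmallIntegerField(choices=KIND_CHOICES)
        token = models.CharField(max_length=4096, unique=True)
        user = models.ForeignKey('UserProfile', on_delete=models.CASCADE)
        last_updated = models.DateTimeField(auto_now=True)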
(imported from commit 0209a7013f2850ac6311f23c3d6f92c65ffd19e3)
Unbundle the push notifications from the missed message queue processors
and handlers. This makes notifications more immediate, and sets things up
for better badge count handling, and possibly per-stream filtering.
(imported from commit 11840301751b0bbcb3a99848ff9868d9023b665b)
Now that we support email aliases, we have to be careful when going
from an email address to a domain that we then use to look up a Realm
object. When we care about the Realm's domain, we want to follow any
RealmAliases that exist for that domain.
When we just care about the original email address's own domain, for
comparison or other purposes, use split_email_from_domain.
This removes the ambiguity of having to decide when to use
email_to_domain + RealmAlias or just email_to_domain.
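In code terms, the split looks roughly like this (a sketch; the
alias-following helper's name is illustrative):
    from zerver.models import RealmAlias

    def split_email_from_domain(email):
        # Just the literal domain of the address, for comparisons.
        return email.split('@')[-1].lower()

    def resolve_email_to_realm_domain(email):
        # Follow any RealmAlias so an aliased domain maps to the
        # realm's canonical domain; otherwise use the literal domain.
        domain = split_email_from_domain(email)
        alias = RealmAlias.objects.filter(domain=domain).first()
        return alias.realm.domain if alias is not None else domain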
(imported from commit 0e199495502d946ce2e1aae56263e7e8665be4ed)
ScheduledJobs with type Email displace the usual Mandrill codepaths
in Zulip Enterprise deploys.
* Email-specific helper functions will appear in deliver_email.py
* 0058_auto__add_scheduledjob.py
(imported from commit 8db08d8a279600322acfdbed792dc1a676f7a0ab)
Looking at the historical data, fewer than 50% of active users have
completed the checklist, which means that it is just persistent
clutter. We also have other better ways of encouraging people to send
traffic and get the apps now.
This commit removes both the frontend UI and backend work but leaves
the db row for now for the historical data.
(imported from commit e8f5780be37bbc75f794fb118e4dd41d8811f2bf)
This has a small bug where we don't actually filter the message out of
the home view; fixing that requires adding an index on the "flags"
field of UserMessage.
(imported from commit 492c99d0a8e87b253e577be6564bec12099bd8e9)
The gather_subscriptions_helper() does a separate query to
get emails from user_ids, and it returns an email_dict to its
caller.
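Schematically (a sketch; sub_dicts and the 'subscribers' key are
illustrative names):
    # Inside gather_subscriptions_helper(): collect every user_id we
    # reference and fetch their emails in a single query.
    user_ids = set()
    for sub in sub_dicts:
        user_ids.update(sub['subscribers'])
    email_dict = dict(
        UserProfile.objects.filter(id__in=user_ids).values_list('id', 'email')
    )
    # gather_subscriptions() can then map ids back to emails:
    for sub in sub_dicts:
        sub['subscribers'] = [email_dict[uid] for uid in sub['subscribers']]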
This may seem like a step backward, since gather_subscriptions()
now needs to do an additional query, but there is some benefit
in passing fewer redundant emails over the wire from the DB.
The real payoff, though, will come in subsequent commits, where
we will reduce the amount of data going over the wire to the browser,
which will benefit users with slow connections.
(imported from commit bf1cc5828a4c5f68cafd052ea29a177837970206)
clear_followup_emails_queue now filters by from_email too
send_local_email_template_with_delay passes the template_payload into the subject template
(imported from commit 8044fe2ebad90a9d6d5c67cdfdd08801760fd7f7)
The current version should only be used for testing; for example,
if you want to create a bunch of streams for stress testing, you
can run this in a loop.
(imported from commit ec51a431fb9679fc18379e4c6ecdba66bc75a395)
It makes the event queue return all messages on public streams, rather
than only the user's subscriptions. It's meant for use with chat bots.
(imported from commit 12d7e9e9586369efa7e7ff9eb060f25360327f71)
I added filter() statements to do_update_message_flags(); see the
sketch after the transcript below. Here is some context:
Steve Howell: Case 1, have AND clause to reduce work for DB.
humbug=> update zerver_usermessage set flags = (flags & ~1) where id > 9000;
UPDATE 382
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
382
(1 row)
humbug=> explain analyze update zerver_usermessage set flags = (flags | 1) where (flags & 1) = 0;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Update on zerver_usermessage (cost=0.00..266.85 rows=47 width=27) (actual time=5.727..5.727 rows=0 loops=1)
-> Seq Scan on zerver_usermessage (cost=0.00..266.85 rows=47 width=27) (actual time=0.045..2.751 rows=382 loops=1)
Filter: ((flags & 1::bigint) = 0)
Rows Removed by Filter: 9000
Total runtime: 5.759 ms
(5 rows)
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
0
(1 row)
Leo Franchi: Sounds reasonable, but I know way less than zev about DBs so I'll defer to his judgement :)
Steve Howell: Case 2, how the code works now:
humbug=> update zerver_usermessage set flags = (flags & ~1) where id > 9000;
UPDATE 382
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
382
(1 row)
humbug=> explain analyze update zerver_usermessage set flags = (flags | 1);
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
Update on zerver_usermessage (cost=0.00..243.28 rows=9382 width=27) (actual time=362.075..362.075 rows=0 loops=1)
-> Seq Scan on zerver_usermessage (cost=0.00..243.28 rows=9382 width=27) (actual time=0.008..6.138 rows=9382 loops=1)
Total runtime: 362.105 ms
(3 rows)
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
0
(1 row)
Steve Howell: In both trials, we set it up so that only 382 of 9382 rows need to be updated. The first trial runs about 63x as fast. The second trial, if my theory is correct, is doing 24x as many writes as it needs. Both trials are reading all 9382 rows.
Steve Howell: The expense of the update statement seems to be proportional to the number of rows you "update", not the number of rows that you actually change.
Steve Howell: For now I created #1869.
Zev Benjamin: That sounds like a reasonable explanation. The disk IO can be expensive
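In ORM terms, the change is roughly the following sketch (for the
add-the-read-flag path; the literal 1 stands for the read bit, and the
queryset filters are illustrative):
    from django.db.models import F

    # Mirror Case 1: only touch rows where the read bit is not already
    # set, so Postgres skips the no-op writes measured in Case 2.
    msgs = UserMessage.objects.filter(
        user_profile=user_profile,
        message__id__in=message_ids,
    ).extra(where=['(flags & 1) = 0'])
    count = msgs.update(flags=F('flags').bitor(1))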
(imported from commit d9090daee1f81cad76c430de0956f9bd504da075)
Handled by the queue processor for signups. Added a management command
that accomplishes the same task, in case it's needed for manually added users,
or in case we goof and need to remove queued emails for a given user.
This addresses Trac #1807
(imported from commit 6727b82a07fa6a3ea3d827860c9e60fd0602297a)
This will hopefully incentivize people to click one and get back into
the app.
We'll also need this for digest emails.
(imported from commit 57191c3fcca3b12df93a81e4692bb7eb8ccc83b2)
The realm should always be the realm of the stream, and we should
always pass in a stream rather than sometimes passing in a stream name
and other times passing in a stream.
(imported from commit a098d6ed3db218a37c1b6b7c956e847c316c2d13)
We have been persisting muting preferences on the back end for
a while, but we haven't been adding them to page_params for the
client to have at reload/startup time.
(imported from commit d9ca68aa0e4d22bfb0e6ce67fc0bc63981175c8b)
We now bulk-fetch subscription information once from the database
and use it throughout bulk_add_subscriptions in order to avoid
hitting the db O(streams) times.
On my machine this shaved the accounts_register API call from making
66 queries to making 37 queries.
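The shape of the change, as a sketch (recipients here stands for the
streams' Recipient rows):
    # Fetch all existing Subscription rows for these users on these
    # streams in one query, and index them for O(1) lookups inside the
    # per-stream loop.
    all_subs = Subscription.objects.filter(
        user_profile__in=user_profiles,
        recipient__in=recipients,
    )
    subs_by_key = {(sub.user_profile_id, sub.recipient_id): sub
                   for sub in all_subs}
    # Inside the loop over (user, stream) pairs:
    #     existing = subs_by_key.get((user.id, stream_recipient_id))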
(imported from commit 5dd5ad3f50b2a6edf85b5f1d55ebd697a1c60647)
When we send a message, we send some presence information to Tornado
to help it figure out how to generate emails for idle recipients of
a message. This change limits the presence info to being the
intersection of present users and recipients of the message. It is
just an internal optimization to avoid queueing up unneeded data.
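The optimization is essentially a set intersection before queueing,
roughly (names are illustrative):
    # Only ship presence info for users who are both present and
    # actually receiving this message.
    recipient_ids = set(message_recipient_user_ids)
    presence_to_send = {user_id: status
                        for user_id, status in presence_info.items()
                        if user_id in recipient_ids}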
The history behind this feature is that I implemented it a while
back, but I think I made a rebase mistake that sent all the presence
data over the wire, despite having code to filter on recipients.
It was mostly harmless, just leading to some inefficiency which is
now fixed.
(imported from commit 7c8e97705afb299c67b99053909e952fbc823551)
For a 4-person stream, we were hitting the DB 8 times, and 4 of
those queries were to lazily get user.email for the 4 recipients
due to upstream code using only(). I added user_profile__email
to the only() call.
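The fix is roughly the following (the filter and field list are a
sketch, not the exact query):
    # With user_profile__email in only(), the recipients' emails come
    # back with the original query instead of a lazy per-row fetch.
    subs = (Subscription.objects.filter(recipient=recipient, active=True)
            .select_related('user_profile')
            .only('id', 'user_profile__id', 'user_profile__email'))
    emails = [sub.user_profile.email for sub in subs]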
I believe this regression started 9/18, and after pushing this
to prod, we should look at this graph:
https://stats1.zulip.net/graphs/8274cd84588
(imported from commit 70629cb69fe5955c674ba76482609dfe78e5faaf)
Use stream.num_subscribers() in check_if_a_bot_is_sending_a_message_to_an_empty_stream().
The num_subscribers() function uses Django's count() method, which
returns a single row, rather than calling len() on an iterator of
query rows.
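Concretely (a minimal sketch; stream_recipient is assumed to be the
stream's Recipient row):
    # count() issues SELECT COUNT(*) and returns a single row:
    num = Subscription.objects.filter(recipient=stream_recipient,
                                      active=True).count()
    # versus pulling every matching row into Python just to measure
    # its length:
    num = len(list(Subscription.objects.filter(recipient=stream_recipient,
                                               active=True)))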
(imported from commit 6157fe248945e9288ee71d8cc39fb6dda4e9a247)