The functions truncate_content, truncate_body and truncate_topic
are only meant to be used on text strings. So change their
parameter types from AnyStr to text_type.
Change choices of UserProfile.avatar_sources and UserProfile.tutorial_status
from str literals to unicode literals. This is done because these fields
are CharFields, which are of type `six.text_type`. So the set of values
which they can take should also be of the type `six.text_type`.
Also fix clashing annotations.
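A minimal sketch of the resulting annotations (the bodies and choice
labels here are illustrative, not the actual code):

    from six import text_type

    def truncate_content(content, max_length, truncation_char):
        # type: (text_type, int, text_type) -> text_type
        if len(content) > max_length:
            return content[:max_length - len(truncation_char)] + truncation_char
        return content

    # CharField values are text_type, so the choices are unicode literals:
    #   avatar_source = models.CharField(default=u'G', max_length=1,
    #                                    choices=((u'G', 'Gravatar'),
    #                                             (u'U', 'Uploaded')))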
str_utils.py has functions for converting strings from one type to
another. It also has a TypeVar called NonBinaryStr, which is like AnyStr
except that it doesn't allow bytes.
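For reference, a sketch of how such a TypeVar is defined:

    from typing import TypeVar
    from six import text_type

    # Like AnyStr, but bytes is not an allowed substitution; only
    # str and text_type are.
    NonBinaryStr = TypeVar('NonBinaryStr', str, text_type)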
[Substantially revised by tabbott]
This probably still has some bugs in it, but having mostly complete
annotations for models.py will help a lot for the annotations folks
are adding to other files.
In function bulk_add_subscriptions, some variables were named
`stream_name` but their type is Stream, not a string. Rename
those variables to `stream`.
Long ago, there was work on an experimental integration model where
every user in a realm would have administrative control over all bots,
with the goal of simplifying the process of setting up communally
administered bots for smaller teams. While that new model was never
fully implemented (and thus never set up as an option), an error in
that original implementation meant that the data on all bots in a
realm, including their API keys, was sent to the browsers of users via
the `realm_bots` variable in `page_params`. The data wasn't displayed
in the UI for non-admin users, but was available via e.g. the
javascript console.
This commit updates this behavior to only send sensitive bot data like
API keys to the owner of the bot (and realm admins).
We may in the future implement a model simplifying communally
administered integrations, but if we do that, those bots should be
limited in their capabilities (e.g. only able to send webhook
messages).
This bug has been present since Zulip was released as open source.
The old code for this lookup was unnecessarily complicated because we
were working around Guardian, where the `is_realm_admin` check was
extremely expensive.
This commit adds the capability to keep track of and remove uploaded
files. Unclaimed attachments are files that have been uploaded to the
server but are not referenced in any message. A management command to
remove unclaimed files that are more than a week old is also included.
Tests for getting the files referenced in messages are also included.
Previously we needed to use a specified password when activating a
formerly mirror dummy user, in order for that user to be able to
(re)set their password and login. Now that we have our own password
reset form, this is no longer required.
As documented in https://github.com/zulip/zulip/issues/441, Guardian
has quite poor performance, and in fact almost 50% of the time spent
running the Zulip backend test suite on my laptop was inside Guardian.
As part of this migration, we also clean up the old API_SUPER_USERS
variable used to mark EMAIL_GATEWAY_BOT as an API super user; now that
permission is managed entirely via the database.
When rebasing past this commit, developers will need to do a
`manage.py migrate` in order to apply the migration changes before the
server will run again.
We can't yet remove Guardian from INSTALLED_APPS, requirements.txt,
etc. in this release, because otherwise the reverse migration won't
work.
Fixes #441.
Replaced calls to ifilterfalse with list comprehensions, because
ifilterfalse is not part of Python 3. Also changed some lists to sets
for faster lookup.
Refer to #256.
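A hypothetical before/after of the pattern being replaced:

    active_names = {'denmark', 'verona'}
    all_names = ['denmark', 'rome', 'verona', 'venice']

    # Before (Python 2 only):
    #   from itertools import ifilterfalse
    #   stale = list(ifilterfalse(lambda name: name in active_names, all_names))

    # After -- portable, and the set makes each membership test O(1):
    stale = [name for name in all_names if name not in active_names]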
Add a function email_allowed_for_realm that checks whether a user with
a given email is allowed to join a given realm (either because the email
has the right domain, or because the realm is open), and use it
whenever deciding whether to allow adding a user to a realm.
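A sketch of the check, assuming Realm's restricted_to_domain flag (the
exact field names may differ):

    from six import text_type
    from zerver.models import Realm

    def email_allowed_for_realm(email, realm):
        # type: (text_type, Realm) -> bool
        if not realm.restricted_to_domain:
            return True  # open realm: any address may join
        domain = email.split('@', 1)[-1].lower()
        return domain == realm.domain.lower()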
This commit is not intended to change any behavior, except in one case
where the Zulip realm's domain was not being converted to lowercase.
While I believe this actually produced correct output since users are
always subscribed to streams within their realm, this code was
definitely wrong.
Discovered using the mypy type-checking tool.
Previously:
* It wouldn't raise an exception if the stream didn't exist
* It didn't correctly handle being passed a stream name
that differed in case from the stream name in the database.
Just doing the database query is more readable, and has about the same
performance as before in the case where active user dicts for the
realm are in cache (and is substantially better in the rare case that
this isn't in the cache).
Thanks to @dbiollo for the perf investigation and suggestion!
get_realm is better in two key ways:
* It uses memcached to fetch the data from the cache and thus is faster.
* It does a case-insensitive query and thus is safer.
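A sketch of what that looks like (the cache key and timeout here are
illustrative):

    from typing import Optional
    from six import text_type
    from zerver.lib.cache import cache_with_key
    from zerver.models import Realm

    @cache_with_key(lambda domain: u'realm:%s' % (domain,),
                    timeout=3600 * 24 * 7)
    def get_realm(domain):
        # type: (text_type) -> Optional[Realm]
        try:
            # __iexact makes the lookup case-insensitive.
            return Realm.objects.get(domain__iexact=domain.strip())
        except Realm.DoesNotExist:
            return None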
Previously we only did this when new human users were created via the
login process, which meant the management command to create a user did
not add the user to default streams (for example) and any future code
that might want to register a new Zulip user (such as the LDAP
integration) would need to import views/__init__.py in order to
properly set this up.
The do_send_missedmessage_events_reply_in_zulip function in the email
mirror didn't support an EMAIL_GATEWAY_PATTERN that wasn't of the form
%s@example.com (which resulted in replies to missed message emails failing
to be parsed).
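A minimal way to handle a general pattern is to split on the %s
placeholder instead of assuming the %s@example.com shape (the helper
name here is hypothetical):

    from typing import Optional
    from six import text_type
    from django.conf import settings

    def extract_reply_token(address):
        # type: (text_type) -> Optional[text_type]
        # EMAIL_GATEWAY_PATTERN contains exactly one %s placeholder,
        # e.g. u'zulip+%s@example.com'.
        prefix, suffix = settings.EMAIL_GATEWAY_PATTERN.split('%s')
        if address.startswith(prefix) and address.endswith(suffix):
            return address[len(prefix):len(address) - len(suffix)]
        return None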
Django commit 596564e80808 stores the user id in the session as a
string, which broke our code that extracts the user id and compares
it to the id of a UserProfile object.
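The defensive fix amounts to normalizing both sides to strings before
comparing (a sketch, not the exact code):

    from django.http import HttpRequest
    from zerver.models import UserProfile

    def session_matches(request, user_profile):
        # type: (HttpRequest, UserProfile) -> bool
        # Newer Django stores the id as a string under '_auth_user_id'.
        session_user_id = request.session.get('_auth_user_id')
        return (session_user_id is not None
                and str(session_user_id) == str(user_profile.id))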
(imported from commit 99defd7fea96553550fa19e0b2f3e91a1baac123)
Fixes:
  File "/srv/zulip/zerver/lib/actions.py", line 605, in recipient_for_emails
    if not (normalized_emails & admin_realm_admin_emails or normalized_emails & settings.CROSS_REALM_BOT_EMAILS):
TypeError: unsupported operand type(s) for &: 'set' and 'list'
(imported from commit f39a95dad7b3207e9188fc03926cd116061ef3f3)
Include new field on Realm to control whether e-mail invitations are required
separately from whether the e-mail domain must match.
Allow control of these fields from admin panel.
Update logic in registration page to use these fields.
(imported from commit edc7f0a4c43b57361d9349e258ad4f217b426f88)
Truthfully, the actual way to do this is going to be a bit
more involved and also involves changing Realm.NOTIFICATION_STREAM_NAME,
probably on a realm-by-realm basis.
(imported from commit b6a05849d215e07ee6716d116ff5e2c819d5b4be)
This way if two browsers are disagreeing about your active status, the
active one wins. The active browser continues to update your timestamp,
and the idle browser's changes are discarded until the timestamp on your
active status expires.
(imported from commit dc29e013d045c4b72793097f611ba6802c58e57a)
We still don't show this in the frontend, aside from our usual "Not
delivered" message that we also show when you send to a non-existent
user.
Addresses #2349
(imported from commit 2f348b15a4d539987ddbcccbbf40e2be87c1f92d)
In a test run with a hand-constructed query, this sped up the query time from
280ms to 50ms.
(imported from commit 8cbe199ca50a487491d13d6d6ef940ea668c1038)
Adds APIs to edit a bot's default_to_stream, default_events_register_stream
and default_all_public_streams.
(imported from commit c848a94b7932311143dad770c901d6688c936b6d)
Support setting default_to_stream, default_events_register_stream, and
default_all_public_streams during in the bot creation API.
(imported from commit bef484dd8be9f8aacd65a959594075aea8bdf271)
This allows bot owners to configure which streams messages are delivered
to without needing to change webhook URLs or configuration files.
(imported from commit 32a0c26657c145b001cd8cb3ce0a0364d48902ce)
Before saving a Message object, call update_calculated_fields()
to set the has_attachment/has_image/has_link fields.
Note that the pre_save hook we added here does not get called
if you call bulk_create, hence the explicit call to
update_calculated_fields() in do_send_messages().
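A sketch of the hook and the caveat (assuming update_calculated_fields()
derives the three booleans from the message content):

    from typing import Any
    from django.db.models.signals import pre_save
    from django.dispatch import receiver
    from zerver.models import Message

    @receiver(pre_save, sender=Message)
    def pre_save_message(sender, **kwargs):
        # type: (Any, **Any) -> None
        kwargs['instance'].update_calculated_fields()

    # bulk_create() bypasses pre_save signals, which is why
    # do_send_messages() calls update_calculated_fields() explicitly.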
(imported from commit 1d60ae5908ef186aa5ff1e39277dbb2b765e60d4)
A stream is vacant when it has no subscribers and occupied when it has at least
one subscriber.
We have a slightly odd model where stream creation is conflated with
subscription creation. Streams are created by attempting to subscribe to a
stream that doesn't exist. We also hide streams with no subscribers from users
to make it seem like they've gone away. However, we can't actually remove those
streams because we want to preserve history.
This commit moves us towards a separation of these two concepts. By sending
events for stream creation, occupation, vacancy, and deletion, we allow clients
to directly observe the global state of streams rather than indirectly observing
subscription information. A more complete solution would involve adding a view
for explicitly creating streams without subscribing to them.
This commit does not handle the intricacies of invite-only streams. We
currently simply do not send these events for invite-only streams.
(imported from commit 5430e5a5eecefafcdba4f5d4f9aa665556fcc559)
We now allow the list of recipients to be sent as a
comma-delimited string with optional JSON encoding.
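A sketch of the parsing, close to (but not necessarily identical to)
the real helper:

    from typing import List
    import six
    import ujson

    def extract_recipients(raw):
        # type: (str) -> List[str]
        try:
            data = ujson.loads(raw)   # a JSON-encoded list of emails
        except ValueError:
            data = raw                # a plain comma-delimited string
        if isinstance(data, six.string_types):
            data = data.split(',')
        return [email.strip() for email in data if email.strip()]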
(imported from commit e928b037bbd258348eb5b2ecca486d0bb77f593e)
Have the server send down the stream's id for removal
events, and have the client use that id to look up the
stream in its internal data structures. This sets the
stage for eventually just sending the stream id (and not
the stream name) down to clients, once all our clients
are ready to use the stream id.
(imported from commit 922516c98fb79ffad8ae7da0396646663ca54fd0)
Our overall guideline is that type names for events are singular, and the list of
events of that type are plural. 'subscriptions' was not following this guideline
and (potentially as a result) had a bug where it was impossible for clients to explicitly
subscribe to subscription change events properly.
(imported from commit 7b3162141fd673746e0489199966c29ea32ee876)
This change also makes it so that the test_rename_stream()
test exercises the code path. We need to subscribe the user
to the stream in order to generate events.
(imported from commit 77f965efbf5a766eb8de23486e303fa135b2e638)
Instead of having home() set page_params.realm_name directly from
the user_profile object, have fetch_initial_state_data() set it.
This is more consistent with how we treat other data, and it protects
us against a race condition where realm name updates arrive during
the DB fetching.
(imported from commit 545e3bd73f150438126e3f941e9bebc7aa1d0614)
In particular, make the stream history inaccessible and free up the
name to be re-used.
(imported from commit 6063b7a484ed0ba0279a17d2b3e9a92b5ef1f762)
This is a slight behavior change, as we now flush user_profile
caches for bots as well as humans.
(imported from commit 24c72c44d851ee4c66a67a4728cd6c548faeedcd)
Stream name and description updates were being sent to all of the
active users on a realm. They are now only sent to users who would have
information about that stream.
(imported from commit 2621ee8029f7356bf44ec493d7b5361bd546a8f5)
This is a lot simpler and eliminates a possible failure mode in the
data transfer path.
(imported from commit 19308d2715bbd12dc9385234f1d9156f91bdfae0)
To mutate the state for removing subscribers, the previous
code was essentially adding in event['subscriptions'] to
state['unsubscribed'], but that was a naive approach, since
the event object only has the name of the subscriber, and not
the full subscription info.
We instead effectively copy records over from state['subscriptions']
to state['unsubscribed'], and we also do surgery on the subscribers,
which required adding the user_profile parameter to apply_events().
With the code apparently working now, I was able to remove the
match_except() test helper and use a more thorough matcher in
the test on do_remove_subscription().
Part of fixing the "remove" case was cleaning up the "add" case,
since they aren't quite symmetric opposites of each other, although
under this refactoring they now share the new name() helper.
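In outline, the "remove" mutation now moves full records (a sketch; the
helper name is hypothetical):

    from typing import Any, Dict, Set

    def move_removed_subscriptions(state, removed_names):
        # type: (Dict[str, Any], Set[str]) -> None
        # Copy full records over, so state['unsubscribed'] matches what
        # fetch_initial_state_data() itself would have produced.
        removed = [sub for sub in state['subscriptions']
                   if sub['name'] in removed_names]
        state['unsubscribed'].extend(removed)
        state['subscriptions'] = [sub for sub in state['subscriptions']
                                  if sub['name'] not in removed_names]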
(imported from commit 0deab67d0c7b08b3ad962493efae3762a835fd29)
Because full_name and is_admin changes go through many similar,
generic codepaths, it is almost more work at this point to keep
is_admin out of page_params than it is to just put it in. So
I put it in. This should pave the path for showing admins in
the GUI.
This commit actually started with my adding a test
that calling apply_events() with the notification you get
when calling do_change_is_admin() updates
state['realm_users'] to be similar to what you would get
out of fetch_initial_state_data(). We didn't have test coverage
there before. Making that test pass forced my hand to
either add is_admin to page_params or to special-case
apply_events() not to update page_params with is_admin. I
chose the former approach.
(imported from commit 1e49d59c66540014284529c29d5007224be6a0c6)
A description was added to the streams and it is now displayed on the
subscriptions page. It cannot be set in the UI yet.
(imported from commit 81d08b65eee42dba87cd99dd5bd30106c4eb6c6a)
If a name change event arrived during the call to
fetch_initial_state_data(), we would call apply_events() to
update the data structure that eventually becomes page_params.
Our update code, instead of surgically updating the fields, was
just overwriting the record, which led to is_bot being taken
out of the record when only full_name was in the event.
Apart from fixing the "update" case to do the right thing,
this commit also does a bit of cleanup on the code handling
"realm_user" events to make it more generic in how it does
add/remove/update. If we could standardize our events a bit
more, this could eventually lead to DRYing up some of the
apply_events() code.
(imported from commit 772e2fcd6a5605ccb6e8d1bc499b5f336934cf3c)
Added a default_desktop_notifications boolean to UserProfile with a UI
in Zulip Labs. This flag is used to default the notification flag on new
subscriptions.
(imported from commit a25223cc5ecf09980cf877991e25034bb3fd4046)
Google Groups won't let you add an email address that has a '+' in it
as a member of a group, so we allow '.' as well.
This commit also fixes a current issue with email mirroring, where it
doesn't work if you have a '+' in the stream name.
This fixes Trac #2102.
(imported from commit 9a7a5f5d16087f6f74fb5308e170a6f04387599e)
Cross-realm private messages are only used to respond to support
requests. So now cross-realm private messages are only allowed if
exactly two realms are in the private message and one of them is
zulip.com.
(imported from commit f01a2824e214682acb22a6995714a9d1b0d0c66f)
This does result in a few more RabbitMQ events being processed (though
a negligible number compared to what we already do), but it saves a
database query from inside Tornado whenever we occasionally have a
cache miss looking up the UserProfile, which is far more important.
(imported from commit a553a00a3004ba27bfb54ffbc3e9c9b170ebae4d)
When we edit a message, send out UserMessage flags to the recipients.
This sets the stage for making sure that changes related to
user-specific alert words or mentions get sent out to users.
(imported from commit bce1de19acef44b5e106352f261203352ece02b9)
Previously, we were processing mirror dummy users (and deactivated
users) as recipients in Tornado -- resulting in bugs where the missed
message hooks would fire for these nonexistent users.
Our Tornado real-time delivery system only needs the list of active
users both for delivery and for presence information updates.
(imported from commit b81143f106a4d0eefa4b838e7c074b2963259746)
Before this is deployed to prod, we need to manually frob our database
to set the is_mirror_dummy=True bit for all existing mirror users.
(imported from commit 39f1938cef091cf1d7d97307f76b137fe1d92b6c)
In plaintext e-mails these will be simple links.
In HTML e-mails these become a <link> and <img>, which some web mail
clients may inline.
(imported from commit b1242dfd917008a019981eb2224c1c7f5f84739f)
Since we changed the initial subscription data to include
user_profile_ids rather than emails, we need to preserve that when
adding in events generated during the page load.
(imported from commit 4f4071b8ba30e57c6f64c9e7b54c1cc754e8f010)
This will allow us to substantially decrease the server-side work that
we do to support our Mirroring systems (since the personal mirrors can
request only messages that user sent) and also is what we need to
support a single-stream Zulip widget that we embed in webpages.
(imported from commit 055f2e9a523920719815181f8fdb44d3384e4a34)
This replaces the AppleDeviceToken table with a generic
PushDeviceToken with a `kind` field to make it easier to add functionality
like per-device/per-stream settings that share code between Android and
iOS devices.
The schema must continue to work on prod with the old table name, so we
add the new table in parallel and can drop the old table once this code
hits prod and any necessary data is copied.
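A sketch of the generic model (field details are illustrative, and this
era of Django did not require on_delete):

    from django.db import models
    from zerver.models import UserProfile

    class PushDeviceToken(models.Model):
        APNS = 1
        GCM = 2
        KINDS = ((APNS, 'apns'), (GCM, 'gcm'))

        # 'kind' lets APNS and GCM tokens share one table and codepath.
        kind = models.PositiveSmallIntegerField(choices=KINDS)
        token = models.CharField(max_length=4096, unique=True)
        last_updated = models.DateTimeField(auto_now=True)
        user = models.ForeignKey(UserProfile)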
(imported from commit 0209a7013f2850ac6311f23c3d6f92c65ffd19e3)
Unbundle the push notifications from the missed message queue processors
and handlers. This makes notifications more immediate, and sets things up
for better badge count handling, and possibly per-stream filtering.
(imported from commit 11840301751b0bbcb3a99848ff9868d9023b665b)
Now that we support email aliases, we have to be careful when going from
an email address to a domain that we can use to look up a Realm
object. When we care about the Realm's domain, we want to follow
any RealmAliases that exist for a certain domain.
When we just care about the original email address domain itself,
for comparison or other purposes, use split_email_from_domain
This removes the ambiguity of having to decide when to use
email_to_domain + RealmAlias or just email_to_domain.
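A sketch of the two lookups side by side (the alias-following helper's
name here is hypothetical):

    from six import text_type
    from zerver.models import RealmAlias

    def split_email_from_domain(email):
        # type: (text_type) -> text_type
        # The literal domain of the address itself, for comparisons.
        return email.split('@')[-1].lower()

    def realm_domain_for_email(email):
        # type: (text_type) -> text_type
        # Follow a RealmAlias, if one exists, to the realm's domain.
        domain = split_email_from_domain(email)
        alias = RealmAlias.objects.filter(domain=domain).first()
        return alias.realm.domain if alias else domain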
(imported from commit 0e199495502d946ce2e1aae56263e7e8665be4ed)
ScheduledJobs with type Email displace the usual Mandrill codepaths
in Zulip Enterprise deploys.
* Email-specific helper functions will appear in deliver_email.py
* 0058_auto__add_scheduledjob.py
(imported from commit 8db08d8a279600322acfdbed792dc1a676f7a0ab)
Looking at the historical data, fewer than 50% of active users have
completed the checklist, which means that it is just persistent
clutter. We also have other better ways of encouraging people to send
traffic and get the apps now.
This commit removes both the frontend UI and backend work but leaves
the db row for now for the historical data.
(imported from commit e8f5780be37bbc75f794fb118e4dd41d8811f2bf)
This has a small bug where we don't actually filter the message out of
the home view; fixing that requires adding an index on the "flags"
field of UserMessage.
(imported from commit 492c99d0a8e87b253e577be6564bec12099bd8e9)
The gather_subscriptions_helper() does a separate query to
get emails from user_ids, and it returns an email_dict to its
caller.
This may seem like a step backward, since gather_subscriptions()
now needs to do an additional query, but there is some benefit
in passing fewer redundant emails over the wire from the DB.
The real payoff, though, will come in subsequent commits, where
we will reduce the amount of data going over the wire to the browser,
which will benefit users with slow connections.
(imported from commit bf1cc5828a4c5f68cafd052ea29a177837970206)
clear_followup_emails_queue now filters by from_email too.
send_local_email_template_with_delay passes the template_payload into the subject template.
(imported from commit 8044fe2ebad90a9d6d5c67cdfdd08801760fd7f7)
The current version should only be used for testing; for example,
if you want to create a bunch of streams for stress testing, you
can run this in a loop.
(imported from commit ec51a431fb9679fc18379e4c6ecdba66bc75a395)
It makes the event queue return all messages on public streams, rather
than only the user's subscriptions. It's meant for use with chat bots.
(imported from commit 12d7e9e9586369efa7e7ff9eb060f25360327f71)
I added filter() statements to do_update_message_flags().
Here is some context:
Steve Howell: Case 1, have AND clause to reduce work for DB.
humbug=> update zerver_usermessage set flags = (flags & ~1) where id > 9000;
UPDATE 382
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
382
(1 row)
humbug=> explain analyze update zerver_usermessage set flags = (flags | 1) where (flags & 1) = 0;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Update on zerver_usermessage (cost=0.00..266.85 rows=47 width=27) (actual time=5.727..5.727 rows=0 loops=1)
-> Seq Scan on zerver_usermessage (cost=0.00..266.85 rows=47 width=27) (actual time=0.045..2.751 rows=382 loops=1)
Filter: ((flags & 1::bigint) = 0)
Rows Removed by Filter: 9000
Total runtime: 5.759 ms
(5 rows)
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
0
(1 row)
Leo Franchi: Sounds reasonable, but I know way less than zev about DBs so I'll defer to his judgement :)
Steve Howell: Case 2, how the code works now:
humbug=> update zerver_usermessage set flags = (flags & ~1) where id > 9000;
UPDATE 382
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
382
(1 row)
humbug=> explain analyze update zerver_usermessage set flags = (flags | 1);
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
Update on zerver_usermessage (cost=0.00..243.28 rows=9382 width=27) (actual time=362.075..362.075 rows=0 loops=1)
-> Seq Scan on zerver_usermessage (cost=0.00..243.28 rows=9382 width=27) (actual time=0.008..6.138 rows=9382 loops=1)
Total runtime: 362.105 ms
(3 rows)
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
0
(1 row)
Steve Howell: In both trials, we set it up so that only 382 of 9382 rows need to be updated. The first trial runs about 63x as fast. The second trial, if my theory is correct, is doing 24x as many writes as it needs. Both trials are reading all 9382 rows.
Steve Howell: The expense of the update statement seems to be proportional to the number of rows you "update", not the number of rows that you actually change.
Steve Howell: For now I created #1869.
Zev Benjamin: That sounds like a reasonable explanation. The disk IO can be expensive.
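In ORM terms, the fix is roughly the following (a sketch; the real code
handles both adding and removing flags):

    from typing import List
    from django.db.models import F
    from zerver.models import UserMessage, UserProfile

    def add_flag(user_profile, message_ids, flag):
        # type: (UserProfile, List[int], int) -> None
        msgs = UserMessage.objects.filter(user_profile=user_profile,
                                          message__id__in=message_ids)
        # Case 1 above: the WHERE clause skips rows whose bit is already
        # set, so the UPDATE only writes the rows that actually change.
        msgs = msgs.extra(where=['flags & %d = 0' % (flag,)])
        msgs.update(flags=F('flags').bitor(flag))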
(imported from commit d9090daee1f81cad76c430de0956f9bd504da075)
Handled by the queue processor for signups. Added a management command
that accomplishes the same task, in case it's needed for manually added users,
or in case we goof and need to remove queued emails for a given user.
This addresses Trac #1807
(imported from commit 6727b82a07fa6a3ea3d827860c9e60fd0602297a)
This will hopefully incentivize people to click one and get back into
the app.
We'll also need this for digest emails.
(imported from commit 57191c3fcca3b12df93a81e4692bb7eb8ccc83b2)
The realm should always be the realm of the stream, and we should
always pass in a stream rather than sometimes passing in a stream name
and other times passing in a stream.
(imported from commit a098d6ed3db218a37c1b6b7c956e847c316c2d13)
We have been persisting muting preferences on the back end for
a while, but we haven't been adding them to page_params for the
client to have at reload/startup time.
(imported from commit d9ca68aa0e4d22bfb0e6ce67fc0bc63981175c8b)
We now bulk-fetch subscription information once from the database
and use it throughout bulk_add_subscriptions in order to avoid
hitting the db O(streams) times.
On my machine this shaved the accounts_register API call from making
66 queries to making 37 queries.
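The shape of the bulk fetch (a sketch; the helper name is hypothetical):

    from zerver.models import Recipient, Subscription

    def bulk_fetch_subscriptions(user_profiles, streams):
        # One query for all (user, stream) pairs, instead of one query
        # per stream.
        subs = Subscription.objects.filter(
            recipient__type=Recipient.STREAM,
            recipient__type_id__in=[stream.id for stream in streams],
            user_profile__in=user_profiles,
        ).select_related('recipient', 'user_profile')
        return {(sub.user_profile_id, sub.recipient.type_id): sub
                for sub in subs}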
(imported from commit 5dd5ad3f50b2a6edf85b5f1d55ebd697a1c60647)
When we send a message, we send some presence information to Tornado
to help it figure out how to generate emails for idle recipients of
a message. This change limits the presence info to being the
intersection of present users and recipients of the message. It is
just an internal optimization to avoid queueing up unneeded data.
The history behind this feature is that I implemented it a while
back, but I think I made a rebase mistake that sent all the presence
data over the wire, despite having code to filter on recipients.
It was mostly harmless, just leading to some inefficiency which is
now fixed.
(imported from commit 7c8e97705afb299c67b99053909e952fbc823551)
For a 4-person stream, we were hitting the DB 8 times, and 4 of
those queries were to lazily get user.email for the 4 recipients
due to upstream code using only(). I added user_profile__email
to the only() call.
I believe this regression started 9/18, and after pushing this
to prod, we should look at this graph:
https://stats1.zulip.net/graphs/8274cd84588
(imported from commit 70629cb69fe5955c674ba76482609dfe78e5faaf)
Use stream.num_subscribers() in check_if_a_bot_is_sending_a_message_to_an_empty_stream().
The num_subscribers() function uses Django's count() method, which returns
a single row, instead of calling len() on an iterator of query rows.
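A sketch of the method on Stream:

    from zerver.models import Recipient, Subscription

    def num_subscribers(self):
        # type: () -> int
        # COUNT(*) runs in the database and returns a single row, rather
        # than fetching every subscription and calling len() on it.
        return Subscription.objects.filter(
            recipient__type=Recipient.STREAM,
            recipient__type_id=self.id,
            user_profile__is_active=True,
            active=True,
        ).count()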
(imported from commit 6157fe248945e9288ee71d8cc39fb6dda4e9a247)
Some bots created by us do not have owners. Don't try to send a
message to the nonexistent owner.
(imported from commit ab952eccd7d6c4728e9477a106142214b5c81ca9)
Instead just rely on the 2-minute delay in the management command to
batch conversations.
We've had people report being confused or thinking the feature was
broken when they didn't get e-mails because of our rate-limiting, so
let's see if this is not too overwhelming.
(imported from commit 706ddb07b906b5c2edea1159c04acc2ee6f06e29)
Don't send peer_add notifications to users who are already
getting add notifications, because they will already know
about subscribers.
(imported from commit 726b54ae0e30b71440b17d9c51b026872ea96218)
It only grabs the user_profile_id column now. This leads to a
speedup of about 16x between grabbing large ORM objects vs.
small 1-column dictionaries.
(imported from commit 95150bff3fdcbe250b04f014062224af42a6644f)
Splitting out notify_peers() will give us flexibility for cleaning
up how we notify peers for bulk adds.
(imported from commit e108fa2c432cc1fe54d788c58c82c983e0f2394e)
If you expand subscribers on your settings page, you will now see
a query like this in your postgres logs:
SELECT "zerver_userprofile"."email"
FROM "zerver_subscription" INNER JOIN "zerver_recipient" ON ("zerver_subscription"."recipient_id" = "zerver_recipient"."id") INNER JOIN "zerver_userprofile" ON ("zerver_subscription"."user_profile_id" = "zerver_userprofile"."id") WHERE ("zerver_recipient"."type" = 2 AND "zerver_subscription"."active" = true AND "zerver_recipient"."type_id" = 40 AND "zerver_userprofile"."is_active" = true )
The join's still complicated, but the list of fields is one instead of 40+.
(imported from commit 48de1f888193a4d23fcea52d0b633d134e4a3ff7)
get_subscribers_backend() now calls the new get_subscriber_emails()
function, which just queries the email field:
"zerver_userprofile"."email"
...instead of querying about 40 fields that it never uses.
I was able to verify the query slimming by watching my postgres server log.
Also, you can verify that the ORM does roughly 16x less work using values():
>>> def f(): return [sub.user_profile.email for sub in list(Subscription.objects.all().select_related())]
...
>>> def g(): return [row['user_profile__email'] for row in list(Subscription.objects.all().values('user_profile__email'))]
...
>>> def timeit(func): t = time.time(); func(); return time.time() - t
...
>>> timeit(f)
0.045198917388916016
>>> timeit(g)
0.002752065658569336
(imported from commit a69f690a96d076b323fdfc2f4821b0548bdfac7f)
The get_status_dict_by_realm helper gets called whenever our
realm user_presences cache expires, and it used to query these fields:
"zerver_userpresence"."id", "zerver_userpresence"."user_profile_id", "zerver_userpresence"."client_id", "zerver_userpresence"."timestamp", "zerver_userpresence"."status", "zerver_userprofile"."id", "zerver_userprofile"."password", "zerver_userprofile"."last_login", "zerver_userprofile"."is_superuser", "zerver_userprofile"."email", "zerver_userprofile"."is_staff", "zerver_userprofile"."is_active", "zerver_userprofile"."is_bot", "zerver_userprofile"."date_joined", "zerver_userprofile"."bot_owner_id", "zerver_userprofile"."full_name", "zerver_userprofile"."short_name", "zerver_userprofile"."pointer", "zerver_userprofile"."last_pointer_updater", "zerver_userprofile"."realm_id", "zerver_userprofile"."api_key", "zerver_userprofile"."enable_desktop_notifications", "zerver_userprofile"."enable_sounds", "zerver_userprofile"."enter_sends", "zerver_userprofile"."enable_offline_email_notifications", "zerver_userprofile"."last_reminder", "zerver_userprofile"."rate_limits", "zerver_userprofile"."avatar_source", "zerver_userprofile"."tutorial_status", "zerver_userprofile"."onboarding_steps", "zerver_userprofile"."invites_granted", "zerver_userprofile"."invites_used", "zerver_userprofile"."alert_words", "zerver_userprofile"."muted_topics", "zerver_client"."id", "zerver_client"."name"
Now it queries just the fields it needs:
"zerver_client"."name", "zerver_userpresence"."status", "zerver_userpresence"."timestamp", "zerver_userprofile"."email" FROM "zerver_userpresence"
Also, get_status_dict_by_realm is now namespaced under UserPresence as a static method.
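In ORM terms, the slimming is a values() call naming just those fields
(a sketch):

    from zerver.models import UserPresence

    def presence_rows_for_realm(realm):
        # values() limits the query to the four fields the status dict
        # needs, instead of the 40+ column join shown above.
        return UserPresence.objects.filter(
            user_profile__realm=realm,
            user_profile__is_active=True,
        ).values('client__name', 'status', 'timestamp',
                 'user_profile__email')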
(imported from commit be1266844b6bd28b6c615594796713c026a850a1)
This function gets user presence information, which changes rapidly
and requires a pretty simple query.
(imported from commit f9b9f0f22277335c76eb4371930a4fff2758a240)
The do_send_messages() function populates the user_presences data structure
for process_new_message(), so that Tornado code never needs to hit
the database or memcached to get the user presence info.
(imported from commit 194aeaead8fa712297a2ee8aff5aa773b92f1207)
These engagement data will be useful both for making pretty graphs of
how addicted our users are as well as for allowing us to check whether
a new deployment is actually using the product or not.
This measures "number of minutes during which each user had checked
the app within the previous 15 minutes". It should correctly not
count server-initiated reloads.
It's possible that we should use something less aggressive than
mousemove; I'm a little torn on that because you really can check the
app for new messages without doing anything active.
This is somewhat tested but there are a few outstanding issues:
* Mobile apps don't report these data. It should be as easy as having
them send in update_active_status queries with new_user_input=true.
* The semantics of this should be better documented (e.g. the
management script should print out the spec above).
(imported from commit ec8b2dc96b180e1951df00490707ae916887178e)
It was getting hard to follow and is going to get more complicated
with a new super user check in a later commit.
(imported from commit 8d5cfa960824519d87ce0f09aab3a120ba9ef357)
An important part of this is updating the various caches that cache
the display_recipient.
(imported from commit 888bf54fd205516cf31a25ba3f4e45ad11bbd4d5)
Before this it was [deleted]. Using parens is consistent with how we put
in (no topic) if you don't specify a topic.
(imported from commit 931c06a1096cf7b0d226336cbe82535abd2e6032)
This helps make our statuses more meaningful and should resolve Trac #1534.
As part of this, we lower OFFLINE_THRESHOLD_SECS to 70 seconds and
mark the user as idle after 5 minutes.
(imported from commit ee6b1ad203554a84b11e16c4c6195be9df5bcf4f)
This change would allow anyone in the realm to set a topic for a "no topic"
message. As soon as the message topic is set, only the sender can change it again.
(imported from commit 0a91a93b8fd14549965cedc79f45ffd869d82307)
This has the amusing side effect of showing all the Zulip bots in the
administration view, because none of them have the is_bot flag set.
(imported from commit cdec19d2109c092018c1f331aa32f345d1587683)
ALLOW_REGISTER was no longer being used in determining whether you could
register for the app, so I've removed it to avoid additional local-dev /
production issues.
This closes #1613.
(imported from commit c928c6d350602d35f745ae1e60d734e4567885fc)
On Debian systems, this is found in the `python-dns` package.
On OS X and others, install "pydns" using your Python package manager.
(imported from commit 17827d0a1d3d72b12945df5563295a1573bfa1ed)
This was previously causing us to generate a traceback every time we
hit a duplicated zephyr due to CC'ing.
(imported from commit 240e1559655d0166dcd864e84649ab97b87a29ad)
Our API documentation says that we do, and it seems like it could be
useful to clients, so we might as well do it.
(imported from commit c391e4952a09d41df4dc06e3dc6ee094f774822b)
This needs to be deployed to both staging and prod at the same
off-peak time (and the schema migration run).
At the time it is deployed, we need to make a few changes directly in
the database:
(1) UPDATE django_content_type set app_label='zerver' where app_label='zephyr';
(2) UPDATE south_migrationhistory set app_name='zerver' where app_name='zephyr';
(imported from commit eb3fd719571740189514ef0b884738cb30df1320)