As far as I can tell, we don't actually use this value, but better to
have it be clear.
(imported from commit 3655b87f28b0554ee3db0acb2c0d59543dd093a1)
This replaces the AppleDeviceToken table with a generic
PushDeviceToken with a `kind` field to make it easier to add functionality
like per-device/per-stream settings that share code between Android and
iOS devices.
The schema must continue to work on prod with the old table name, so we
add the new table in parallel and can drop the old table once this code
hits prod and any necessary data is copied.
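As a rough sketch, the new model looks something like this (only the `kind` discriminator is described above; the other fields and the APNS/GCM values are assumptions):
    from django.db import models

    class PushDeviceToken(models.Model):
        # `kind` distinguishes which push service the token belongs to; the
        # APNS/GCM split here is an assumption about how we'll use it.
        APNS = 1
        GCM = 2
        KIND_CHOICES = ((APNS, 'apns'), (GCM, 'gcm'))

        kind = models.PositiveSmallIntegerField(choices=KIND_CHOICES)
        token = models.CharField(max_length=4096, unique=True)
        user = models.ForeignKey('UserProfile', on_delete=models.CASCADE)
        last_updated = models.DateTimeField(auto_now=True)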
(imported from commit 0209a7013f2850ac6311f23c3d6f92c65ffd19e3)
This prevents us from failing if the first or last name is unset.
We fall back to None, which allows the user to set their name even if
real names are restricted; that is probably better than forcing them to
have no name.
Closes trac #2118.
(imported from commit 1ff8a55022f3a3baf67575b593a679e21c0f3194)
And in the meanwhile, comment what's going on so that we don't break
this in refactoring again later.
(imported from commit a3119cd1eab3d54cb1883f2c8cad0d147cb04ba7)
Currently, all of the realms we intend to create are created manually,
and in any case do_create_realm is the correct way to create a realm.
(imported from commit 42280aff461aa17ffee22ab1c7b7f43757648eec)
I'd also like to add a database table to actually store the values
that we get out of this and our send message requests for future
inspection, but for now, grepping logs+statsd is good enough.
(imported from commit 99ef179651850217fe6e82c5e928d122ca91101e)
Now that we support email aliases, we have to be careful when turning
an email address into a domain that we then use to look up a Realm
object. When we care about the Realm's domain, we want to follow
any RealmAliases that exist for that domain.
When we just care about the original email address's domain itself,
for comparison or other purposes, use split_email_from_domain.
This removes the ambiguity of having to decide when to use
email_to_domain + RealmAlias or just email_to_domain.
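Concretely, the split looks something like this (split_email_from_domain and RealmAlias come from the code above; the realm-resolving helper's name and body are illustrative):
    from zerver.models import RealmAlias

    def split_email_from_domain(email):
        # The literal domain of the address, for comparisons and the like.
        return email.split('@')[-1].lower()

    def resolve_email_to_realm_domain(email):
        # Hypothetical helper name: map the address's domain to the domain of
        # the Realm it belongs to, following any RealmAlias for that domain.
        domain = split_email_from_domain(email)
        try:
            return RealmAlias.objects.get(domain=domain).realm.domain
        except RealmAlias.DoesNotExist:
            return domain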
(imported from commit 0e199495502d946ce2e1aae56263e7e8665be4ed)
Until we can add a banner to help users subscribe, it may be confusing to
narrow to a stream where you are not subscribed.
Partial revert of 390bdef
(imported from commit ea75fc59b979589b975465a3fecffea0f014fcf6)
It's a little weird that these still open in a new tab, but it might
be best to keep them consistent with all other links?
This is a first pass on Trac #1927.
(imported from commit 390bdef790a83af4240ad5f5a82e572ef5824756)
If authoritative data is available from, say, the LDAP database, we now
ignore the POSTed user name and don't offer it as a form field.
We fall back to giving the user a text field if they aren't in LDAP.
If users do not have any form fields to fill out, we simply bring them
to the app without the registration page, logging them in using a dummy
backend.
(imported from commit 6bee87430ba46ff753ea3408251e8a80c45c713f)
The latter doesn't depend on the former; we can still fill in your full
name even if you didn't authenticate via LDAP.
This commit requires django_auth_ldap to be installed. On Debian
systems, you can do so via APT:
sudo apt-get install python-django-auth-ldap
On OS X, use your favourite package manager. For pip, I believe this
will work:
pip install django_auth_ldap
django_auth_ldap depends on the "ldap" Python package, which should be
installed automatically on your system.
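For reference, a minimal django_auth_ldap configuration looks roughly like this; the server URI, search base, and attribute mapping below are placeholders rather than our real settings:
    import ldap
    from django_auth_ldap.config import LDAPSearch

    AUTH_LDAP_SERVER_URI = "ldap://ldap.example.com"
    AUTH_LDAP_USER_SEARCH = LDAPSearch("ou=users,dc=example,dc=com",
                                       ldap.SCOPE_SUBTREE, "(uid=%(user)s)")
    # Used to fill in the user's name when LDAP is authoritative for it.
    AUTH_LDAP_USER_ATTR_MAP = {
        "first_name": "givenName",
        "last_name": "sn",
    }

    AUTHENTICATION_BACKENDS = (
        "django_auth_ldap.backend.LDAPBackend",
        "django.contrib.auth.backends.ModelBackend",
    )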
(imported from commit 43967754285990b06b5a920abe95b8bce44e2053)
This should address user reports of huge bankruptcy counts even when
they are relatively caught up. The root issue is that we sometimes
don't mark messages as read for some reason.
(imported from commit 8799305a8665f9ee239575e6e95f603f89c1d427)
UserProfile.show_admin was intended as a check for users who have
administrative rights in other realms, but we've harmlessly yet
erroneously been using it to check whether they are admins in their own realm.
Use the more straightforward check instead, with a more intuitive
name.
(imported from commit d81050c7dbbb19e59c5e31750be303a4630e1456)
This fixes a problem where the desktop app would attempt to load
https://zulip.akam.ai:8888/ after authenticating the user, which fails
with CSS issues.
We should probably, separately, change our Django-under-apache to only
serve the one URL that it needs and redirect the rest back to
Django-under-nginx.
(imported from commit 3e3251863618269790f61b371e88af57b6cfb272)
Errors are sent to a queue processor that posts them to staging,
just like the feedback bot.
(imported from commit 4a8d099672a1b3e48a8bc94148d8b53db73d2c64)
The /avatar/<email> URL redirects to the appropriate
avatar URL for an email, whether it's hosted by Gravatar
or Zulip. (This will work even for external users, as
it falls through to Gravatar.)
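A simplified sketch of the view; user_avatar_url here is a hypothetical stand-in for however we look up Zulip-hosted avatars:
    import hashlib
    from django.shortcuts import redirect

    def user_avatar_url(email):
        # Hypothetical: return the URL of an avatar uploaded to Zulip, or
        # None if this user hasn't uploaded one (or isn't a Zulip user).
        return None

    def avatar(request, email):
        url = user_avatar_url(email)
        if url is None:
            # Fall through to Gravatar, so this works even for external users.
            digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
            url = "https://secure.gravatar.com/avatar/%s?d=identicon" % (digest,)
        return redirect(url)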
(imported from commit 7e6f226659cb2e5a7f6426da0be8aa9bae9cff14)
The Freshdesk API is bonkers, but we do the best we can with it to
support notifications on ticket creation and ticket updates.
(imported from commit 2023622b274ef83f4e1544d0df286fe2e68581b3)
This is the amount of time between when a message is sent and when it
is rendered in the user's home view.
(imported from commit 468c28e77ba16c7256c359e90ab5aacf9d497585)
The main Activity page counts users as active if they have either
sent a message or updated a pointer. In the unlikely event that
somebody sent a message but never updated their pointer, we were
undercounting them, if they went through send_messages_backend.
(imported from commit 5f112be87a239980c38a18c13f9cd68e90d2e905)
This should help with determining the prevalence of slow sends as
experienced by users.
(imported from commit f00797679315c928af3c87ad8fdf0112f1dfa900)
The "desktop" counts aggregate all desktop clients, but on the
Clients tab, we are only interested in specific versions.
(imported from commit eea2d8da584a6fa32fa1f3a2bae71ef5daaba738)
This report will eventually replace the per-realm report that is
now accessible through /activity. In order not to disrupt Waseem,
I'm leaving the old reports around until we've polished the new
ones.
The old report does 24 different queries to get per-realm user data.
The new approach gets all the data at once, and it slices and dices
the data in Python to accommodate our slightly quirky data model.
On localhost, this is a typical query:
LOG: duration: 5.668 ms statement:
    SELECT "zerver_useractivity"."id", "zerver_useractivity"."user_profile_id",
           "zerver_useractivity"."client_id", "zerver_useractivity"."query",
           "zerver_useractivity"."count", "zerver_useractivity"."last_visit",
           "zerver_userprofile"."id", "zerver_userprofile"."email",
           "zerver_client"."id", "zerver_client"."name"
    FROM "zerver_useractivity"
    INNER JOIN "zerver_userprofile" ON ("zerver_useractivity"."user_profile_id" = "zerver_userprofile"."id")
    INNER JOIN "zerver_realm" ON ("zerver_userprofile"."realm_id" = "zerver_realm"."id")
    INNER JOIN "zerver_client" ON ("zerver_useractivity"."client_id" = "zerver_client"."id")
    WHERE "zerver_realm"."domain" = 'zulip.com'
    ORDER BY "zerver_userprofile"."email" ASC, "zerver_useractivity"."last_visit" DESC
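In outline, the new approach is a single fetch plus Python-side grouping, something like the following (the column list and grouping are illustrative):
    from collections import defaultdict
    from zerver.models import UserActivity

    def realm_user_activity(domain):
        rows = UserActivity.objects.filter(
            user_profile__realm__domain=domain,
        ).order_by(
            'user_profile__email', '-last_visit',
        ).values(
            'user_profile__email', 'client__name', 'query', 'count', 'last_visit',
        )

        # Slice and dice in Python rather than running one query per client.
        by_user = defaultdict(list)
        for row in rows:
            by_user[row['user_profile__email']].append(row)
        return by_user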
(imported from commit 0c71f4e32fe5a40f4496749dc29ad3463868d55e)
This page shows aggregate activity for a user on various
clients. This allows Waseem to troubleshoot things like users
switching between website and desktop, etc.
This particular page probably won't be used too much, but some of the
logic is going to be reused in the per-realm activity pages.
(imported from commit b8c1fad5bfa45daab40954f92319f6f89a3fa433)
Looking at the historical data, fewer than 50% of active users have
completed the checklist, which means that it is just persistent
clutter. We also have other better ways of encouraging people to send
traffic and get the apps now.
This commit removes both the frontend UI and backend work but leaves
the db row for now for the historical data.
(imported from commit e8f5780be37bbc75f794fb118e4dd41d8811f2bf)
We were using Gravatar for user avatars, but now users can
upload their avatars directly to Zulip, and we will store
their avatar for them. This removes the old Gravatar-related
interface and polling code.
This commit does not attempt to update the avatars in
messages that have already been loaded, either for the user
making the change or other users.
(imported from commit 301dc48f96f83de0136c93de57055638c79e0961)
The "Your Account" and "Notifications" boxes on the Settings
page each had their own border and their own "Save changes"
button, but they were within the same form and sending to the
same back end point.
This commit creates a separate form and endpoint for each
of the two boxes.
(imported from commit 04d4d16938f20749a18d2c6887da3ed3cf21ef74)
This commit must be simultaneously deployed on both staging and
prod0. It also requires completely taking down the app.
To deploy these changes, do:
* check out this commit at /root/zulip on postgres0, postgres1, staging, and prod0
* stop the process_fts_updates job on postgres0 and postgres1
* stop the app on staging and prod0
* do a puppet apply on postgres0, postgres1, staging, and prod0
* move the new client certificates into place on staging and app
* move the new server certificates into place on postgres0 and postgres1
* reload the database config on postgres0 and postgres1 (this might
actually require a restart)
* run tools/migrate-db on postgres0 as root
* do a deploy through this commit on staging and prod0
* start the process_fts_updates job on postgres0 and postgres1
* do a puppet apply on nagios
(imported from commit 819bdd14326c1425e2d3041a491a8ca3b9716506)
Use rest_dispatch for upload auth redirect so it doesn't send the
long URL to user_activity.
(imported from commit ab327bbd529412e43eee6d109f8550180544dbbb)
Trac #1734
This is implemented by bouncing uploaded file links through a view
that checks authentication and redirects to an expiring S3 URL.
This makes file uploads return a domain-relative URI. The client converts
this to an absolute URI when it's in the composebox, then back to relative
when it's submitted to the server.
We need the relative URI because the same message may be viewed across
{staging,www,zephyr}.zulip.com, which have different cookies.
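The authenticated-redirect view is conceptually just this; the settings names and the 60-second expiry are placeholders:
    from boto.s3.connection import S3Connection
    from django.conf import settings
    from django.shortcuts import redirect

    def serve_s3_file(request, path):
        # The real view also checks that the requester is authenticated (via
        # the usual decorators) before handing out a signed URL.
        conn = S3Connection(settings.S3_KEY, settings.S3_SECRET_KEY)
        url = conn.generate_url(60, method='GET',
                                bucket=settings.S3_AUTH_UPLOADS_BUCKET,
                                key=path)
        return redirect(url)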
(imported from commit 33acb2abaa3002325f389d5198fb20ee1b30f5fa)
There seems to be some sort of bug involving PhantomJS and XHR
streaming messages. When successive pages are loaded that use XHR
streaming, PhantomJS seems to think the second one never finishes
loading and therefore hangs.
(imported from commit db93b4cab816f1fdc3f3f543c9394b1cba8abedb)
The Mirror and iPhone tabs were either unused or misleading
for realm-specific pages of the /activity report.
(imported from commit 8d0a99eac6657fbfd9e6a32f22739eed66e03fbf)
This fixes the following two closely related tabs:
Integrations by domain
Integrations by client
They now blacklist clients instead of whitelisting them, so
we can see newcomers like Hubot and Giphy bot. Our naming
convention still leaves a lot to be desired.
(imported from commit 66cbd07160d93e4b745a1439261330d854700a5c)
There were a couple of bugs in the security checks that resulted in
IRC mirroring of stream messages not working.
(imported from commit 31ac732461a733c1c993f77356053d4f88c67177)
Previously we were having the database do the matching on sender email
address, which resulted in an unnecessary join.
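For example (the models and fields shown illustrate the pattern rather than the actual diff):
    from zerver.models import Message, UserProfile

    email = "user@example.com"  # example sender address

    # Before: matching on the email forces a join against zerver_userprofile.
    messages = Message.objects.filter(sender__email__iexact=email)

    # After: resolve the sender once, then filter directly on the foreign key.
    sender = UserProfile.objects.get(email__iexact=email)
    messages = Message.objects.filter(sender=sender)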
(imported from commit 70bf791a00b7d5965ef977e45b4a0eccbd3402a0)
This makes the event queue return all messages on public streams, rather
than only messages on the user's subscribed streams. It's meant for use
with chat bots.
(imported from commit 12d7e9e9586369efa7e7ff9eb060f25360327f71)
By far the common case for get_old_messages is the home view loading
queries, for which we have raw queries. This patch substantially
improves those queries using the observation that we weren't actually
using the zerver_message table that we were joining with.
I actually expect this to result in a noticeable performance
improvement for loading of the homepage.
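A simplified sketch of the improved query shape (not the actual raw SQL in the patch):
    from django.db import connection

    def home_view_rows(user_profile_id, anchor, num_before):
        # The home view only needs ids and flags from zerver_usermessage, so
        # there is no reason to join zerver_message here.
        with connection.cursor() as cursor:
            cursor.execute('''
                SELECT message_id, flags
                  FROM zerver_usermessage
                 WHERE user_profile_id = %s AND message_id <= %s
                 ORDER BY message_id DESC
                 LIMIT %s
            ''', [user_profile_id, anchor, num_before])
            return cursor.fetchall()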
(imported from commit 12807e5a74eb63275b2523a5f62fd901ab632f0f)
These are some queries on API usage, desktop usage, and
Android usage that would be of interest to Waseem. These
will eventually be subsumed into /activity, but some interim
data issues may make them easier to keep separate for now.
(imported from commit 697a8496cbf4447d557a3fc89f64c1c4d3e67e70)
In order to support iOS Push Notifications, we need to keep track
of a device's unique APNS Token. These are delivered to our iOS
code after registering for remote notifications.
(imported from commit bbe34483e1380dc20a1c93e3ffa1fcfdb9087e67)
This requires renaming the account in Google Apps at the time we
deploy this; we'll probably want to do this during off hours to avoid
any user-visible downtime.
This also updates some related email addresses.
(imported from commit fce7629b359a4f278bbf7815c8d177a8fa0484fe)
This may require just doing an mv on the home directory, plus changing
the home directory in /etc/passwd. It should of course be done carefully.
(imported from commit 660997d897ee6d33563af74f0fc5d4267a911755)
We want the UserActivity.query field to reflect the name of the
function for REST calls, not the URL, and we accomplish this by
setting request._query to target_function.__name__.
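Inside rest_dispatch, that amounts to roughly the following sketch; the real dispatch maps HTTP methods to handlers with more care:
    def rest_dispatch(request, **kwargs):
        # Look up the handler registered for this HTTP method, then record
        # its name (rather than the URL) for the UserActivity.query field.
        target_function = kwargs.pop(request.method)
        request._query = target_function.__name__
        return target_function(request, **kwargs)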
(imported from commit 9df05fef0dffb34483b182b95f8cbc4409083eed)
This results in some small behavior changes. First, if a user
has both malformed JSON and an invalid API key, they will now
be informed of the invalid API key, not the malformed JSON,
because the decorator wrapping code executes first. Second,
we call process_client(), which basically builds us a
UserActivity record with the client "API".
(imported from commit fadb523db9bdc82984bdae61833c5c99f1ebd1c0)
Add the number of person-minutes for the last 24 hours to the
realm report on the main tab of /activity.
(imported from commit 2ff46eacc4c8276ab0407fc6ff9f28f5137f1ed2)
This tab shows how long each user has been on during the last 24
hours, using data from UserActivityInterval. Much of the code
is borrowed from analyze_user_activity.py, but in this version
we set the time interval to be the last 24 hours and sort by
realm and email. I also ensure that it only executes one
query to get all the data (and there's test coverage for that).
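The core of it is one interval query plus per-user summing in Python, roughly like this (the clipping and summing details are a sketch):
    from collections import defaultdict
    from datetime import timedelta
    from django.utils import timezone
    from zerver.models import UserActivityInterval

    def active_durations_last_24_hours():
        cutoff = timezone.now() - timedelta(hours=24)
        intervals = UserActivityInterval.objects.filter(
            end__gte=cutoff,
        ).select_related('user_profile').order_by(
            'user_profile__realm__domain', 'user_profile__email',
        )

        totals = defaultdict(timedelta)
        for interval in intervals:
            # Clip each interval to the 24-hour window before summing it.
            start = max(interval.start, cutoff)
            totals[interval.user_profile.email] += interval.end - start
        return totals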
(imported from commit 7a2b80f52679054b03c5f5f42b2cda07d5599432)
Waseem is ok with removing the client-specific tabs on the
main /activity page. This reduces the number of queries from
25 to 1. We might eventually restore some of that logic, but
we will do it more efficiently. A lot of the data for
non-website clients is kind of unreliable, anyway.
The page looks kind of funny with only one tab, but that
will be fixed in the next commit.
(imported from commit 54f08f89d5242ad3e045d8ca0d97b86617c15380)
When we don't already have old messages in cache, we need to
fetch data from the database and create dictionaries for the
cache. This commit makes that process work in 50ms, instead
of 130ms, for the data set in test_bulk_message_fetching(),
which is 602 records. Before this commit we had about 132
microseconds of unnecessary churn per message, because we
were fetching DB fields we didn't need and incurring the cost
of the Django ORM. Now we use values() to get only the columns
we need, and we take advantage of previous commits that make
our code less OO and more function-driven, so we can pass the
values directly to build_message_dict() without having to create
objects.
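In sketch form, the change looks like this; the column list and the way build_message_dict() takes its arguments are assumptions:
    from zerver.models import Message

    def build_message_dict(**columns):
        # Stand-in for the real helper, which turns raw column values into
        # the dictionary we cache.
        return columns

    def fetch_message_dicts(message_ids):
        # Pull only the columns build_message_dict() needs, and skip
        # constructing Message objects entirely.
        rows = Message.objects.filter(id__in=message_ids).values(
            'id', 'sender_id', 'recipient_id', 'subject',
            'content', 'rendered_content', 'pub_date',
        )
        return [build_message_dict(**row) for row in rows]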
A couple caveats on this commit:
1) I haven't been able to get good measurements on the overall
effect on get_old_messages_backend(). If you kill the cache to
force DB queries, you introduce noise related to sessions and
user profiles.
2) Look at the long comment in this commit related to
re-rendering messages in this codepath. The problem precedes
this commit.
(imported from commit dcb64aa9416f0e9583355ddd6dc3adfa746b9fc7)
The realm should always be the realm of the stream, and we should
always pass in a stream rather than sometimes passing in a stream name
and other times passing in a stream.
(imported from commit a098d6ed3db218a37c1b6b7c956e847c316c2d13)
We have been persisting muting preferences on the back end for
a while, but we haven't been adding them to page_params for the
client to have at reload/startup time.
(imported from commit d9ca68aa0e4d22bfb0e6ce67fc0bc63981175c8b)
We have a handy bulk_add_subscriptions function to make cases
like this fast, so let's use it.
On my machine this reduces the number of db queries during account_register
from 112 to 66.
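The shape of the call during registration is roughly this; the import paths and the default-stream lookup are illustrative:
    from zerver.lib.actions import bulk_add_subscriptions
    from zerver.models import DefaultStream

    def add_default_subscriptions(user_profile):
        # Subscribe the newly registered user to all default streams in one
        # bulk operation instead of one round trip per stream.
        streams = [default.stream for default in
                   DefaultStream.objects.filter(realm=user_profile.realm)]
        bulk_add_subscriptions(streams, [user_profile])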
(imported from commit 21a6b31d0f229998d095735b8c581a50ca6aab66)
It shows domains and how many active users they have. A
user is considered active if they have done something at least
as active as updating their pointer in the last day. Domains
with no meaningful activity in the last two weeks are excluded
from the report.
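A sketch of the underlying query; the one-day cutoff matches the description above, but the exact filter on which actions count as active is omitted:
    from datetime import timedelta
    from django.utils import timezone
    from zerver.models import UserActivity

    def active_user_count(domain):
        # Count distinct users in the realm with any recorded activity in
        # the last day.
        cutoff = timezone.now() - timedelta(days=1)
        return UserActivity.objects.filter(
            user_profile__realm__domain=domain,
            last_visit__gte=cutoff,
        ).values('user_profile').distinct().count()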
(imported from commit 700cecfc7f1732e9ac3ea590177da18f75b01303)