Add a function email_allowed_for_realm that checks whether a user with a
given email is allowed to join a given realm (either because the email
has the right domain, or because the realm is open), and use it
whenever deciding whether to allow adding a user to a realm.
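A minimal sketch of the shape such a helper might take (the
restricted_to_domain field name and the exact comparison are assumptions,
not necessarily the real implementation):

    def email_allowed_for_realm(email, realm):
        # Open realms accept any email address.
        if not realm.restricted_to_domain:
            return True
        # Otherwise the email's domain must match the realm's domain,
        # compared case-insensitively.
        domain = email.rsplit("@", 1)[-1].lower()
        return domain == realm.domain.lower()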
This commit is not intended to change any behavior, except in one case
where the Zulip realm's domain was not being converted to lowercase.
The previous implementation didn't work because HomepageForm rejected
the email as not having a domain. Additionally, the logic in
accounts_register didn't work with Google auth because that code path
doesn't pass through accounts_home. Since whether there's a unique
open realm for the server is effectively a configuration property, we
can fix the bug and make the logic clearer by moving it into the
"figure out the user's realm" function.
The browser registers for events via loading the home view, not this
interface, and this functionality is available via the API-format
register route anyway.
This makes it possible to use DevAuthBackend when doing
performance/scalability testing on Zulip with many thousands of users.
It's unlikely that anyone testing this backend will find it valuable
to have more than 100 login buttons on the same page, and if they do,
they can always just change this limit.
Thanks to @dbiollo for the suggestion!
Previously we only did this when new human users were created via the
login process, which meant the management command to create a user did
not add the user to default streams (for example) and any future code
that might want to register a new Zulip user (such as the LDAP
integration) would need to import views/__init__.py in order to
properly set this up.
In b59b5cac35, we upgraded our Google OAuth code to support the new
python-requests, but because Ubuntu Precise still ships the old
python-requests, this broke the codepath for older
systems.
requests 1.0 changed the response.json attribute to a response.json()
instance method. The code wasn't updated to match that change,
causing a TypeError when attempting to use the Google OAuth
Authenticator backend.
This is fixed simply by using response.json() instead of response.json.
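A small compatibility shim along these lines (illustrative only, not the
exact fix) keeps both old and new python-requests working:

    def extract_json(response):
        if callable(response.json):
            return response.json()   # requests >= 1.0: instance method
        return response.json         # requests < 1.0: plain attribute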
These features are in most cases possible to set up directly via our
GitHub services integration UI, and the customers aren't using Zulip
anymore, so this is worth doing to clean up the code.
(imported from commit 1e6f4ec523d85b6233a8e5b4eaa13eacfbe6e5f4)
Include new field on Realm to control whether e-mail invitations are required
separately from whether the e-mail domain must match.
Allow control of these fields from the admin panel.
Update logic in registration page to use these fields.
(imported from commit edc7f0a4c43b57361d9349e258ad4f217b426f88)
Meant to be used in tandem with the manage.py import command.
The following sensitive data is scrubbed:
* user api keys
* user password hashes
* stream email keys
* invite-only streams
* messages from invite-only streams
* messages from users from other domains
(imported from commit 8e58dcdcb80ef1c7127d3ab15accf40c6187633f)
Now we have 2 different Zulip apps out there, and they are signed with
two certs: Zulip and Dropbox. The Dropbox-signed apps are going to need
to be sent APNS notifications from the appropriate APNS connection.
(imported from commit 6db50c5811847db4f08e5c997c7bbb4b46cfc462)
Pages from MP use the description field, not the subject field.
Include both in the page if given and don't fail if the key is missing.
(imported from commit 4351e5656d4ea025a03c07c8bb3bb5d406ef2d3d)
Fix the SSO flow, which was never used on a realm with mirror dummies before.
Also change the redirect to stay on the same domain.
(imported from commit 0f1b8a8fcef82ae6eaa5a264686f98d62a683fac)
This commit should only be pushed to stage after c290b630e has been
pushed to prod; otherwise it will create a redirect loop.
(imported from commit 408407b845ded596705b1abd8ad13c0aedf6d732)
We were trying to default the user's first name when using Google auth,
but it was getting lost when rendering the form.
(imported from commit 710e0c2ce591488920458dca74209c75e7031abd)
This change will redirect armooo@dropbox.com from stage to prod. It also
removes the prod to stage redirect for all users. This will be rolled
out in two commits to prevent a redirect loop.
(imported from commit c290b630e746f757429b8bbdadbe7768367a5e33)
We were serving 401s on /user_uploads when the user wasn't authenticated (due to
it being a REST endpoint). This was causing a login popup to display instead of
just a broken image preview.
(imported from commit 62640f5bd59eb3b86ab5aae5923ccfa742459805)
We were expecting Github to send us the string "true" when the exclude_* options
were set. However, we were actually getting "1" when an option was set and the
empty string when unset. So we were always setting the options to False.
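Roughly the kind of parsing this implies (the helper name is hypothetical):
accept the values GitHub actually sends rather than only the string "true".

    def exclude_option_enabled(payload, key):
        # GitHub sends "1" when the checkbox is set and "" when it is unset.
        return payload.get(key, "") in ("1", "true", "True")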
(imported from commit 067ba60b0b0404aebc6eda9487b1201fc2764243)
Known issues:
* No support for whitelabeling in the email
* No whitelabeling for any externally-visible branding
(imported from commit 9eab7b0744e56a87007b8621a8bb18bbb1080256)
The default today is to not have issues traffic except on a whitelist. This is despite the fact that we have
an exclude_issues boolean on Github's Zulip-integration page, since if we changed the default, all realms
currently using this default would have to go make this change on every repo. That's something that would require
some work, in terms of communicating with them about this and logging integration settings for all realms, to
see which are correctly setting exclude_issues. Unfortunately this probably isn't high priority today, but let's
try to get this whitelist change out to prod ASAP.
(imported from commit 256fe32bb6aaf7de18ff02d8d7e204a13bc02b7a)
Display a red warning box to direct users to staging for
the zulip.com (dropbox) realm.
(imported from commit 01ad4209d9247406bc82f5dedaf21371101a1d84)
URLs with a realm of "unk" will be queried against the new bucket to
determine the relevant realm of the uploading user.
(imported from commit 5d39801951face3cc33c46a61246ba434862a808)
Otherwise the user_profile.backend attribute doesn't get set. I didn't notice
this previously because on first register authenticate() gets called, and then
the UserProfile object gets cached. This means that subsequent logins work just
fine as long as the UserProfile object is in memcached.
(imported from commit 834d95c46aa07724ea84802f09b7249de99b5ca8)
CUSTOMER16 wants their employee realm to:
* only use JWT logins
* have name changes be disabled (they want users' full names to be
their CUSTOMER16 user name).
* not show the suggestion that users download the desktop app
(imported from commit cb5f72c993ddc26132ce50165bb68c3000276de0)
We currently expect the use of HMAC SHA-256, although there shouldn't be
anything preventing us from using other algorithms.
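For reference, a generic HMAC-SHA-256 check looks roughly like the
following; the shared secret and the exact bytes being signed are
deployment-specific assumptions.

    import hashlib
    import hmac

    def signature_valid(shared_secret, payload, provided_signature):
        # shared_secret and payload are bytes; provided_signature is the
        # hex digest supplied by the client.
        expected = hmac.new(shared_secret, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, provided_signature)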
(imported from commit 354510a0b7e9e273d062a1ab5b2b03d4a749d6a3)
We can't just check that the realms are the same because ist.mit.edu is an open
realm and uses @mit.edu email addresses.
(imported from commit 7dbaa81cea6e4f82563dfc0cfe67a61fe9378911)
Adds APIs to edit a bot's default_to_stream, default_events_register_stream
and default_all_public_streams.
(imported from commit c848a94b7932311143dad770c901d6688c936b6d)
Support setting default_to_stream, default_events_register_stream, and
default_all_public_streams in the bot creation API.
(imported from commit bef484dd8be9f8aacd65a959594075aea8bdf271)
Allow bot owners to set which streams their bots will receive events for
without needing to change a configuration file.
(imported from commit 2b69e519dbc12ffbdba072031a7f7196c9e50e33)
This allows bot owners to configure which streams messages are delivered
to without needing to change webhook URLs or configuration files.
(imported from commit 32a0c26657c145b001cd8cb3ce0a0364d48902ce)
Github flags pushes as either `forced` or not. However, it always marks new branches as
forced pushes--but we don't necessarily agree with them. This commit checks for the `created`
flag as well.
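Roughly the check being added, assuming the webhook payload has been
parsed into a dict named payload:

    # Only treat a push as forced when GitHub says so and it isn't just
    # the creation of a new branch.
    forced = payload.get("forced") and not payload.get("created")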
This resolves Trac #2346
(imported from commit 960bd3ad707a4d1ad431e21dcd79389e8d4b297b)
This includes removing GET support for the endpoint, which is unused
and doesn't map well to this being a bulk endpoint.
(imported from commit 348ff9dfa84be1661368c6d7d35aebf2ae2a9ae0)
They have weird properties like not sending anything for unchecked
boxes, which makes it hard to wrap a client-agnostic API around.
(imported from commit fef73a57a55b218b55dab6be3453dd6eac73c789)
If we call exclude_muting_conditions() with a non-stream
narrow, it will now include a condition to exclude streams
that are not in your home view. As of now, this code only
executes during testing, but it sets the stage for doing
better in:home queries on the back end.
(imported from commit bbd764bd0e9588a50e4a82c915e82a2c1b99d73e)
If we are already narrowing to a stream, then we can disregard
muted topics in all the other streams and create a simpler query
for the DB to execute.
(imported from commit 35a074a76eec99922034a381741355da3fdd5b39)
Due to the way we store muted topics, it is possible that a
muted topic stream name may no longer exist, and we need to
handle that case gracefully.
(imported from commit 4d18ec55e45213657a67e160848229678f212765)
Previously, we assumed that num_before or num_after would always be non-zero
after adjustment for the anchor. However, we don't adjust num_before or
num_after when a narrow is specified.
(imported from commit 9239fef140e109b11bdfbeef42e9fbed78660ad1)
All usages of json_to_dict were replaced with the check_dict
validator. The check_dict validations can eventually be
extended to validate the keys and values of incoming data,
but for now we just use check_dict([]) in all the places where
we had json_to_dict, which means we aren't checking for any
specific keys; we are just making sure it's a dictionary.
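A simplified sketch of this validator pattern (not the exact
implementation): with an empty list of required keys it only verifies that
the value is a dictionary, but it can later be given (key, sub_validator)
pairs.

    def check_dict(required_keys):
        def validator(var_name, val):
            if not isinstance(val, dict):
                return '%s is not a dict' % (var_name,)
            for key, sub_validator in required_keys:
                if key not in val:
                    return '%s key is missing from %s' % (key, var_name)
                error = sub_validator('%s["%s"]' % (var_name, key), val[key])
                if error:
                    return error
            return None
        return validator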
(imported from commit fc5add9a7ef149dfac2a9a6d9a153799c4c0c24d)
Behind a feature flag you can now do searches like this:
-pm-with:othello@example.com is:private
The "-" in front of "pm-with" tells us to exclude messages
with Othello from our search. We support "-" in front of
all operators, although the behavior for "-search:" and
"-near:" doesn't really change in this commit.
Note that the filtering out of "negated" predicates only
happens on the client side in this commit. On the server
side we ignore negated predicates and send back a superset
of the results.
(imported from commit 6cdeaf32f2d493fbbb838630f0da3da880b1ca18)
This commit doesn't change any functionality, and it is
designed to make diffs for upcoming changes related to
negated conditions a bit easier to read. This diff
looks a bit noisier than it really is due to some
reindentation of continuation lines.
(imported from commit 64c1cba98faa4bad4eaad122dd3de119caa880c0)
The narrow_parameter converter now converts tuples of
(operator, operand) into dictionaries so that downstream
functions/classes deal in dictionaries. This
affects get_old_messages_backend, messages_in_narrow_backend,
and NarrowBuilder.
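Illustrative conversion, assuming the legacy narrow format is a list of
[operator, operand] pairs:

    def convert_narrow(narrow):
        return [{"operator": operator, "operand": operand}
                for operator, operand in narrow]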
(imported from commit 7e8cb887f7872ec687acd8c4857d1d5222ab0d5f)
This implementation is somewhat hackish in large part because I think
we're going to be wanting to redo the get_old_messages API somewhat
soon, and this may naturally become a lot cleaner as a result, but
this isn't a lot of code and fixes #2235 part (A) and substantially
mitigates #1510.
(imported from commit 47a2160a44befa9d83190c5cc95b90e92cc5b4cc)
This helps our iOS app when authenticating via Google Apps, since we
don't get the users' email address when we get the ID token from Google.
(imported from commit 066639958c1e8f7845505ebdabc37282defca5c5)
Instead of having home() set page_params.realm_name directly from
the user_profile object, have fetch_initial_state_data() set it.
This is more consistent with how we treat other data, and it protects
us against a race condition where realm name updates arrive during
the DB fetching.
(imported from commit 545e3bd73f150438126e3f941e9bebc7aa1d0614)
Apparently (according to our error logs) it's possible for there to be a "position" but no "line" [number]
on a commit comment. According to the docs, line numbers are deprecated, although they're probably
more useful than the diff line number (aka 'position').
(imported from commit d48f9efbe42293c9585442bd521b1843042eca65)
A description field was added to streams, and it is now displayed on the
subscriptions page. It cannot be set in the UI yet.
(imported from commit 81d08b65eee42dba87cd99dd5bd30106c4eb6c6a)
This matches page_params.unsubbed_info, plus it sets up to
add something like page_params.stream_dict without being confusing.
(imported from commit 2d40deb779e5c7a488d6952560b4119094bbc0d8)
Before deploying to staging, create the tutorial bot:
email: welcome-bot@zulip.com
name: Zulip Welcome Bot
(imported from commit 2f337a00ffac888b121975bdb95a89cf2f8ab3a7)
Refactor the github webhook to handle multiple payload versions.
Split github fixtures into v1 and v2 versions.
Group together all realm-specific logic. When v2 becomes available, we can
ask someone in each org to make the changes via the Github Hook configuration, and
slowly remove the special cases.
TODO: when our pull request for github-services gets merged, the integrations page
should say to look for Zulip instead of Humbug
(imported from commit 4790a730010b37186640f9996291afa6e8f96c2b)
Add a test sending new stream notifications to realms with a
notification stream and fix a bug in building the subscribe button
markdown.
(imported from commit 37985d8c0603ae206bef34b9522231c00bc8c572)
When new streams are created we now send a message with a custom
markdown tag that renders a subscribe button.
(imported from commit 9dfba280b3b4ff4f32f6431ef9227867c8bf4b40)
Normally github gives us a past tense version of the action, but
not for "synchronize," so we fix up the tense.
This also adds zerver.GithubHookTests.test_pull_request_synchronize
(imported from commit ef69467ed4a02dbfa94c8215fb9043b668d1dec9)
When folks closed issues or pull requests on github, we were
using the wrong field from the github payload. Now we
correctly use the "sender" as the person doing the action.
(imported from commit 82989ab19b32f8e3f0bbff9b305a7cb2673d99e9)
Added a default_desktop_notifications boolean to userprofile with a UI
in Zulip Labs. This flag is used to default the notification flag on new
subscriptions.
(imported from commit a25223cc5ecf09980cf877991e25034bb3fd4046)
If a user is not allowed to create new streams, then do not
show the "Create new stream" UI at the top of the settings page.
(imported from commit b97626938d8b612317c2189f7eca0d4bd27fc274)
Note that this doesn't actually restrict anybody yet, but it
makes it so that UserProfile.can_create_streams must return True
for a user to create a stream. We can modify that in the future
to have special behavior for realms that want more restrictions.
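A minimal sketch of the new permission hook and its call site (the
exception used here is a stand-in, not necessarily the real one):

    # On UserProfile: always True for now, but a single place to add
    # per-realm policy later.
    def can_create_streams(self):
        return True

    # In the stream-creation code path:
    if not user_profile.can_create_streams():
        raise JsonableError("User cannot create streams.")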
(imported from commit 432e85b1ca86aaee4a6bd1d4a6d0b2c78ecb0863)
Add back end for admins to assign/remove admin permissions for other users.
The /json/users/<email> endpoint allows you to PATCH is_admin.
(imported from commit bb5e6d44d759274cc2a7cb27e479ae96b2f271b5)
Previously we only disabled it for 'is:starred'. This caused a backtrace when
someone searched for, say 'stream:test is:mentioned', because we weren't joining
on zerver_usermessage, which meant the flag wasn't available to filter on.
(imported from commit ba19f8a74b21d60b89dfc8dbe9c8458ed86b423b)
Zendesk works a lot like desk.com: it has triggers, which use targets.
The triggers have a user-defined template. Targets can also have
placeholders that are posted; we add the ticket id and title here so we can
always construct the message subject.
(imported from commit 04e8e5c7c0fc5568201f252546f6ed42f282fd00)
The normal code will now generate SQL that is basically identical to the raw
SQL we were using before.
(imported from commit 84a3971d6137d05ef3f71252278afdd59041e86a)
This commit also includes a few changes in the way we do some queries that
should speed searches up:
* Messages before and after the anchor are fetched in a single query by doing a
UNION ALL on the server. This incurs the cost of an extra sort because UNION
ALL does not guarantee the order that the results are appended, but it saves a
server round-trip.
* Searches involving flags now use a straightforward WHERE clause, which is
much faster than the one that django-bitfield generates (due to limitations of
the Django ORM)
(imported from commit a0db811a9073363cfabcf4b035d02d20dc8fc8a4)
This requires the tsearch_extras Postgres extension. To install the extension,
first install postgresql-9.1-tsearch-extras on both postgres-primary and
postgres-secondary (this would normally be done in a puppet apply, but there are
currently some changes that can't be applied on Postgres machines). Then run
the following as the postgres user on postgres-primary:
$ psql -d zulip -c 'CREATE EXTENSION tsearch_extras SCHEMA zulip;'
In dev environments, you must also run:
$ psql -d zulip_test_template -c 'CREATE EXTENSION tsearch_extras SCHEMA zulip;'
(imported from commit ad0a57c455b3b86002191ac5fb705d8f716f3296)
This is used by the Android app to authenticate without prompting for a
password.
To do so, we implement a custom authentication backend that validates
the ID token provided by Google and then tries to see if we have a
corresponding UserProfile on file for them.
If the attestation is valid but the user is unregistered, we return that
fact by modifying a dictionary passed in as a parameter. We then return
the appropriate error message via the API.
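A rough sketch of the backend's shape; using Google's oauth2client library
for the verification step (and the simplified return_data handling) is an
assumption, not necessarily what this commit does.

    from oauth2client import client, crypt  # library choice is an assumption

    from zerver.models import UserProfile

    class GoogleMobileOauth2Backend(object):
        def authenticate(self, google_oauth2_token=None, return_data=None):
            if return_data is None:
                return_data = {}
            try:
                token_payload = client.verify_id_token(
                    google_oauth2_token,
                    "<ios-client-id>.apps.googleusercontent.com")
            except crypt.AppIdentityError:
                return None  # the attestation itself is invalid
            try:
                return UserProfile.objects.get(
                    email__iexact=token_payload["email"])
            except UserProfile.DoesNotExist:
                # Valid attestation, but no account: report that back so
                # the view can return a useful API error.
                return_data["valid_attestation"] = True
                return None

        def get_user(self, user_profile_id):
            try:
                return UserProfile.objects.get(id=user_profile_id)
            except UserProfile.DoesNotExist:
                return None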
This commit adds a dependency on the "googleapi" module. On Debian-based
systems with the Zulip APT repository:
sudo apt-get install python-googleapi
For OS X and other platforms:
pip install googleapi
(imported from commit dbda4e657e5228f081c39af95f956bd32dd20139)
Previously we unconditionally showed the "get the desktop app"
banner. Now, if the first user declines to invite people as part of
their onboarding workflow, show the invite banner instead.
(imported from commit f7892fef17c923154a700149b8f5be99e9c03fa0)
We currently only do bulk invites when the first user in the realm
goes through the signup process, so this will help us know if that
step is effective for getting more early users into the app.
(imported from commit c846086185ed28b13d3d4b695a9c8cad913d3bc9)
Before this is deployed to prod, we need to manually frob our database
to set the is_mirror_dummy=True bit for all existing mirror users.
(imported from commit 39f1938cef091cf1d7d97307f76b137fe1d92b6c)
NarrowBuilder.by_stream and NarrowBuilder.by_topic for MIT users use a
regex to search by stream and topic. Python's re.escape escapes unicode
in a format that postgres cannot parse. We escape unicode as '\uXXXX'
for postgres.
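Roughly what such an escaping helper looks like (names here are
illustrative):

    import re

    def escape_for_postgres_regex(text):
        escaped = []
        for char in text:
            if ord(char) >= 128:
                # Postgres can parse \uXXXX escapes, unlike the form
                # re.escape produces for non-ASCII characters.
                escaped.append("\\u%04x" % (ord(char),))
            else:
                escaped.append(re.escape(char))
        return "".join(escaped)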
(imported from commit d2c27d4514c31fdc6ef1fea898fe721a6f0ab069)
After deploying to both staging and prod, double check the docs
are correct here. This fixes the API docs on prod, which had
"POST /api/v1/messages", despite "/api" not being part of the
prod path. Prod docs are here:
https://zulip.com/api/endpoints/
(imported from commit a2c4d316128f88171f4a76074314be64d9bc9728)
Features:
* Only shows messages in the narrow
* New messages in the narrow will arrive as they are sent
* Works even for streams you're not subscribed to
* Automatically subscribes you to a stream on send
* Doesn't update your pointer
* All searches etc. automatically have the narrow added
(imported from commit 2e12b76849f6ca0f53dda5985dad477a04f7bbac)
Make sure that principles is a list of strings (unless it
is None). This includes a unit test.
(imported from commit c2e3f1c0cafc207ceca67d5a174ef4e29a32c6ca)
This will allow us to substantially decrease the server-side work that
we do to support our Mirroring systems (since the personal mirrors can
request only messages that user sent) and also is what we need to
support a single-stream Zulip widget that we embed in webpages.
(imported from commit 055f2e9a523920719815181f8fdb44d3384e4a34)
As far as I can tell, we don't actually use this value, but better to
have it be clear.
(imported from commit 3655b87f28b0554ee3db0acb2c0d59543dd093a1)
This replaces the AppleDeviceToken table with a generic
PushDeviceToken with a `kind` field to make it easier to add functionality
like per-device/per-stream settings that share code between Android and
iOS devices.
The schema must continue to work on prod with the old table name, so we
add the new table in parallel and can drop the old table once this code
hits prod and any necessary data is copied.
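The new table looks roughly like this (field names other than `kind` are
assumptions):

    from django.db import models

    class PushDeviceToken(models.Model):
        APNS = 1
        GCM = 2
        KIND_CHOICES = ((APNS, 'apns'), (GCM, 'gcm'))

        kind = models.PositiveSmallIntegerField(choices=KIND_CHOICES)
        token = models.CharField(max_length=4096, unique=True)
        user = models.ForeignKey("UserProfile", on_delete=models.CASCADE)
        last_updated = models.DateTimeField(auto_now=True)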
(imported from commit 0209a7013f2850ac6311f23c3d6f92c65ffd19e3)
This prevents us from failing if the first or last name is unset.
We fall back to None, which will allow the user to set their name even
if real names are restricted, which is probably better than forcing them
to have no name.
Closes trac #2118.
(imported from commit 1ff8a55022f3a3baf67575b593a679e21c0f3194)
And in the meanwhile, comment what's going on so that we don't break
this in refactoring again later.
(imported from commit a3119cd1eab3d54cb1883f2c8cad0d147cb04ba7)
Currently all of our realms we intend to create are created manually,
and regardless do_create_realm is the correct way to create a realm.
(imported from commit 42280aff461aa17ffee22ab1c7b7f43757648eec)
I'd also like to add a database table to actually store the values
that we get out of this and our send message requests for future
inspection, but for now, grepping logs+statsd is good enough.
(imported from commit 99ef179651850217fe6e82c5e928d122ca91101e)
Now that we support email aliases, we have to be careful when going from
an email address to a domain that we assume we can use to get a Realm
object for. When we care about the Realm's domain, we want to follow
any RealmAliases that exist for a certain domain.
When we just care about the original email address domain itself,
for comparison or other purposes, use split_email_from_domain.
This removes the ambiguity of having to decide when to use
email_to_domain + RealmAlias or just email_to_domain.
(imported from commit 0e199495502d946ce2e1aae56263e7e8665be4ed)
Until we can add a banner to help users subscribe, it may be confusing to
narrow to a stream where you are not subscribed.
Partial revert of 390bdef
(imported from commit ea75fc59b979589b975465a3fecffea0f014fcf6)
It's a little weird that these still open in a new tab, but it might
be best to keep them consistent with all other links?
This is a first pass on Trac #1927.
(imported from commit 390bdef790a83af4240ad5f5a82e572ef5824756)
If authoritative data is available from say the LDAP database, we now
ignore the POSTed user name, and don't offer it as a form field.
We fall back to giving the user a text field if they aren't in LDAP.
If users do not have any form fields to fill out, we simply bring them
to the app without the registration page, logging them in using a dummy
backend.
(imported from commit 6bee87430ba46ff753ea3408251e8a80c45c713f)
The latter doesn't depend on the former; we can still fill in your full
name even if you didn't authenticate via LDAP.
This commit requires django_auth_ldap to be installed. On Debian
systems, you can do so via APT:
sudo apt-get install python-django-auth-ldap
On OS X, use your favourite package manager. For pip, I believe this
will work:
pip install django_auth_ldap
django_auth_ldap depends on the "ldap" Python package, which should be
installed automatically on your system.
(imported from commit 43967754285990b06b5a920abe95b8bce44e2053)
This should address user reports of huge bankruptcy counts even when
they are relatively caught up. The root issue is that we sometimes
don't mark messages as read for some reason.
(imported from commit 8799305a8665f9ee239575e6e95f603f89c1d427)
UserProfile.show_admin was intended to be a check for users that have
administrative rights in other realms, which we've harmlessly but
erroneously been using to check if they are an admin in their realm.
Use the more straightforward check instead, with a more intuitive
name.
(imported from commit d81050c7dbbb19e59c5e31750be303a4630e1456)
This fixes a problem where the desktop app would attempt to load
https://zulip.akam.ai:8888/ after authenticating the user, which fails
with CSS issues.
We should probably, separately, change our Django-under-apache to only
serve the one URL that it needs and redirect the rest back to
Django-under-nginx.
(imported from commit 3e3251863618269790f61b371e88af57b6cfb272)
Errors are sent to a queue processor that posts them to staging,
just like the feedback bot.
(imported from commit 4a8d099672a1b3e48a8bc94148d8b53db73d2c64)
The /avatar/<email> URL redirects to the appropriate
avatar URL for an email, whether it's hosted by Gravatar
or Zulip. (This will work even for external users, as
it falls through to Gravatar.)
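A minimal sketch of such a redirect view (get_uploaded_avatar_url is a
hypothetical helper):

    import hashlib

    from django.http import HttpResponseRedirect

    def avatar(request, email):
        url = get_uploaded_avatar_url(email)  # None if no uploaded avatar
        if url is None:
            # Fall through to Gravatar, which also covers external users.
            digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
            url = "https://secure.gravatar.com/avatar/%s?d=identicon" % (digest,)
        return HttpResponseRedirect(url)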
(imported from commit 7e6f226659cb2e5a7f6426da0be8aa9bae9cff14)
The Freshdesk API is bonkers, but we do the best we can with it to
support notifications on ticket creation and ticket updates.
(imported from commit 2023622b274ef83f4e1544d0df286fe2e68581b3)
This is the amount of time between when it is sent, and when it is
rendered into the user's home view.
(imported from commit 468c28e77ba16c7256c359e90ab5aacf9d497585)
The main Activity page counts users as active if they have either
sent a message or updated a pointer. In the unlikely event that
somebody sent a message but never updated their pointer, we were
undercounting them, if they went through send_messages_backend.
(imported from commit 5f112be87a239980c38a18c13f9cd68e90d2e905)
This should help with determining the prevalence of slow sends as
experienced by users.
(imported from commit f00797679315c928af3c87ad8fdf0112f1dfa900)
The "desktop" counts aggregate all desktop clients, but on the
Clients tab, we are only interested in specific versions.
(imported from commit eea2d8da584a6fa32fa1f3a2bae71ef5daaba738)
This report will eventually replace the per-realm report that is
now accessible through /activity. In order not to disrupt Waseem,
I'm leaving the old reports around until we've polished the new
ones.
The old report does 24 different queries to get per-realm user data.
The new approach gets all the data at once, and it slices and dices
the data in Python to accommodate our slightly quirky data model.
On localhost, this is a typical query:
LOG: duration: 5.668 ms statement: SELECT "zerver_useractivity"."id", "zerver_useractivity"."user_profile_id", "zerver_useractivity"."client_id", "zerver_useractivity"."query", "zerver_useractivity"."count", "zerver_useractivity"."last_visit", "zerver_userprofile"."id", "zerver_userprofile"."email", "zerver_client"."id", "zerver_client"."name" FROM "zerver_useractivity" INNER JOIN "zerver_userprofile" ON ("zerver_useractivity"."user_profile_id" = "zerver_userprofile"."id") INNER JOIN "zerver_realm" ON ("zerver_userprofile"."realm_id" = "zerver_realm"."id") INNER JOIN "zerver_client" ON ("zerver_useractivity"."client_id" = "zerver_client"."id") WHERE "zerver_realm"."domain" = 'zulip.com' ORDER BY "zerver_userprofile"."email" ASC, "zerver_useractivity"."last_visit" DESC
(imported from commit 0c71f4e32fe5a40f4496749dc29ad3463868d55e)
This page shows aggregate activity for a user on various
clients. This allows Waseem to troubleshoot things like users
switching between website and desktop, etc.
This particular page probably won't be used too much, but some of the
logic is gonna be reused in the per-realm activity pages.
(imported from commit b8c1fad5bfa45daab40954f92319f6f89a3fa433)
Looking at the historical data, fewer than 50% of active users have
completed the checklist, which means that it is just persistent
clutter. We also have other better ways of encouraging people to send
traffic and get the apps now.
This commit removes both the frontend UI and backend work but leaves
the db row for now for the historical data.
(imported from commit e8f5780be37bbc75f794fb118e4dd41d8811f2bf)
We were using Gravatar for user avatars, but now users can
upload their avatars directly to Zulip, and we will store
their avatar for them. This removes the old Gravatar-related
interface and polling code.
This commit does not attempt to update the avatars in
messages that have already been loaded, either for the user
making the change or other users.
(imported from commit 301dc48f96f83de0136c93de57055638c79e0961)
The "Your Account" and "Notifications" boxes on the Settings
page each had their own border and their own "Save changes"
button, but they were within the same form and sending to the
same back end point.
This commit creates a separate form and endpoint for each
of the two boxes.
(imported from commit 04d4d16938f20749a18d2c6887da3ed3cf21ef74)
This commit must be simultaneously deployed on both staging and
prod0. It also requires completely taking down the app.
To deploy these changes, do:
* check out this commit at /root/zulip on postgres0, postgres1, staging, and prod0
* stop the process_fts_updates job on postgres0 and postgres1
* stop the app on staging and prod0
* do a puppet apply on postgres0, postgres1, staging, and prod0
* move the new client certificates into place on staging and app
* move the new server certificates into place on postgres0 and postgres1
* reload the database config on postgres0 and postgres1 (this might
actually require a restart)
* run tools/migrate-db on postgres0 as root
* do a deploy through this commit on staging and prod0
* start the process_fts_updates job on postgres0 and postgres1
* do a puppet apply on nagios
(imported from commit 819bdd14326c1425e2d3041a491a8ca3b9716506)
Use rest_dispatch for upload auth redirect so it doesn't send the
long URL to user_activity.
(imported from commit ab327bbd529412e43eee6d109f8550180544dbbb)
Trac #1734
This is implemented by bouncing uploaded file links through a view
that checks authentication and redirects to an expiring S3 URL.
This makes file uploads return a domain-relative URI. The client converts
this to an absolute URI when it's in the composebox, then back to relative
when it's submitted to the server.
We need the relative URI because the same message may be viewed across
{staging,www,zephyr}.zulip.com, which have different cookies.
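A rough sketch of the bounce-through view (settings names and the
authentication decorator are assumptions); boto generates the short-lived
S3 URL:

    from boto.s3.connection import S3Connection
    from django.conf import settings
    from django.http import HttpResponseRedirect

    def serve_s3_upload(request, path):
        # Authentication is assumed to have been checked by a decorator
        # before we get here.
        conn = S3Connection(settings.S3_KEY, settings.S3_SECRET_KEY)
        url = conn.generate_url(expires_in=60, method="GET",
                                bucket=settings.S3_AUTH_UPLOADS_BUCKET,
                                key=path)
        return HttpResponseRedirect(url)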
(imported from commit 33acb2abaa3002325f389d5198fb20ee1b30f5fa)
There seems to be some sort of bug involving PhantomJS and XHR
streaming messages. When successive pages are loaded that use XHR
streaming, PhantomJS seems to think the second one never finishes
loading and therefore hangs.
(imported from commit db93b4cab816f1fdc3f3f543c9394b1cba8abedb)
The Mirror and iPhone tabs were either unused or misleading
for realm-specific pages of the /activity report.
(imported from commit 8d0a99eac6657fbfd9e6a32f22739eed66e03fbf)
This fixes the following two closely related tabs:
Integrations by domain
Integrations by client
They now blacklist clients instead of whitelisting them, so
we can see newcomers like Hubot and Giphy bot. Our naming
convention still leaves a lot to be desired.
(imported from commit 66cbd07160d93e4b745a1439261330d854700a5c)
There were a couple of bugs in the security checks that resulted in
IRC mirroring of stream messages not working.
(imported from commit 31ac732461a733c1c993f77356053d4f88c67177)
Previously we were having the database do the matching on sender email
address, which resulted in an unnecessary join.
(imported from commit 70bf791a00b7d5965ef977e45b4a0eccbd3402a0)
It makes the event queue return all messages on public streams, rather
than only the user's subscriptions. It's meant for use with chat bots.
(imported from commit 12d7e9e9586369efa7e7ff9eb060f25360327f71)
By far the common case for get_old_messages is the home view loading
queries, for which we have raw queries. This patch substantially
improves those queries using the observation that we weren't actually
using the zerver_message table that we were joining with.
I actually expect this to result in a noticeable performance
improvement for loading of the homepage.
(imported from commit 12807e5a74eb63275b2523a5f62fd901ab632f0f)
These are some queries on API usage, desktop usage, and
Android usage that would be of interest to Waseem. These
will eventually be subsumed into /activity, but some interim
data issues may make them easier to keep separate for now.
(imported from commit 697a8496cbf4447d557a3fc89f64c1c4d3e67e70)
In order to support iOS Push Notifications, we need to keep track
of a device's unique APNS Token. These are delivered to our iOS
code after registering for remote notifications
(imported from commit bbe34483e1380dc20a1c93e3ffa1fcfdb9087e67)
This requires renaming the account in Google Apps at the time we
deploy this; we'll probably want to do this during off hours to avoid
any user-visible downtime.
This also updates some related email addresses.
(imported from commit fce7629b359a4f278bbf7815c8d177a8fa0484fe)
This may require just doing an mv on the home directory, plus changing
the home directory in /etc/passwd. It should of course be done carefully.
(imported from commit 660997d897ee6d33563af74f0fc5d4267a911755)
We want the UserActivity.query field to reflect the name of the
function for REST calls, not the URL, and we accomplish this by
setting request._query to target_function.__name__.
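Simplified sketch of the relevant part of rest_dispatch (the surrounding
signature is an approximation):

    def rest_dispatch(request, supported_methods, **kwargs):
        target_function = supported_methods[request.method]
        # Record the handler's name so UserActivity logs it rather than
        # the (potentially long) request URL.
        request._query = target_function.__name__
        return target_function(request, **kwargs)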
(imported from commit 9df05fef0dffb34483b182b95f8cbc4409083eed)
This results in some small behavior changes. First, if a user
has both malformed JSON and an invalid API key, they will now
be informed of the invalid API key, not the malformed JSON,
because the decorator wrapping code executes first. Second,
we call process_client(), which basically builds us a
UserActivity record with the client "API".
(imported from commit fadb523db9bdc82984bdae61833c5c99f1ebd1c0)
Add the number of person-minutes for the last 24 hours to the
realm report on the main tab of /activity.
(imported from commit 2ff46eacc4c8276ab0407fc6ff9f28f5137f1ed2)
This tab shows how long each user has been on during the last 24
hours, using data from UserActivityInterval. Much of the code
is borrowed from analyze_user_activity.py, but in this version
we set the time interval to be the last 24 hours and sort by
realm and email. I also ensure that it only executes one
query to get all the data (and there's test coverage for that).
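The per-user duration is computed by clipping each UserActivityInterval to
the last 24 hours, roughly like this (the start/end field names are
assumptions):

    from datetime import timedelta

    from django.utils import timezone

    def minutes_active_in_last_day(intervals):
        now = timezone.now()
        cutoff = now - timedelta(hours=24)
        total = timedelta(0)
        for interval in intervals:
            start = max(interval.start, cutoff)
            end = min(interval.end, now)
            if end > start:
                total += end - start
        return total.total_seconds() / 60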
(imported from commit 7a2b80f52679054b03c5f5f42b2cda07d5599432)
Waseem is ok with removing the client-specific tabs on the
main /activity page. This reduces the number of queries from
25 to 1. We might eventually restore some of that logic, but
we will do it more efficiently. A lot of the data for
non-website clients is kind of unreliable, anyway.
The page looks kind of funny with only one tab, but that
will be fixed in the next commit.
(imported from commit 54f08f89d5242ad3e045d8ca0d97b86617c15380)
When we don't already have old messages in cache, we need to
fetch data from the database and create dictionaries for the
cache. This commit makes that process work in 50ms, instead
of 130ms, for the data set in test_bulk_message_fetching(),
which is 602 records. Before this commit we had about 132
microseconds of unnecessary churn per message, because we
were fetching DB fields we didn't need and incurring the cost
of the Django ORM. Now we use values() to get only the columns
we need, and we take advantage of previous commits that make
our code less OO and more function-driven, so we can pass the
values directly to build_message_dict() without having to create
objects.
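Illustratively, the cache-miss path now looks something like this (field
and helper names are approximate):

    def fetch_message_dicts(message_ids):
        rows = Message.objects.filter(id__in=message_ids).values(
            'id', 'sender_id', 'recipient_id', 'subject', 'content',
            'rendered_content', 'pub_date', 'sending_client_id')
        # Each row is a plain dict, so we skip Django model instantiation
        # and feed the values straight into build_message_dict().
        return [build_message_dict(**row) for row in rows]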
A couple caveats on this commit:
1) I haven't been able to get good measurements on the overall
effect on get_old_messages_backend(). If you kill the cache to
force DB queries, you introduce noise related to sessions and
user profiles.
2) Look at the long comment in this commit related to
re-rendering messages in this codepath. The problem precedes
this commit.
(imported from commit dcb64aa9416f0e9583355ddd6dc3adfa746b9fc7)
The realm should always be the realm of the stream, and we should
always pass in a stream rather than sometimes passing in a stream name
and other times passing in a stream.
(imported from commit a098d6ed3db218a37c1b6b7c956e847c316c2d13)
We have been persisting muting preferences on the back end for
a while, but we haven't been adding them to page_params for the
client to have at reload/startup time.
(imported from commit d9ca68aa0e4d22bfb0e6ce67fc0bc63981175c8b)
We have a handy bulk_add_subscriptions function to make cases
like this fast, so let's use it.
On my machine this reduces the number of db queries during account_register
from 112 to 66.
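Instead of subscribing the new user to each default stream one at a time,
registration now does a single bulk call, roughly (the exact signature is
an assumption):

    streams = get_default_subs(user_profile)
    bulk_add_subscriptions(streams, [user_profile])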
(imported from commit 21a6b31d0f229998d095735b8c581a50ca6aab66)
It shows domains and how many active users they have. A
user is considered active if they have done something at least
as active as updating their pointer in the last day. Domains
with no meaningful activity in the last two weeks are excluded
from the report.
(imported from commit 700cecfc7f1732e9ac3ea590177da18f75b01303)
A small functional change here was to eliminate an enormous "Usage"
headline that was already implicit from the tabs. It would have
complicated the refactoring to try to preserve it, and I don't think
anyone will miss it.
Extracting this template will give us a little more flexibility
to customize future tabs in the /activity page.
(imported from commit bdb0b7030c8ec1e20d4451dc059830c3f5ea7632)
We are still showing the same data points, but the logic to drill
down on details for a particular realm is now all server side,
not client side, and we are smarter about omitting fields. In
summary mode, we don't show empty Name or Email columns. In
detailed mode, we show the realm as a headline instead of a column.
In this version you do lose the ability to see all system users in
the same view, but Waseem is ok with this.
(imported from commit edd2e646ab4cf5783ea64232d0cd621debece8d4)
When you load the activity report, it will just show summary
counts for realms, but if you click on a realm, you will see
details about users in the realms. You can also click "Show all"
to see an interleaved view of realms and users.
(imported from commit b106557b1fae64d525071afc124b5a8aed319086)
Add rows to the activity report that roll up counts for all
users on each realm, to go along with individual users.
(imported from commit 8104f3ef7fbe406fe0fd2ba1bb00ce76a1ccbee5)
This includes:
* Merging the pull request and issue subject creation functions
* Factoring out pull request and issue content creation
(imported from commit 9cf90e999482a1998431e6483788522101607167)