We treat these exceptions the same way we treat fatal errors: report
the error message to our server and then allow the exception to reach
the top level.
We could also override window.onerror, but don't. There are a
couple of ramifications of this:
* Exceptions caused by event handlers directly attached to DOM
elements aren't handled
* Exceptions caused by code at the top level that triggers an error
(such as parse errors in our JavaScript files) aren't handled
The reason we don't override window.onerror is that the onerror
handler has a limited interface and doesn't receive the exception
object; it only gets the message, file, and line number of the error.
Additionally, exceptions that we allow to propagate out of blueslip
trigger an onerror event if they're never caught. To avoid handling
the error twice (once by blueslip and once by the onerror handler),
we'd have to encode the fact that the error has already been handled
in the error message, which is pretty ugly.
(imported from commit 7f049ae519dc198a9f7cfd41fd5dd18e584bd061)
The new system, called blueslip, makes errors fatal in debug mode
and only outputs a message when running in production. In the
future, it could also send user errors back to us automatically.
(imported from commit 1232607c0311e885c8b5a5e8a45ffb28822426e0)
This should substantially improve the repeat-rendering time for pages
with large numbers of tweets, since we no longer need a round trip to
twitter.com, which can take about a second, to render tweets properly.
To deploy this commit properly, one needs to run
./manage.py createcachetable third_party_api_results
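For reference, a rough sketch of the cache configuration this relies
on (the cache alias below is illustrative; only LOCATION has to match
the table name passed to createcachetable):
    CACHES = {
        # ... other cache backends ...
        'database': {
            'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
            # Must match the table created by createcachetable above.
            'LOCATION': 'third_party_api_results',
        },
    }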
(imported from commit 01b528e61f9dde2ee718bdec0490088907b6017e)
This code adds a dependency on python-django-auth-openid, installable as
django-openid-auth from PyPI.
On prod, one needs to run a syncdb in order to create the required
tables. A database *migration* is not required, as these are new tables
only.
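Roughly, the settings wiring this implies looks like the following
sketch (consult the django-openid-auth documentation for the exact
names):
    INSTALLED_APPS += ('django_openid_auth',)
    AUTHENTICATION_BACKENDS = (
        # Try OpenID logins first, then fall back to password logins.
        'django_openid_auth.auth.OpenIDBackend',
        'django.contrib.auth.backends.ModelBackend',
    )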
(imported from commit c902a0df8d589d93743b27e480154a04402b2c41)
This commit just moves time rendering logic to its own file, and does
not make any functionality changes.
(imported from commit d111d03c6abc8d9550fcf65e4f89eab8056d1ed4)
Manual deployment steps: The same Nginx reload as for "Get rid of the
static-access-control mechanism". If deploying both commits at once,
just do it once.
(imported from commit dd8dbbf14b95fce0a4b6f66f462fa0a6b50bfb8c)
Django doesn't use this setting, but South consults it when
inspecting tables for their constraints. The fact that we store our
tables in the 'humbug' schema was causing South to fail to find our
table constraints (it was looking in the 'public' schema) and
therefore throw an exception when we try to remove the unique
constraint in migration 0002.
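For context, a sketch of the relevant setting, assuming this refers
to the per-database SCHEMA key that South (but not Django) reads:
    DATABASES = {'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'humbug',
        # Django ignores SCHEMA, but South consults it when inspecting
        # tables, so it must name the schema our tables actually live in.
        'SCHEMA': 'humbug',
    }}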
(imported from commit 4230338a7b78329a759339b2f9fcd277137b7f32)
This was to support get_updates sharding, which we never fully
implemented. We can recommit this change later if we choose to bring
the feature back.
This reverts commit fda2d99d9e9a07951d11fcd9fc61cf229988f471.
(imported from commit aec8203c8d8a94dd6f30089aeee22814d1595fc5)
As a side-effect of customizing the e-mail, this also makes the host
on which the error happened a part of the subject line.
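A minimal sketch of the idea (the handler name and wiring here are
illustrative, not the actual implementation):
    import logging
    import socket
    from django.core.mail import mail_admins

    class HostAwareErrorHandler(logging.Handler):
        """Mail errors to ADMINS, with the reporting host in the subject."""
        def emit(self, record):
            # E-mail subjects must be a single line.
            summary = record.getMessage().split('\n')[0][:100]
            subject = '%s: %s' % (socket.gethostname(), summary)
            mail_admins(subject, self.format(record), fail_silently=True)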
(imported from commit 7d5e9ad108b48fd34528512c5955567119935d4e)
We need this so that we can safely expunge old events without interfering with
the running server. See #414.
(imported from commit 4739e59e36ea69f877c158c13ee752bf6a2dacfe)
Before this is deployed, we need to install rabbitmq and pika on the
target server (see the puppet part of this commit for how).
When this is deployed, we need to start the new user activity bot:
./manage.py process_user_activity
in the screen session on the relevant server, or user_activity logs
won't be processed (which will eventually result in all users getting
notifications about how their mirrors are out of date).
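For reference, the general shape of such a command (a sketch only,
using the modern pika API and a hypothetical process_event helper):
    # e.g. zephyr/management/commands/process_user_activity.py
    import pika
    from django.core.management.base import BaseCommand

    class Command(BaseCommand):
        help = "Consume user_activity events from RabbitMQ"

        def handle(self, *args, **options):
            conn = pika.BlockingConnection(
                pika.ConnectionParameters('localhost'))
            channel = conn.channel()
            channel.queue_declare(queue='user_activity', durable=True)

            def callback(ch, method, properties, body):
                process_event(body)  # hypothetical: record the activity
                ch.basic_ack(delivery_tag=method.delivery_tag)

            channel.basic_consume(queue='user_activity',
                                  on_message_callback=callback)
            channel.start_consuming()  # blocks; run in the screen session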
(imported from commit 44d605aca0290bef2c94fb99267e15e26b21673b)
This commit has the effect of eliminating all of the non-UserActivity
database queries from the Tornado process -- at least in the uncached
case.
This is safe to do, if a bit fragile, since our Tornado code only
accesses these objects (as opposed to their IDs) in a few places that
are all fine with old data, and I don't expect us to add any new ones
soon:
* UserActivity logging, which I plan to move out of Tornado entirely
* Checking whether we're authenticated in our decorators (which could
be simplified -- the actual security check is just whether the
Django session object has a particular field)
* Checking the user realm for whether we should sync to the client
notices about their Zephyr mirror being up to date, which is quite
static and I think we can move out of this code path.
But implementation constraints around mapping the user_ids to
user_profile_ids mean that it makes sense to get the actual objects
for now.
This code is not what I want to do long-term. I expect we'll be able
to clean up the dual User/UserProfile nonsense once we integrate the
upcoming Django 1.5 release, with its support for pluggable User
models, and after that change, I expect it'll be fairly easy to make
the Tornado code only work with the user ID, not the actual objects.
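One way to realize this (purely illustrative; it assumes a
UserProfile model with a foreign key to Django's User) is to memoize
the lookup so Tornado reuses a possibly-stale object rather than
re-querying:
    from zephyr.models import UserProfile  # hypothetical import path

    user_profile_cache = {}

    def get_user_profile_cached(user_id):
        if user_id not in user_profile_cache:
            # The only remaining per-user query, taken on a cache miss.
            user_profile_cache[user_id] = (
                UserProfile.objects.select_related().get(user__id=user_id))
        return user_profile_cache[user_id]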
(imported from commit 82e25b62fd0e3af7c86040600c63a4deec7bec06)
This was done using instructions provided by the South authors:
<http://south.readthedocs.org/en/0.7.6/convertinganapp.html>
This adds a dependency on python-django-south >=0.7.5. Now when you are
reinitializing the database, you need to run "./manage.py migrate --all"
before running populate_db.
When deploying this commit onto existing servers, you need to run these
commands manually:
./manage.py syncdb
./manage.py migrate zephyr 0001 --fake
./manage.py migrate confirmation 0001 --fake
These do *not* need to be run on new databases, only on existing ones.
(imported from commit f24cff421a6be9ab9cf4c4342565c484ac336e2d)
This fixes a problem where if you 1) were running in development
mode, 2) had populated the database from production data, and 3)
tried to log in with an account that had changed its password, you
wouldn't be able to. The problem was that the password change had
stored a PBKDF2 hash, not a SHA1 hash.
This change lets the dev server accept PBKDF2 hashed passwords, but
still use SHA1 password hashes for creating test users for speed.
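Concretely, the development settings end up looking something like
this sketch:
    # Dev only: new test users get fast SHA1 hashes, while PBKDF2
    # hashes imported from production data are still accepted at
    # login, since any hasher listed here can verify a stored hash.
    PASSWORD_HASHERS = (
        'django.contrib.auth.hashers.SHA1PasswordHasher',
        'django.contrib.auth.hashers.PBKDF2PasswordHasher',
    )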
(imported from commit 2840d266f93add1edbba7f93a7f1491372fc8cf1)
If we have other pages that require login, we might want them to redirect to
the login form. But the root of the site should take you to /accounts/home --
though only after we launch the product.
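A sketch of the post-launch URL wiring this describes (illustrative
only):
    from django.conf.urls import patterns, url
    from django.views.generic import RedirectView

    urlpatterns = patterns('',
        # Post-launch: the site root sends visitors to /accounts/home
        # rather than to the login form.
        url(r'^$', RedirectView.as_view(url='/accounts/home')),
    )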
(imported from commit b5d10e1c908f1ffe1ee68c2689691ca66c896786)
The client may now optionally send its current pointer during
get_updates and the server will return the latest pointer if it
differs and was updated more recently by a different session.
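In pseudocode, the server-side check amounts to something like the
following (all of the names here are hypothetical):
    def pointer_for_response(user_profile, client_pointer, session_id):
        """Return a newer pointer to push to this client, or None."""
        if client_pointer is None:
            return None  # the client chose not to send its pointer
        if (user_profile.pointer != client_pointer
                and user_profile.pointer_updated_by != session_id):
            # The pointer moved, and some other session moved it last.
            return user_profile.pointer
        return None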
(imported from commit e43b377d7dfb52f83cefb0b1003863d5407caf80)
Without this change, one can only create a few users per second(!),
which really puts a damper on quickly importing old messages.
(imported from commit 26daf61b57154daa067db3daf8254c12d23da353)
We add a few templates for django-confirmation. We define a
"PreregistrationForm" which is validated by accounts_home, which then
generates a confirmation object and emails the user. This required creating
a new table for a PreregistrationUser with an email and status (confirmed)
field.
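The rough shape of the new pieces, per the description above (a
sketch, not the actual code):
    from django import forms
    from django.db import models

    class PreregistrationUser(models.Model):
        email = models.EmailField()
        # Whether the address has been confirmed (encoding is illustrative).
        status = models.IntegerField(default=0)

    class PreregistrationForm(forms.Form):
        email = forms.EmailField()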
The register function no longer accepts an "email" field in the form
and deals only with confirmation IDs to determine the email used to sign
up a user.
(imported from commit 4fcde04530aa7ad4de84579668daee7290b424ac)
Even though SQLite is the default, Django tries to import MySQLdb,
which on OS X is challenging to install.
(imported from commit 0947c86e5e9a1fbf2ff8d74b78f297ff939ff712)