Instead, we now rely on the request._client value, which we were
previously passing along to s_m_b in all but one case.
For that one case, we just modify the Request object to include the value
beforehand.
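As a self-contained sketch of the idea (the class and helper names here
are made up, not the code paths this commit actually touches):

    # Hypothetical sketch: the shared helper reads the client from
    # request._client instead of taking it as a parameter, and the one
    # caller that didn't already set it just assigns it beforehand.
    class FakeRequest(object):
        pass

    def shared_message_helper(request):
        client = request._client  # previously passed in explicitly
        return "sent via %s" % (client,)

    request = FakeRequest()
    request._client = "website"  # the one special case: set it up front
    print(shared_message_helper(request))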
(imported from commit 542f38f94bc447149cd4d2efaa5e8f48f756725b)
Addresses a complaint brought up in our usability study.
We now hook into the "show" event on .subscription_settings elements and
do some obnoxious math to move the scrollbar the way we want.
Closes trac #1015.
(imported from commit 5d9cee1ffc242eb7b743fdccd2bd76bf0a7ba060)
This can result in a significant performance benefit because we only
need to update the columns that changed.
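A minimal sketch of what that narrower write looks like, assuming the
mechanism is Django's save(update_fields=...) (available as of Django
1.5); the model and field names are illustrative:

    # Illustrative only: issue an UPDATE touching just the changed
    # column, rather than rewriting every column on the row.
    user_profile.pointer = new_pointer
    user_profile.save(update_fields=["pointer"])
    # versus user_profile.save(), which updates all columns.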
(imported from commit 42bef1fcc58ad79bd864f89263fe82e90743ee5b)
The policy this implements is:
* 1 week for most persistent data (Clients, etc.)
* 1 day for messages
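Expressed as cache timeouts in seconds, the policy above looks roughly
like this (the constant names are placeholders, not the actual
settings):

    # Illustrative constants only.
    PERSISTENT_DATA_TIMEOUT = 7 * 24 * 60 * 60  # 1 week for Clients, etc.
    MESSAGE_TIMEOUT = 24 * 60 * 60              # 1 day for messages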
(imported from commit d57bb2c6b9626ffa2155c6d0ef9b60827d1f2381)
This saves 2 database queries per user in the huddle when sending the
first message to a particular huddle.
(imported from commit f71aa32df846fb4b82651a93ff9608087ffcaa5a)
Apparently, something in Django 1.5's changes to their default logging
setup prevented 500 errors (logged in
django.core.handlers.base.handle_uncaught_exception) from reaching the
root logger -- they stopped propagating at the 'django' logger. We
deal with this by making our logging system handle those events in the
'django' logger ourselves (and making the related changes needed to
ensure that we still log to server.log and the console everything
logged by our own humbug.requests logger and anything that falls
through to the root logger).
This requires updating the mechanism we use in test_settings.py to
silence our request logging, since now the 'humbug.requests' logger is
being re-initialized by the Django logging setup, which runs after
test_settings.py.
While we're at it, set propagate=False in the commented-out
'django.db' logging configuration (previously, queries would be logged
twice).
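A rough settings.py sketch of the resulting shape (handler names and
levels are illustrative, not the exact configuration):

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'handlers': {
            'console': {'class': 'logging.StreamHandler'},
            'file': {'class': 'logging.handlers.WatchedFileHandler',
                     'filename': 'server.log'},
        },
        'loggers': {
            # Handle Django 1.5's uncaught-exception (500) records here,
            # since they no longer propagate past the 'django' logger.
            'django': {'handlers': ['console', 'file'],
                       'level': 'INFO', 'propagate': False},
            # Our request logger still reaches server.log and the console.
            'humbug.requests': {'handlers': ['console', 'file'],
                                'level': 'INFO', 'propagate': False},
            # The commented-out query logging would set propagate=False
            # so queries aren't logged twice:
            # 'django.db': {'handlers': ['console'], 'level': 'DEBUG',
            #               'propagate': False},
        },
        # Everything else falls through to the root logger's handlers.
        'root': {'handlers': ['console', 'file'], 'level': 'INFO'},
    }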
(imported from commit 32af29084e52be1ba6f92a7952c3a3946925b46b)
This is in addition to only successfully reporting a given error once
per session. Previously, if an error was triggered many times before
the ajax call to report the error returned, we'd end up making many
ajax requests to report the error.
(imported from commit 559179e3c8c3fbf03bbb091a67361d447c80b7bb)
Because we're storing 25,000+ message content objects in memcached on
server restart, we were sometimes exceeding the 64MB cap even during
the restart process. 256MB should be safely large enough to not have
the issue but not so large as to seriously consume resources on our
app frontends (which currently have 7GB of RAM).
(imported from commit 1a64319e50c9dadf0bc65e2e4dbf08f4cc1b9c38)
Also improve the display of elapsed times -- we now display short
times in milliseconds for easier reading.
(imported from commit 08e1e7e6acbef48453080864946f7602a3395e7c)
Previously we had around 4 copies of the logic for deciding whether we
should publish data via a SimpleQueueClient queue, a
TornadoQueueClient queue, or handle the operation directly, which
resulted in the copies getting out of sync and buggy (see e.g. the
previous commit).
We need to add a lock around adding things to the queue to work around
a bug with pika's BlockingConnection.
I should note that the previous logic in some places had a bunch of
tests of the form "elif settings.TEST_SUITE" for doing the work that
would have been done by the queue processor directly; these should
have just been "else" clauses -- since we generally want that code to
run in development environments whether or not the test suite is
currently running.
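For illustration, the consolidated helper is roughly of this shape
(the name and signature here are made up, not the actual
implementation):

    import threading

    # Sketch of a single shared "publish or handle directly" helper
    # replacing the divergent copies described above.
    queue_lock = threading.Lock()

    def publish_or_handle(client, queue_name, data, handler):
        # client is a SimpleQueueClient/TornadoQueueClient-style object,
        # or None when no queue is available (development and the test
        # suite alike -- the branch that used to be guarded by
        # "elif settings.TEST_SUITE" and should just have been "else").
        if client is None:
            handler(data)  # do the queue processor's work directly
            return
        # pika's BlockingConnection is not thread-safe, so serialize
        # publishes behind a lock.
        with queue_lock:
            client.json_publish(queue_name, data)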
(imported from commit 16bdbed4fff04b1bda6fde3b16bee7359917720b)
Previously we had several files which initialized SimpleQueueClient()
for sending items to the UserActivity queue, even though those code
paths aren't used outside Tornado. This resulted in slower Tornado
startup times.
(imported from commit ad97021ec18d3927233744037c548c22db33c321)
The actual database query that we use to fill the UserMessage cache
only takes a few hundred milliseconds to run; however, iterating
through the results would take 3-5 seconds because the Django ORM adds
substantial per-row overhead when we're only interested in the integer
values in a couple of columns.
So we can save most of that Tornado startup time by just doing this
one query manually; I left the original query next to it in a comment
so it is easy to keep it all up to date as we change our product.
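For reference, the manual-query approach is roughly this (the table and
column names assume Django's defaults for the zephyr app's UserMessage
model; the real query in the commit may differ):

    from django.db import connection

    # Fetch just the two integer columns with a raw cursor instead of
    # building a full ORM object for every row.
    # Rough ORM equivalent, kept alongside as a comment:
    #   UserMessage.objects.all().values_list('user_profile_id', 'message_id')
    cursor = connection.cursor()
    cursor.execute(
        "SELECT user_profile_id, message_id FROM zephyr_usermessage")
    rows = cursor.fetchall()  # list of (user_profile_id, message_id) tuples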
(imported from commit ac4675bcdda5d812ebfbe211450c85ee2787ee66)
This only does something if DEBUG=False, but it's now required that
you set this on Django 1.5 or the server will silently serve up 500s
for every request (not the best failure mode).
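The setting isn't named here, but this matches Django 1.5's new
ALLOWED_HOSTS behavior, so presumably the change is along these lines
(hostnames are placeholders):

    # Only consulted when DEBUG = False; on Django 1.5, leaving it unset
    # (or not matching the Host header) turns every request into a 500.
    ALLOWED_HOSTS = [
        '.example.com',  # placeholder domain, including subdomains
        'localhost',
    ]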
(imported from commit fa226c644770c468d73143c8a49d5d29d282df27)
See http://bugs.python.org/issue5876 for an explanation of why this
is needed -- basically, __repr__() needs to return a str, not a
unicode object, in Python 2.
This causes problems on Django 1.5 because the more expressive
exception code in model.objects.get() will crash with a __repr__()
containing non-ascii unicode characters.
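A minimal Python 2 illustration of the fix pattern (the class is a
stand-in, not one of our actual models):

    # -*- coding: utf-8 -*-
    # Python 2: __repr__() must return a str (bytes), not unicode; repr()
    # on an object whose __repr__() returns non-ascii unicode raises
    # UnicodeEncodeError (see http://bugs.python.org/issue5876).
    class ExampleModel(object):  # illustrative stand-in for a model
        def __init__(self, name):
            self.name = name  # may contain non-ascii characters

        def __repr__(self):
            # Encode to UTF-8 so we hand back a str, not a unicode object.
            return (u"<ExampleModel: %s>" % (self.name,)).encode("utf-8")

    print(repr(ExampleModel(u"café")))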
(imported from commit f44085e67d9d14629b821a29bbf65738f1794d6c)
Checked using the following (relevant for rebasing):
git grep url templates/ | grep -v "'django" | grep -v "'zephyr"
This appears not to have a good backwards-compatibility story (well,
there is one involving {% load url from future %}, but it seems not to
work).
(imported from commit d740831658aa23cadbbb82082ac6a3738d449a1d)