This fixes a problem where requests to Tornado would attempt to use a
configured outgoing HTTP proxy, when we really want to connect directly
to localhost.
Fixes: #468.
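As a rough illustration (not the actual Zulip code), with the
`requests` library one can ignore any proxy configured in the
environment like this; the URL, port, and endpoint are placeholders:

    import requests

    session = requests.Session()
    # Ignore HTTP_PROXY/HTTPS_PROXY from the environment so the request
    # goes straight to the local Tornado process.
    session.trust_env = False
    session.post("http://localhost:9993/notify", data={})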
Originally this cache was used to transmit data from Django to Tornado
(and also for general message caching purposes), but now nothing
actually reads from this cache, so we can eliminate it.
The restarted Tornado processes seemed to escape the process group and
thus continue running after run-dev.py finished.
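A minimal sketch of the general pattern, assuming the dev runner is the
process-group leader; this is illustrative, not the actual run-dev.py
code:

    import os
    import signal
    import subprocess
    import sys

    # Make this script the leader of a fresh process group, so children
    # (and anything they re-exec into) stay inside the group.
    os.setpgrp()
    child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(600)"])
    try:
        child.wait()
    finally:
        # Ignore SIGTERM ourselves, then signal the whole group so that
        # escaped or re-exec'ed children are also terminated.
        signal.signal(signal.SIGTERM, signal.SIG_IGN)
        os.killpg(os.getpgrp(), signal.SIGTERM)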
While we're at it, we don't need to dump/reload event queues in the
test suite either.
It's always been the case that in production, Tornado dumps all the
event queues when shut down so that they can be reloaded by the
replacement Tornado process. This never worked in development because
the codepath for auto-reload didn't go through either a signal or
sys.exit (it re-execs the process instead).
This meant that we didn't have a mechanism for testing the event queue
dump/load functionality in the development environment. We fix this
by adding such dumping/loading. However, this breaks the automatic
reloading of open browser windows on a server restart, so we add that
back in by adjusting the special `restart` events to pass a special
`immediate` flag when used in development.
This also has the benefit of removing the "Bad event queue" errors that
previously appeared in the Python console on every file-save-induced
restart.
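A hedged sketch of the dump/load idea, assuming queues expose
to_dict/from_dict; the path and names are illustrative, not the real
zerver implementation:

    import json
    import os

    QUEUE_DUMP_PATH = "/var/tmp/event_queues.json"  # hypothetical location

    def dump_event_queues(queues):
        # Called at shutdown so the replacement process can pick up where
        # this one left off.
        with open(QUEUE_DUMP_PATH, "w") as f:
            json.dump([q.to_dict() for q in queues.values()], f)

    def load_event_queues(queue_class):
        # Called at startup; returns an empty mapping on a cold start.
        if not os.path.exists(QUEUE_DUMP_PATH):
            return {}
        with open(QUEUE_DUMP_PATH) as f:
            dumped = json.load(f)
        os.remove(QUEUE_DUMP_PATH)
        return {d["id"]: queue_class.from_dict(d) for d in dumped}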
This is a no-op right now, but we'll want the new structure for the
next commit, and splitting this out makes it a lot easier to read what
is actually changed in the next commit.
Apparently, our event queue garbage collection logic never actually
disconnected any existing handler objects.
We fix this by disconnecting the handler inside cleanup(), adding a
special check to avoid creating a pointless timeout object.
This line appears to have been lost in rebasing from the original
implementation of 1396eb7022faec4c2d91553800a35781a96dd5bd, so the
previous fix actually only addressed the issue in a rare exception
case.
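In outline, the disconnect-in-cleanup() fix described above looks
roughly like the sketch below; the class and method names are
assumptions for illustration, not the exact Zulip code:

    class ClientDescriptor:
        def __init__(self):
            self.current_handler = None

        def disconnect_handler(self):
            # Only touch the handler if one is actually connected, so we
            # do not create a pointless timeout object for idle
            # descriptors.
            if self.current_handler is not None:
                self.current_handler.finish()
                self.current_handler = None

        def cleanup(self):
            # Garbage collection now disconnects any live handler before
            # the descriptor itself is dropped.
            self.disconnect_handler()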
In 2ea0daab19, handlers were moved to
being tracked via the handlers_by_id dict, but nothing cleared this
dict, resulting in every handler object being leaked. Since a Tornado
process uses a different handler object for every request, this
resulted in a significant memory leak. We fix this by clearing the
handlers_by_id dict in the two code paths that would result in a
Tornado handler being de-allocated: the exception codepath and the
handler disconnect codepath.
Fixes #463.
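A simplified sketch of the fix, with assumed names; both de-allocation
codepaths call the same clearing helper:

    next_handler_id = 0
    handlers_by_id = {}

    def allocate_handler_id(handler):
        global next_handler_id
        handler_id = next_handler_id
        next_handler_id += 1
        handlers_by_id[handler_id] = handler
        return handler_id

    def clear_handler_by_id(handler_id):
        # Called from both the exception codepath and the handler
        # disconnect codepath, so each per-request handler object can
        # actually be de-allocated instead of leaking.
        handlers_by_id.pop(handler_id, None)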
At present, we only do a few simple checks on the client type inside
the event system, and this saves database/memcached queries.
Note that this preserves the structure of the marshalled name in
to_dict/from_dict as client_type to avoid an unnecessary migration.
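A sketch, under the assumption that the descriptor now stores just a
client-type name; the key written by to_dict and read by from_dict
stays `client_type`, so already-dumped queues keep loading:

    class ClientDescriptor:
        def __init__(self, client_type_name):
            # Only the name is kept, which is enough for the simple
            # checks the event system does and avoids database/memcached
            # lookups.
            self.client_type_name = client_type_name

        def to_dict(self):
            return {"client_type": self.client_type_name}

        @classmethod
        def from_dict(cls, d):
            return cls(d["client_type"])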
Previously, client descriptors were referenced directly from the
handler object. Once we split the Tornado process into separate queue
and connection servers, these will no longer be in the same process,
so we need to reference them by ID instead.
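Schematically (the names here are illustrative), the descriptor keeps
only an ID and resolves it at delivery time, which is what later lets
the lookup cross a process boundary:

    handlers_by_id = {}

    def get_handler_by_id(handler_id):
        return handlers_by_id[handler_id]

    class ClientDescriptor:
        def __init__(self):
            self.current_handler_id = None  # was a direct handler reference

        def connect_handler(self, handler_id):
            self.current_handler_id = handler_id

        def notify(self, event):
            if self.current_handler_id is None:
                return
            # Resolve the ID at delivery time; once the queue server and
            # connection server are separate processes, this lookup
            # becomes a cross-process message keyed on the same ID.
            get_handler_by_id(self.current_handler_id).add_event(event)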
This commit is somewhat ugly, but it serves as early preparation for
splitting Tornado into a queue server and a frontend server; by and
large, this code belongs in the queue server component.
Features:
* Only shows messages in the narrow
* New messages in the narrow will arrive as they are sent
* Works even for streams you're not subscribed to
* Automatically subscribes you to a stream on send
* Doesn't update your pointer
* All searches etc. automatically have the narrow added
(imported from commit 2e12b76849f6ca0f53dda5985dad477a04f7bbac)