This should fix a problem we've been having with errors downloading
the PhantomJS packages from their original hosting service.
Eventually we should move it to an S3 bucket.
cd2348e9ae broke installing Zulip in
production since it didn't correctly update the Puppet configuration
to call the process_queue script using the new argument format.
This commit isn't ideal, in that I'd prefer not to require updating
Puppet in sync with the actual running code, but we don't have a great
mechanism for doing that.
Fixes #586.
Previously, we needed to update the installation instructions with the
current version of Zulip in production every time we did a release,
which was kind of a pain (and hadn't happened since 1.3.6).
Fixes #576.
[commit message details expanded by tabbott]
The original logic for incremental presence list updating from
668d0d9dfa incorrectly attempted to
insert the user 1 spot later than its proper index in the listing.
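For illustration, here is the shape of the bug in a small Python sketch (the
real code is frontend JavaScript; the list, names, and bisect usage below are
stand-ins, not the actual implementation):

    import bisect

    # Alphabetically sorted presence list, standing in for the real user listing.
    users = ["alice", "bob", "dave"]

    # Correct: insert the new user at the index where it sorts.
    idx = bisect.bisect_left(users, "carol")
    users.insert(idx, "carol")        # ["alice", "bob", "carol", "dave"]

    # The buggy logic was equivalent to inserting one spot later:
    # users.insert(idx + 1, "carol")  # ["alice", "bob", "dave", "carol"] -- wrong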
Now that we're doing presence updates in a performant fashion, we
don't need to throttle processing these events, and in fact the
throttling of these events created a correctness problem, since we're
now doing incremental updates rather than just rerendering everything
after each event.
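To see why, consider this conceptual Python sketch (the event names and shapes
are invented for illustration): a throttled full rerender only needs the latest
state, but incremental updates must apply every delta in order.

    # Invented presence event stream for illustration.
    events = [("add", "alice"), ("remove", "alice"), ("add", "bob")]

    # With full rerenders, dropping intermediate events (throttling) is harmless:
    # only the final state gets drawn anyway.
    #
    # With incremental updates, every delta must be applied, in order;
    # a throttle that skips events leaves the rendered list out of sync.
    presence = set()
    for op, user in events:
        if op == "add":
            presence.add(user)
        else:
            presence.discard(user)
    print(presence)  # {'bob'}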
The code in 668d0d9dfa for removing an
existing user from the user list in order to update their status didn't
correctly quote the user's email address in its jQuery selector.
While we don't link to /terms anywhere on the site, the page can still be
accessed by navigating to /terms directly. Now, those routes will only be
exported on the Zulip.com service.
We should ideally provide a mechanism for deployments to specify their own
terms without modifying source code; in the interim, sites that have already
customised the provided Zulip.com terms can simply carry a patch reverting this
commit.
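A minimal sketch of the kind of conditional URL registration involved, assuming
a Django-1.x-era urls.py; the ZULIP_COM flag name and the template path below
are assumptions for illustration, not the actual code:

    from django.conf import settings
    from django.conf.urls import url
    from django.views.generic import TemplateView

    urlpatterns = []

    # Only export the /terms route when running the Zulip.com service.
    if getattr(settings, "ZULIP_COM", False):  # assumed flag name
        urlpatterns += [
            url(r'^terms/$',
                TemplateView.as_view(template_name='zerver/terms.html')),  # assumed path
        ]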
This change drops the memory used for Python processes run by Zulip in
development from about 1GB to 300MB on my laptop.
On the safety front, http://pika.readthedocs.org/en/latest/faq.html
explains: "Pika does not have any notion of threading in the code. If
you want to use Pika with threading, make sure you have a Pika
connection per thread, created in that thread. It is not safe to share
one Pika connection across threads." Since this code only connects
to RabbitMQ inside the individual threads, I believe this should be
safe.
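A minimal sketch of the connection-per-thread pattern the FAQ describes,
assuming a local RabbitMQ broker (the queue names and message body are made up
for illustration):

    import threading

    import pika

    def worker(queue_name):
        # Each thread creates and owns its own connection and channel;
        # nothing pika-related is shared across threads.
        connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = connection.channel()
        channel.queue_declare(queue=queue_name)
        channel.basic_publish(exchange="", routing_key=queue_name, body="ping")
        connection.close()

    threads = [threading.Thread(target=worker, args=("demo_%d" % i,)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()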
Progress towards #32.
The first user-facing problem is that when the user clicks the "Collapse"
button on a message in a narrowed list, the "Uncollapse" and "[More...]"
buttons do not work. The second is that when the user collapses or
uncollapses a message in a narrowed list, the same message in the home
list is not collapsed or uncollapsed appropriately.
In "popovers.js" there is a function that is called when the "Collapse"
or "Un-collapse" button is clicked. It should show or hide the body of a
message, and if a message list is narrowed, it should show or hide the
message in the home list too. The cause of the first problem is that
"toggle_row()" in this function calls the "collapse(row)" or
"uncollapse(row)" methods from "condense.js" twice (once for row and
once for home_row), choosing between them with the condition
"if (message.collapsed)". The first call changes "message.collapsed",
which is why the second call of "toggle_row()" works incorrectly.
The second problem is that the function in "condense.js" that is called
when the "[More...]" button is clicked contains no code for collapsing
or uncollapsing the message in the home list; it just calls
"collapse(row)" or "uncollapse(row)" for the row in the narrowed list.
Now, the "collapse(row)" and "uncollapse(row)" functions take the row
from the current list and change both messages (in the current list and
in the home list). The on-click handlers call them just once to make all
of the needed message changes, so collapsing or uncollapsing a message
from either the home list or a narrowed list works correctly.
Fixes #516.
Apparently, our event queue garbage collection logic never actually
disconnected any existing handler objects.
We fix this by disconnecting the handler inside cleanup(), adding a
special check to avoid creating a pointless timeout object.
The new Tornado handler tracking logic properly handled requests that
threw an exception or followed the RespondAsynchronously code path,
but did not properly de-allocate the handler in the synchronous case.
An easy reproducer for this is to load a new Zulip browser window;
that will leak 2 handler objects for the 2 synchronous requests made
from Django to Tornado as part of initial state fetching.
This line appears to have been lost in rebasing the original
implementation of 1396eb7022faec4c2d91553800a35781a96dd5bd, so the
previous fix actually only addressed the issue in a rare exception
case.
The recent Tornado memory leak fix
(1396eb7022) didn't use the correct
variable name for the current handler ID, causing this cleanup code to
fail in the event that a view raised an exception.
Replaced calls to ifilterfalse with list comprehensions, because
ifilterfalse is not part of Python 3. Also changed some lists to sets
for faster lookup.
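As a hedged illustration of the pattern (the variable names below are invented,
not the actual call sites):

    users = ["alice@example.com", "bob@example.com", "carol@example.com"]
    recipients = ["bob@example.com"]

    # Python 2 only:
    #   from itertools import ifilterfalse
    #   others = list(ifilterfalse(lambda u: u in recipients, users))

    # Works on both Python 2 and 3; the set makes membership tests O(1):
    recipient_set = set(recipients)
    others = [u for u in users if u not in recipient_set]
    print(others)  # ['alice@example.com', 'carol@example.com']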
Refer to #256.
This automatically loads settings, zerver.models.* and
zerver.lib.actions.* when you start `manage.py shell`, which should
save a bit of time basically every time someone uses it.
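Concretely, it saves typing something like the following at the start of every
`manage.py shell` session (a sketch of what gets auto-loaded in a Zulip
development environment, not the implementation itself):

    from django.conf import settings          # noqa: F401
    from zerver.models import *               # noqa: F401,F403
    from zerver.lib.actions import *          # noqa: F401,F403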
Fixes #275.