Previously, the UserProfile objects were created in the iteration
order of a Python set, which meant tests would randomly start failing
whenever the code that runs earlier in populate_db changed (and thus
caused the set used to pass users into bulk_create_users to be
enumerated in a different order).
This fixes the issue in two ways: first, by sorting the users inside
bulk_create_users, and second, by attaching subscriptions to users
based on a deterministic ordering.
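As a minimal sketch of the first fix (the names here are
illustrative, not the real populate_db code):

    # Sort the incoming set so creation order (and thus the generated
    # user IDs) is deterministic across runs.
    def bulk_create_users(user_set):
        for email, full_name in sorted(user_set):
            # Stand-in for the real ORM bulk-create call.
            print("creating UserProfile:", email, full_name)

    bulk_create_users({("iago@zulip.com", "Iago"),
                       ("hamlet@zulip.com", "Hamlet")})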
The restarted Tornado processes seemed to escape the process group and
thus continue running after run-dev.py finished.
While we're at it, we don't need to dump/reload event queues in the
test suite either.
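A rough sketch of how run-dev.py can avoid orphaning such processes,
assuming we start each server in its own process group and signal the
whole group on exit (the command line here is hypothetical):

    import os
    import signal
    import subprocess

    # Put the child in its own process group, so anything it spawns
    # or re-execs stays in a group we can signal as a unit.
    proc = subprocess.Popen(["python", "tornado_server.py"],
                            preexec_fn=os.setpgrp)

    def shutdown():
        # Signal the whole group rather than just proc.pid, so that
        # restarted Tornado children do not outlive run-dev.py.
        os.killpg(os.getpgid(proc.pid), signal.SIGTERM)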
The previous version of sanitize_name dropped all unicode characters
and mangled filenames with multiple `.`s in the extension, leading to
confusing URLs for files uploaded to Zulip.
Fixes #321.
[tweaked significantly by tabbott]
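A sketch of the intended behavior (not the exact Zulip
implementation): normalize unicode instead of dropping it, and leave
dots alone so multi-part extensions survive.

    import re
    import unicodedata

    def sanitize_name(raw_name):
        # Keep unicode word characters rather than stripping them, and
        # preserve dots so "notes.tar.gz" is not mangled.
        name = unicodedata.normalize("NFKC", raw_name)
        name = re.sub(r"[^\w\s.-]", "", name, flags=re.UNICODE)
        return re.sub(r"[-\s]+", "-", name).strip("-.")

    print(sanitize_name("résumé final.tar.gz"))  # résumé-final.tar.gz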
It's always been the case that in production, Tornado dumps all the
event queues when shut down so that they can be reloaded by the
replacement Tornado process. This never worked in development because
the codepath for auto-reload didn't go through either a signal or
sys.exit (it re-execs the process instead).
This meant that we didn't have a mechanism for testing the event queue
dump/load functionality in the development environment. We fix this
by adding such dumping/loading. However, this breaks the automatic
reloading of open browser windows on a server restart, so we add that
back in by adjusting the special `restart` events to pass a special
`immediate` flag when used in development.
This also has the benefit of removing the "Bad event queue" errors
one would get in the Python console on every file-save-induced
restart.
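In outline, the development flow looks something like this (the path
and names are invented for illustration):

    import json
    import os

    QUEUE_DUMP_PATH = "/var/tmp/event_queues.json"  # hypothetical
    event_queues = {}  # queue_id -> list of events

    def dump_event_queues():
        # Called on shutdown (now in development too) so the
        # replacement process can pick up where we left off.
        with open(QUEUE_DUMP_PATH, "w") as f:
            json.dump(event_queues, f)

    def load_event_queues():
        global event_queues
        if os.path.exists(QUEUE_DUMP_PATH):
            with open(QUEUE_DUMP_PATH) as f:
                event_queues = json.load(f)
            os.remove(QUEUE_DUMP_PATH)

    def send_restart_events(in_development):
        # In development, ask open browser windows to reload right
        # away; in production they can stagger their reloads.
        event = {"type": "restart", "immediate": in_development}
        for events in event_queues.values():
            events.append(event)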
This is a no-op right now, but we'll want the new structure for the
next commit, and splitting this out makes it a lot easier to read what
is actually changed in the next commit.
This change drops the memory used for Python processes run by Zulip in
development from about 1GB to 300MB on my laptop.
On the safety front, the Pika FAQ
(http://pika.readthedocs.org/en/latest/faq.html) explains: "Pika does
not have any notion of threading in the code. If you want to use Pika
with threading, make sure you have a Pika connection per thread,
created in that thread. It is not safe to share one Pika connection
across threads." Since this code only connects to RabbitMQ inside the
individual threads, I believe this should be safe.
Progress towards #32.
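For illustration, the pattern looks roughly like this (queue names
and callback are made up; the API shown is modern pika's
BlockingConnection):

    import threading
    import pika

    def handle(channel, method, properties, body):
        print("received", body)
        channel.basic_ack(delivery_tag=method.delivery_tag)

    def consume(queue_name):
        # One connection per thread, created in that thread, exactly
        # as the Pika FAQ recommends; nothing is shared across threads.
        connection = pika.BlockingConnection(
            pika.ConnectionParameters("localhost"))
        channel = connection.channel()
        channel.queue_declare(queue=queue_name)
        channel.basic_consume(queue=queue_name, on_message_callback=handle)
        channel.start_consuming()

    for name in ("user_activity", "signups"):
        threading.Thread(target=consume, args=(name,), daemon=True).start()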
Apparently, our event queue garbage collection logic never actually
disconnected any existing handler objects.
We fix this by disconnecting the handler inside cleanup(), adding a
special check to avoid creating a pointless timeout object.
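A self-contained sketch of the shape of the fix (all names invented;
the real ClientDescriptor is more involved):

    handlers_by_id = {}

    def schedule_reconnect_timeout(descriptor):
        pass  # stand-in: the real code creates a Tornado timeout here

    class ClientDescriptor:
        def __init__(self):
            self.current_handler_id = None

        def disconnect_handler(self, needs_timeout=True):
            if self.current_handler_id is None:
                return
            handlers_by_id.pop(self.current_handler_id, None)
            self.current_handler_id = None
            if needs_timeout:
                schedule_reconnect_timeout(self)

        def cleanup(self):
            # Garbage-collecting the queue: actually disconnect the
            # handler, and skip creating a pointless timeout object
            # for a queue that is going away.
            self.disconnect_handler(needs_timeout=False)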
This line appears to have been lost in rebasing from the original
implementation of 1396eb7022faec4c2d91553800a35781a96dd5bd; so the
previous fix actually only addressed the issue in a rare exception
case.
Replaced calls to ifilterfalse with list comprehensions, since
ifilterfalse is not part of Python 3. Also converted some lists to
sets for faster lookup.
Refer to #256.
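For example (stream names invented), a Python 2 call like
list(ifilterfalse(lambda s: s in muted, streams)) becomes:

    streams = ["general", "social", "errors"]
    muted = {"errors"}  # a set makes the membership test O(1)

    unmuted = [stream for stream in streams if stream not in muted]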
In 2ea0daab19, handlers were moved to
being tracked via the handlers_by_id dict, but nothing cleared this
dict, resulting in every handler object being leaked. Since a Tornado
process uses a different handler object for every request, this
resulted in a significant memory leak. We fix this by clearing the
handlers_by_id dict in the two code paths that would result in a
Tornado handler being de-allocated: the exception codepath and the
handler disconnect codepath.
Fixes #463.
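The shape of the fix, with illustrative names (Zulip's actual helpers
differ):

    handlers_by_id = {}
    next_handler_id = 0

    def allocate_handler_id(handler):
        global next_handler_id
        handlers_by_id[next_handler_id] = handler
        next_handler_id += 1
        return next_handler_id - 1

    def clear_handler_by_id(handler_id):
        # Must run in BOTH teardown paths, or one handler object per
        # request accumulates forever.
        del handlers_by_id[handler_id]

    def on_connection_close(handler):  # disconnect codepath
        clear_handler_by_id(handler.handler_id)

    def handle_exception(handler):     # exception codepath
        clear_handler_by_id(handler.handler_id)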
Add a function email_allowed_for_realm that checks whether a user
with a given email address is allowed to join a given realm (either
because the email has the right domain, or because the realm is
open), and use it whenever deciding whether to allow adding a user to
a realm.
This commit is not intended to change any behavior, except in one case
where the Zulip realm's domain was not being converted to lowercase.
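A sketch of the check (field names like restricted_to_domain are
illustrative):

    def email_allowed_for_realm(email, realm):
        if not realm.restricted_to_domain:
            # Open realm: anyone may join.
            return True
        # Compare domains case-insensitively; the lowercase conversion
        # is the one behavior change mentioned above.
        domain = email.split("@", 1)[-1].lower()
        return domain == realm.domain.lower()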