This change drops the memory used for Python processes run by Zulip in
development from about 1GB to 300MB on my laptop.
On the safety front, http://pika.readthedocs.org/en/latest/faq.html
explains: "Pika does not have any notion of threading in the code. If
you want to use Pika with threading, make sure you have a Pika
connection per thread, created in that thread. It is not safe to share
one Pika connection across threads." Since this code only connects
to RabbitMQ inside the individual threads, I believe this should be
safe.
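As a minimal sketch of the connection-per-thread pattern the FAQ
describes (written against the pika >= 1.0 API; the queue names here
are illustrative, not our actual configuration):

    import threading

    import pika

    def consume_in_thread(queue_name):
        # Each thread creates, and exclusively owns, its own
        # connection and channel.
        connection = pika.BlockingConnection(
            pika.ConnectionParameters("localhost"))
        channel = connection.channel()
        channel.queue_declare(queue=queue_name)

        def callback(ch, method, properties, body):
            print("received on %s: %r" % (queue_name, body))

        channel.basic_consume(queue=queue_name,
                              on_message_callback=callback,
                              auto_ack=True)
        channel.start_consuming()

    for queue_name in ("user_activity", "signups"):
        threading.Thread(target=consume_in_thread,
                         args=(queue_name,)).start()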
Progress towards #32.
This is in some ways a regression, but because we don't have
python-postmonkey packaged right now, this is required to make the
Zulip production installation process work on Trusty.
(imported from commit 539d253eb7fedc20bf02cc1f0674e9345beebf48)
The one-time-use address is a unique token which maps to state stored
in Redis. We store the user_id, recipient_id, and subject. When an
email is received at this address, it is sent to the stored recipient
by the stored user. Anyone with this address can send a single message
as this user.
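A rough sketch of how the token could be backed by Redis (the key
prefix, helper names, and address domain are illustrative assumptions,
not the actual implementation; requires redis-py >= 3.5 for hset with
mapping=):

    import binascii
    import os

    import redis

    redis_client = redis.Redis()

    def create_one_time_address(user_id, recipient_id, subject):
        token = binascii.hexlify(os.urandom(16)).decode()
        key = "missed_message:" + token
        # Store the state that the token maps to.
        redis_client.hset(key, mapping={"user_id": user_id,
                                        "recipient_id": recipient_id,
                                        "subject": subject})
        # Expire unused tokens eventually.
        redis_client.expire(key, 60 * 60 * 24 * 5)
        return token + "@mm.example.com"

    def redeem_one_time_address(token):
        key = "missed_message:" + token
        state = redis_client.hgetall(key)
        # One-time use: destroy the stored state after redemption.
        redis_client.delete(key)
        return state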
(imported from commit 4219417bdc30c033a6cf7a0c7c0939f7d0308144)
Now that we've debugged the memory leak, I don't think we need this
anymore.
This reverts commit 1bdc7ee2f72bdebb1cdc94601247834a434614d6.
Conflicts:
puppet/zulip/files/cron.d/rabbitmq-numconsumers
puppet/zulip/files/supervisor/conf.d/zulip.conf
(imported from commit ff87f2aebcbc71013fa7a05aedb24e2dcad82ae6)
Unbundle the push notifications from the missed-message queue
processors and handlers. This makes notifications more immediate and
sets things up for better badge count handling and possibly
per-stream filtering.
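For illustration, a dedicated push-notification worker in the style
of queue_processors.py might look like the sketch below; the queue
name, import paths, and handle_push_notification() are assumptions,
not necessarily the code as committed:

    from zerver.lib.push_notifications import handle_push_notification
    from zerver.worker.queue_processors import (QueueProcessingWorker,
                                                assign_queue)

    @assign_queue('missedmessage_mobile_notifications')
    class PushNotificationsWorker(QueueProcessingWorker):
        def consume(self, event):
            # Deliver immediately, rather than riding along with the
            # batching logic used for missed-message emails.
            handle_push_notification(event['user_profile_id'], event)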
(imported from commit 11840301751b0bbcb3a99848ff9868d9023b665b)
This fixes a small memory leak in our queue workers, where we didn't
reset the accumulated times contained in our query logging data.
Longer-term, we may want to make something mergeable for mainline
where we only store the totals on the connection object; that would
be a fixed amount of memory per connection and thus not have this
problem.
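As a sketch of the shape of the fix: Django's reset_queries() clears
the per-connection query log (including the timing data it
accumulates), so calling it after each event keeps the worker's
memory use flat. The consume_wrapper() here is a simplified stand-in
for our actual worker loop:

    from django.db import reset_queries

    def consume_wrapper(worker, event):
        try:
            worker.consume(event)
        finally:
            # Drop the accumulated query log and its timing totals so
            # they cannot grow across events.
            reset_queries()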
(imported from commit 914fa13acfb576f73c5f35e0f64c2f4d8a56b111)
This command should be run continuously via supervisor. It
periodically checks for new email messages to send, and then sends
them. This is for sending email that you've queued via the Email
table, instead of via Mandrill (as is the case for our
localserver/development deploys).
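A rough sketch of the polling loop such a management command might
run; the Email model and its fields are assumed for illustration:

    import time

    from django.core.mail import send_mail
    from django.core.management.base import BaseCommand

    from zerver.models import Email  # hypothetical model name

    class Command(BaseCommand):
        help = "Continuously send email queued in the Email table."

        def handle(self, *args, **options):
            while True:
                for email in Email.objects.filter(sent=False):
                    send_mail(email.subject, email.body,
                              email.from_address, [email.to_address])
                    email.sent = True
                    email.save(update_fields=["sent"])
                # Poll period; supervisor restarts us if we crash.
                time.sleep(10)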
(imported from commit a2295e97b70a54ba99d145d79333ec76b050b291)
Errors are sent to a queue processor that posts them to staging,
just like the feedback bot.
(imported from commit 4a8d099672a1b3e48a8bc94148d8b53db73d2c64)
We also now separate out the times for the socket overhead, the
request service time, and the queuing delays.
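To illustrate how the three components could be separated, assuming
timestamps recorded at each stage (the event field names here are
invented for the sketch, not our actual instrumentation):

    import time

    def report_timing(event):
        received = event["time_received"]  # socket frame arrived
        dequeued = event["time_dequeued"]  # worker picked the event up
        started = event["time_started"]    # request processing began
        finished = time.time()

        queueing_delay = dequeued - received
        socket_overhead = started - dequeued
        service_time = finished - started
        print("queue=%.4fs socket=%.4fs service=%.4fs"
              % (queueing_delay, socket_overhead, service_time))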
(imported from commit e1683f7f28b968b86ebb701b0ac29b00ac6d67c3)
One quirk here is that the Request object is built in the
message_sender worker, not Tornado. This means that the request time
only counts time taken for the actual sending and does not account
for socket overhead. For this reason, I've left the fake logging in
for now so we can compare the two times.
(imported from commit b0c60a3017527a328cadf11ba68166e59cf23ddf)
The register_json_consumer() function now expects its callback
function to accept a single argument, which is the payload, as
none of the callbacks cared about channel, method, and properties.
This change breaks down as follows:
* A couple test stubs and subclasses were simplified.
* All the consume() and consume_wrapper() functions in
queue_processors.py were simplified.
* Two callbacks via runtornado.py were simplified. One
of the callbacks was socket.respond_send_message, which
had an additional caller (i.e. a caller other than
register_json_consumer()); that caller was simplified
to no longer pass None for the three removed arguments.
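To illustrate the new contract, a before/after sketch (handler names
other than register_json_consumer() are illustrative):

    def handle_event(event):
        print("handling %r" % (event,))

    # Before: consumers had to accept pika's channel/method/properties
    # arguments, all of which every callback ignored.
    def consume_old(channel, method, properties, event):
        handle_event(event)

    # After: register_json_consumer() hands the callback just the
    # decoded JSON payload.
    def consume(event):
        handle_event(event)

    # Registration (queue_client stands in for the queue connection):
    # queue_client.register_json_consumer("user_activity", consume)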
(imported from commit 792316e20be619458dd5036745233f37e6ffcf43)