mirror of https://github.com/zulip/zulip.git
88a123d5e0
The previous model for these Nagios checks was kinda crazy -- every minute, we'd run a full `rabbitmqctl list_consumers` for each of the dozen+ consumers that we have, and then run the exact same parsing logic for each to determine whether the target queue has a running consumer, writing out a state file.

Because each `rabbitmqctl list_consumers` invocation takes a small amount of resources, repeating it a dozen+ times per minute added up to minor CPU wastage that could be problematic on systems where CPU is very limited (e.g. t2-style AWS instances).

Now we just run `rabbitmqctl list_consumers` once per minute, and output all the state files from that single command.

Further TODO items on this front include removing the hardcoded list of queues.
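For illustration, here is a minimal sketch of the single-invocation approach described above: parse one `rabbitmqctl list_consumers` run and emit a state file per queue. The queue names, state directory, and state-file format below are assumptions made for the sketch, not the actual values used by these scripts.

```python
#!/usr/bin/env python3
# Sketch only: queue names, paths, and the state-file format are illustrative
# assumptions, not the exact values used by the real check scripts.
import subprocess
import time

# Hypothetical hardcoded list of queues (removing this is the TODO mentioned above).
QUEUES = ["missedmessage_emails", "user_activity"]
STATE_DIR = "/var/lib/nagios_state"  # assumed output directory

# One rabbitmqctl call per run, instead of one per queue.
# (-q suppresses the informational banner; requires rabbitmq permissions.)
output = subprocess.check_output(
    ["rabbitmqctl", "-q", "list_consumers"], universal_newlines=True
)

# The first tab-separated column of each line is a queue that has a consumer.
queues_with_consumers = {
    line.split("\t")[0] for line in output.strip().splitlines() if line
}

now = int(time.time())
for queue in QUEUES:
    status = 0 if queue in queues_with_consumers else 2  # 0 = OK, 2 = CRITICAL
    state = "OK" if status == 0 else "CRITICAL"
    # Assumed "timestamp|status|state|message" layout for the state file.
    with open("%s/check-rabbitmq-consumers-%s" % (STATE_DIR, queue), "w") as f:
        f.write("%s|%s|%s|queue %s consumer check\n" % (now, status, state, queue))
```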
check-rabbitmq-consumers
check-rabbitmq-queue
cron_file_helper.py