# The form of each line in this file should be as follows:
#
# output_template (frequency) = method input_pattern
#
# This will capture any received metrics that match 'input_pattern'
# for calculating an aggregate metric. The calculation will occur
# every 'frequency' seconds and the 'method' can specify 'sum' or
# 'avg'. The name of the aggregate metric will be derived from
# 'output_template' filling in any captured fields from 'input_pattern'.
#
# For example, if your metric naming scheme is:
#
# <env>.applications.<app>.<server>.<metric>
#
# You could configure some aggregations like so:
#
# <env>.applications.<app>.all.requests (60) = sum <env>.applications.<app>.*.requests
# <env>.applications.<app>.all.latency (60) = avg <env>.applications.<app>.*.latency
#
# As an example, if the following metrics are received:
#
# prod.applications.apache.www01.requests
# prod.applications.apache.www02.requests
#
# They would all go into the same aggregation buffer and after 60 seconds the
# aggregate metric 'prod.applications.apache.all.requests' would be calculated
# by summing their values.
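#
# Similarly, under the 'avg' rule above, hypothetical metrics
# prod.applications.apache.www01.latency (value 20) and
# prod.applications.apache.www02.latency (value 40) would yield
# 'prod.applications.apache.all.latency' with a value of 30 every 60 seconds.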
#
# Note that any time this file is modified, it will be re-read automatically.

# Aggregate all per-bucket memcached stats into a generic hit/miss stat
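# For illustration (hypothetical app/bucket names): stats.web.cache.sessions.hit
# and stats.web.cache.pages.hit would be summed into stats.web.cache.all.hit
# every 10 seconds; the .miss counters are aggregated the same way.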
stats.<app>.cache.all.hit (10) = sum stats.<app>.cache.*.hit
stats.<app>.cache.all.miss (10) = sum stats.<app>.cache.*.miss

# Aggregate all per-domain active stats to overall active stats
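# For illustration (hypothetical names): stats.gauges.web.users.active.example_com.count
# and stats.gauges.web.users.active.example_org.count would be summed into
# stats.gauges.web.users.active.all.count every 10 seconds.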
stats.gauges.<app>.users.active.all.<bucket> (10) = sum stats.gauges.<app>.users.active.*.<bucket>