Metrics and Tracking

Turn raw log streams into actionable numbers. This tutorial walks through Kelora's metrics pipeline, from basic counters to custom summaries that you can export or feed into dashboards.

What You'll Learn

  • Track counts, sums, buckets, and unique values with Rhai helpers
  • Combine --metrics, --stats, --begin, and --end for structured reports
  • Use sliding windows and percentiles for latency analysis
  • Persist metrics to disk for downstream processing

Prerequisites

  • Completed the Quickstart
  • Familiarity with basic Rhai scripting (--filter, --exec)

Sample Data

Commands below use fixtures from the repository. If you cloned the project, the paths resolve relative to the docs root:

  • examples/simple_json.jsonl — mixed application logs
  • examples/window_metrics.jsonl — high-frequency metric samples
  • examples/web_access_large.log.gz — compressed access logs for batch jobs

All commands print real output thanks to markdown-exec; feel free to tweak the expressions and rerun them locally.

Step 1 – Quick Counts with track_count()

Count how many events belong to each service while suppressing event output.

kelora -j examples/simple_json.jsonl \
  -F none \
  -e 'track_count(e.service)' \
  --metrics
kelora: Tracked metrics:
admin        = 2
api          = 7
auth         = 2
cache        = 1
database     = 2
disk         = 1
health       = 1
monitoring   = 1
scheduler    = 3

--metrics prints the aggregated map when processing finishes. Use this pattern any time you want a quick histogram after a batch run.
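The key passed to track_count() can be any string, so composite dimensions work as well. A minimal sketch (the service/level key format here is arbitrary):

kelora -j examples/simple_json.jsonl \
  -F none \
  -e 'track_count(e.service + "/" + e.level)' \
  --metrics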

Showing Stats at the Same Time

Pair --metrics with --stats when you need throughput details as well:

kelora -j examples/simple_json.jsonl \
  -F none \
  -e 'track_count(e.service)' \
  -m --stats
kelora: Tracked metrics:
admin        = 2
api          = 7
auth         = 2
cache        = 1
database     = 2
disk         = 1
health       = 1
monitoring   = 1
scheduler    = 3

kelora: Stats:
Lines processed: 20 total, 0 filtered (0.0%), 0 errors (0.0%)
Events created: 20 total, 20 output, 0 filtered (0.0%)
Throughput: 11269 lines/s in 1ms
Time span: 2024-01-15T10:00:00+00:00 to 2024-01-15T10:30:00+00:00 (30m)
Timestamp: auto-detected timestamp — parsed 20 of 20 detected events (100.0%).
Levels seen: CRITICAL,DEBUG,ERROR,INFO,WARN
Keys seen: attempts,channel,config_file,downtime_seconds,duration_ms,endpoints,free_gb,freed_gb,ip,job,key,level,max_connections,memory_percent,message,method,partition,path,query,reason,schedule,script,service,severity,size_mb,status,target,timestamp,ttl,user_id,username,version

--stats adds processing totals, time span, and field inventory without touching your metrics map.

Step 2 – Summaries with Sums, Buckets, and Averages

Kelora ships several helpers for numeric metrics. The following example tracks response sizes and latency as running aggregates.

kelora -j examples/simple_json.jsonl \
  -F none \
  -e 'track_sum("response_bytes", to_int_or(e.get_path("bytes"), 0))' \
  -e 'track_avg("response_time_ms", to_int_or(e.get_path("duration_ms"), 0))' \
  -e 'if e.has_path("duration_ms") { track_bucket("slow_requests", clamp(to_int_or(e.duration_ms, 0) / 250 * 250, 0, 2000)) }' \
  --metrics
kelora: Tracked metrics:
response_bytes = 0
slow_requests = #{"0": 1, "2000": 2}

  • track_sum accumulates totals (suitable for throughput or volume).
  • track_avg automatically maintains a running average per key.
  • track_bucket groups values into ranges so you can build histograms.

Buckets show up as nested maps where each bucket value keeps its own count.
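The bucketing expression in the example above is plain integer arithmetic: divide, multiply, then clamp. A worked illustration in Rhai:

// 613 ms: 613 / 250 == 2 (integer division), then 2 * 250 == 500
// 2480 ms: 2480 / 250 * 250 == 2250, which clamp caps at 2000
let bucket = clamp(613 / 250 * 250, 0, 2000);   // bucket == 500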

Step 3 – Unique Values and Cardinality

track_unique() stores distinct values for a key—handy for unique user counts or cardinality analysis.

kelora -j examples/simple_json.jsonl \
  -F none \
  -e 'track_unique("services", e.service)' \
  -e 'if e.level == "ERROR" { track_unique("error_messages", e.message) }' \
  --metrics
kelora: Tracked metrics:
error_messages = ["Query timeout", "Account locked", "Service unavailable"]
services     = ["api", "database", "cache", "auth", "scheduler", "disk", "monitoring", "admin", "health"]

Use metrics["services"].len() later to compute the number of distinct members.
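For instance, you can read that length from --end to report cardinality directly (a minimal sketch):

kelora -j examples/simple_json.jsonl \
  -F none \
  -e 'track_unique("services", e.service)' \
  --end 'print("distinct services: " + metrics["services"].len())'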

Step 4 – Sliding Windows and Percentiles

Enable the window buffer to examine recent events. The example below tracks a five-event moving average and P95 latency for CPU metrics.

kelora -j examples/window_metrics.jsonl \
  --filter 'e.metric == "cpu"' \
  --window 5 \
  -e $'let values = window_numbers(window, "value");
if values.len() > 0 {
    let sum = values.reduce(|s, x| s + x, 0.0);
    let avg = sum / values.len();
    e.avg_last_5 = round(avg * 100.0) / 100.0;
    if values.len() >= 3 {
        e.p95_last_5 = round(values.percentile(95.0) * 100.0) / 100.0;
    }
}' \
  -n 5
timestamp='2024-01-15T10:00:00Z' metric='cpu' value=45.2 host='server1' avg_last_5=45.2
timestamp='2024-01-15T10:00:01Z' metric='cpu' value=46.8 host='server1' avg_last_5=46.0
timestamp='2024-01-15T10:00:02Z' metric='cpu' value=44.5 host='server1' avg_last_5=45.5
  p95_last_5=46.64
timestamp='2024-01-15T10:00:03Z' metric='cpu' value=48.1 host='server1' avg_last_5=46.15
  p95_last_5=47.91
timestamp='2024-01-15T10:00:04Z' metric='cpu' value=47.3 host='server1' avg_last_5=46.38
  p95_last_5=47.94

The special window variable becomes available once you pass --window. Use window_numbers(window, FIELD) for numeric arrays and window_values(window, FIELD) for raw strings.
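window_values() works the same way for non-numeric fields. A minimal sketch that attaches the hosts buffered in a three-event window (the host field comes from the sample data above):

kelora -j examples/window_metrics.jsonl \
  --window 3 \
  -e 'e.recent_hosts = window_values(window, "host")' \
  -n 3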

Step 5 – Custom Reports with --end

Sometimes you need a formatted report instead of raw maps. Store a short Rhai script and include it with -I so the same layout works across platforms, then call the helper from --end, passing the metrics map in as an argument (Rhai functions do not capture variables from the outer scope).

cat <<'RHAI' > metrics_summary.rhai
fn summarize_metrics(metrics) {
    let keys = metrics.keys();
    keys.sort();
    for key in keys {
        print(key + ": " + metrics[key].to_string());
    }
}
RHAI

kelora -j examples/simple_json.jsonl \
  -F none \
  -e 'track_count(e.service)' \
  -e 'track_count(e.level)' \
  -m \
  -I metrics_summary.rhai \
  --end 'summarize_metrics(metrics)'

rm metrics_summary.rhai

The automatically printed --metrics block remains, while --end gives you a clean text summary that you can redirect or feed into alerts.
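If you don't need a reusable include file, the same report works inline (a sketch of the equivalent one-off command):

kelora -j examples/simple_json.jsonl \
  -F none \
  -e 'track_count(e.service)' \
  -e 'track_count(e.level)' \
  -m \
  --end '
    let keys = metrics.keys();
    keys.sort();
    for key in keys {
        print(key + ": " + metrics[key].to_string());
    }
  '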

Step 6 – Persist Metrics to Disk

Use --metrics-file to serialize the metrics map as JSON for other tools.

kelora -j examples/simple_json.jsonl \
  -F none \
  -e 'track_count(e.service)' \
  -m \
  --metrics-file metrics.json

cat metrics.json
rm metrics.json
kelora: Tracked metrics:
admin        = 2
api          = 7
auth         = 2
cache        = 1
database     = 2
disk         = 1
health       = 1
monitoring   = 1
scheduler    = 3
{
  "api": 7,
  "disk": 1,
  "admin": 2,
  "database": 2,
  "cache": 1,
  "monitoring": 1,
  "health": 1,
  "scheduler": 3,
  "auth": 2
}

The JSON structure mirrors the in-memory map, so you can load it with jq, a dashboard agent, or any scripting language.
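For instance, a quick jq one-liner ranks the exported counters (any JSON-aware tool works):

jq 'to_entries | sort_by(-.value) | .[0:3]' metrics.json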

Step 7 – Streaming Scoreboards

Kelora keeps metrics up to date even when tailing files or processing archives. This command watches a gzipped access log and surfaces top status classes.

kelora -f combined examples/web_access_large.log.gz \
  -e 'let klass = ((e.status / 100) * 100).to_string(); track_count(klass)' \
  -m -F none \
  -n 0
kelora: Tracked metrics:
400          = 1

Passing --take 0 (or omitting the flag) processes the entire file. When you run Kelora against a stream (tail -f | kelora ...), the final metrics snapshot prints when you terminate the process.
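A live-tail sketch (the log path is hypothetical); press Ctrl-C to end the stream and print the snapshot:

tail -f /var/log/app.jsonl | kelora -j \
  -F none \
  -e 'track_count(e.level)' \
  -m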

Need full histograms instead of counts? Swap in track_bucket():

kelora -f combined examples/web_access_large.log.gz \
  -m \
  -e 'track_bucket("status_family", (e.status / 100) * 100)' \
  --end '
    let buckets = metrics.status_family.keys();
    buckets.sort();
    for bucket in buckets {
        let counts = metrics.status_family[bucket];
        print(bucket.to_string() + ": " + counts.to_string());
    }
  ' \
  -F none -n 0
400: 1

kelora: Tracked metrics:
status_family = #{"400": 1}

track_bucket(key, bucket_value) keeps nested counters so you can emit a human-readable histogram once processing finishes.

Troubleshooting

  • No metrics printed: Ensure you pass --metrics or consume metrics within an --end script. Tracking functions alone do not emit output.
  • Huge maps: Metrics accumulate in memory for the duration of a single run, and large cardinality sets from track_unique() are the usual culprit. Track only the fields you need, and remove stale metrics.json files between exports.
  • Operation metadata: Kelora keeps operator hints (the __op_* keys) in the internal tracker now, so user metric maps print cleanly. If you need those hints for custom aggregation, read them from the internal metrics map.
  • Sliding window functions return empty arrays: window_numbers(window, ...) only works after you enable --window and the requested field exists in the buffered events.

Next Steps