
Metrics and Tracking

Turn raw log streams into actionable numbers. This tutorial walks through Kelora's metrics pipeline, from basic counters to custom summaries that you can export or feed into dashboards.

What You'll Learn

  • Track counts, sums, buckets, and unique values with Rhai helpers
  • Combine --metrics, --stats, --begin, and --end for structured reports
  • Use sliding windows and percentiles for latency analysis
  • Persist metrics to disk for downstream processing

Prerequisites

Sample Data

Commands below use fixtures from the repository. If you cloned the project, the paths resolve relative to the docs root:

  • examples/simple_json.jsonl — mixed application logs
  • examples/window_metrics.jsonl — high-frequency metric samples
  • examples/web_access_large.log.gz — compressed access logs for batch jobs

All commands print real output thanks to markdown-exec; feel free to tweak the expressions and rerun them locally.

Step 1 – Quick Counts with track_count()

Count how many events belong to each service while suppressing event output.

kelora -j examples/simple_json.jsonl \
  -e 'track_count(e.service)' \
  --metrics
admin        = 2
api          = 7
auth         = 2
cache        = 1
database     = 2
disk         = 1
health       = 1
monitoring   = 1
scheduler    = 3

--metrics prints the aggregated map when processing finishes. Use this pattern any time you want a quick histogram after a batch run.
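Conceptually, track_count() behaves like a counter map keyed by the tracked value. A Python sketch of that idea (illustrative only, not Kelora's implementation):

```python
from collections import Counter

# Simulate track_count(e.service) over a small batch of events.
events = [
    {"service": "api"}, {"service": "api"}, {"service": "auth"},
    {"service": "scheduler"}, {"service": "api"},
]
counts = Counter(e["service"] for e in events)

# Print the aggregated map the way --metrics would, sorted by key.
for key in sorted(counts):
    print(f"{key:<12} = {counts[key]}")
```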

Showing Stats at the Same Time

Pair --metrics with --stats when you need throughput details as well:

kelora -j examples/simple_json.jsonl \
  -e 'track_count(e.service)' \
  -m --stats
admin        = 2
api          = 7
auth         = 2
cache        = 1
database     = 2
disk         = 1
health       = 1
monitoring   = 1
scheduler    = 3
Detected format: json
Lines processed: 20 total, 0 filtered (0.0%), 0 errors (0.0%)
Events created: 20 total, 20 output, 0 filtered (0.0%)
Throughput: 2793 lines/s in 7ms
Timestamp: timestamp (auto-detected) - 20/20 parsed (100.0%).
Time span: 2024-01-15T10:00:00+00:00 to 2024-01-15T10:30:00+00:00 (30m)
Levels seen: CRITICAL,DEBUG,ERROR,INFO,WARN,high
Keys seen: attempts,channel,config_file,downtime_seconds,duration_ms,endpoints,free_gb,freed_gb,ip,job,key,level,max_connections,memory_percent,message,method,partition,path,query,reason,schedule,script,service,severity,size_mb,status,target,timestamp,ttl,user_id,username,version

--stats adds processing totals, time span, and field inventory without touching your metrics map.

Step 2 – Summaries with Sums and Averages

Kelora ships several helpers for numeric metrics. The following example treats response sizes and latency as rolling aggregates.

kelora -j examples/simple_json.jsonl \
  -e 'if e.has("duration_ms") {
          track_sum("total_duration", e.duration_ms);
          track_count("duration_count");
          track_min("min_duration", e.duration_ms);
          track_max("max_duration", e.duration_ms)
      }' \
  --metrics
duration_count = 3
max_duration = 300000
min_duration = 45
total_duration = 305045

Available aggregation functions:

  • track_sum(key, value) - Accumulates totals (throughput, volume)
  • track_avg(key, value) - Calculates averages automatically (stores sum and count internally)
  • track_min(key, value) - Tracks minimum value seen
  • track_max(key, value) - Tracks maximum value seen
  • track_count(key) - Counts occurrences of key
  • track_inc(key, amount) - Increment counter by amount (not shown above)

Quick example of track_avg():

# Track average response time automatically
kelora -j api_logs.jsonl -m \
  --exec 'if e.has("duration_ms") { track_avg("avg_latency", e.duration_ms) }'

The track_avg() function internally stores both sum and count, then computes the average during output. This works correctly even in parallel mode.
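The sum-and-count representation is what makes the average mergeable across parallel workers. A hypothetical sketch of the idea (not Kelora's actual internals):

```python
# Each worker keeps a (sum, count) pair; partial states merge by
# addition, so the final average is exact no matter how the input
# lines were split across workers.
def merge(a, b):
    return (a[0] + b[0], a[1] + b[1])

worker1 = (45 + 120, 2)   # durations seen by worker 1
worker2 = (5000, 1)       # durations seen by worker 2

total_sum, total_count = merge(worker1, worker2)
avg = total_sum / total_count
print(avg)
```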

Step 3 – Histograms with track_bucket()

Build histograms by grouping values into buckets—perfect for latency distributions.

kelora -j examples/simple_json.jsonl \
  -e 'if e.has("duration_ms") {
          let bucket = (e.duration_ms / 1000) * 1000;
          track_bucket("latency_histogram", bucket)
      }' \
  --metrics
latency_histogram = #{"0": 1, "300000": 1, "5000": 1}

track_bucket(key, bucket_value) creates nested counters where each unique bucket value maintains its own count. Perfect for building histograms.

Common bucketing patterns:

// Floor to 100 ms buckets
track_bucket("latency", (e.duration_ms / 100) * 100)

// HTTP status code families
track_bucket("status_family", (e.status / 100) * 100)

// File size buckets (KB)
track_bucket("file_sizes", e.bytes / 1024)

// Hour of day
track_bucket("hour_of_day", e.timestamp.hour())
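The divide-then-multiply trick relies on integer division truncating before the multiply. A quick Python check of the arithmetic (Python's // matches Rhai's integer / for non-negative values):

```python
def bucket(value, width):
    """Floor value down to the nearest multiple of width."""
    return (value // width) * width

# Latencies floor into 100 ms buckets
assert bucket(45, 100) == 0
assert bucket(345, 100) == 300

# HTTP status codes collapse to their family
assert bucket(404, 100) == 400
assert bucket(503, 100) == 500

print("bucketing checks passed")
```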

Step 4 – Top N Rankings with track_top() / track_bottom()

When you need the "top 10 errors" or "5 slowest endpoints" without tracking everything, use track_top() and track_bottom(). These functions maintain bounded, sorted lists—much more memory-efficient than track_bucket() for high-cardinality data.
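A bounded top-N list is typically maintained with a fixed-size min-heap, which is the general idea behind such helpers. An illustrative Python sketch (not Kelora's code):

```python
import heapq

def top_n(pairs, n):
    """Keep the n (item, value) pairs with the highest values."""
    heap = []  # min-heap of (value, item), capped at n entries
    for item, value in pairs:
        if len(heap) < n:
            heapq.heappush(heap, (value, item))
        elif value > heap[0][0]:
            # Evict the current smallest; memory stays bounded at n.
            heapq.heapreplace(heap, (value, item))
    return sorted(heap, reverse=True)  # highest value first

calls = [("api", 45), ("database", 5000), ("scheduler", 300000), ("cache", 3)]
print(top_n(calls, 3))
```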

Frequency Rankings (Count Mode)

Track the most/least frequent items:

kelora -j examples/simple_json.jsonl \
  -e 'if e.level == "ERROR" { track_top("top_errors", e.message, 5) }' \
  --metrics

Each entry shows the item key and its occurrence count. Results are sorted by count (descending), then alphabetically.

Value-Based Rankings (Weighted Mode)

Track items by custom values like latency, bytes, or CPU time:

kelora -j examples/simple_json.jsonl \
  -e 'if e.has("duration_ms") {
          track_top("slowest", e.service, 3, e.duration_ms);
          track_bottom("fastest", e.service, 3, e.duration_ms)
      }' \
  --metrics
fastest      (3 items):
  #1  api                            45.00
  #2  database                       5000.00
  #3  scheduler                      300000.00
slowest      (3 items):
  #1  scheduler                      300000.00
  #2  database                       5000.00
  #3  api                            45.00

In weighted mode:

  • track_top() keeps the N items with the highest values (slowest, largest, etc.)
  • track_bottom() keeps the N items with the lowest values (fastest, smallest, etc.)
  • For each item, the maximum (top) or minimum (bottom) value seen is retained

When to use top/bottom vs bucket:

Scenario                                 | Use This       | Why
"Top 10 error messages"                  | track_top()    | Bounded memory, auto-sorted
"Error count by type" (low cardinality)  | track_bucket() | Tracks all types
"Latency distribution 0-1000ms"          | track_bucket() | Need full histogram
"10 slowest API calls"                   | track_top()    | Only care about extremes
Millions of unique IPs                   | track_top()    | Bucket would exhaust memory

Step 5 – Unique Values and Cardinality

track_unique() stores distinct values for a key—handy for unique user counts or cardinality analysis.

kelora -j examples/simple_json.jsonl \
  -e 'track_unique("services", e.service)' \
  -e 'if e.level == "ERROR" { track_unique("error_messages", e.message) }' \
  --metrics
error_messages (3 unique):
  Query timeout
  Account locked
  Service unavailable
services     (9 unique):
  api
  database
  cache
  auth
  scheduler
  disk
  monitoring
  admin
  health

Use metrics["services"].len() later to compute the number of distinct members.
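Conceptually, track_unique() maintains one set per metric key; duplicates are absorbed automatically and cardinality is just the set's size. A Python sketch for illustration:

```python
# One set per metric key; adding an existing value is a no-op.
uniques = {}

def track_unique(key, value):
    uniques.setdefault(key, set()).add(value)

for svc in ["api", "database", "api", "cache", "auth", "api"]:
    track_unique("services", svc)

print(len(uniques["services"]))  # number of distinct services
```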

Viewing Metrics in Different Formats

By default, -m shows all tracked items in table format. For large collections, you can use --metrics=short to see just the first 5 items, or --metrics=json for structured JSON output:

kelora -j examples/simple_json.jsonl \
  -e 'track_unique("services", e.service)' \
  -m
kelora -j examples/simple_json.jsonl \
  -e 'track_unique("services", e.service)' \
  --metrics=short
kelora -j examples/simple_json.jsonl \
  -e 'track_unique("services", e.service)' \
  --metrics=json

The -m flag defaults to full table format showing all items. Use --metrics=short for abbreviated output (first 5 items with a hint), or --metrics=json for structured JSON to stdout. You can also combine -m with --metrics-file to get both table output and a JSON file.

Step 6 – Sliding Windows and Percentiles

Enable the window buffer to examine recent events. The example below tracks a five-event moving average and P95 latency for CPU metrics.

kelora -j examples/window_metrics.jsonl \
  --filter 'e.metric == "cpu"' \
  --window 5 \
  -e $'let values = window.pluck_as_nums("value");
if values.len() > 0 {
    let sum = values.reduce(|s, x| s + x, 0.0);
    let avg = sum / values.len();
    e.avg_last_5 = round(avg * 100.0) / 100.0;
    if values.len() >= 3 {
        e.p95_last_5 = round(values.percentile(95.0) * 100.0) / 100.0;
    }
}' \
  -n 5
timestamp='2024-01-15T10:00:00Z' metric='cpu' value=45.2 host='server1' avg_last_5=45.2
timestamp='2024-01-15T10:00:01Z' metric='cpu' value=46.8 host='server1' avg_last_5=46.0
timestamp='2024-01-15T10:00:02Z' metric='cpu' value=44.5 host='server1' avg_last_5=45.5
  p95_last_5=46.64
timestamp='2024-01-15T10:00:03Z' metric='cpu' value=48.1 host='server1' avg_last_5=46.15
  p95_last_5=47.91
timestamp='2024-01-15T10:00:04Z' metric='cpu' value=47.3 host='server1' avg_last_5=46.38
  p95_last_5=47.94

The special window variable becomes available once you pass --window. Use window.pluck_as_nums("FIELD") for numeric arrays and window.pluck("FIELD") for raw values.
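The same five-event moving average can be modeled with a fixed-length deque. A Python sketch of the mechanism (illustrative; it uses a nearest-rank percentile, so P95 values may differ from Kelora's interpolated ones):

```python
from collections import deque

window = deque(maxlen=5)  # deque drops the oldest sample automatically

def percentile(values, p):
    """Nearest-rank percentile over a small window."""
    s = sorted(values)
    k = min(len(s) - 1, round(p / 100 * (len(s) - 1)))
    return s[k]

for value in [45.2, 46.8, 44.5, 48.1, 47.3]:
    window.append(value)
    avg = round(sum(window) / len(window), 2)
    line = f"avg_last_5={avg}"
    if len(window) >= 3:
        line += f" p95_last_5={percentile(window, 95)}"
    print(line)
```

The running averages match the tutorial output above (45.2, 46.0, 45.5, 46.15, 46.38).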

Step 7 – Custom Reports with --end

Sometimes you need a formatted report instead of raw maps. Save a short Rhai script, include it with -I, and call the helper from --end so the same report layout can be reused across runs.

cat <<'RHAI' > metrics_summary.rhai
fn summarize_metrics() {
    let keys = metrics.keys();
    keys.sort();
    for key in keys {
        print(key + ": " + metrics[key].to_string());
    }
}
RHAI

kelora -j examples/simple_json.jsonl \
  -e 'track_count(e.service)' \
  -e 'track_count(e.level)' \
  -m \
  -I metrics_summary.rhai \
  --end 'summarize_metrics()'

rm metrics_summary.rhai
kelora: Pipeline error: End stage error: end error
  At 9:1 in end expression
  9 | summarize_metrics()
    | ^
  Rhai: Variable not found: metrics (line 2, position 16)
in call to function 'summarize_metrics' (line 9, position 1)
  Call stack (most recent first):
    • summarize_metrics @ line 9, position 1

As the error output shows, Rhai functions are pure: summarize_metrics() cannot read outer-scope variables such as metrics. Pass the map in explicitly instead — declare the helper as fn summarize_metrics(m) and call it with --end 'summarize_metrics(metrics)' — or inline the loop directly in the --end expression. Either way, --end gives you a clean text summary that you can redirect or feed into alerts, alongside the automatically printed --metrics block.

Step 8 – Persist Metrics to Disk

Use --metrics-file to serialize the metrics map as JSON for other tools.

kelora -j examples/simple_json.jsonl \
  -e 'track_count(e.service)' \
  -m \
  --metrics-file metrics.json

cat metrics.json
rm metrics.json
admin        = 2
api          = 7
auth         = 2
cache        = 1
database     = 2
disk         = 1
health       = 1
monitoring   = 1
scheduler    = 3
{
  "api": 7,
  "disk": 1,
  "auth": 2,
  "health": 1,
  "monitoring": 1,
  "database": 2,
  "cache": 1,
  "admin": 2,
  "scheduler": 3
}

The JSON structure mirrors the in-memory map, so you can load it with jq, a dashboard agent, or any scripting language.
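For example, Python's standard json module can consume the export directly. To keep this demo self-contained, the counts from the run above are pasted in as a literal instead of read from metrics.json:

```python
import json

# The exported file mirrors the in-memory metrics map.
metrics = json.loads("""
{
  "api": 7, "disk": 1, "auth": 2, "health": 1, "monitoring": 1,
  "database": 2, "cache": 1, "admin": 2, "scheduler": 3
}
""")

total_events = sum(metrics.values())
busiest = max(metrics, key=metrics.get)
print(f"{total_events} events, busiest service: {busiest}")
```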

Step 9 – Streaming Scoreboards

Kelora keeps metrics up to date even when tailing files or processing archives. This command watches a gzipped access log and surfaces top status classes.

kelora -f combined examples/web_access_large.log.gz \
  -e 'let klass = ((e.status / 100) * 100).to_string(); track_count(klass)' \
  -m \
  -n 0
400          = 1

Passing -n 0 (short for --take 0) — or omitting the flag entirely — processes the whole file. When you run Kelora against a live stream (tail -f | kelora ...), the final metrics snapshot prints once you terminate the process.

Need full histograms instead of counts? Swap in track_bucket():

kelora -f combined examples/web_access_large.log.gz \
  -m \
  -e 'track_bucket("status_family", (e.status / 100) * 100)' \
  --end '
    let buckets = metrics.status_family.keys();
    buckets.sort();
    for bucket in buckets {
        let counts = metrics.status_family[bucket];
        print(bucket.to_string() + ": " + counts.to_string());
    }
  ' \
  -n 0
status_family = #{"400": 1}

track_bucket(key, bucket_value) keeps nested counters so you can emit a human-readable histogram once processing finishes.

Troubleshooting

  • No metrics printed: Ensure you pass -m (or --metrics) or consume metrics within an --end script. Tracking functions alone do not emit output.

  • Truncated arrays: If -m shows only the first 5 items with a hint, use --metrics=full for full table output, --metrics=json for JSON format, or --metrics-file to write JSON to disk.

  • Huge maps: High-cardinality sets from track_unique() are the usual culprit. Prefer track_top()/track_bottom() when the key space is unbounded, and remove stale exports (rm metrics.json) before re-running.

  • Operation metadata: Kelora keeps operator hints (the __op_* keys) in the internal tracker now, so user metric maps print cleanly. If you need those hints for custom aggregation, read them from the internal metrics map.

  • Sliding window functions return empty arrays: window.pluck_as_nums("field") only works after you enable --window and the requested field exists in the buffered events.

Quick Reference: All Tracking Functions

Function                  | Purpose                | Example
track_count(key)          | Count events by key    | track_count(e.service)
track_inc(key, amount)    | Increment by amount    | track_inc("total_bytes", e.size)
track_sum(key, value)     | Sum numeric values     | track_sum("bandwidth", e.bytes)
track_avg(key, value)     | Average numeric values | track_avg("avg_latency", e.duration)
track_min(key, value)     | Minimum value          | track_min("fastest", e.duration)
track_max(key, value)     | Maximum value          | track_max("slowest", e.duration)
track_bucket(key, bucket) | Histogram buckets      | track_bucket("status", (e.status/100)*100)
track_unique(key, value)  | Unique values          | track_unique("users", e.user_id)

Note: track_avg() automatically computes averages by storing sum and count internally. For manual calculation, you can still use sum / count in the --end stage.

Summary

You've learned:

  • ✅ Track counts with track_count() and increment with track_inc()
  • ✅ Aggregate numbers with track_sum(), track_avg(), track_min(), track_max()
  • ✅ Build histograms with track_bucket()
  • ✅ Rank items with track_top() and track_bottom()
  • ✅ Count unique values with track_unique()
  • ✅ View metrics with -m, --metrics=full, and --metrics=json
  • ✅ Persist metrics with --metrics-file
  • ✅ Generate custom reports in --end stage
  • ✅ Use sliding windows for percentile analysis

Next Steps

Now that you can track and aggregate data, continue to:

Related guides: