Quickstart¶
Get started with Kelora in minutes. This guide walks through real examples, from parsing raw logs to advanced transformations.
Installation¶
Download the latest release from GitHub Releases or install via Cargo:
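The Cargo route might look like the following. This is a sketch: it assumes the crate is published on crates.io under the name kelora, which this guide does not confirm; check the GitHub Releases page for the authoritative instructions.

```shell
# Install via Cargo (assumes the crate name is "kelora")
cargo install kelora
```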
Get the Examples¶
# Without git
curl -L https://github.com/dloss/kelora/archive/refs/heads/main.zip -o kelora.zip && \
unzip kelora.zip && \
cd kelora-main
Parse Unstructured Logs¶
Turn raw web server logs into structured, queryable data:
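A command along these lines, using the flags explained below and the example file shipped in the repository, produces output like the sample that follows (a sketch; your exact field values will differ):

```shell
# Parse Apache/NGINX combined logs, show four fields, limit to 5 events
kelora -f combined examples/simple_combined.log \
  -k ip,status,method,path -n 5
```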
ip='52.127.35.227' status=403 method='HEAD' path='/harness/methodologies/unleash/methodologies'
ip='166.86.165.21' status=201 method='PUT' path='/channels/out-of-the-box/implement'
ip='24.83.53.204' status=204 method='PATCH' path='/markets'
ip='37.144.168.216' status=201 method='PATCH' path='/evolve/orchestrate'
ip='24.44.139.136' status=400 method='GET' path='/rich'
The -f combined flag parses Apache/NGINX access logs into named fields. The -k flag selects which fields to display, and -n limits the number of output events. Kelora automatically extracts ip, timestamp, method, path, status, user_agent, and more from each line.
Filter and Transform¶
Filter by HTTP status codes and add computed fields:
kelora -f combined examples/simple_combined.log \
--filter 'e.status >= 400' \
-e 'e.error_type = if e.status >= 500 { "server" } else { "client" }' \
-k ip,status,error_type,path -n 5
ip='52.127.35.227' status=403 error_type='client' path='/harness/methodologies/unleash/methodologies'
ip='24.44.139.136' status=400 error_type='client' path='/rich'
ip='67.19.236.47' status=404 error_type='client' path='/next-generation/drive/turn-key/metrics'
ip='94.224.49.21' status=406 error_type='client' path='/vertical'
ip='152.252.182.35' status=502 error_type='server' path='/experiences/action-items/best-of-breed'
The --filter expression keeps only error responses (4xx and 5xx). The -e flag adds a computed error_type field based on the status code.
Track Metrics¶
Count requests by status code and track response sizes:
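A sketch of such a command, built only from the functions and flags described below. The metric names ("status_...", "bytes") are illustrative, and it assumes the combined parser exposes a numeric response-size field named size; adjust to the field names your parser actually emits:

```shell
# Count requests per status code and sum response sizes
# (assumes the combined parser provides e.size)
kelora -f combined examples/simple_combined.log -F none -m \
  --exec 'track_count("status_" + e.status); track_sum("bytes", e.size)'
```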
Use track_count(), track_sum(), track_min(), and track_max() to collect metrics. The -F none flag suppresses per-event output, and -m prints the collected metrics at the end.
Convert Between Formats¶
Kelora converts between all supported formats. Some examples:
Convert syslog to JSON:
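A command of roughly this shape produces the JSON lines below. This is a sketch: the sample filename is hypothetical, and it assumes syslog is an accepted -f value (the output fields pri, facility, host, prog, and pid suggest a syslog parser); -J is the JSON-output shortcut mentioned later in this guide:

```shell
# Parse syslog input and emit JSON lines (filename is illustrative)
kelora -f syslog examples/simple_syslog.log -J -n 3
```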
{"timestamp":"Oct 04 10:27:22","severity":0,"msg":"user=alice action=login detail=\"Authenticated via SSO\" message=\"User login accepted\"","pri":96,"facility":12,"host":"bernier5641","prog":"dolore","pid":7276}
{"timestamp":"Oct 04 10:27:22","severity":3,"msg":"user=service-bot action=restart detail=\"Process restarted automatically\" message=\"Service restarted\"","pri":131,"facility":16,"host":"gusikowski6748","prog":"eaque","pid":4669}
{"timestamp":"Oct 04 10:27:22","severity":4,"msg":"user=pager-duty action=acknowledge detail=\"Acknowledged incident #4242\" message=\"Change request processed\"","pri":132,"facility":16,"host":"gerlach2033","prog":"non","pid":4398}
Convert web logs to CSV:
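A sketch of the CSV conversion, assuming csv is a valid -F output-format value (the guide states -F selects the output format but does not list the accepted values):

```shell
# Parse combined logs and emit selected fields as CSV (assumes -F csv)
kelora -f combined examples/simple_combined.log -F csv -k ip,status,path -n 3
```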
The -f flag specifies input format, -F specifies output format (we could have used -J as a shortcut for JSON). Gzipped files are automatically decompressed.
Common Patterns¶
# Stream processing (tail -f, kubectl logs, etc.)
kubectl logs -f deployment/api | kelora -f json -l error
# Multiple files - track which files have errors
kelora -f json logs/*.log --metrics \
--exec 'if e.level == "ERROR" { track_count(meta.filename) }'
# Time-based filtering
kelora -f combined access.log --since "1 hour ago" --until "10 minutes ago"
# Extract prefixes (Docker Compose, systemd, etc.)
docker compose logs | kelora --extract-prefix container -f json
# Auto-detect format and output brief values only
kelora -f auto mixed.log -k timestamp,level,message -b
# Custom timestamp formats
kelora -f line app.log --ts-format "%d/%b/%Y:%H:%M:%S" --ts-field timestamp
Get Help¶
kelora --help # Complete CLI reference
kelora --help-examples # More usage patterns
kelora --help-rhai # Rhai scripting guide
kelora --help-functions # All built-in Rhai functions
kelora --help-time # Timestamp format reference
Next Steps¶
- Start with events — Practice accessing and mutating e.field values on JSON or logfmt samples. Then branch into How-To: Find Errors in Logs.
- Explore parsers — Try -f json, -f combined, and -f 'cols:...', then dig into the Input Formats reference.
- Layer scripts — Combine --filter, --exec, and --keys for enrichment. Deepen your skills with the Scripting Transforms tutorial.
- Add metrics — Introduce track_count, track_sum, and --metrics, then read the Metrics & Tracking tutorial.
- Tune pipelines — Experiment with multi-stage workflows, --begin/--end, and configs; the Pipeline Model concept and Configuration System guide explain the moving pieces.
- Control output — Swap -F formats, use -k/-K, and convert timestamps. Reference the CLI options when you need exact flag behaviour.