Basics: Input, Display & Filtering¶
Master the essential commands for reading logs, controlling display output, and filtering by log level. This tutorial covers the foundation you'll use in every Kelora workflow.
What You'll Learn¶
- Specify input formats with -f and -j
- Control what fields are displayed with -b, -c, -k, and -K
- Filter events by log level with -l and -L
- Export data in different formats with -F and -J
- Combine options for common workflows
About This Tutorial¶
In the Quickstart, you ran three commands to see Kelora in action. Now we'll teach you what each flag means, how they combine, and when to use them. By the end, you'll understand the building blocks for any Kelora workflow.
Prerequisites¶
- Kelora installed and in your PATH
- Basic command-line familiarity
Sample Data¶
Commands below use examples/basics.jsonl — a small JSON-formatted log file with 6 events designed for this tutorial:
{"timestamp":"2024-01-15T10:00:00Z","level":"INFO","service":"api","message":"Application started","version":"1.2.3"}
{"timestamp":"2024-01-15T10:00:10Z","level":"DEBUG","service":"database","message":"Connection pool initialized","max_connections":50}
{"timestamp":"2024-01-15T10:01:00Z","level":"WARN","service":"api","message":"High memory usage detected","memory_percent":85}
{"timestamp":"2024-01-15T10:01:30Z","level":"ERROR","service":"database","message":"Query timeout","query":"SELECT * FROM users","duration_ms":5000}
{"timestamp":"2024-01-15T10:02:00Z","level":"INFO","service":"api","message":"Request received","method":"GET","path":"/api/users"}
{"timestamp":"2024-01-15T10:03:00Z","level":"ERROR","service":"auth","message":"Account locked","username":"admin","attempts":5}
If you cloned the project, run commands from the repository root.
Part 1: Input Formats (-f, -j)¶
By default, Kelora auto-detects your log format by examining the first line. Just point it at your logs:

kelora examples/basics.jsonl
timestamp='2024-01-15T10:00:00Z' level='INFO' message='Application started' service='api'
version='1.2.3'
timestamp='2024-01-15T10:00:10Z' level='DEBUG' message='Connection pool initialized'
service='database' max_connections=50
timestamp='2024-01-15T10:01:00Z' level='WARN' message='High memory usage detected' service='api'
memory_percent=85
timestamp='2024-01-15T10:01:30Z' level='ERROR' message='Query timeout' service='database'
query='SELECT * FROM users' duration_ms=5000
timestamp='2024-01-15T10:02:00Z' level='INFO' message='Request received' service='api' method='GET'
path='/api/users'
timestamp='2024-01-15T10:03:00Z' level='ERROR' message='Account locked' service='auth'
username='admin' attempts=5
Kelora detects this is JSON and parses the fields automatically.
For scripts and reproducibility, specify the format explicitly:
kelora -f json examples/basics.jsonl # Explicit format
kelora -j examples/basics.jsonl # Shortcut for -f json
This prevents surprises if auto-detection logic changes in future versions.
To override auto-detection and treat structured logs as plain text, force the line format:

kelora -f line examples/basics.jsonl
line='{"timestamp":"2024-01-15T10:00:00Z","level":"INFO","service":"api","message":"Application started","version":"1.2.3"}'
line='{"timestamp":"2024-01-15T10:00:10Z","level":"DEBUG","service":"database","message":"Connection pool initialized","max_connections":50}'
line='{"timestamp":"2024-01-15T10:01:00Z","level":"WARN","service":"api","message":"High memory usage detected","memory_percent":85}'
line='{"timestamp":"2024-01-15T10:01:30Z","level":"ERROR","service":"database","message":"Query timeout","query":"SELECT * FROM users","duration_ms":5000}'
line='{"timestamp":"2024-01-15T10:02:00Z","level":"INFO","service":"api","message":"Request received","method":"GET","path":"/api/users"}'
line='{"timestamp":"2024-01-15T10:03:00Z","level":"ERROR","service":"auth","message":"Account locked","username":"admin","attempts":5}'
Supported Formats¶
Kelora auto-detects these formats in this order:
json # JSON objects (or use -j shortcut)
cef # Common Event Format (CEF:...)
syslog # Syslog RFC3164/RFC5424
combined # Apache/Nginx access logs
logfmt # key=value pairs
csv # Comma-separated values (with header)
tsv # Tab-separated values (with header)
line # Plain text (fallback)
To explicitly specify a format, use -f <format>. For example: -f json, -f logfmt, -f csv.
For details on each format (including examples and field mappings), see the Format Reference.
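Kelora's actual detection code isn't shown in this tutorial, but the idea behind first-line auto-detection can be sketched in a few lines of Python. This toy detector (the name detect_format is invented for this sketch) only handles three of the formats above and is far less robust than the real thing:

```python
# Illustrative sketch of first-line format detection, in the spirit of
# Kelora's auto-detect order (json before logfmt before line). This is NOT
# Kelora's implementation: it skips cef, syslog, combined, csv, and tsv.
import json
import re

def detect_format(first_line: str) -> str:
    """Guess a log format from the first line of input."""
    stripped = first_line.strip()
    # A line starting with '{' that parses cleanly is treated as JSON.
    if stripped.startswith("{"):
        try:
            json.loads(stripped)
            return "json"
        except json.JSONDecodeError:
            pass
    # One or more key=value pairs looks like logfmt.
    if re.match(r"^\w+=\S+( \w+=\S+)*", stripped):
        return "logfmt"
    # Anything else falls back to plain text.
    return "line"
```

For example, `detect_format('{"level":"INFO"}')` yields "json", while an unstructured sentence falls through to "line" - the same fallback behavior the format list above describes.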
Part 2: Understanding the Default Display¶
Let's examine what Kelora shows by default:

kelora -j examples/basics.jsonl
timestamp='2024-01-15T10:00:00Z' level='INFO' message='Application started' service='api'
version='1.2.3'
timestamp='2024-01-15T10:00:10Z' level='DEBUG' message='Connection pool initialized'
service='database' max_connections=50
timestamp='2024-01-15T10:01:00Z' level='WARN' message='High memory usage detected' service='api'
memory_percent=85
timestamp='2024-01-15T10:01:30Z' level='ERROR' message='Query timeout' service='database'
query='SELECT * FROM users' duration_ms=5000
timestamp='2024-01-15T10:02:00Z' level='INFO' message='Request received' service='api' method='GET'
path='/api/users'
timestamp='2024-01-15T10:03:00Z' level='ERROR' message='Account locked' service='auth'
username='admin' attempts=5
The default output format shows:
- Field names and values in key='value' format
- Automatic wrapping - long events wrap with indentation
- Colors (when your terminal supports them)
- Smart ordering - timestamp, level, message first, then the rest alphabetically
Key observations:
- Strings are quoted ('Application started')
- Numbers are not quoted (max_connections=50)
- Intelligent wrapping - when output is too wide for your terminal, Kelora wraps between fields (never in the middle of a field) and indents continuation lines for readability
- Each event is separated by a blank line
- Field names are highlighted in color for better readability
Part 3: Understanding Events¶
Before we dive into display options, let's clarify what an event is and how you'll work with it in filters and scripts.
What is an Event?¶
After Kelora parses a log line, it becomes an event — a structured object (like a map or dictionary) containing fields you can access and manipulate.
Looking at the output above, each block like this is one event:
timestamp='2024-01-15T10:00:00Z' level='INFO' message='Application started'
service='api' version='1.2.3'
The Event Object: e¶
In filter expressions and scripts, you access the current event using the variable e. Each field becomes a property:
e.timestamp // Access the timestamp field
e.level // Access the level field
e.service // Access the service field
e.message // Access the message field
Example: To filter for ERROR events, you write --filter 'e.level == "ERROR"' which means "keep events where the level field equals ERROR."
Example: To check if status code is 500 or higher, you write --filter 'e.status >= 500' which means "keep events where the status field is 500 or more."
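To make the mental model concrete, here is the same idea expressed outside Kelora as plain Python: a parsed log line becomes a dictionary-like event, and a filter is just a predicate over its fields. (Kelora's filter expressions are written in Rhai, not Python; the function name keep is invented for this sketch.)

```python
# Conceptual sketch: Kelora's event object `e` modeled as a Python dict.
# One parsed log line == one event with named fields.
import json

line = ('{"timestamp":"2024-01-15T10:01:30Z","level":"ERROR",'
        '"service":"database","message":"Query timeout","duration_ms":5000}')
e = json.loads(line)

def keep(e: dict) -> bool:
    """Rough equivalent of --filter 'e.level == "ERROR"'."""
    return e.get("level") == "ERROR"
```

Accessing `e["level"]` here plays the role of `e.level` in a Kelora filter expression, and `keep(e)` decides whether the event survives the filter stage.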
Why This Matters¶
Understanding events is crucial because:
- Filtering uses event fields: --filter 'e.service == "database"'
- Scripts read and modify event fields: --exec 'e.user_type = "admin"'
- Display options control which event fields you see: --keys timestamp,level,message
You'll encounter e throughout the documentation. Remember: e = the current event, and e.field_name = accessing a field in that event.
Want to learn more?
For complete details on event structure, nested fields, and type handling, see Events and Fields.
Part 4: Display Modifiers (-b, -c, -k, -K)¶
Brief Mode (-b) - Values Only¶
Omit field names and show only values for compact output:

kelora -j examples/basics.jsonl -b
2024-01-15T10:00:00Z INFO Application started api 1.2.3
2024-01-15T10:00:10Z DEBUG Connection pool initialized database 50
2024-01-15T10:01:00Z WARN High memory usage detected api 85
2024-01-15T10:01:30Z ERROR Query timeout database SELECT * FROM users 5000
2024-01-15T10:02:00Z INFO Request received api GET /api/users
2024-01-15T10:03:00Z ERROR Account locked auth admin 5
Use -b when: You want compact, grep-friendly output.
Core Fields (-c) - Essentials Only¶
Show only timestamp, level, and message:

kelora -j examples/basics.jsonl -c
timestamp='2024-01-15T10:00:00Z' level='INFO' message='Application started'
timestamp='2024-01-15T10:00:10Z' level='DEBUG' message='Connection pool initialized'
timestamp='2024-01-15T10:01:00Z' level='WARN' message='High memory usage detected'
timestamp='2024-01-15T10:01:30Z' level='ERROR' message='Query timeout'
timestamp='2024-01-15T10:02:00Z' level='INFO' message='Request received'
timestamp='2024-01-15T10:03:00Z' level='ERROR' message='Account locked'
Use -c when: You want to focus on the essentials, hiding extra metadata.
Select Fields (-k) - Choose What to Show¶
Choose exactly which fields to show (and in what order):

kelora -j examples/basics.jsonl -k level,service,message
level='INFO' service='api' message='Application started'
level='DEBUG' service='database' message='Connection pool initialized'
level='WARN' service='api' message='High memory usage detected'
level='ERROR' service='database' message='Query timeout'
level='INFO' service='api' message='Request received'
level='ERROR' service='auth' message='Account locked'
Pro tip: Fields appear in the order you specify!
Exclude Fields (-K) - Hide Sensitive Data¶
Remove specific fields (like passwords, tokens, or verbose metadata):

kelora -j examples/basics.jsonl -K service,version
timestamp='2024-01-15T10:00:00Z' level='INFO' message='Application started'
timestamp='2024-01-15T10:00:10Z' level='DEBUG' message='Connection pool initialized'
max_connections=50
timestamp='2024-01-15T10:01:00Z' level='WARN' message='High memory usage detected' memory_percent=85
timestamp='2024-01-15T10:01:30Z' level='ERROR' message='Query timeout' query='SELECT * FROM users'
duration_ms=5000
timestamp='2024-01-15T10:02:00Z' level='INFO' message='Request received' method='GET'
path='/api/users'
timestamp='2024-01-15T10:03:00Z' level='ERROR' message='Account locked' username='admin' attempts=5
Use -K when: Hiding sensitive data (passwords, API keys) or reducing noise.
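Under the hood, -k and -K are simple projections over each event's fields. A minimal Python sketch of the semantics (select_keys and exclude_keys are names invented here, not Kelora functions):

```python
# Sketch of -k (select) and -K (exclude) applied to one event-as-dict.
# -k preserves the order you list the keys in; -K drops the named keys
# and keeps everything else in its original order.
def select_keys(event: dict, keys: list[str]) -> dict:
    """Like -k: keep only the listed fields, in the listed order."""
    return {k: event[k] for k in keys if k in event}

def exclude_keys(event: dict, keys: list[str]) -> dict:
    """Like -K: drop the listed fields, keep the rest."""
    return {k: v for k, v in event.items() if k not in keys}
```

Note that selecting is order-defining while excluding is order-preserving - which is exactly why the "Pro tip" above matters for -k.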
Part 5: Level Filtering (-l, -L)¶
Include Levels (-l) - Show Only Specific Log Levels¶
Filter to show only errors and warnings:

kelora -j examples/basics.jsonl -l warn,error
timestamp='2024-01-15T10:01:00Z' level='WARN' message='High memory usage detected' service='api'
memory_percent=85
timestamp='2024-01-15T10:01:30Z' level='ERROR' message='Query timeout' service='database'
query='SELECT * FROM users' duration_ms=5000
timestamp='2024-01-15T10:03:00Z' level='ERROR' message='Account locked' service='auth'
username='admin' attempts=5
Common patterns:
kelora -j app.log -l error # Errors only
kelora -j app.log -l error,warn,critical # Problems only (case-insensitive)
kelora -j app.log -l info # Application flow (skip debug noise)
Exclude Levels (-L) - Hide Debug Noise¶
Remove verbose log levels:

kelora -j examples/basics.jsonl -L debug,info
timestamp='2024-01-15T10:01:00Z' level='WARN' message='High memory usage detected' service='api'
memory_percent=85
timestamp='2024-01-15T10:01:30Z' level='ERROR' message='Query timeout' service='database'
query='SELECT * FROM users' duration_ms=5000
timestamp='2024-01-15T10:03:00Z' level='ERROR' message='Account locked' service='auth'
username='admin' attempts=5
Use -L when: You want to exclude chatty debug/trace output.
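The -l / -L semantics boil down to an include set and an exclude set checked case-insensitively (the tutorial notes -l is case-insensitive; this sketch assumes the same for -L). The function passes_level is a name invented for this illustration:

```python
# Sketch of -l (include) / -L (exclude) level filtering over dict events.
# include=None means "no include filter was given"; same for exclude.
def passes_level(event: dict, include=None, exclude=None) -> bool:
    level = str(event.get("level", "")).lower()
    if include is not None and level not in {l.lower() for l in include}:
        return False  # -l given and level not in the allowed set
    if exclude is not None and level in {l.lower() for l in exclude}:
        return False  # -L given and level is in the blocked set
    return True
```

With include=["error", "warn"], a DEBUG event fails the check; with exclude=["debug"], everything except DEBUG passes - mirroring the two outputs above.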
Part 6: Output Formats (-F, -J)¶
The default key='value' format is great for reading, but sometimes you need machine-readable output.
JSON Output (-F json or -J)¶
kelora -j examples/basics.jsonl -J
{"timestamp":"2024-01-15T10:00:00Z","level":"INFO","message":"Application started","service":"api","version":"1.2.3"}
{"timestamp":"2024-01-15T10:00:10Z","level":"DEBUG","message":"Connection pool initialized","service":"database","max_connections":50}
{"timestamp":"2024-01-15T10:01:00Z","level":"WARN","message":"High memory usage detected","service":"api","memory_percent":85}
{"timestamp":"2024-01-15T10:01:30Z","level":"ERROR","message":"Query timeout","service":"database","query":"SELECT * FROM users","duration_ms":5000}
{"timestamp":"2024-01-15T10:02:00Z","level":"INFO","message":"Request received","service":"api","method":"GET","path":"/api/users"}
{"timestamp":"2024-01-15T10:03:00Z","level":"ERROR","message":"Account locked","service":"auth","username":"admin","attempts":5}
Use JSON when: Piping to jq, saving to file, or integrating with other tools.
CSV Output (-F csv)¶
Perfect for spreadsheet export:

kelora -j examples/basics.jsonl -k timestamp,level,service,message -F csv
timestamp,level,service,message
2024-01-15T10:00:00Z,INFO,api,Application started
2024-01-15T10:00:10Z,DEBUG,database,Connection pool initialized
2024-01-15T10:01:00Z,WARN,api,High memory usage detected
2024-01-15T10:01:30Z,ERROR,database,Query timeout
2024-01-15T10:02:00Z,INFO,api,Request received
2024-01-15T10:03:00Z,ERROR,auth,Account locked
Use CSV when: Exporting to Excel, Google Sheets, or data analysis tools.
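For comparison, the JSONL-to-CSV step can be reproduced with Python's standard library alone; fixing the field names up front mirrors what -k does before -F csv. (The sample lines below are a subset of examples/basics.jsonl.)

```python
# Sketch of JSONL -> CSV with the stdlib, mirroring `kelora ... -F csv`.
# extrasaction="ignore" drops fields not in the chosen column list,
# the way -k narrows the event before export.
import csv
import io
import json

lines = [
    '{"timestamp":"2024-01-15T10:00:00Z","level":"INFO","service":"api","message":"Application started"}',
    '{"timestamp":"2024-01-15T10:01:30Z","level":"ERROR","service":"database","message":"Query timeout"}',
]

buf = io.StringIO()
writer = csv.DictWriter(
    buf,
    fieldnames=["timestamp", "level", "service", "message"],
    extrasaction="ignore",
)
writer.writeheader()
for line in lines:
    writer.writerow(json.loads(line))
csv_text = buf.getvalue()
```

The result starts with the same header row shown above, one data row per event.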
Logfmt Output (-F logfmt)¶
kelora -j examples/basics.jsonl -F logfmt
timestamp=2024-01-15T10:00:00Z level=INFO message="Application started" service=api version=1.2.3
timestamp=2024-01-15T10:00:10Z level=DEBUG message="Connection pool initialized" service=database max_connections=50
timestamp=2024-01-15T10:01:00Z level=WARN message="High memory usage detected" service=api memory_percent=85
timestamp=2024-01-15T10:01:30Z level=ERROR message="Query timeout" service=database query="SELECT * FROM users" duration_ms=5000
timestamp=2024-01-15T10:02:00Z level=INFO message="Request received" service=api method=GET path=/api/users
timestamp=2024-01-15T10:03:00Z level=ERROR message="Account locked" service=auth username=admin attempts=5
Use logfmt when: You want parseable output that's also human-readable.
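The quoting rule visible in the output above - bare key=value, with quotes only around values containing spaces - is easy to sketch. This toy encoder (to_logfmt is an invented name) ignores edge cases like embedded quotes that a real logfmt writer must escape:

```python
# Minimal logfmt encoder matching the sample output's quoting style:
# values with spaces get double quotes, everything else stays bare.
# Real logfmt encoders also escape quotes, newlines, and '=' in values.
def to_logfmt(event: dict) -> str:
    parts = []
    for key, value in event.items():
        text = str(value)
        if " " in text:
            text = f'"{text}"'
        parts.append(f"{key}={text}")
    return " ".join(parts)
```

So {"level": "INFO", "message": "Application started"} encodes to level=INFO message="Application started", just like the second field of each line above.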
Inspect Output (-F inspect) - Debug with Types¶
kelora -j examples/basics.jsonl -F inspect
---
timestamp | string | "2024-01-15T10:00:00Z"
level | string | "INFO"
message | string | "Application started"
service | string | "api"
version | string | "1.2.3"
---
timestamp | string | "2024-01-15T10:00:10Z"
level | string | "DEBUG"
message | string | "Connection pool initialized"
service | string | "database"
max_connections | int | 50
---
timestamp | string | "2024-01-15T10:01:00Z"
level | string | "WARN"
message | string | "High memory usage detected"
service | string | "api"
memory_percent | int | 85
---
timestamp | string | "2024-01-15T10:01:30Z"
level | string | "ERROR"
message | string | "Query timeout"
service | string | "database"
query | string | "SELECT * FROM users"
duration_ms | int | 5000
---
timestamp | string | "2024-01-15T10:02:00Z"
level | string | "INFO"
message | string | "Request received"
service | string | "api"
method | string | "GET"
path | string | "/api/users"
---
timestamp | string | "2024-01-15T10:03:00Z"
level | string | "ERROR"
message | string | "Account locked"
service | string | "auth"
username | string | "admin"
attempts | int | 5
Use inspect when: Debugging type mismatches or understanding field types.
No Output - Stats Only¶
kelora -j examples/basics.jsonl --stats
Detected format: json
Lines processed: 6 total, 0 filtered (0.0%), 0 errors (0.0%)
Events created: 6 total, 6 output, 0 filtered (0.0%)
Throughput: 4668 lines/s in 1ms
Timestamp: timestamp (auto-detected) - 6/6 parsed (100.0%).
Time span: 2024-01-15T10:00:00+00:00 to 2024-01-15T10:03:00+00:00 (3m)
Levels seen: DEBUG,ERROR,INFO,WARN
Keys seen: attempts,duration_ms,level,max_connections,memory_percent,message,method,path,query,service,timestamp,username,version
Use --stats when: You want to analyze log structure without seeing the events.
Part 7: Practical Combinations¶
Exercise 1: Find Errors, Show Essentials¶
Show only errors with just timestamp, service, and message:

kelora -j examples/basics.jsonl -l error -k timestamp,service,message
timestamp='2024-01-15T10:01:30Z' service='database' message='Query timeout'
timestamp='2024-01-15T10:03:00Z' service='auth' message='Account locked'
Exercise 2: Export Problems to CSV¶
Export warnings and errors to CSV for Excel analysis:

kelora -j examples/basics.jsonl -l warn,error -k timestamp,level,service,message -F csv
timestamp,level,service,message
2024-01-15T10:01:00Z,WARN,api,High memory usage detected
2024-01-15T10:01:30Z,ERROR,database,Query timeout
2024-01-15T10:03:00Z,ERROR,auth,Account locked
Exercise 3: Compact View Without Debug¶
Brief output excluding debug noise:

kelora -j examples/basics.jsonl -L debug -b
2024-01-15T10:00:00Z INFO Application started api 1.2.3
2024-01-15T10:01:00Z WARN High memory usage detected api 85
2024-01-15T10:01:30Z ERROR Query timeout database SELECT * FROM users 5000
2024-01-15T10:02:00Z INFO Request received api GET /api/users
2024-01-15T10:03:00Z ERROR Account locked auth admin 5
Real-World Patterns¶
Here are some patterns you'll use frequently in practice:
# Stream processing (tail -f, kubectl logs, etc.)
kubectl logs -f deployment/api | kelora -f json -l error
# Multiple files - track which files have errors
kelora -f json logs/*.log --metrics \
--exec 'if e.level == "ERROR" { track_count(meta.filename) }'
# Time-based filtering
kelora -f combined access.log --since "1 hour ago" --until "10 minutes ago"
# Extract prefixes (Docker Compose, systemd, etc.)
docker compose logs | kelora --extract-prefix container -f json
# Auto-detect format and output brief values only
kelora -f auto mixed.log -k timestamp,level,message -b
# Custom timestamp formats
kelora -f line app.log --ts-format "%d/%b/%Y:%H:%M:%S" --ts-field timestamp
Quick Reference Cheat Sheet¶
Input Formats¶
-f json # JSON lines (or use -j shortcut)
-f logfmt # key=value format
-f combined # Apache/Nginx access logs
-f syslog # Syslog format
-f csv # CSV with header
-f line # Plain text (fallback)
-f auto # Auto-detect by content
Display Modifiers¶
-b # Brief: values only, no field names
-c # Core: timestamp + level + message only
-k level,msg # Keys: show only these fields (in this order)
-K password,ip # Exclude: hide these fields
Level Filtering¶
-l error,warn # Include: show only these levels (case-insensitive)
-L debug,trace # Exclude: hide these levels
Output Formats¶
-F default # Pretty key='value' with colors (default)
-F json # JSON lines (or use -J shortcut)
-F csv # CSV with header
-F logfmt # Logfmt key=value
-F inspect # Debug with types
Understanding the Pipeline Order¶
Kelora processes your options in this order:
1. Read file (-f json, -j)
2. Filter levels (-l error, -L debug)
3. Select fields (-k, -K, -c)
4. Format output (-F csv, -J, -b)
5. Write output (stdout or -o file)
This means:
- -l filters happen before -k (you can filter on fields you won't see in output)
- -b affects display, not what gets filtered
- --stats still processes everything, it just doesn't show events
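The five pipeline steps can be sketched as function composition over dict events. This is a conceptual model, not Kelora's implementation; the pipeline function and its arguments are invented for the illustration:

```python
# The pipeline order sketched as composed stages over dict events:
# 1. parse -> 2. filter levels -> 3. select fields -> 4./5. format and emit.
import json

def pipeline(raw_lines, levels, keys):
    events = (json.loads(line) for line in raw_lines)                    # 1. read/parse (-f json)
    kept = (e for e in events
            if str(e.get("level", "")).lower() in levels)                # 2. filter levels (-l)
    selected = ({k: e[k] for k in keys if k in e} for e in kept)         # 3. select fields (-k)
    return [json.dumps(e) for e in selected]                             # 4./5. format + write (-J)
```

Because filtering (stage 2) runs on the full event, it can test fields that stage 3 later drops - the first bullet above in code form.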
Common Workflows¶
Error Analysis Pipeline¶
kelora -j app.log -l error -k timestamp,service,message -F csv -o errors.csv
# Filter → Select fields → Export to CSV → Save to file
Quick Scan (Hide Noise)¶
kelora -j app.log -L debug,trace -b --take 20
# Exclude verbose levels → Brief output → First 20 events
Investigation Mode (Full Detail)¶
kelora -j app.log -l warn,error,critical -K password,token
# Show problems → Hide sensitive data → Keep all other fields
Stats-Only Analysis¶
kelora -j app.log --stats
# Summarize log structure and levels without reading events by hand
When to Use What¶
| Goal | Use | Example |
|---|---|---|
| Find errors fast | -l error | kelora -j app.log -l error -c |
| Hide debug spam | -L debug,trace | kelora -j app.log -L debug |
| Export to Excel | -F csv | kelora -j app.log -F csv -o report.csv |
| Pipe to jq | -J | kelora -j app.log -J \| jq '.level' |
| Quick scan | -b --take 20 | kelora -j app.log -b --take 20 |
| Hide secrets | -K password,token | kelora -j app.log -K password,apikey |
| See types | -F inspect | kelora -j app.log -F inspect |
Next Steps¶
You've mastered the basics of input, display, and filtering. Now learn to write scripts for custom logic:
Recommended Next: Introduction to Rhai¶
→ Introduction to Rhai Scripting (20 min) - Learn to write filter expressions and transforms. You'll understand how to use the e object you just learned about, write conditionals, convert types, and build multi-stage pipelines. This is essential before tackling advanced features.
After That: Specialized Topics¶
Pick based on your needs:
- Working with Time (15 min) - Parse timestamps, filter by time ranges, handle timezones
- Metrics and Tracking (20 min) - Aggregate data with track_*() functions
- Parsing Custom Formats (15 min) - Handle non-standard log formats
- Advanced Scripting (30 min) - Complex transformations and window operations
Or Jump to Solutions¶
How-To Guides - Solve specific problems with ready-made solutions