This is a local planning document for upcoming features and improvements.
Core Features Complete:
- Lazy file reading with indexed line positions
- Live filtering with background processing
- File watching and auto-reload
- Follow mode (tail -f style)
- Filter history with arrow key navigation
- ANSI color support
- Vim-style line jumping (`:123`)
- Vim-style z commands (`zz`, `zt`, `zb`)
- Mouse scroll support
- Help overlay (`?` key)
- Event-based architecture
v0.2.0 Features:
- Multi-tab support with side panel UI
- Stdin support (`cmd | lazytail`)
- Multiple file arguments (`lazytail a.log b.log`)
- Per-tab state (filter, scroll, follow mode)
- Tab navigation (Tab, Shift+Tab, 1-9)
- AUR package available
v0.3.0 Features:
- Regex filter mode (Tab to toggle)
- Case sensitivity toggle (Alt+C)
- Filter history with mode persistence
- Expandable log entries (Space to toggle, c to collapse)
- Persistent filter history to disk
- Stats panel (line counts)
- Filter progress percentage display
- Streaming filter with SIMD search (memmem) for better performance
- Grep-style search for case-sensitive patterns
v0.4.0 Features:
- Source discovery mode (`lazytail` with no args)
- Source capture mode (`cmd | lazytail -n "Name"`)
- Active/ended status indicators for discovered sources
- Directory watcher for dynamic tab creation
- Close tab with confirmation dialog (`x`/`Ctrl+W`)
- MCP server support (`lazytail --mcp`)
- MCP tools: `list_sources`, `get_lines`, `get_tail`, `search`, `get_context`
- Streaming filter optimization for MCP (grep-like performance on 5GB+ files)
v0.5.0 Features:
- Config system with `lazytail.yaml` discovery (walk parent directories)
- `lazytail init` and `lazytail config {validate,show}` subcommands
- Project-scoped and global source definitions in config
- Query language: `json | field == "value"` syntax in filter input
- MCP query language integration (JSON and text syntax converge on shared AST)
- MCP plain text output format (default) to reduce JSON escaping overhead
- Display file path in header and `y` to copy source path
- Project-local data directories (`.lazytail/`)
v0.6.0 Features:
- Columnar index system with severity detection
- Index-accelerated filtering with bitmap pre-filtering
- Severity-based line coloring (ERROR/WARN/INFO/DEBUG)
- Severity histogram in stats panel
- Line count and file size per source in side panel
- MCP `get_stats` tool with index metadata and severity breakdown
- Incremental index building during capture mode
- O(1) line access via mmap-backed columnar offsets
v0.7.0 Features:
- Self-update (`lazytail update`) with package manager detection
Post-v0.7.0 (unreleased):
- Scrollable help overlay with j/k navigation
- MCP `get_tail` `since_line` parameter for incremental polling
- Copy selected line to clipboard (`y`) with OSC 52, ANSI stripping, status bar feedback
- Mouse click: click sources in side panel to switch tabs, click log lines to select, click category headers to expand/collapse
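The OSC 52 copy mechanism above can be sketched in a few lines: the text is base64-encoded and wrapped in the `ESC ] 52 ; c ; <payload> BEL` sequence the terminal interprets as "set clipboard". This is a minimal std-only sketch; the hand-rolled base64 helper stands in for what a real implementation would likely pull from a crate.

```rust
// Sketch of OSC 52 clipboard copy: wrap base64(text) in the OSC 52 escape
// sequence and write it to the terminal. Minimal std-only base64 shown here
// purely for illustration.
const ALPHABET: &[u8; 64] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

fn base64_encode(data: &[u8]) -> String {
    let mut out = String::new();
    for chunk in data.chunks(3) {
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = ((b[0] as u32) << 16) | ((b[1] as u32) << 8) | b[2] as u32;
        let idx = [(n >> 18) & 63, (n >> 12) & 63, (n >> 6) & 63, n & 63];
        for (i, &x) in idx.iter().enumerate() {
            // Pad with '=' when the final chunk is shorter than 3 bytes.
            if i <= chunk.len() {
                out.push(ALPHABET[x as usize] as char);
            } else {
                out.push('=');
            }
        }
    }
    out
}

/// Build the OSC 52 sequence asking the terminal to set the clipboard ("c").
fn osc52_copy_sequence(text: &str) -> String {
    format!("\x1b]52;c;{}\x07", base64_encode(text.as_bytes()))
}

fn main() {
    // Writing this to the terminal copies "hello" where OSC 52 is supported.
    print!("{}", osc52_copy_sequence("hello"));
}
```

ANSI stripping would happen before encoding, so escape codes from colored log lines never reach the clipboard.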
Goal: View multiple log files in tabs within single UI instance
Status: Complete (v0.2.0)
```sh
lazytail api.log worker.log db.log
# Opens UI with side panel showing all sources
```
UI Layout:
┌──────────────┬──────────────────────────────────────────────────────┐
│ Sources │ [log content] │
│──────────────│ │
│ > api.log │ 10:00:01 INFO Starting server... │
│ worker.log │ 10:00:02 DEBUG Connected to DB │
│ db.log │ 10:00:03 INFO Listening on :8080 │
│ │ 10:00:04 ERROR Connection refused │
│──────────────│ 10:00:05 INFO GET /health 200 │
│ Severity │ │
│──────────────│ │
│ ○ FATAL 0 │ │
│ ● ERROR 12 │ │
│ ○ WARN 45 │ │
│ ○ INFO 892 │ │
│ ○ DEBUG 45 │ │
│──────────────│──────────────────────────────────────────────────────│
│ [Bookmarks] │ Filter: _ Showing 12/1183 ⟳ 45% │
└──────────────┴──────────────────────────────────────────────────────┘
Status bar (right-aligned indicators):
- "Showing X/Y" - filtered count / total count
- "⟳ 45%" - filter processing progress (hidden when idle)
- "●" - follow mode active indicator
Two-panel layout:
- Left: Source list, severity filter, bookmarks (future)
- Right: Log content + filter input
Side Panel Design:
- Left panel shows all available sources
- Tree structure ready for future organization (folders, groups)
- Active source highlighted with `>`
- Shows indicators: `*` for unsaved filter, `●` for active/live source
- Panel can be toggled hidden/visible (e.g., `Ctrl+B`)
- Future: Bookmarks section at bottom for project-scoped quick access
Tasks:
- Multi-tab state management
  - Add `Vec<TabState>` to App (selection, filter, scroll, follow mode per tab)
  - Track active tab index
  - Refactor single-file state into `TabState` struct
- Side panel UI component
  - Render source list on left
  - Highlight active source
  - Show status indicators (active/ended, filter active, follow mode)
  - Toggle panel visibility keybinding
  - Configurable panel width
- Tab navigation keybindings
  - `Tab`/`Shift+Tab` to cycle sources
  - `1-9` for direct source access
  - Arrow keys to navigate panel when focused
  - Show keybindings in help overlay
- File watching for multiple files
  - Watch all open files simultaneously
  - Update correct tab on file change
- CLI argument handling
  - Accept multiple file paths
  - Validate all files exist before starting
- Backward compatibility
  - Single file still works: `lazytail file.log`
- Add tests for multi-tab behavior
Future Side Panel Enhancements:
- Show total line count and file size per source in side panel (live-updating as file grows) — ✅ v0.6.0
- Fix: selected empty/ended source is invisible — grayed-out dim text has no visible selection highlight, making it unclear which tab is active
- Tree structure with collapsible groups
- Drag-and-drop reordering
- Bookmarks section (per UI instance / project scope)
- Save frequently used file combinations
- Quick switch between "projects"
- Persist bookmarks to config file
- Search/filter within source list
Use Cases:
```sh
# Compare multiple services
lazytail api.log worker.log scheduler.log

# System logs
lazytail /var/log/syslog /var/log/auth.log

# Multiple container logs (pre-captured)
lazytail pod1.log pod2.log pod3.log
```
Goal: Auto-discover log sources from config directory
Status: Complete
```sh
lazytail          # No args → discover sources from ~/.config/lazytail/data/
lazytail api.log  # Explicit file → single tab (backward compatible)
```
Directory Structure:
~/.config/lazytail/
├── data/ # Log files (auto-discovered)
│ ├── API.log
│ ├── Worker.log
│ └── DB.log
└── sources/ # Active source markers
├── API # Contains PID, indicates source is live
└── Worker
Tasks:
- Config directory setup
  - Create `~/.config/lazytail/data/` on first run
  - Create `~/.config/lazytail/sources/` on first run
- Source discovery (UI mode)
  - Scan `data/` directory for `.log` files
  - Check `sources/` for active markers (file exists + PID valid)
  - Display discovered sources as tabs
  - Show active/ended status indicator per tab
- Watch for new sources
  - Monitor `data/` directory for new files
  - Add new tabs dynamically when sources appear
- Tab management
  - Close tab keybinding (`x` or `Ctrl+W`) with confirmation dialog
  - Delete ended source files on close (after confirmation)
- Add tests for discovery behavior
Behavior:
- `lazytail` (no args) → discover mode, show all sources from config dir
- `lazytail file.log` → explicit mode, show only that file
- `lazytail file1.log file2.log` → explicit mode, show those files
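The "file exists + PID valid" marker check can be sketched as below. This is a std-only illustration under two assumptions: the marker file stores the PID as plain text, and liveness is checked via `/proc/<pid>` (a Linux-only shortcut; a portable build might send signal 0 instead).

```rust
use std::fs;
use std::path::Path;

/// Sketch of the active-marker check from the discovery tasks: a source is
/// live if its marker file exists in sources/ and the PID it contains is
/// still running.
fn is_source_active(marker: &Path) -> bool {
    let Ok(contents) = fs::read_to_string(marker) else {
        return false; // no marker → source ended
    };
    let Ok(pid) = contents.trim().parse::<u32>() else {
        return false; // malformed marker → treat as stale
    };
    // Linux-only liveness check used here for illustration.
    Path::new(&format!("/proc/{pid}")).exists()
}

fn main() {
    // A marker containing our own PID should read as "active".
    let marker = std::env::temp_dir().join("lazytail-demo-marker");
    fs::write(&marker, std::process::id().to_string()).unwrap();
    println!("active: {}", is_source_active(&marker));
    let _ = fs::remove_file(&marker);
}
```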
Goal: Capture stdin to named source, viewable in UI
Status: Complete
```sh
# Capture logs from any command
cmd | lazytail -n "API"
lazytail -n "API" <(kubectl logs -f pod)

# Works like:
#   cmd | tee ~/.config/lazytail/data/API.log
#   + register in sources/ + collision check + header
```
Tasks:
- CLI argument parsing
  - `-n <name>` flag for source mode
  - Detect stdin input
- Source mode implementation
  - Name collision detection (check marker + PID validity)
  - Create marker file in `sources/` with PID
  - Print header: `Serving "API" → ~/.config/lazytail/data/API.log`
  - Read stdin line by line
  - Write to log file (append)
  - Echo to stdout (tee behavior)
  - On EOF: remove marker, exit (file persists)
- Signal handling
  - Handle SIGINT/SIGTERM gracefully
  - Clean up marker file on exit
- Error handling
  - Exit with error if name collision
  - Handle write errors gracefully
- Add tests for source mode
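The read/append/echo loop at the heart of capture mode can be sketched generically over `Read`/`Write`, which keeps the tee behavior testable without touching real stdin or files. Names here are illustrative, not the actual implementation.

```rust
use std::io::{BufRead, BufReader, Read, Write};

/// Sketch of the capture loop: read piped input line by line, append each
/// line to the log sink, and echo it to stdout (tee behavior). Generic over
/// Read/Write so real stdin/file handles can be swapped in.
fn capture<R: Read, L: Write, W: Write>(input: R, mut log: L, mut echo: W) -> std::io::Result<u64> {
    let mut lines = 0;
    for line in BufReader::new(input).lines() {
        let line = line?;
        writeln!(log, "{line}")?;
        writeln!(echo, "{line}")?;
        lines += 1;
    }
    // On EOF the caller would remove the sources/ marker; the log persists.
    Ok(lines)
}

fn main() -> std::io::Result<()> {
    let mut log = Vec::new();
    let mut echo = Vec::new();
    let n = capture("a\nb\n".as_bytes(), &mut log, &mut echo)?;
    println!("captured {n} lines");
    Ok(())
}
```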
Full Workflow:
```sh
# Terminal 1: Capture API logs
kubectl logs -f api-pod | lazytail -n "API"

# Terminal 2: Capture worker logs
kubectl logs -f worker-pod | lazytail -n "Worker"

# Terminal 3: View everything
lazytail
# Shows tabs: [API] [Worker]
# API marked as "active", Worker marked as "active"

# Kill Terminal 1
# UI shows: API now marked as "ended", history still available
```
- `lazytail -n` should truncate (reset) existing log file by default instead of appending
  - Current behavior: appends to existing log file, accumulating stale data across runs
  - New default: truncate the file on start so each capture session begins fresh
  - Add `--append`/`-a` flag to preserve existing contents (opt-in)
- Session ID for capture runs
  - Each `lazytail -n` invocation generates a unique session ID (e.g., UUID or timestamp-based)
  - Write a session boundary marker to the log file on start (e.g., `--- session: abc123 started at 2026-02-23T10:00:00 ---`)
  - Store session ID in the marker file alongside PID
  - Store session ID in index metadata or as a checkpoint annotation
  - Expose in `list_sources` response so MCP consumers can see the current session
  - Enable filtering by session: `session == "abc123"` or `session == "latest"` in query language
  - Use case: when logs accumulate across multiple runs, users can filter to just the current run without clearing old data
  - Pairs well with the truncate-on-start option above (session ID works when you want to keep history)
- `--file <path>` for custom log file location
- `--max-size <size>` for log rotation
- Memory-only mode with streaming (no file)
- Merged chronological view across sources
- Filter across all tabs simultaneously
Goal: Unified pipeline-based query language for filtering, time ranges, and aggregation - with dual input formats (text for UI, JSON for MCP/LLMs)
Architecture:
┌─────────────────────┐ ┌─────────────────────┐
│ Text Query (UI) │ │ JSON Query (MCP) │
│ │ │ │
│ json | level=="err" │ │ {"parser":"json", │
│ │ │ "filters":[...]} │
└──────────┬──────────┘ └──────────┬──────────┘
│ parse │ deserialize
▼ ▼
┌────────────────────────────────────┐
│ FilterQuery (AST) │
└──────────────────┬─────────────────┘
│ execute
▼
┌─────────────┐
│ Results │
└─────────────┘
Key Insight: MCP tool parameters ARE the query language for LLMs. Design rich structured JSON parameters that compile to the same AST as text queries.
Text Syntax (for humans):
```
# Field filtering
json | level == "error" | service =~ "api|worker"

# Exclusion (critical for noisy logs)
json | level == "error" | msg !~ "kscreen|systemd"

# Time filtering
json | time > "2024-01-28T10:00:00" | time < "2024-01-28T11:00:00"

# Aggregation
json | level == "error" | count by (service)
json | count by (level) | top 10
```
JSON Syntax (for MCP/LLMs):
```json
{
  "parser": "json",
  "filters": [
    {"field": "level", "op": "==", "value": "error"},
    {"field": "service", "op": "=~", "value": "api|worker"}
  ],
  "exclude": [
    {"field": "msg", "pattern": "kscreen|systemd"}
  ],
  "time_range": {
    "field": "timestamp",
    "after": "2024-01-28T10:00:00",
    "before": "2024-01-28T11:00:00"
  },
  "aggregate": {
    "count_by": "service",
    "limit": 10
  }
}
```
Pipeline Stages:
| Stage | Text Syntax | JSON Field | Description |
|---|---|---|---|
| Parser | `json`, `logfmt`, `pattern "..."` | `parser` | Extract fields from line |
| Filter | `field == "value"` | `filters[]` | Include matching lines |
| Exclude | `field !~ "pattern"` | `exclude[]` | Remove matching lines |
| Time | `time > "..."` | `time_range` | Filter by timestamp |
| Aggregate | `count by (field)` | `aggregate` | Group and count |
| Limit | `top N` | `aggregate.limit` | Limit results |
Operators:
| Operator | Description | Example |
|---|---|---|
| `==`, `!=` | Equality | `level == "error"` |
| `=~`, `!~` | Regex match/exclude | `msg !~ "kscreen"` |
| `>`, `<`, `>=`, `<=` | Comparison (numeric/time) | `status >= 500` |
| `contains` | Substring match | `msg contains "timeout"` |
FilterQuery AST (Rust):
```rust
struct FilterQuery {
    parser: Parser,                 // json, logfmt, pattern, raw
    filters: Vec<FieldFilter>,      // field op value
    exclude: Vec<ExcludePattern>,   // negative filters
    time_range: Option<TimeRange>,  // after/before timestamps
    aggregate: Option<Aggregation>, // count_by, limit
}

enum Parser {
    Raw,             // plain text (default)
    Json,            // parse as JSON
    Logfmt,          // parse key=value
    Pattern(String), // extract via pattern
}

struct FieldFilter {
    field: String, // e.g., "level" or "user.id"
    op: Operator,  // ==, !=, =~, !~, >, <, etc.
    value: Value,  // string, number, regex
}

struct Aggregation {
    count_by: Option<String>, // group by field
    limit: Option<usize>,     // top N
}
```
Implementation Order (MCP-first):
- Define AST structs with serde derives — ✅ v0.5.0
- Build executor that processes FilterQuery — ✅ v0.5.0
- JSON deserialization → MCP tools work immediately — ✅ v0.5.0
- Text parser → UI gets query language later — ✅ v0.5.0
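The executor over the shared AST can be sketched minimally with just the equality operators and pre-extracted fields. This is a deliberately simplified slice: the real executor also handles `=~`/`!~`, time ranges, excludes, and aggregation, and the names below are illustrative.

```rust
use std::collections::HashMap;

// Simplified slice of the FilterQuery AST: equality operators only.
enum Op {
    Eq,
    Ne,
}

struct FieldFilter {
    field: String,
    op: Op,
    value: String,
}

/// A line matches when every filter in the query matches (filters are ANDed).
fn matches(fields: &HashMap<String, String>, filters: &[FieldFilter]) -> bool {
    filters.iter().all(|f| match (fields.get(&f.field), &f.op) {
        (Some(v), Op::Eq) => *v == f.value,
        (Some(v), Op::Ne) => *v != f.value,
        (None, _) => false, // a missing field never matches
    })
}

fn main() {
    let fields = HashMap::from([
        ("level".to_string(), "error".to_string()),
        ("service".to_string(), "api".to_string()),
    ]);
    let query = [FieldFilter {
        field: "level".to_string(),
        op: Op::Eq,
        value: "error".to_string(),
    }];
    println!("matched: {}", matches(&fields, &query));
}
```

Because both the text parser and the JSON deserializer produce this same structure, the executor is written once and shared between UI and MCP paths.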
Tasks:
- Phase 1: Core AST & JSON Interface (MCP) — ✅ v0.5.0
  - Define `FilterQuery` and related structs with `#[derive(Deserialize)]`
  - Implement executor for basic filters (`==`, `!=`, `=~`, `!~`)
  - JSON parser support (serde_json field extraction)
  - Wire up to MCP `search` tool as `query` parameter
  - Tests with JSON input
- Phase 2: Exclusion & Time Filtering — ✅ v0.5.0 (partial)
  - Implement exclude patterns (critical for noisy logs!)
  - Timestamp field detection (common field names)
  - Time range filtering (after/before)
  - Tests for exclusion filtering
- Phase 3: Aggregation — ✅ `count_by` complete
  - Implement `count by (field)` — e.g. `json | level == "error" | count by (service)` ✅
  - Implement `top N` / limit ✅
  - Multiple group_by fields — e.g. `count by (service, level)` ✅
  - Return aggregation results as structured JSON (field → count map) ✅
  - Wire into MCP: extend `search` response with `aggregate` in query ✅
  - Wire into text query parser (`count by (fields) | top N`) ✅
  - TUI aggregation view with j/k navigation and drill-down ✅
- Phase 3b: Additional Aggregation Types
  - `avg(field) by (fields)` — average of numeric field grouped by others (e.g. `json | avg(latency) by (service)`)
  - `sum(field) by (fields)` — total of numeric field (e.g. `json | sum(processed) by (service)`)
  - `min(field) by (fields)` / `max(field) by (fields)` — extremes with drill-down to actual line
  - `p50(field)` / `p90(field)` / `p99(field) by (fields)` — percentiles (e.g. `json | p99(latency) by (service)`)
  - `rate(interval)` — count per time window (e.g. `json | level == "error" | rate(1m)`) — requires timestamp parsing
  - `count_distinct(field) by (fields)` — unique value count (e.g. `json | count_distinct(user.id) by (service)`)
  - `histogram(field, bucket_size)` — bucket numeric field into ranges (e.g. `json | histogram(latency, 100)`)
- Phase 4: Text Parser (UI) — ✅ v0.5.0
  - Lexer for text query syntax
  - Recursive descent parser → AST
  - Error messages with position info
  - UI integration (filter input mode)
- Phase 5: Advanced Parsers — ✅ v0.5.0
  - `logfmt` parser (key=value)
  - `pattern` parser (extract fields via template)
  - Nested field access (`user.id`, `request.headers.host`)
- Phase 6: Polish
  - Autocomplete for field names in filter input (sample lines from current source, extract field names, offer completions after `|` or on Tab)
  - Syntax highlighting in filter input
  - LogQL `format` stage — render structured fields into a custom display template
    - Text syntax: `json | format <severity> - <method> <url> - <status>`
    - JSON syntax: `"format": "<severity> - <method> <url> - <status>"`
    - Extracts fields from parsed log line and interpolates into template
    - Unresolved fields render as empty or `<missing>`
    - Useful in TUI for readable views of dense JSON/logfmt lines
    - Useful in MCP for agents requesting specific field projections
  - Arrow up/down in filter input replaces current text with history entry — typed text is lost with no way to recover it. Should save current input as a draft so the user can arrow back down to restore it.
  - Query history with mode
  - Filter history prefix matching (zsh-style)
    - When text is already typed in the filter input, `Up`/`Down` only cycles through history entries that match the current input as a prefix
    - E.g. type `err` then press `Up` → jumps to last history entry starting with `err`, skipping unrelated entries
    - Empty input = normal full history navigation (current behavior, unchanged)
    - Works across all filter modes (Plain, Regex, Query)
  - Documentation and examples
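The zsh-style prefix matching described under Phase 6 can be sketched as a reverse search through history from the current navigation position. Names and the position convention below are illustrative, not the actual implementation:

```rust
/// Sketch of zsh-style prefix history navigation: from `pos` (one past the
/// entry currently shown; history.len() when not navigating yet), find the
/// previous entry starting with the typed prefix. An empty prefix matches
/// everything, preserving the current full-history behavior.
fn prev_matching(history: &[String], pos: usize, prefix: &str) -> Option<usize> {
    history[..pos].iter().rposition(|e| e.starts_with(prefix))
}

fn main() {
    let history: Vec<String> = ["error", "warn", "err.*timeout"]
        .map(String::from)
        .to_vec();
    // Typing "err" and pressing Up skips the unrelated "warn" entry.
    println!("{:?}", prev_matching(&history, history.len(), "err"));
}
```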
Goal: Add regex filtering and case sensitivity with intuitive mode switching
UX Design:
┌─────────────────────────────────────────────────────────────┐
│ Plain text mode (default): │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Filter: error [Tab: Regex] │ │
│ └─────────────────────────────────────────────────────────┘ │
│ Frame color: default (e.g., white/gray) │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ Regex mode: │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Regex: error|warn|fatal [Tab: Plain] │ │
│ └─────────────────────────────────────────────────────────┘ │
│ Frame color: distinct (e.g., cyan/magenta) │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ Invalid regex (visual feedback): │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Regex: error[ [Tab: Plain] │ │
│ └─────────────────────────────────────────────────────────┘ │
│ Frame color: red (indicates error) │
└─────────────────────────────────────────────────────────────┘
Behavior:
- `Tab` while in filter input: toggles between plain text and regex mode
- Filter panel frame color changes to indicate current mode
- Invalid regex: frame turns red, filter not applied until valid
- Reopening filter (`/`) restores last used mode
- History stores mode per entry, navigating history switches mode automatically
- Case sensitivity toggle available in both modes
Filter Mode States:
```rust
FilterMode {
    Plain { case_sensitive: bool },
    Regex { case_sensitive: bool },
}
```
History Entry:
```rust
FilterHistoryEntry {
    pattern: String,
    mode: FilterMode,
}
```
Keybindings (while in filter input):
- `Tab` - Toggle between Plain/Regex mode
- `Ctrl+I` - Toggle case sensitivity
- `Up`/`Down` - Navigate history (mode switches automatically)
- `Enter` - Apply filter
- `Esc` - Cancel
Visual Indicators:
| Mode | Frame Color | Label |
|---|---|---|
| Plain (case-insensitive) | Default | Filter: |
| Plain (case-sensitive) | Default | Filter [Aa]: |
| Regex (case-insensitive) | Cyan | Regex: |
| Regex (case-sensitive) | Cyan | Regex [Aa]: |
| Regex (invalid) | Red | Regex: |
Tasks:
- Filter mode enum and state
  - Create `FilterMode` enum (Plain, Regex)
  - Add case_sensitive flag to each mode
  - Store current mode in App/Tab state
  - Persist mode when closing filter input
- Filter input UI changes
  - Tab key toggles mode while in filter input
  - Different frame colors per mode
  - Show mode indicator in prompt (Filter: vs Regex:)
  - Show case sensitivity indicator [Aa]
  - Red frame for invalid regex
- History with mode support
  - Update FilterHistoryEntry to include mode
  - When navigating history, switch to stored mode
  - Display history entries with mode indicator
- Regex validation
  - Validate regex on each keystroke
  - Show visual error state (red frame)
  - Don't apply filter until regex is valid
  - Show error message in status bar (optional)
- Case sensitivity
  - Alt+C toggles case sensitivity
  - Update StringFilter to respect flag
  - Update RegexFilter to respect flag (regex::RegexBuilder)
- Integration
  - Wire up to existing FilterEngine
  - Ensure background filtering works with both modes
  - Handle mode in filter re-application on file change
- Tests
  - Unit tests for mode switching
  - Tests for history mode restoration
  - Tests for regex validation
  - Tests for case sensitivity
- Documentation
  - Update help overlay with new keybindings
  - Update README
Current Status: ✅ Complete
Goal: Add Query as a first-class filter mode alongside Plain and Regex, instead of auto-detecting query syntax
Problem: Query mode is currently triggered by heuristic (is_query_syntax) — if the input looks like json | ..., it's silently treated as a query. This means:
- Searching for a literal string like `json | something` is impossible in Plain mode
- The mode switch is invisible to the user (no explicit trigger)
- No way to force query mode for edge-case inputs that don't pass the heuristic
Proposed UX:
Plain → Regex → Query (Tab cycles through all three)
┌─────────────────────────────────────────────────────────────┐
│ Query mode: │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Query: json | level == "error" [Tab: Plain] │ │
│ └─────────────────────────────────────────────────────────┘ │
│ Frame color: magenta (already used for query today) │
└─────────────────────────────────────────────────────────────┘
Behavior:
- `Tab` cycles: Plain → Regex → Query → Plain
- In Plain and Regex modes, input is never interpreted as a query — `json | something` is a literal search
- Query mode always routes through `QueryFilter`, no heuristic needed
- Auto-detect (`is_query_syntax`) can be removed or kept only as an optional hint (e.g. auto-switch offer)
- History entries store mode, so query history restores Query mode
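The three-way cycle can be sketched as a simple enum rotation; the real `FilterMode` also carries a `case_sensitive` flag per variant, which is omitted here for brevity:

```rust
// Sketch of the proposed Plain → Regex → Query cycle (case_sensitive flag
// omitted for brevity).
#[derive(Clone, Copy, Debug, PartialEq)]
enum FilterMode {
    Plain,
    Regex,
    Query,
}

impl FilterMode {
    /// Tab in the filter input advances Plain → Regex → Query → Plain.
    fn next(self) -> FilterMode {
        match self {
            FilterMode::Plain => FilterMode::Regex,
            FilterMode::Regex => FilterMode::Query,
            FilterMode::Query => FilterMode::Plain,
        }
    }
}

fn main() {
    let mut mode = FilterMode::Plain;
    for _ in 0..3 {
        mode = mode.next();
        println!("{mode:?}");
    }
}
```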
Tasks:
- Add `Query` variant to `FilterMode` enum
- Update `Tab` key to cycle Plain → Regex → Query
- Remove heuristic auto-detection from filter dispatch path
- Update filter prompt label and frame color for Query mode
- Update help text and help overlay
- Update filter history serialization for new mode variant
- Add tests for mode cycling and query mode dispatch
Goal: Open/expand log entries to view full content (long lines, JSON properties)
Status: Implemented - Space to toggle, 'c' to collapse all
Use Cases:
- View truncated long lines in full
- Pretty-print JSON log entries
- Inspect multi-line stack traces
- Copy full content of a log entry
UI Behavior:
Normal view (collapsed):
┌─────────────────────────────────────────────────────────┐
│ 142 2024-01-20 10:00:01 {"level":"error","msg":"Fai...│
│ 143 2024-01-20 10:00:02 Starting worker process │
│ 144 2024-01-20 10:00:03 Connection established │
└─────────────────────────────────────────────────────────┘
Expanded view (press Space on line 142):
┌─────────────────────────────────────────────────────────┐
│ 142 2024-01-20 10:00:01 {"level":"error","msg":"Fai...│
│ ┌─────────────────────────────────────────────────────┐ │
│ │ { │ │
│ │ "level": "error", │ │
│ │ "msg": "Failed to connect to database", │ │
│ │ "error": "connection refused", │ │
│ │ "host": "db.example.com", │ │
│ │ "port": 5432, │ │
│ │ "retry_count": 3 │ │
│ │ } │ │
│ └─────────────────────────────────────────────────────┘ │
│ 143 2024-01-20 10:00:02 Starting worker process │
│ 144 2024-01-20 10:00:03 Connection established │
└─────────────────────────────────────────────────────────┘
Raw expanded view (for non-JSON long lines):
┌─────────────────────────────────────────────────────────┐
│ 142 2024-01-20 10:00:01 Very long log message that ...│
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Very long log message that contains a lot of │ │
│ │ information and spans multiple lines when fully │ │
│ │ displayed without truncation so you can read the │ │
│ │ entire content of the log entry. │ │
│ └─────────────────────────────────────────────────────┘ │
│ 143 2024-01-20 10:00:02 Starting worker process │
└─────────────────────────────────────────────────────────┘
Tasks:
- Expand/collapse single entry
  - Keybinding: `Space` to toggle expand
  - Word-wrap long lines in expanded view
  - Visual background to distinguish expanded content
- JSON detection and formatting
  - Auto-detect JSON content in log line
  - Pretty-print with indentation
  - Syntax highlighting for JSON (keys, values, types)
- Multiple expanded entries
  - Allow multiple entries expanded simultaneously
  - Collapse all keybinding (`c`)
- Fix: expanding a line near the bottom of the screen shows empty lines when expansion doesn't fit — viewport should auto-scroll up so expanded content is visible
- Scrolling within expanded content
  - Handle very large expanded content (huge JSON)
  - Nested scrolling or pagination
- Copy expanded content
  - `y` to yank/copy expanded content to clipboard
- Add tests
Display Modes (per entry):
- Raw: Word-wrapped full text (default for non-JSON)
- JSON: Pretty-printed with syntax highlighting
- Auto: Detect format and choose appropriate mode
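The "Auto" detection step can use a cheap pre-check before attempting a full parse. One possible sketch (the function name is illustrative): find the first `{` and treat the rest of the line as a JSON candidate if it ends with `}`, which also catches timestamp-prefixed lines; the real mode would then pretty-print only if parsing actually succeeds.

```rust
/// Cheap heuristic run before a full JSON parse: locate the first `{` and
/// return the rest of the line as a candidate payload if it ends with `}`.
/// Handles timestamp-prefixed lines like `10:00:01 {"level":"error"}`.
fn json_payload(line: &str) -> Option<&str> {
    let start = line.find('{')?;
    let t = line[start..].trim_end();
    t.ends_with('}').then_some(t)
}

fn main() {
    println!("{:?}", json_payload(r#"142 10:00:01 {"level":"error"}"#));
    println!("{:?}", json_payload("Starting worker process"));
}
```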
Future:
- Collapsible JSON nodes (expand/collapse nested objects)
- Table view for structured logs
- Custom formatters for known log formats
Goal: Show log statistics in the left panel below the source list
UI Layout:
┌──────────────────────┬─────────────────────────────────────────┐
│ Sources │ [log content] │
│──────────────────────│ │
│ > api.log │ 142 INFO Starting server... │
│ worker.log │ 143 DEBUG Connected to database │
│ db.log │ 144 ERROR Failed to connect ← red│
│ │ │
│──────────────────────│ │
│ Stats │ │
│──────────────────────│ │
│ Lines: 1,234 │ │
│ Filtered: 892 │ │
│ │ │
│ ERROR 12 │ │
│ WARN 45 │ │
│ INFO 892 │ │
│ DEBUG 285 │ │
└──────────────────────┴─────────────────────────────────────────┘
Features:
- Total line count and filtered count
- Severity breakdown with counts (requires severity detection)
- Updates in real-time as file changes or filter applied
- Clickable severity levels to quick-filter (future)
Tasks:
- Stats panel UI component
- Render below source list in left panel
- Show total lines / filtered lines
- Collapsible section
- Basic stats tracking
- Line counts per tab
- Update on file reload
- Update on filter change
- Severity stats — ✅ v0.6.0
- Count per severity level
- Color-coded display
- Click to filter by severity
Current Status: ✅ Complete (v0.6.0) — stats panel shows line counts and severity histogram with color-coded display
Goal: Automatically detect log format and extract severity for highlighting and filtering
Status: ✅ Complete (v0.6.0) — columnar index system with byte-level severity detection
Severity Levels (standardized):
TRACE → DEBUG → INFO → WARN → ERROR → FATAL
Detection Sources:
| Format | Example | Severity Extraction | Status |
|---|---|---|---|
| JSON | `{"level":"error","msg":"..."}` | Parse `level`, `severity`, `lvl` fields | ✅ v0.6.0 |
| Bracket | `[ERROR] Failed to connect` | Match `[LEVEL]` pattern | ✅ v0.6.0 |
| Prefix | `ERROR: Connection refused` | Match `LEVEL:` pattern | ✅ v0.6.0 |
| Syslog | `<3>Jan 20 10:00:01 app[123]: msg` | Parse priority code | ✅ v0.6.0 |
| Log4j | `2024-01-20 ERROR com.app - msg` | Match known patterns | ✅ v0.6.0 |
| Kubernetes | `E0120 10:00:01.123 file.go:42]` | First char: I/W/E/F | ✅ v0.6.0 |
UI Integration (Left Panel):
┌──────────────┬──────────────────────────────────────────────────────┐
│ Sources │ [log content] │
│──────────────│ │
│ > api.log │ 142 INFO Starting server... │
│ worker.log │ 143 DEBUG Connected to database │
│ db.log │ 144 ERROR Failed to authenticate ← red │
│ │ 145 WARN Retry attempt 2/3 ← yel │
│──────────────│ 146 INFO Request processed │
│ Severity │ │
│──────────────│ │
│ ○ FATAL 0 │ │
│ ● ERROR 12 │ ← active filter │
│ ○ WARN 45 │ │
│ ○ INFO 892 │ │
│ ○ DEBUG 234 │ │
│──────────────│──────────────────────────────────────────────────────│
│ [Bookmarks] │ Filter: database Showing 12/1183 ⟳ 100% │
└──────────────┴──────────────────────────────────────────────────────┘
Severity Section Features:
- Severity levels with counts (from current source)
- Toggle filtering: click/select to show only that level and above
- `●` indicates active filter, `○` indicates inactive
- Counts update as text filter changes
- Keybinding to cycle severity filter (e.g., `s` to cycle through levels)
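The bracket and prefix patterns from the table above can be sketched in a few lines of std-only Rust. This only covers uppercase bracket/prefix matches; the shipped detector works on bytes, also handles JSON fields, syslog priorities, Kubernetes single-letter prefixes, and case variations, and caches results in the columnar index.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Severity {
    Trace,
    Debug,
    Info,
    Warn,
    Error,
    Fatal,
}

/// Simplified sketch of text severity detection: match "[LEVEL]" anywhere
/// and "LEVEL:" as a line prefix, checking higher severities first.
fn detect_severity(line: &str) -> Option<Severity> {
    let pairs = [
        ("FATAL", Severity::Fatal),
        ("ERROR", Severity::Error),
        ("WARN", Severity::Warn),
        ("INFO", Severity::Info),
        ("DEBUG", Severity::Debug),
        ("TRACE", Severity::Trace),
    ];
    for (name, sev) in pairs {
        if line.contains(&format!("[{name}]")) || line.trim_start().starts_with(&format!("{name}:")) {
            return Some(sev);
        }
    }
    None
}

fn main() {
    println!("{:?}", detect_severity("[ERROR] Failed to connect"));
    println!("{:?}", detect_severity("10:00:01 nothing here"));
}
```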
Tasks:
- Format detection — ✅ v0.6.0
  - Detect JSON lines (starts with `{`, valid JSON)
  - Detect common text patterns (bracket, prefix, syslog)
  - Per-line flag detection cached in columnar index
  - Allow manual override per source (config hints)
- Severity parsing — ✅ v0.6.0
  - JSON: check common fields (`level`, `severity`, `lvl`, `log.level`)
  - Text: byte-level patterns for common formats
  - Normalize to standard levels (TRACE/DEBUG/INFO/WARN/ERROR/FATAL)
  - Handle case variations (error, ERROR, Error)
- Severity highlighting — ✅ v0.6.0
  - Color-code by severity (configurable colors)
  - ERROR/FATAL: red
  - WARN: yellow
  - INFO: default
  - DEBUG/TRACE: dim/gray
- Severity filtering
  - Quick filter: show ERROR and above
  - Keybinding to cycle minimum severity level
  - Combine with text filter via query language
- Severity statistics — ✅ v0.6.0
  - Count per severity level
  - Show in side panel per source
  - Click to filter by severity
- Add tests for format detection and parsing — ✅ v0.6.0
Future:
- Custom format definitions (regex-based) in config
- Timestamp parsing from detected format
- Auto-detect field names for structured logs
- Columnar index with flags, offsets, checkpoints — ✅ v0.6.0
- Index-accelerated filtering with bitmap pre-filtering — ✅ v0.6.0
Goal: Save filter history between sessions
Tasks:
- Add history file path (~/.config/lazytail/history.json)
- Load history on startup
- Save history after each filter submission
- Handle file read/write errors gracefully
- Add tests for persistence
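The load/save path can be sketched with std alone. This deliberately uses a simplified one-pattern-per-line format instead of the actual `history.json` schema (which also stores the filter mode per entry); the point is the error handling, where a missing or corrupt file degrades to an empty history instead of blocking startup.

```rust
use std::fs;
use std::path::Path;

/// Simplified persistence sketch: one pattern per line. Read errors are
/// swallowed into an empty default so a corrupt file never blocks startup.
fn load_history(path: &Path) -> Vec<String> {
    fs::read_to_string(path)
        .map(|s| s.lines().map(String::from).collect())
        .unwrap_or_default()
}

fn save_history(path: &Path, entries: &[String]) -> std::io::Result<()> {
    fs::write(path, entries.join("\n"))
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("lazytail-demo-history");
    save_history(&path, &["error".into(), "warn|fatal".into()])?;
    println!("{:?}", load_history(&path));
    fs::remove_file(&path)?;
    Ok(())
}
```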
Current Status: ✅ Complete
Benefits:
- Persistent workflow across sessions
- Better UX for repeated log analysis
Goal: Expand mouse support beyond scroll — make the UI fully mouse-interactive
Tasks:
- Click to select a log line
- Click source in side panel to switch tabs
- Click severity levels in stats panel to filter (when severity stats land)
- Click-and-drag to select text for copying
- Right-click context menu (expand, copy, filter by selection)
- Resize side panel by dragging the divider
- Double-click to expand/collapse a log line
Benefits:
- Lower barrier to entry for non-vim users
- Faster interaction for common actions (tab switching, line selection)
- Expected by users coming from GUI log viewers
Goal: Select a range of log lines for copying, exporting, or inspection
Behavior:
- `V` enters visual/selection mode (vim-style), `Esc` exits
- `Shift+Up`/`Shift+Down` or `Shift+j`/`Shift+k` to extend selection
- `Shift+Click` to select range from current line to clicked line
- Click-and-drag to select range with mouse
- Selected lines highlighted with distinct background color
- `y` copies all selected lines to clipboard
- `Esc` or movement without Shift clears selection
- Selection works with both filtered and unfiltered views
Tasks:
- Selection state in viewport (anchor line + current line range)
- Visual mode toggle (`V`) and keyboard range extension
- Mouse selection (Shift+Click range, click-and-drag)
- Render selected lines with highlight background
- `y` copies selected range to clipboard (full raw content, newline-separated)
- Selection count indicator in status bar ("3 lines selected")
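The anchor-plus-cursor selection state can be sketched as below; type and method names are illustrative. Keeping only two line numbers means extending in either direction is free, and the inclusive range is recomputed on render.

```rust
/// Sketch of selection state: an anchor set when `V` (or Shift+Click) starts
/// the selection, plus the current cursor line. The highlighted range is the
/// inclusive span between them, regardless of extension direction.
struct Selection {
    anchor: usize,
    cursor: usize,
}

impl Selection {
    fn range(&self) -> std::ops::RangeInclusive<usize> {
        self.anchor.min(self.cursor)..=self.anchor.max(self.cursor)
    }

    /// Count for the "3 lines selected" status bar indicator.
    fn len(&self) -> usize {
        self.anchor.abs_diff(self.cursor) + 1
    }
}

fn main() {
    // Selecting upward from line 10 to line 7 still yields 7..=10.
    let sel = Selection { anchor: 10, cursor: 7 };
    println!("{:?} ({} lines)", sel.range(), sel.len());
}
```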
Goal: Highlight filter matches in displayed text
Tasks:
- Detect filter pattern in rendered lines
- Apply highlight style to matching substrings
- Handle case sensitivity in highlighting
- Support regex pattern highlighting
- Add tests with mock rendering
- Make highlight colors configurable
Benefits:
- Visual feedback for matches
- Easier to spot relevant content
- Common feature in log viewers
Goal: Make the help overlay scrollable so all keybindings are visible on small terminals
Status: ✅ Complete (post-v0.7.0)
Tasks:
- Track a scroll offset for the help overlay
- Handle `j`/`k` and `↑`/`↓` to scroll while help is open
- Show a scroll indicator (e.g. `↓ more` at the bottom)
- Clamp scroll so the last line stays visible
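The clamping rule above is a one-liner worth pinning down, since off-by-one scroll bugs are easy here: the maximum offset is content height minus visible height, saturating to zero when everything already fits.

```rust
/// Clamp the help overlay scroll offset so the last content line stays
/// reachable but the view never scrolls past the end.
fn clamp_scroll(offset: usize, content_lines: usize, visible_lines: usize) -> usize {
    offset.min(content_lines.saturating_sub(visible_lines))
}

fn main() {
    println!("{}", clamp_scroll(99, 40, 10)); // long help, small terminal
    println!("{}", clamp_scroll(5, 8, 10)); // help fits entirely: no scroll
}
```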
Goal: Add comprehensive logging to debug what LazyTail is doing internally
Motivation: Currently difficult to debug issues like:
- Why is filtering slow on this file?
- Which reader implementation is being used?
- Why did the file watcher trigger?
- What's happening during index builds?
- Why is follow mode not working?
- Performance bottlenecks in the event loop
Logging Framework:
- Use `tracing` crate (better than `log` for structured context)
- Support multiple output targets (stderr, file, structured JSON)
- Configurable per-module log levels
- Span-based instrumentation for performance tracing
Key Areas to Instrument:
- File Operations
  - Reader selection (FileReader vs HugeFileReader vs StreamReader)
  - File watching events (what changed, how many bytes)
  - Index building progress and timing
  - Mmap operations and failures
- Filtering
  - Filter orchestrator decisions (which engine is used)
  - Filter progress (lines scanned, matches found, elapsed time)
  - Streaming filter vs generic filter selection
  - Query parsing and execution
- Event Loop
  - Event types received and processing time
  - Debouncing decisions
  - Frame timing (render, collect, process)
  - Dropped frames / performance issues
- MCP Server
  - Tool invocations (which tool, parameters)
  - Query execution time
  - Result sizes
  - Errors and failures
- Capture Mode
  - Lines captured per second
  - Flush events
  - Signal handling
  - Marker file operations
Log Levels:
- ERROR: Failures that impact functionality
- WARN: Degraded performance, recoverable errors
- INFO: Major operations (file opened, filter applied, source added)
- DEBUG: Detailed operation info (event types, state transitions)
- TRACE: Verbose instrumentation (every line read, every event)
Configuration:
```bash
# Enable debug logs for filter module
RUST_LOG=lazytail::filter=debug lazytail app.log

# Enable trace for everything
RUST_LOG=trace lazytail app.log

# Log to file
RUST_LOG=debug lazytail app.log 2> debug.log

# Structured JSON output
RUST_LOG_FORMAT=json RUST_LOG=debug lazytail --mcp
```

Performance Tracing:

```rust
use std::sync::Arc;
use tracing::{info_span, instrument};

#[instrument(skip(reader))]
fn apply_filter(reader: &dyn LogReader, filter: Arc<dyn Filter>) {
    let _span = info_span!("apply_filter", total_lines = reader.total_lines()).entered();
    // ... filtering logic
    // Span records duration on drop; #[instrument] captures function args
}
```

Tasks:
- Add `tracing` and `tracing-subscriber` dependencies
- Initialize tracing subscriber in main.rs
- Support RUST_LOG env var
- Support RUST_LOG_FORMAT (text/json/compact)
- Support --log-file flag
- Instrument core modules
  - Reader selection and file operations (reader/)
  - Filter orchestration and execution (filter/)
  - Event loop and debouncing (main.rs)
  - Index building (index/builder.rs)
  - File watching (watcher.rs, dir_watcher.rs)
  - MCP server (mcp/)
  - Capture mode (capture.rs)
- Add span instrumentation for performance-critical paths
  - Filter execution (per-filter timing)
  - Index building (progress tracking)
  - File reading (lines/sec, bytes/sec)
- Add diagnostic commands
  - `lazytail --version --verbose` - show build info, feature flags
  - `lazytail doctor` - check config, permissions, verify setup
- Document logging in README and troubleshooting guide
  - How to enable debug logs
  - Common patterns for debugging issues
  - Performance profiling with TRACE logs
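The per-module `RUST_LOG` directives shown in the configuration examples decompose into a `target=level` pair (or a bare global level). In practice `tracing-subscriber`'s `EnvFilter` does this parsing; a simplified sketch just to illustrate the configuration surface:

```rust
// Parse one RUST_LOG directive into (optional target, level).
// "lazytail::filter=debug" -> scoped to one module tree
// "trace"                  -> global default level
fn parse_directive(directive: &str) -> (Option<String>, String) {
    match directive.split_once('=') {
        Some((target, level)) => (Some(target.to_string()), level.to_string()),
        None => (None, directive.to_string()),
    }
}

fn main() {
    assert_eq!(
        parse_directive("lazytail::filter=debug"),
        (Some("lazytail::filter".to_string()), "debug".to_string())
    );
    assert_eq!(parse_directive("trace"), (None, "trace".to_string()));
    println!("ok");
}
```

The real `EnvFilter` syntax also supports comma-separated lists of directives and span-field filters, which is why reusing it beats a hand-rolled parser.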
Benefits:
- Debuggability: Understand what LazyTail is doing without recompiling
- Performance analysis: Find bottlenecks with span timing
- User support: Ask users for logs instead of guessing
- Development: Faster iteration when debugging issues
- Production monitoring: Track MCP server performance
- Streaming filter with mmap for large files
- SIMD-accelerated search using memchr/memmem
- Grep-style lazy line counting for case-sensitive search
- MCP search optimized with streaming filter (tested on 5GB+ files)
- FilterProgress::Complete includes lines_processed for accurate tracking
- Columnar index system — ✅ v0.6.0
- Per-line flags (severity, ANSI, JSON, logfmt, timestamp markers)
- O(1) line access via mmap-backed offset column
- Index-accelerated filtering with bitmap pre-filtering
- Incremental index building during capture mode
- ~2.5s to index 60M lines (9GB file)
- ANSI-aware severity detection with memchr-assisted scanning
- Search Result Bitmap Cache (Roaring Bitmaps)
- Persist filter results as Roaring Bitmaps alongside columnar index files
- Cache files live in `.lazytail/idx/{source_name}/search/`, keyed by filter pattern
- Searching the same term twice returns results without re-scanning the log file
- Bitmap is extended (not invalidated) when log file grows with appended lines — only scan new lines
- Bitmap is invalidated and rebuilt when log content changes (truncation, rotation)
- Two bitmaps can be AND/OR'd to resolve compound queries without scanning log content
- Cache miss falls through transparently to `streaming_filter` — no visible behavior change
- Especially impactful on large files (5GB+) where re-scanning on every keystroke is expensive
- See `.planning/ROADMAP.md` Phase 6 for full design notes
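The extend-on-append and AND/OR behavior described above can be sketched with a plain bitset; the real design uses Roaring Bitmaps (e.g. the `roaring` crate), but the operations are the same — the type and method names below are illustrative only:

```rust
// Simplified stand-in for a cached search-result bitmap: one bit per line,
// plus a high-water mark of how many lines the bitmap covers.
#[derive(Default)]
struct MatchBitmap {
    words: Vec<u64>,
    lines_scanned: usize,
}

impl MatchBitmap {
    fn insert(&mut self, line: usize) {
        let word = line / 64;
        if word >= self.words.len() {
            self.words.resize(word + 1, 0);
        }
        self.words[word] |= 1 << (line % 64);
    }

    fn contains(&self, line: usize) -> bool {
        self.words
            .get(line / 64)
            .map_or(false, |&w| (w >> (line % 64)) & 1 == 1)
    }

    /// Log grew: scan only the new tail and record fresh matches.
    fn extend_with(&mut self, new_lines: &[&str], pattern: &str) {
        for (i, line) in new_lines.iter().enumerate() {
            if line.contains(pattern) {
                self.insert(self.lines_scanned + i);
            }
        }
        self.lines_scanned += new_lines.len();
    }

    /// AND two cached bitmaps to resolve a compound query without re-scanning.
    fn and(&self, other: &Self) -> Vec<usize> {
        (0..self.lines_scanned.min(other.lines_scanned))
            .filter(|&l| self.contains(l) && other.contains(l))
            .collect()
    }
}

fn main() {
    let mut errors = MatchBitmap::default();
    let mut api = MatchBitmap::default();
    let log = ["api error", "db ok", "api ok", "db error"];
    errors.extend_with(&log, "error");
    api.extend_with(&log, "api");
    // "error AND api" answered purely from cached bitmaps:
    assert_eq!(errors.and(&api), vec![0]);
    // Appended lines extend the cache without rebuilding it:
    errors.extend_with(&["api error again"], "error");
    assert!(errors.contains(4));
    println!("ok");
}
```

Roaring Bitmaps add compressed storage and hardware-accelerated set operations on top of this, which is what makes caching millions of line numbers per pattern practical.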
- Compressed file support & log/index compression
  - Reading compressed logs: transparently open `.gz`, `.zst`, `.lz4` files (detect by extension or magic bytes)
    - Decompression on-the-fly during reading (streaming decompressor wrapping `LogReader`)
    - Support common formats: gzip (`.log.gz`), zstd (`.log.zst`), lz4
    - Rotated log archives (`app.log.1.gz`) should just work as sources
  - Capture-time compression: compress logs written by `lazytail -n` to save disk
    - `lazytail -n "API" --compress zstd` — compress as data is captured
    - Configurable in `lazytail.yaml`: `compression: zstd` (default: none)
    - Zstd preferred (best ratio/speed tradeoff, streaming-friendly)
  - Index compression: reduce index file size for large logs
    - Columnar index files (`.idx`) can grow large for 100M+ line files
    - Delta-encode line offsets (monotonically increasing → small deltas)
    - Optional zstd compression for flags column (repetitive severity patterns compress well)
    - Mmap compatibility: consider memory-mapped compressed blocks vs decompress-on-load tradeoff
  - Considerations:
    - Compressed files lose O(1) random line access — need block-level index or full decompression
    - Block compression (zstd seekable format) could preserve random access with modest overhead
    - Filter performance impact: streaming filter (mmap + SIMD) won't work on compressed data — need decompress-then-filter path
    - Prioritize read support first (common need), write compression second
- Performance profiling on very large files (100GB+)
- Optimize ANSI parsing (cache parsed lines?)
- Benchmark filtering performance
- Further optimize case-insensitive search
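For the compressed-log reading item above, format detection by magic bytes is small enough to sketch. The magic numbers are the standard ones for each format; the enum and function names are illustrative:

```rust
// Sniff the first bytes of a file to pick a decompression path.
// Extension checks would be the fast path before falling back to this.
#[derive(Debug, PartialEq)]
enum Compression {
    Gzip,
    Zstd,
    Lz4Frame,
    None,
}

fn detect_compression(header: &[u8]) -> Compression {
    match header {
        [0x1f, 0x8b, ..] => Compression::Gzip,                 // RFC 1952 gzip magic
        [0x28, 0xb5, 0x2f, 0xfd, ..] => Compression::Zstd,     // zstd frame magic
        [0x04, 0x22, 0x4d, 0x18, ..] => Compression::Lz4Frame, // LZ4 frame magic
        _ => Compression::None,
    }
}

fn main() {
    assert_eq!(detect_compression(&[0x1f, 0x8b, 0x08, 0x00]), Compression::Gzip);
    assert_eq!(detect_compression(&[0x28, 0xb5, 0x2f, 0xfd, 0x00]), Compression::Zstd);
    assert_eq!(detect_compression(b"plain text log line"), Compression::None);
    println!("ok");
}
```

The result would select which streaming decompressor wraps the reader; `Compression::None` keeps the existing mmap/SIMD fast path.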
Goal: Per-project log sources and configuration, auto-discovered by ancestry
Status: Core config system implemented (v0.4.0)
Discovery Order:
- Check current dir and ancestors for `lazytail.yaml`
- If found → project mode (use `.lazytail/` in that dir)
- If not found → global mode (`~/.config/lazytail/`)
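The ancestor walk is straightforward path logic. A sketch with the filesystem check injected as a closure so the walk itself is testable (the function name is illustrative; the real code checks the filesystem directly):

```rust
use std::path::{Path, PathBuf};

// Climb from `start` toward the filesystem root, returning the first
// `lazytail.yaml` found. None means global mode (~/.config/lazytail/).
fn find_project_config(start: &Path, exists: &dyn Fn(&Path) -> bool) -> Option<PathBuf> {
    let mut dir = Some(start);
    while let Some(d) = dir {
        let candidate = d.join("lazytail.yaml");
        if exists(&candidate) {
            return Some(candidate);
        }
        dir = d.parent();
    }
    None
}

fn main() {
    // Simulate a filesystem where only /home/me/proj/lazytail.yaml exists.
    let exists = |p: &Path| p == Path::new("/home/me/proj/lazytail.yaml");
    let found = find_project_config(Path::new("/home/me/proj/src/deep"), &exists);
    assert_eq!(found, Some(PathBuf::from("/home/me/proj/lazytail.yaml")));
    assert_eq!(find_project_config(Path::new("/tmp"), &exists), None);
    println!("ok");
}
```

Because the TUI and the MCP server must agree on project scoping, sharing one walk function like this avoids the two discovering different roots.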
Directory Structure:

```
my-project/
├── lazytail.yaml      # Config (committed to git)
├── .lazytail/         # Data (gitignored)
│   ├── data/          # Captured logs
│   ├── sources/       # Active markers
│   └── history.json   # Project-specific filter history
└── src/
```
`lazytail.yaml` Example:

```yaml
# Source definitions (path-based only)
sources:
  - name: Database
    path: /var/log/postgresql/postgresql.log
  - name: App
    path: ./logs/app.log  # Relative to project root
  - name: Nginx
    path: /var/log/nginx/access.log
```

Benefits:
- Team shares source definitions via git
- AI assistants (Claude Code) auto-discover project logs
- No pollution of global config
- Project-specific filter history
- Different projects can have different log setups
Tasks:
- Config file discovery (walk ancestors for `lazytail.yaml`)
- Parse YAML config with serde
- Create `.lazytail/` directory structure
- Support `path:` sources (watch existing file)
- Relative path resolution from project root
- Filter presets in config
- MCP: detect project root and scope sources
- Fallback to global `~/.config/lazytail/` when no project found
- Configuration file (`lazytail.yaml` with project + global scope)
- System-wide and project-scoped log source definitions (name, path)
- Pre-configured sources appear automatically in discovery mode
- Custom source groups/categories
- Default filter patterns per source
- UI preferences (colors, panel width, default modes)
- MCP server settings (enabled tools, access control)
- CLI Subcommands — extend CLI to be a full log management tool, not just a TUI launcher
  - `lazytail sources` / `lazytail list` — list all available sources
    - Show name, path, active/ended status, total lines, file size
    - Useful for scripting, piping into other tools, quick inspection
    - `lazytail sources --json` for machine-readable output
  - `lazytail search <pattern> [source]` — filter/search from CLI without opening TUI
    - `lazytail search "error" API` — search a specific source by name
    - `lazytail search "error" /var/log/app.log` — search a file directly
    - `lazytail search --regex "err(or|no)" API` — regex mode
    - `lazytail search --query 'json | level == "error"'` — query language support
    - Output to stdout (pipe-friendly, like grep but with lazytail's engines)
    - Support `--count` for match count only, `--context N` for surrounding lines
    - Color output by default (respects `--no-color` / `NO_COLOR` env)
    - Uses streaming filter / SIMD search — same performance as TUI and MCP
  - `lazytail tail [source]` — tail a source from CLI (like `tail -f` but source-aware)
    - `lazytail tail API` — tail a named source
    - `lazytail tail API --follow` / `-f` — follow mode (live stream new lines)
    - `lazytail tail API -n 50` — last 50 lines
    - Combines with search: `lazytail tail API -f | grep error` or built-in `--filter`
  - `lazytail clear` — clear captured log files
    - `lazytail clear` — clear all ended sources
    - `lazytail clear <name>` — clear a specific source by name
    - `lazytail clear --all` — clear all sources including active ones (with confirmation)
    - Respect project scoping (clear project logs when `lazytail.yaml` is present, global otherwise)
    - Confirmation prompt before destructive action (skip with `--yes` / `-y`)
  - `lazytail add <name> --path <path>` — register an existing log file as a named source
    - MCP equivalent: `add_source` tool for AI agents
    - Currently sources can only be added via `lazytail -n`, `lazytail.yaml`, or placing files in data dir
    - This enables dynamic source management without editing config
  - `lazytail rm <name>` — remove/unregister a source
    - Remove marker file and optionally delete the log file (`--delete-data`)
    - `lazytail rm --ended` — remove all ended sources
  - All subcommands respect project scoping (`lazytail.yaml` → `.lazytail/`, otherwise `~/.config/lazytail/`)
- JSON pretty collapsible viewer in TUI
- Detect JSON content in log lines automatically
- Pretty-print with syntax highlighting (keys, values, types)
- Collapsible/expandable nested objects and arrays (tree-style navigation)
- Expand/collapse individual nodes with keybindings
- Integrates with existing line expansion (`Space` to toggle)
- Filter by JSON field values
- Multiple display modes
- Raw view (current)
- Compact view (truncate long lines)
- JSON formatted view
- Table view (for structured logs)
- Conversation view (for AI chat JSONL)
- AI conversation JSONL viewer
  - Detect JSONL files with `role`/`content` fields (OpenAI, Anthropic, generic chat formats)
  - Render as a conversation: role labels (User/Assistant/System), indented message bubbles
  - Syntax-highlight code blocks within messages
  - Collapse/expand individual messages
  - Filter by role (`json | role == "assistant"`)
  - Useful for inspecting LLM training data, API logs, chat transcripts
- Bookmarks (mark lines for quick navigation)
- Export filtered results to file
- Copy selected line to clipboard with `y` — ✅ post-v0.7.0
  - Context-aware: copy selected line content (ANSI-stripped) in log view, source path in side panel
  - Full raw line content (not truncated)
  - Visual feedback (status bar message: "Copied: ...")
  - OSC 52 escape sequence for terminal clipboard access (works over SSH/tmux)
  - Fallback to `xclip`/`xsel`/`wl-copy`/`pbcopy` if OSC 52 not supported
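The OSC 52 sequence mentioned above has a fixed shape: `ESC ] 52 ; c ; <base64-payload> BEL`, where `c` targets the system clipboard. A sketch of constructing it — the minimal base64 encoder is inlined here only to keep the example self-contained; a crate like `base64` would be used in practice:

```rust
// Minimal standard base64 encoder (with '=' padding), for illustration only.
fn base64_encode(data: &[u8]) -> String {
    const TABLE: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    let mut out = String::new();
    for chunk in data.chunks(3) {
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = u32::from_be_bytes([0, b[0], b[1], b[2]]);
        for i in 0..4 {
            if i <= chunk.len() {
                out.push(TABLE[(n >> (18 - 6 * i)) as usize & 63] as char);
            } else {
                out.push('='); // pad short final chunks
            }
        }
    }
    out
}

/// OSC 52: ESC ] 52 ; c ; <base64> BEL  ("c" = system clipboard)
fn osc52_copy_sequence(text: &str) -> String {
    format!("\x1b]52;c;{}\x07", base64_encode(text.as_bytes()))
}

fn main() {
    assert_eq!(base64_encode(b"hi"), "aGk=");
    assert_eq!(osc52_copy_sequence("hi"), "\x1b]52;c;aGk=\x07");
    println!("ok");
}
```

Writing this sequence to the controlling terminal is what makes copy work over SSH: the terminal emulator on the local machine, not the remote host, performs the clipboard write. Tmux needs `set -g set-clipboard on` (or passthrough) for the sequence to reach the outer terminal.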
- Timestamp parsing and time-based filtering
- Detect common timestamp formats
- Filter by time range
- Jump to specific timestamp
- Self-update (`lazytail update`) — ✅ v0.7.0
  - Use the `self_update` crate to check GitHub Releases and replace the binary in-place
  - `lazytail update` — check for new version and install if available
  - `lazytail update --check` — check only, don't install (exit code 0 = up to date, 1 = update available)
  - Background update check on TUI startup (non-blocking, cached to `~/.config/lazytail/update_check.json`)
  - Only check once every 24h to avoid API rate limits and startup latency
  - Print subtle notice after TUI exits if update is available (not during — would interfere with ratatui)
  - `--no-update-check` flag and config option (`update_check: false`) to disable automatic checks
  - Respect AUR users: detect if installed via package manager and suggest `yay -S lazytail` instead of self-replacing
  - Feature-gated behind `self-update` cargo feature (included in GitHub release builds, excluded from AUR)
- TUI colors configuration / theme customization
  - Configurable colors via `lazytail.yaml` (e.g., `theme:` section)
  - Customizable elements: side panel, selected line, status bar, filter input, borders, active/ended indicators
  - Support named colors (`red`, `cyan`) and hex (`#ff5555`)
  - Built-in themes (e.g., default, light, solarized) with option to override individual colors
  - Respect terminal color scheme where possible
- Keybindings configuration
  - Configurable keybindings via `lazytail.yaml` (e.g., `keybindings:` section)
  - Override default vim-style bindings with custom keys
  - Support modifier keys (`Ctrl`, `Alt`, `Shift`) and key combinations
  - Sensible defaults that work out of the box, customization for power users
- Merged/chronological view for multiple sources
- Parse timestamps from all sources
- Display merged timeline
- Color-code by source
- Command-based sources (future consideration)
  - Define sources as commands in config: `command: "docker logs -f api"`
  - LazyTail spawns and manages the process
  - Auto-restart on failure?
  - Security implications (arbitrary command execution)
  - Alternative: keep using `cmd | lazytail -n "Name"` pattern
  - Needs more thought on UX and lifecycle management
- Tmux-aware capture
  - During `lazytail -n`, detect tmux session via `$TMUX`/`$TMUX_PANE` env vars
  - Store tmux coordinates (session:window.pane) in marker file alongside PID
  - Expose tmux context in `list_sources` response when available
  - No new MCP tools — agent has bash access and can use the info however it sees fit
- Integration tests for full app behavior
- UI snapshot testing
- Performance benchmarks in CI
- Release automation improvements — ✅ 2026-02-20
- Auto-trigger release builds when release-please creates releases
- Binaries automatically attached to GitHub releases
- Pre-built binaries for Windows
Current Tools (v0.6.0):

| Tool | Purpose | Status |
|---|---|---|
| `list_sources` | Discover available log sources | ✅ Complete |
| `get_lines` | Read lines from position | ✅ Complete |
| `get_tail` | Read last N lines | ✅ Complete |
| `search` | Find pattern matches + structured queries | ✅ Complete |
| `get_context` | Get lines around a match | ✅ Complete |
| `get_stats` | Index metadata and severity breakdown | ✅ Complete (v0.6.0) |
Common Parameters (all tools except `list_sources`):

| Parameter | Type | Default | Description |
|---|---|---|---|
| `output` | text/json | text | Response format (text reduces escaping for AI) |
| `raw` | bool | false | Keep ANSI escape codes (default strips them) |
Current `search` Parameters:

| Parameter | Type | Status |
|---|---|---|
| `source` | String | ✅ Done |
| `pattern` | String | ✅ Done (optional when using query) |
| `mode` | plain/regex | ✅ Done |
| `case_sensitive` | bool | ✅ Done |
| `max_results` | usize | ✅ Done |
| `context_lines` | usize | ✅ Done |
| `query` | FilterQuery | ✅ Done (JSON/logfmt field filtering with exclusions) |
| `time_range` | TimeRange | ❌ Missing |
Planned `search` Enhancements:

| Feature | Purpose | Priority |
|---|---|---|
| `time_range` param | Filter by timestamp range | 🟡 Medium |
| Search pagination / cursor | When results exceed `max_results` (capped at 1000), there's no cursor or offset to fetch the next page. Currently the only workaround is adding more filters to narrow results. A cursor/offset mechanism would allow iterating through large result sets. | 🟡 Medium |
Completed `list_sources` Enhancements:

| Feature | Purpose | Version |
|---|---|---|
| Include total line count per source | Callers shouldn't need a separate call to know source size. Useful for calculating offsets, gauging search scope, etc. | ✅ v0.6.0 (via `get_stats`) |

Planned `get_tail` Enhancements:

| Feature | Purpose | Priority |
|---|---|---|
| `since_line` parameter | Return only lines after a given line number. Enables efficient incremental polling of active sources without re-fetching or deduplicating. | ✅ Complete |

Planned `get_lines` Enhancements:

| Feature | Purpose | Priority |
|---|---|---|
| Negative indexing / "from end" shorthand | Reading last N lines without knowing total_lines first (`get_tail` covers most cases, but minor friction when you need a specific offset from end) | 🟢 Low |
Completed Tools:

| Tool | Purpose | Version |
|---|---|---|
| `get_stats` | Index metadata, severity breakdown, total lines, file size. Lightweight — reads index metadata only, no content scanning. Helps decide whether to tail or search, and whether a source is healthy. | ✅ v0.6.0 |

Planned New Tools:

| Tool | Purpose | Priority |
|---|---|---|
| `aggregate` | ✅ Implemented via `search` query aggregate param. Count by field, top N. Both text queries (`count by (field) \| top N`) and MCP JSON work. | ✅ Complete |
| `search_sources` | Search multiple sources at once, grouped results by source name. Essential for cross-service correlation (e.g., "find this request ID across all services"). Doesn't require timestamps or merging — just run the same query across all sources. | 🔴 High |
| `fields` | Sample N lines from a source and return discovered field names, types, and example values. Makes structured queries far more usable — currently consumers must `get_tail` a few lines and visually parse JSON to discover field names before constructing a query. Critical for LLM consumers that can't eyeball the data. | 🔴 High |
| `summarize` | Log overview: time range, top patterns, top services, error rate. Content-analysis based summary. | 🟡 Medium |
| `add_source` | Register an existing log file as a named source. Lets AI agents dynamically add sources without editing config or piping data. CLI equivalent: `lazytail add <name> --path <path>`. | 🟡 Medium |
| `export` | Dump filtered results to a file or return in bulk. Supports query filters, time range, and output format. Useful for "save me all errors from the last hour" workflows. TUI has export in backlog but MCP needs its own path since results are capped at 1000. | 🟢 Low |
Internal Improvements Done:
- ✅ Streaming filter with mmap (grep-like performance)
- ✅ SIMD-accelerated search (memchr/memmem)
- ✅ `lines_searched` tracking in FilterProgress::Complete
- ✅ Single-pass content extraction for matched lines
- ✅ Plain text output format (eliminates JSON escaping explosion for AI consumption)
Already Completed (landed post-v0.4.0):
- ✅ FilterQuery AST with serde derives (JSON interface for MCP)
- ✅ `query` parameter wired into MCP `search` tool
- ✅ Exclusion patterns (`exclude` field in query)
- ✅ `logfmt` parser support
- ✅ Nested field access (`user.id`)
- ✅ Text query parser for UI (`json | level == "error"`)
- ✅ All comparison operators (eq, ne, regex, not_regex, contains, gt, lt, gte, lte)
Remaining:
- Time range filtering (timestamp field detection, after/before)
- Aggregation (✅ Complete: `count by (field)`, `top N`)
- Filter presets from config available in MCP
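To illustrate where the text parser and the MCP JSON interface converge, here is a heavily simplified sketch of parsing `json | level == "error"` into a condition — a stand-in for the real FilterQuery AST, which also covers logfmt, nested fields, exclusions, and the full operator set (names and structure here are illustrative):

```rust
// Toy single-condition query, not the actual FilterQuery type.
#[derive(Debug, PartialEq)]
struct Condition {
    format: String, // "json" or "logfmt"
    field: String,
    op: String,     // "==", "!=", ...
    value: String,
}

fn parse_query(input: &str) -> Option<Condition> {
    let (format, rest) = input.split_once('|')?;
    let mut parts = rest.trim().splitn(3, ' ');
    let field = parts.next()?.to_string();
    let op = parts.next()?.to_string();
    let value = parts.next()?.trim_matches('"').to_string();
    Some(Condition {
        format: format.trim().to_string(),
        field,
        op,
        value,
    })
}

fn main() {
    let cond = parse_query(r#"json | level == "error""#).unwrap();
    assert_eq!(cond.format, "json");
    assert_eq!(cond.field, "level");
    assert_eq!(cond.op, "==");
    assert_eq!(cond.value, "error");
    println!("{:?}", cond);
}
```

Because the MCP `query` parameter deserializes (via serde) into the same AST this parser produces, both front ends share one evaluation path.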
Sidecar Index (.log.idx):
- Binary index file alongside each captured log
- Store arrival timestamp + byte offset per line
- Append to index in real-time during capture
- Header with validation: file size, mtime, first-4KB hash
- Auto-rebuild on corruption/truncation detection
- Enables time-based operations and merging
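A fixed-width record makes the sidecar both appendable and seekable: record N lives at byte N × record-size, so looking up a line's timestamp or byte offset is a single seek. A sketch of one possible encoding (the field layout and names are illustrative, not the actual on-disk format):

```rust
// One sidecar entry: arrival timestamp (ms) + byte offset of the line,
// both little-endian u64, appended per captured line.
#[derive(Debug, PartialEq)]
struct IndexRecord {
    arrival_ms: u64,  // wall-clock arrival, for time-range queries and merging
    byte_offset: u64, // start of the line in the log file, for O(1) seeks
}

impl IndexRecord {
    const SIZE: usize = 16;

    fn encode(&self) -> [u8; Self::SIZE] {
        let mut buf = [0u8; Self::SIZE];
        buf[..8].copy_from_slice(&self.arrival_ms.to_le_bytes());
        buf[8..].copy_from_slice(&self.byte_offset.to_le_bytes());
        buf
    }

    fn decode(buf: &[u8; Self::SIZE]) -> Self {
        Self {
            arrival_ms: u64::from_le_bytes(buf[..8].try_into().unwrap()),
            byte_offset: u64::from_le_bytes(buf[8..].try_into().unwrap()),
        }
    }
}

fn main() {
    let rec = IndexRecord { arrival_ms: 1_700_000_000_123, byte_offset: 4096 };
    let bytes = rec.encode();
    assert_eq!(IndexRecord::decode(&bytes), rec);
    // Record N lives at byte N * 16, so random access is one seek.
    assert_eq!(IndexRecord::SIZE * 3, 48);
    println!("ok");
}
```

The validation header (file size, mtime, first-4KB hash) would precede these records; a mismatch on open triggers the auto-rebuild path rather than trusting stale offsets.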
Combined Source View:
- Merge multiple sources into single chronological view
- Use sidecar timestamps for captured sources
- Parse timestamps from log content for external files
- Fallback to arrival order for streaming, concatenation for static
- Source-colored lines or `[SOURCE]` prefix
- Filter by source: `source:API`
Goal: MCP server automatically scopes to the project when lazytail.yaml is present
Design Questions:
- MCP server should detect `lazytail.yaml` by walking parent directories from CWD (same as TUI)
- When project-scoped: `list_sources` returns project sources + config-defined sources
- When global: `list_sources` returns `~/.config/lazytail/data/` sources (current behavior)
- Sources from `lazytail.yaml` `sources:` definitions should appear alongside captured sources
- Filter presets defined in config should be available via a `list_presets` or similar mechanism
- Consider: should MCP expose both project and global sources, or only project when scoped?
- Update this roadmap with detailed tasks
- Consider impact on existing tests
- Plan for backward compatibility
- Review CLAUDE.md for implementation guidance
- Write tests first (TDD when appropriate)
- Run pre-commit checks frequently
- Keep commits focused and atomic
- Update documentation as you go
- All tests pass (cargo test)
- Clippy clean (cargo clippy -- -D warnings)
- Formatted (cargo fmt)
- Documentation updated
- Roadmap updated to mark task complete
- This roadmap is a living document - update as priorities change
- Focus on one major feature at a time
- Keep production stability as top priority
- User feedback will shape future direction