This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Monibuca is a high-performance streaming server framework written in Go. It's designed to be a modular, scalable platform for real-time audio/video streaming with support for multiple protocols including RTMP, RTSP, HLS, WebRTC, GB28181, and more.
Basic Run (with SQLite):

```bash
cd example/default
go run -tags sqlite main.go
```

Build Tags:
- `sqlite` — Enable SQLite database support
- `sqliteCGO` — Enable SQLite with CGO
- `mysql` — Enable MySQL database support
- `postgres` — Enable PostgreSQL database support
- `duckdb` — Enable DuckDB database support
- `disable_rm` — Disable memory pool
- `fasthttp` — Use fasthttp instead of net/http
- `taskpanic` — Enable panics for testing
- `s3` — Enable AWS S3 / MinIO storage backend
- `cos` — Enable Tencent Cloud COS storage backend
- `oss` — Enable Alibaba Cloud OSS storage backend
- `enable_buddy` — Enable buddy memory allocator
Protocol Buffer Generation:

```bash
# Must use scripts — never use raw protoc commands
sh scripts/protoc.sh              # Generate all proto files
sh scripts/protoc.sh plugin_name  # Generate a specific plugin's proto
```

Windows:

```bash
.\scripts\protoc.bat
.\scripts\protoc.bat plugin_name
```

Release Building:
```bash
goreleaser build
```

Testing:

```bash
go test ./...                                 # all tests
go test ./plugin/rtsp/pkg                     # single package
go test ./test -run '^TestRestart$' -count=1  # single test
go test ./pkg/util -bench . -run '^$'         # benchmarks only
go test -race ./...                           # race detector
```

Lint / Static Analysis:

```bash
gofmt -w <changed-files>
go vet ./...
staticcheck ./...  # config in staticcheck.conf
```

Server (server.go): Main server instance that manages plugins, streams, and configurations. Implements the central event loop and lifecycle management. Managed via `task.RootManager[uint32, *Server]`.
Plugin System (plugin.go): Modular architecture where functionality is provided through plugins. Each plugin embeds task.Work (persistent queue manager) and implements the IPlugin interface. Plugins can provide:
- Protocol handlers (RTMP, RTSP, etc.)
- Media transformers
- Pull/Push proxies
- Recording capabilities
- Custom HTTP endpoints
Configuration System (pkg/config/): Hierarchical configuration system with priority order (high to low):
- Modify — dynamic runtime modifications
- Env — environment variables (uppercase, underscore-separated prefix)
- File — config file (e.g., `config.yaml`)
- defaultYaml — embedded default YAML
- Global — global config section
- Default — struct tag `default:"..."` values
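The priority lookup above can be sketched as a chain of layers consulted in order. This is an illustrative stdlib-only sketch; the `configLayer` and `resolve` names are hypothetical, not Monibuca's actual config API:

```go
package main

import "fmt"

// configLayer is one source of settings; layers are consulted in priority
// order (Modify > Env > File > defaultYaml > Global > Default).
type configLayer struct {
	name   string
	values map[string]string
}

// resolve returns the value for key from the highest-priority layer that
// defines it, plus the layer name for debugging.
func resolve(layers []configLayer, key string) (string, string, bool) {
	for _, l := range layers {
		if v, ok := l.values[key]; ok {
			return v, l.name, true
		}
	}
	return "", "", false
}

func main() {
	layers := []configLayer{
		{"Modify", map[string]string{}},
		{"Env", map[string]string{"listenaddr": ":9090"}},
		{"File", map[string]string{"listenaddr": ":8080", "loglevel": "info"}},
		{"Default", map[string]string{"listenaddr": ":8000", "loglevel": "warn"}},
	}
	v, from, _ := resolve(layers, "listenaddr")
	fmt.Println(v, from) // :9090 Env — the Env layer wins over File and Default
}
```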
Task System (pkg/task/): Advanced asynchronous task management system with multiple layers:
- Task: Basic unit of work with lifecycle management (Start/Run/Dispose)
- Job: Container that manages multiple child tasks and provides event loops
- Work: Special type of Job that acts as a persistent queue manager (keepalive=true)
- Channel: Event-driven task for handling continuous data streams
Storage System (pkg/storage/): Abstracted file storage with pluggable backends:
- Local — local filesystem (always available)
- S3 — AWS S3 / MinIO (build tag: `s3`)
- COS — Tencent Cloud COS (build tag: `cos`)
- OSS — Alibaba Cloud OSS (build tag: `oss`)
All backends implement the `Storage` and `File` interfaces. Upload retry logic is centralized in `pkg/storage/retry.go` via `UploadWithRetry()`.
```
Work (Queue Manager, keepalive=true)
└── Job (Container with Event Loop)
    └── Task (Basic Work Unit)
        ├── Start()   - Initialization phase
        ├── Run()     - Main execution phase
        └── Dispose() - Cleanup phase
```
- No child tasks → embed `task.Task`
- Need child tasks, stay alive → embed `task.Work` (persistent queue manager)
- Need child tasks, auto-exit when children done → embed `task.Job`
- Need timer → embed `task.TickTask`
- Need semaphore/signal → embed `task.ChannelTask`
- CAN override: `Start()`, `Dispose()`
- CANNOT override: `Stop()` — use `Stop(reason)` to stop a task from outside
- CANNOT call any `task.Task` method directly except `Stop()`
- CANNOT call any `task.Job` method directly except `AddTask()`
- Return `task.ErrTaskComplete` for successful completion in `Run()`
INIT → STARTING → STARTED → RUNNING → GOING → DISPOSING → DISPOSED
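A minimal sketch of enforcing this forward-only state sequence (the `advance` helper is hypothetical; the real transition logic lives in `pkg/task`):

```go
package main

import "fmt"

// Task lifecycle states in order, mirroring the sequence above.
const (
	INIT = iota
	STARTING
	STARTED
	RUNNING
	GOING
	DISPOSING
	DISPOSED
)

var stateNames = []string{"INIT", "STARTING", "STARTED", "RUNNING", "GOING", "DISPOSING", "DISPOSED"}

// advance allows only the next state in the forward-only sequence.
func advance(current, next int) (int, error) {
	if next != current+1 {
		return current, fmt.Errorf("illegal transition %s -> %s", stateNames[current], stateNames[next])
	}
	return next, nil
}

func main() {
	s, _ := advance(INIT, STARTING)
	_, err := advance(s, RUNNING) // skipping STARTED is rejected
	fmt.Println(stateNames[s], err != nil) // STARTING true
}
```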
The Task system supports sophisticated queue-based processing patterns:
- Work as Queue Manager: Work instances stay alive indefinitely and manage queues of tasks
- Task Queuing: Use `workInstance.AddTask(task, logger)` to queue tasks
- Automatic Lifecycle: Tasks are automatically started, executed, and disposed
- Error Handling: Built-in retry mechanisms and error propagation
Example Pattern (from S3 plugin):
```go
type UploadQueueTask struct {
	task.Work // Persistent queue manager
}

type FileUploadTask struct {
	task.Task // Individual work item
	// ... task-specific fields
}

// Initialize queue manager (typically in init())
var uploadQueueTask UploadQueueTask
m7s.Servers.AddTask(&uploadQueueTask)

// Queue individual tasks
uploadQueueTask.AddTask(&FileUploadTask{...}, logger)
```

Tasks can coordinate across different plugins through:
- Global Instance Pattern: Plugins expose global instances for cross-plugin access
- Event-based Triggers: One plugin triggers tasks in another plugin
- Shared Queue Managers: Multiple plugins can use the same Work instance
Example (MP4 → S3 Integration):
```go
// In MP4 plugin: trigger S3 upload after recording completes
s3plugin.TriggerUpload(filePath, deleteAfter)

// S3 plugin receives trigger and queues upload task
func TriggerUpload(filePath string, deleteAfter bool) {
	if s3PluginInstance != nil {
		s3PluginInstance.QueueUpload(filePath, objectKey, deleteAfter)
	}
}
```

Publisher: Handles incoming media streams and manages track information
Subscriber: Handles outgoing media streams to clients
Puller: Pulls streams from external sources
Pusher: Pushes streams to external destinations
Transformer: Processes/transcodes media streams
Recorder: Records streams to storage
- Publisher receives media data and creates tracks
- Tracks handle audio/video data with specific codecs
- Subscribers attach to publishers to receive media
- Transformers can process streams between publishers and subscribers
- Plugins provide protocol-specific implementations
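The publisher/subscriber attachment described above can be sketched with channels. This illustrates only the fan-out relationship, not Monibuca's actual zero-copy ring-buffer delivery; the `publisher` type is hypothetical:

```go
package main

import "fmt"

// publisher fans frames out to all attached subscribers.
type publisher struct {
	subscribers []chan string
}

// subscribe attaches a new subscriber and returns its receive channel.
func (p *publisher) subscribe() <-chan string {
	ch := make(chan string, 16)
	p.subscribers = append(p.subscribers, ch)
	return ch
}

// publish delivers a frame to every subscriber, dropping it for any
// subscriber whose buffer is full rather than blocking the publisher.
func (p *publisher) publish(frame string) {
	for _, ch := range p.subscribers {
		select {
		case ch <- frame:
		default: // lagging subscriber: drop instead of stalling the stream
		}
	}
}

func main() {
	var p publisher
	a, b := p.subscribe(), p.subscribe()
	p.publish("keyframe-1")
	fmt.Println(<-a, <-b) // both subscribers receive the frame
}
```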
Monibuca implements a sophisticated post-recording processing pipeline:
- Recording Completion: MP4 recorder finishes writing stream data
- Trailer Writing: Asynchronous task moves MOOV box to file beginning for web compatibility
- File Optimization: Temporary file operations ensure atomic updates
- External Storage Integration: Automatic upload to S3-compatible services with retry
- Cleanup: Optional local file deletion after successful upload
This workflow uses queue-based task processing to avoid blocking the main recording pipeline.
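The non-blocking hand-off can be sketched with a buffered channel and a single worker. `uploadQueue` is a hypothetical stand-in for a `task.Work` queue manager, not the real implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// uploadQueue decouples recording from uploading: the recorder enqueues
// and returns immediately, while one worker drains the queue in order.
type uploadQueue struct {
	jobs chan string
	wg   sync.WaitGroup
	done []string
}

func newUploadQueue() *uploadQueue {
	q := &uploadQueue{jobs: make(chan string, 64)}
	q.wg.Add(1)
	go func() {
		defer q.wg.Done()
		for path := range q.jobs {
			q.done = append(q.done, path) // stand-in for the real upload work
		}
	}()
	return q
}

// enqueue does not block while the buffer has room.
func (q *uploadQueue) enqueue(path string) { q.jobs <- path }

// close stops accepting jobs and waits for the worker to finish.
func (q *uploadQueue) close() {
	close(q.jobs)
	q.wg.Wait()
}

func main() {
	q := newUploadQueue()
	q.enqueue("/tmp/rec-001.mp4")
	q.enqueue("/tmp/rec-002.mp4")
	q.close()
	fmt.Println(q.done) // [/tmp/rec-001.mp4 /tmp/rec-002.mp4]
}
```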
- Implement the `IPlugin` interface (which inherits `task.IJob`)
- Define plugin metadata using `PluginMeta`
- Register with `InstallPlugin[YourPluginType](meta)` — auto-detects name and version via reflection
- Optionally implement protocol-specific interfaces:
  - `ITCPPlugin` for TCP servers
  - `IUDPPlugin` for UDP servers
  - `IQUICPlugin` for QUIC servers
  - `IRegisterHandler` for HTTP endpoints
  - `IPublishHookPlugin` / `ISubscribeHookPlugin` for stream hooks
- Init: Configuration parsing and initialization
- Start: Network listeners and task registration
- Run: Active operation
- Dispose: Cleanup and shutdown
```go
// Expose global instance for cross-plugin access
var s3PluginInstance *S3Plugin

func (p *S3Plugin) Start() error {
	s3PluginInstance = p // Set global instance
	// ... rest of start logic
}

// Provide public API functions
func TriggerUpload(filePath string, deleteAfter bool) {
	if s3PluginInstance != nil {
		s3PluginInstance.QueueUpload(filePath, objectKey, deleteAfter)
	}
}
```

```go
// In one plugin: trigger event after completion
if t.filePath != "" {
	t.Info("MP4 file processing completed, triggering S3 upload")
	s3plugin.TriggerUpload(t.filePath, false)
}
```

Multiple plugins can share Work instances for coordinated processing.
```go
type MyTask struct {
	task.Task
	// ... custom fields
}

func (t *MyTask) Start() error {
	// Initialize resources, validate inputs
	return nil
}

func (t *MyTask) Run() error {
	// Main work execution
	// Return task.ErrTaskComplete for successful completion
	return nil
}
```

```go
type MyQueueManager struct {
	task.Work
}

var myQueue MyQueueManager

func init() {
	m7s.Servers.AddTask(&myQueue)
}

// Queue tasks from anywhere
myQueue.AddTask(&MyTask{...}, logger)
```

- Tasks automatically support retry mechanisms
- Use `task.SetRetry(maxRetry, interval)` for custom retry behavior
- Return `task.ErrTaskComplete` for successful completion
- Return other errors to trigger retry or failure handling
- All field names must be lowercase in YAML config files
- Field name normalization: lowercase, removes underscores/hyphens
- The `"plugin"` field is always skipped during parsing
- Config structs use `default:"..."` and `desc:"..."` tags
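The normalization rule above can be sketched as follows (`normalizeField` is a hypothetical helper; the real parser may cover more cases):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeField lowercases a config key and strips underscores and
// hyphens, so "Listen_Addr", "listen-addr", and "listenaddr" all match.
func normalizeField(name string) string {
	name = strings.ToLower(name)
	name = strings.ReplaceAll(name, "_", "")
	return strings.ReplaceAll(name, "-", "")
}

func main() {
	fmt.Println(normalizeField("Listen_Addr")) // listenaddr
	fmt.Println(normalizeField("log-level"))   // loglevel
}
```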
- HTTP/TCP/UDP/QUIC listeners
- Database connections (SQLite, MySQL, PostgreSQL, DuckDB)
- Authentication settings
- Admin interface settings
- Global stream alias mappings
Each plugin can define its own configuration structure that gets merged with global settings.
Supports multiple database backends:
- SQLite: Default lightweight option
- MySQL: Production deployments
- PostgreSQL: Production deployments
- DuckDB: Analytics use cases
Automatic migration is handled for core models including users, proxies, and stream aliases.
- RTMP: Real-time messaging protocol
- RTSP: Real-time streaming protocol
- HLS: HTTP live streaming
- WebRTC: Web real-time communication
- GB28181: Chinese surveillance standard
- FLV: Flash video format
- MP4: MPEG-4 format with post-processing capabilities
- SRT: Secure reliable transport
- S3: File upload integration with AWS S3/MinIO compatibility
- JWT-based authentication for admin interface
- Stream-level authentication with URL signing
- Role-based access control (admin/user)
- Webhook support for external auth integration
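Stream-level URL signing generally amounts to an expiring HMAC over the stream path. This is a generic sketch only; Monibuca's actual parameter names, hash choice, and encoding may differ:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// signStream produces an HMAC-SHA256 signature over the stream path and
// an expiry timestamp (hypothetical scheme for illustration).
func signStream(secret, streamPath string, expires int64) string {
	mac := hmac.New(sha256.New, []byte(secret))
	fmt.Fprintf(mac, "%s:%d", streamPath, expires)
	return hex.EncodeToString(mac.Sum(nil))
}

// verifyStream checks the expiry first, then compares signatures in
// constant time via hmac.Equal.
func verifyStream(secret, streamPath string, expires, now int64, sig string) bool {
	if now > expires {
		return false // link expired
	}
	expected := signStream(secret, streamPath, expires)
	return hmac.Equal([]byte(expected), []byte(sig))
}

func main() {
	sig := signStream("s3cret", "live/cam1", 1700000000)
	fmt.Println(verifyStream("s3cret", "live/cam1", 1700000000, 1699999000, sig)) // true
	fmt.Println(verifyStream("s3cret", "live/cam2", 1700000000, 1699999000, sig)) // false
}
```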
- Follow existing patterns and naming conventions
- Use the task system for async operations; never use bare goroutines — prefer `AddTask`
- Implement proper error handling and logging
- Use the configuration system for all settings
- Dot imports are discouraged; exception: `staticcheck.conf` whitelists `. "m7s.live/v5/pkg"`
- Use structured logging (`slog`): always pass key-value pairs — `t.Info("msg", "key", value)`
- Use task's built-in logger methods (`t.Info`/`Warn`/`Error`/`Debug`) rather than `log.Printf`
- Keep log messages short with context in key-value fields
- Must use `sh scripts/protoc.sh` (global) or `sh scripts/protoc.sh <plugin_name>` (per-plugin)
- Never use raw `protoc` command lines directly
- Storage backends (S3, COS, OSS) are guarded by build tags (`s3`, `cos`, `oss`)
- All backends implement the `Storage` and `File` interfaces in `pkg/storage/storage.go`
- Upload retry logic is centralized in `pkg/storage/retry.go` via `UploadWithRetry()`
- `File.Close()` triggers upload for object storage backends; use `defer` to ensure temp file cleanup
- Unit tests should be placed alongside source files
- Integration tests can use the example configurations
- Use the `mock.py` script for protocol testing
- After edits, at minimum: `gofmt -w <changed-files>` then `go test <affected-package> -count=1`
- Memory pool is enabled by default (disable with `disable_rm`)
- Zero-copy design for media data where possible
- Lock-free data structures for high concurrency
- Efficient buffer management with ring buffers
- Queue-based processing prevents blocking main threads
- Performance monitoring and profiling
- Real-time metrics via Prometheus endpoint (`/api/metrics`)
- pprof integration for memory/CPU profiling
- Structured logging with slog
- Configurable log levels
- Log rotation support
- Fatal crash logging
- Tasks automatically include detailed logging with task IDs and types
- Use `task.Debug`/`Info`/`Warn`/`Error` methods for consistent logging
- Task state and progress can be monitored through descriptions
- Event loop status and queue lengths are logged automatically
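The ring-buffer idea behind the buffer-management point above can be sketched as an overwrite-on-full buffer, so a lagging reader never stalls the writer. Illustrative only; Monibuca's actual ring buffer for media frames is more elaborate:

```go
package main

import "fmt"

// ringBuffer is a fixed-capacity buffer that overwrites the oldest entry
// when full, keeping only the newest frames.
type ringBuffer struct {
	data  []string
	head  int // next write position
	count int
}

func newRing(capacity int) *ringBuffer {
	return &ringBuffer{data: make([]string, capacity)}
}

// push writes a frame, overwriting the oldest one when the buffer is full.
func (r *ringBuffer) push(frame string) {
	r.data[r.head] = frame
	r.head = (r.head + 1) % len(r.data)
	if r.count < len(r.data) {
		r.count++
	}
}

// snapshot returns the buffered frames oldest-first.
func (r *ringBuffer) snapshot() []string {
	out := make([]string, 0, r.count)
	start := (r.head - r.count + len(r.data)) % len(r.data)
	for i := 0; i < r.count; i++ {
		out = append(out, r.data[(start+i)%len(r.data)])
	}
	return out
}

func main() {
	r := newRing(3)
	for _, f := range []string{"f1", "f2", "f3", "f4"} {
		r.push(f)
	}
	fmt.Println(r.snapshot()) // [f2 f3 f4] — f1 was overwritten
}
```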
- Web-based admin UI served from `admin.zip`
- RESTful API for all operations
- Real-time stream monitoring
- Configuration management
- User management (when auth enabled)
- Default HTTP port: 8080
- Default gRPC port: 50051
- Check plugin-specific port configurations
- Ensure proper build tags for database support
- Check DSN configuration strings
- Verify database file permissions
- Plugins are auto-discovered from imports
- Check plugin enable/disable status
- Verify configuration merging
- Ensure Work instances are added to server during initialization
- Check task queue status if tasks aren't executing
- Verify proper error handling in task implementation
- Monitor task retry counts and failure reasons in logs
- Forgetting required build tags for DB/protocol/storage-specific behavior
- Editing `.proto` files but not regenerating with `scripts/protoc.sh`
- Adding non-whitelisted dot imports
- Overriding `Stop()` instead of using `Stop(reason)` from outside
- Calling `task.Task` methods directly (only `Stop()` is allowed)
- Using uppercase field names in YAML config files
- Using bare goroutines instead of the task system
- Bypassing task lifecycle conventions in async plugin code