🚀 Comprehensive performance benchmarking suite comparing Go web frameworks with atomic, deterministic, and resumable test execution.
- 🎯 Overview
- 🏗️ Framework Comparison
- 📊 Benchmark Scenarios
- 🧪 Test Environment
- 📈 Results
- 🚀 Quick Start
- ⚙️ Configuration
- 📚 Documentation
- 🤝 Contributing
This repository contains a comprehensive benchmarking suite designed to evaluate the performance of Go web frameworks with a focus on atomic, deterministic, and resumable test execution. Our goal is to provide accurate, reproducible, and meaningful performance comparisons across various real-world scenarios.
Framework | Version | Description |
---|---|---|
🔥 GoFlash | Latest | High-performance, minimalist Go web framework |
🍸 Gin | Latest | Fast HTTP web framework with a Martini-like API |
🕷️ Fiber | v2.52.0 | Express-inspired web framework built on Fasthttp |
📢 Echo | v4.11.4 | High performance, extensible, minimalist Go web framework |
🔗 Chi | v5.0.11 | Lightweight, expressive and scalable HTTP router |
- GoFlash: Optimized for speed with minimal overhead
- Gin: Battle-tested with excellent middleware ecosystem
- Fiber: Express.js-like API with high performance
- Echo: High performance with extensible middleware
- Chi: Lightweight and expressive routing
Each framework excels in different scenarios, making this benchmark crucial for informed decision-making in your next Go project.
Our benchmark suite covers 9 comprehensive scenarios that represent common web application patterns:
📝 Click to expand scenario details
# | Scenario | Description | Real-world Impact |
---|---|---|---|
1️⃣ | Simple Ping/Pong | Basic endpoint response | Foundation performance |
2️⃣ | URL Path Parameter | Dynamic route parsing | RESTful API endpoints |
3️⃣ | Request Context | Context read/write operations | State management |
4️⃣ | JSON Binding | Request deserialization + validation | API data processing |
5️⃣ | Wildcard Routing | Trailing wildcard route matching | File serving, catch-all routes |
6️⃣ | Route Groups | Basic route organization | API versioning |
7️⃣ | Deep Route Groups | 10-level nested groups | Complex routing hierarchies |
8️⃣ | Single Middleware | Basic middleware processing | Authentication, logging |
9️⃣ | Middleware Chain | 10-middleware processing chain | Complex request pipelines |
- Machine: Apple MacBook Pro (M3 chip)
- Memory: 32 GB RAM
- Architecture: ARM64
- Load Generator: wrk HTTP benchmarking tool
- Threads: 4 concurrent threads
- Connections: 50 concurrent connections
- Protocol: HTTP/1.1 with keep-alive
- ✅ Functionally equivalent handlers across all frameworks (see the sketch after this list)
- ✅ Production/release build settings enabled
- ✅ Consistent routing patterns and middleware implementation
- ✅ Multiple test runs for statistical significance
- ✅ Isolated server processes to prevent interference
- ✅ Atomic and deterministic test execution
- ✅ Resume capability from failed runs
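As an illustration of what "functionally equivalent handlers" means in practice, here is a minimal sketch (not the repository's actual `frameworks/` code) of the same `/ping` behavior expressed in Gin and Echo. The ports follow the server configuration table later in this README, and error returns are ignored for brevity:

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
	"github.com/labstack/echo/v4"
)

func main() {
	// Gin: release mode, plain-text "pong" on /ping.
	gin.SetMode(gin.ReleaseMode)
	g := gin.New()
	g.GET("/ping", func(c *gin.Context) {
		c.String(http.StatusOK, "pong")
	})
	go g.Run(":17781") // port taken from the configuration table below

	// Echo: the same behavior, same status code and response body.
	e := echo.New()
	e.HideBanner = true
	e.GET("/ping", func(c echo.Context) error {
		return c.String(http.StatusOK, "pong")
	})
	e.Start(":17783")
}
```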
⚠️ Note: Results are indicative and may vary based on workload, configuration, and environment. Always benchmark in your specific use case.
📊 Complete dataset available: Detailed CSV files and additional metrics can be found in the `results/2025-08-26/` directory.
Our benchmarks reveal significant performance differences across frameworks and scenarios. Below are the key findings from 54 benchmark tests (6 frameworks, including Fiber v3, across 9 scenarios): an overall summary followed by per-scenario rankings.
🏆 Rank | Framework | Avg RPS | Min RPS | Max RPS | Tests | Performance |
---|---|---|---|---|---|---|
🥇 | Fiber v3 | 283,816 | 240,999 | 303,030 | 9 | 🔥 Excellent |
🥈 | Fiber | 280,118 | 250,018 | 290,845 | 9 | ⚡ Very Good |
🥉 | Gin | 221,379 | 197,620 | 232,125 | 9 | ✅ Good |
#4 | Chi | 220,362 | 200,062 | 235,505 | 9 | 📊 Baseline |
#5 | Echo | 215,519 | 196,234 | 231,230 | 9 | 📊 Baseline |
#6 | GoFlash | 212,779 | 162,382 | 225,325 | 9 | 📊 Baseline |
🏆 Rank | Framework | Avg RPS | Performance vs Leader |
---|---|---|---|
🥇 | Fiber v3 | 303,030 | 100% (Leader) |
🥈 | Fiber | 280,031 | 92.4% of leader |
🥉 | Chi | 235,505 | 77.7% of leader |
#4 | Gin | 232,125 | 76.6% of leader |
#5 | GoFlash | 225,325 | 74.4% of leader |
#6 | Echo | 219,567 | 72.5% of leader |
🏆 Rank | Framework | Avg RPS | Performance vs Leader |
---|---|---|---|
🥇 | Fiber v3 | 293,635 | 100% (Leader) |
🥈 | Fiber | 280,867 | 95.7% of leader |
🥉 | Gin | 223,626 | 76.2% of leader |
#4 | GoFlash | 222,000 | 75.6% of leader |
#5 | Chi | 221,712 | 75.5% of leader |
#6 | Echo | 214,485 | 73.0% of leader |
🏆 Rank | Framework | Avg RPS | Performance vs Leader |
---|---|---|---|
🥇 | Fiber v3 | 283,458 | 100% (Leader) |
🥈 | Fiber | 282,358 | 99.6% of leader |
🥉 | Gin | 218,490 | 77.1% of leader |
#4 | Chi | 216,871 | 76.5% of leader |
#5 | GoFlash | 214,172 | 75.6% of leader |
#6 | Echo | 208,281 | 73.5% of leader |
🏆 Rank | Framework | Avg RPS | Performance vs Leader |
---|---|---|---|
🥇 | Fiber v3 | 289,274 | 100% (Leader) |
🥈 | Fiber | 283,202 | 97.9% of leader |
🥉 | Echo | 231,230 | 79.9% of leader |
#4 | Chi | 228,242 | 78.9% of leader |
#5 | Gin | 225,711 | 78.0% of leader |
#6 | GoFlash | 219,170 | 75.8% of leader |
🏆 Rank | Framework | Avg RPS | Performance vs Leader |
---|---|---|---|
🥇 | Fiber v3 | 292,412 | 100% (Leader) |
🥈 | Fiber | 288,075 | 98.5% of leader |
🥉 | Gin | 226,190 | 77.4% of leader |
#4 | GoFlash | 219,780 | 75.2% of leader |
#5 | Echo | 214,402 | 73.3% of leader |
#6 | Chi | 211,424 | 72.3% of leader |
🏆 Rank | Framework | Avg RPS | Performance vs Leader |
---|---|---|---|
🥇 | Fiber | 250,018 | 100% (Leader) |
🥈 | Fiber v3 | 240,999 | 96.4% of leader |
🥉 | Chi | 200,062 | 80.0% of leader |
#4 | Gin | 197,620 | 79.0% of leader |
#5 | Echo | 196,234 | 78.5% of leader |
#6 | GoFlash | 162,382 | 64.9% of leader |
🏆 Rank | Framework | Avg RPS | Performance vs Leader |
---|---|---|---|
🥇 | Fiber | 286,427 | 100% (Leader) |
🥈 | Fiber v3 | 282,417 | 98.6% of leader |
🥉 | Gin | 228,309 | 79.7% of leader |
#4 | Chi | 221,085 | 77.2% of leader |
#5 | Echo | 220,603 | 77.0% of leader |
#6 | GoFlash | 219,793 | 76.7% of leader |
🏆 Rank | Framework | Avg RPS | Performance vs Leader |
---|---|---|---|
🥇 | Fiber v3 | 284,418 | 100% (Leader) |
🥈 | Fiber | 279,242 | 98.2% of leader |
🥉 | Gin | 226,506 | 79.6% of leader |
#4 | Chi | 226,059 | 79.5% of leader |
#5 | GoFlash | 219,438 | 77.2% of leader |
#6 | Echo | 212,276 | 74.6% of leader |
🏆 Rank | Framework | Avg RPS | Performance vs Leader |
---|---|---|---|
🥇 | Fiber | 290,845 | 100% (Leader) |
🥈 | Fiber v3 | 284,705 | 97.9% of leader |
🥉 | Echo | 222,594 | 76.5% of leader |
#4 | Chi | 222,302 | 76.4% of leader |
#5 | Gin | 213,838 | 73.5% of leader |
#6 | GoFlash | 212,948 | 73.2% of leader |
🎯 Simple Ping/Pong Endpoint
Test: Basic HTTP GET response without any processing
Key Insights:
- Foundation performance comparison
- Measures framework overhead
- Critical for high-throughput applications
Results: CSV Data
🔗 URL Path Parameter Extraction
Test: Dynamic route matching and parameter extraction (`/user/:id`)
Key Insights:
- RESTful API performance
- Router efficiency comparison
- Path parsing overhead analysis
Results: CSV Data
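A minimal sketch, assuming Chi v5 and the `/param/:id` route from the endpoint list later in this README (Chi spells the parameter `{id}`), of what a path-parameter handler for this scenario might look like. It is illustrative, not the repository's `frameworks/chi` implementation:

```go
package main

import (
	"net/http"

	"github.com/go-chi/chi/v5"
)

func main() {
	r := chi.NewRouter()
	// Extract the dynamic segment and echo it back.
	r.Get("/param/{id}", func(w http.ResponseWriter, req *http.Request) {
		id := chi.URLParam(req, "id")
		w.Write([]byte(id))
	})
	http.ListenAndServe(":17784", r) // port taken from the configuration table below
}
```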
📝 Request Context Operations
Test: Writing to and reading from request context
Key Insights:
- Context management efficiency
- State preservation performance
- Middleware communication overhead
Results: CSV Data
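The Gin sketch below shows the kind of context write-then-read this scenario exercises; the key and value are hypothetical, and the real benchmark handlers may differ:

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	gin.SetMode(gin.ReleaseMode)
	r := gin.New()
	// Write a value into the request context, then read it back before responding.
	r.GET("/context", func(c *gin.Context) {
		c.Set("request-id", "abc123") // hypothetical key/value
		if v, ok := c.Get("request-id"); ok {
			c.String(http.StatusOK, v.(string))
			return
		}
		c.Status(http.StatusInternalServerError)
	})
	r.Run(":17781")
}
```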
📦 JSON Binding & Validation
Test: JSON request deserialization with struct binding and validation
Key Insights:
- API data processing performance
- Serialization/deserialization efficiency
- Validation overhead impact
Results: CSV Data
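A hedged sketch of JSON binding plus validation in Gin; the `createUser` struct and its validation tags are assumptions for illustration, not the benchmark's actual payload:

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// Hypothetical payload; the real benchmark's struct and validation rules may differ.
type createUser struct {
	Name  string `json:"name" binding:"required"`
	Email string `json:"email" binding:"required,email"`
}

func main() {
	gin.SetMode(gin.ReleaseMode)
	r := gin.New()
	r.POST("/json", func(c *gin.Context) {
		var req createUser
		// Deserialize and validate in one step; reject invalid payloads with 400.
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			return
		}
		c.JSON(http.StatusOK, req)
	})
	r.Run(":17781")
}
```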
🌟 Wildcard Route Parsing
Test: Trailing wildcard route matching (`/files/*path`)
Key Insights:
- File serving performance
- Catch-all route efficiency
- Dynamic path handling
Results: CSV Data
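A minimal Gin sketch of trailing-wildcard matching, using the `/wildcard/*path` route from the endpoint list later in this README; the real handlers may differ:

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	gin.SetMode(gin.ReleaseMode)
	r := gin.New()
	// Everything after /wildcard/ is captured in "path"
	// (Gin includes the leading slash in the captured value).
	r.GET("/wildcard/*path", func(c *gin.Context) {
		c.String(http.StatusOK, c.Param("path"))
	})
	r.Run(":17781")
}
```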
📁 Route Groups
Test: Basic route group organization (`/api/v1/users`)
Key Insights:
- API organization efficiency
- Group routing overhead
- Nested structure performance
Results: CSV Data
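A small Gin sketch of the nested-group layout behind the `/api/v1/group/ping` endpoint listed later in this README; it is illustrative only:

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	gin.SetMode(gin.ReleaseMode)
	r := gin.New()
	// Nested groups mirror the /api/v1/group/ping endpoint.
	api := r.Group("/api")
	v1 := api.Group("/v1")
	grp := v1.Group("/group")
	grp.GET("/ping", func(c *gin.Context) {
		c.String(http.StatusOK, "pong")
	})
	r.Run(":17781")
}
```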
🏗️ Deep Route Groups (10 Levels)
Test: Complex nested route groups (`/g1/g2/.../g10/endpoint`)
Key Insights:
- Complex routing hierarchy performance
- Deep nesting overhead
- Scalability under complex structures
Results: CSV Data
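One way to build the 10-level `/g1/g2/.../g10/ping` hierarchy in Gin is to nest groups in a loop, as in this sketch (illustrative, not the repository's code):

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	gin.SetMode(gin.ReleaseMode)
	r := gin.New()
	// Build /g1/g2/.../g10 by nesting each group inside the previous one.
	g := r.Group("/g1")
	for i := 2; i <= 10; i++ {
		g = g.Group(fmt.Sprintf("/g%d", i))
	}
	g.GET("/ping", func(c *gin.Context) {
		c.String(http.StatusOK, "pong")
	})
	r.Run(":17781")
}
```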
⚙️ Single Middleware
Test: Basic middleware processing (e.g., request logging)
Key Insights:
- Middleware overhead analysis
- Basic processing pipeline performance
- Authentication/logging impact
Results: CSV Data
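A minimal Gin sketch of a single middleware in front of `/mw/ping`; the header it sets is hypothetical:

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	gin.SetMode(gin.ReleaseMode)
	r := gin.New()
	// One middleware that stamps a header before handing off to the handler.
	mw := r.Group("/mw", func(c *gin.Context) {
		c.Header("X-Trace", "1") // hypothetical header
		c.Next()
	})
	mw.GET("/ping", func(c *gin.Context) {
		c.String(http.StatusOK, "pong")
	})
	r.Run(":17781")
}
```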
🔗 Middleware Chain (10 Middlewares)
Test: Complex middleware chain with 10 sequential middlewares
Key Insights:
- Complex pipeline performance
- Cumulative middleware overhead
- Enterprise-grade processing chains
Results: CSV Data
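A sketch of how a 10-middleware chain for `/mw10/ping` could be assembled in Gin; each middleware here does only a token amount of work, which may differ from the benchmark's real chain:

```go
package main

import (
	"net/http"
	"strconv"

	"github.com/gin-gonic/gin"
)

func main() {
	gin.SetMode(gin.ReleaseMode)
	r := gin.New()
	// Register ten lightweight middlewares on one group so the measurement
	// captures chain overhead rather than payload work.
	mw10 := r.Group("/mw10")
	for i := 1; i <= 10; i++ {
		n := strconv.Itoa(i)
		mw10.Use(func(c *gin.Context) {
			c.Set("step-"+n, true) // hypothetical per-middleware work
			c.Next()
		})
	}
	mw10.GET("/ping", func(c *gin.Context) {
		c.String(http.StatusOK, "pong")
	})
	r.Run(":17781")
}
```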
Framework | Port | Optimization |
---|---|---|
🔥 GoFlash | `:17780` | Production mode |
🍸 Gin | `:17781` | Release mode |
🕷️ Fiber | `:17782` | Production settings |
📢 Echo | `:17783` | Production mode |
🔗 Chi | `:17784` | Release mode |
Get up and running with the benchmark suite in minutes! Follow these step-by-step instructions:
- Go 1.21+ installed and configured
- wrk HTTP benchmarking tool
- macOS/Linux environment (recommended)
🛠️ Installing Prerequisites
# macOS (Homebrew)
brew install wrk

# Ubuntu/Debian
sudo apt-get install wrk
# Build all framework servers
./benchmark build
This command will:
- 📦 Download dependencies for all frameworks
- 🔨 Compile optimized production builds
- 📁 Place executables in the `build/` directory
# 🏆 High-Volume Load Testing (1M requests, 10 batches for statistical significance)
go run ./cmd run --requests 1000000 --connections 100 --batches 10
# ⏱️ Duration-Based Testing (1 minute per test scenario)
go run ./cmd run --duration 1m --connections 50 --batches 3
# 🚀 Full benchmark suite (recommended for comprehensive analysis)
go run ./cmd run --requests 10000 --connections 50 --batches 3
# ⚡ Quick test (faster execution for development)
go run ./cmd run --requests 1000 --connections 10 --batches 1
# 🎯 Custom framework and scenario selection
go run ./cmd run --duration 30s --frameworks flash,gin,gofiber --scenarios simple,json,param
# 📊 Specific test configuration examples
go run ./cmd run --requests <requests> --connections <connections> --batches <batches>
go run ./cmd run --duration <duration> --connections <connections> --batches <batches>
Parameters:
- `--requests`: Total number of requests per scenario (use `0` for duration-based testing)
- `--duration`: Test duration per scenario (e.g., `30s`, `1m`, `5m`)
- `--connections`: Concurrent connections
- `--batches`: Number of test batches for statistical significance
- `--frameworks`: Comma-separated list of frameworks to test (e.g., `flash,gin,gofiber`)
- `--scenarios`: Comma-separated list of scenarios to run (e.g., `simple,json,param`)
After running benchmarks, you'll find detailed results in the `results/` directory:
results/
├── 📊 2025-08-26/ # Date-based results directory
│ ├── 📈 summary.csv # Comprehensive comparison data
│ ├── 📋 parts/ # Individual framework results
│ ├── 🔍 raw/ # Raw benchmark outputs
│ └── 📁 images/ # Generated charts
└── 📁 previous-runs/ # Historical results
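If you want to post-process results programmatically, a small Go sketch like the one below can read `summary.csv`. The file path and column layout are assumptions for illustration, so inspect the header row of your own file before relying on column indices:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"log"
	"os"
)

func main() {
	// Assumed path; adjust to your own run's date-based directory.
	f, err := os.Open("results/2025-08-26/summary.csv")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	rows, err := csv.NewReader(f).ReadAll()
	if err != nil {
		log.Fatal(err)
	}
	for i, row := range rows {
		if i == 0 {
			fmt.Println("columns:", row) // header row shows the actual layout
			continue
		}
		fmt.Println(row)
	}
}
```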
🔧 Optimization Recommendations
- Close unnecessary applications to reduce system noise
- Run multiple batches for statistical significance
- Use consistent system load across test runs
- Monitor system resources during benchmarks
- Light testing: `--requests 1000 --connections 10`
- Standard testing: `--requests 10000 --connections 50`
- Heavy testing: `--requests 100000 --connections 100`
# Increase file descriptor limit (if needed)
ulimit -n 65536
# Check current limits
ulimit -a
Framework | Port | Health Check | Base URL |
---|---|---|---|
🔥 GoFlash | 17780 | `GET /ping` | http://localhost:17780 |
🍸 Gin | 17781 | `GET /ping` | http://localhost:17781 |
🕷️ Fiber | 17782 | `GET /ping` | http://localhost:17782 |
📢 Echo | 17783 | `GET /ping` | http://localhost:17783 |
🔗 Chi | 17784 | `GET /ping` | http://localhost:17784 |
Each server implements the following endpoints for benchmarking:
GET /ping # Simple ping/pong
GET /param/:id # URL parameter extraction
GET /context # Request context operations
POST /json # JSON binding & validation
GET /wildcard/*path # Wildcard route parsing
GET /api/v1/group/ping # Basic route group
GET /g1/g2/.../g10/ping # Deep nested groups (10 levels)
GET /mw/ping # Single middleware
GET /mw10/ping # 10 middleware chain
Customize benchmark execution with these parameters:
Parameter | Description | Default | Recommended Range |
---|---|---|---|
`--requests` | Total requests per test | `10000` | 1K – 100K |
`--connections` | Concurrent connections | `50` | 10 – 200 |
`--batches` | Number of test batches | `3` | 1 – 10 |
`--tool` | Benchmark tool | `wrk` | `wrk` or `ab` |
The benchmark suite generates multiple output formats:
- 📈 CSV Data: Raw performance metrics for analysis
- 📊 Summary Reports: Aggregated results across scenarios
- 🔍 Detailed Logs: Individual test execution details
- 📁 Organized Structure: Date-based result directories
This benchmark suite is designed with modularity, atomicity, and accuracy in mind:
go-web-benchmarks/
├── 🚀 cmd/ # Command-line interface
├── 🔧 internal/ # Core framework logic
│ ├── config/ # Configuration management
│ ├── progress/ # Progress tracking
│ ├── runner/ # Benchmark execution
│ └── types/ # Data structures
├── 🏗️ frameworks/ # Framework implementations
│ ├── flash/ # GoFlash implementation
│ ├── gin/ # Gin framework implementation
│ ├── gofiber/ # Fiber framework implementation
│ ├── echo/ # Echo framework implementation
│ └── chi/ # Chi framework implementation
├── 📊 results/ # Performance data and charts
├── ⚙️ config.yaml # YAML configuration
└── 📋 README.md # This documentation
Our approach ensures fair and accurate comparisons:
- Equivalent Implementations: Each endpoint performs identical operations across frameworks
- Production Settings: All servers run in optimized production mode
- Isolated Processes: Frameworks run in separate processes to prevent interference
- Statistical Validity: Multiple test batches ensure reliable results
- Resource Monitoring: System resource usage tracked during tests
- Atomic Execution: Tests are atomic and can be resumed from failures
- Deterministic Results: Consistent execution environment and parameters
Key metrics collected in each run:
- RPS (Requests Per Second): Primary performance indicator
- Latency Distribution: Response time characteristics
- Memory Usage: Resource consumption patterns
- CPU Utilization: Processing efficiency
Analysis focuses on:
- Router Efficiency: How quickly routes are matched and resolved
- Middleware Overhead: Processing cost of request/response pipeline
- Memory Allocation: Garbage collection and memory management impact
- Serialization Speed: JSON encoding/decoding performance
🎯 Production-Level Load Testing Examples
# Ultimate stress test - 1 million requests per scenario, 10 statistical batches
go run ./cmd run --requests 1000000 --connections 100 --batches 10
# High-volume with all frameworks and scenarios (full comprehensive test)
go run ./cmd run --requests 1000000 --connections 200 --batches 10 --frameworks flash,gin,gofiber,echo,chi --scenarios simple,param,context,json,wildcard,groups,deepgroups,middleware,mw10
# Memory-intensive JSON processing test
go run ./cmd run --requests 500000 --connections 50 --batches 5 --scenarios json
# 1-minute duration tests with statistical significance
go run ./cmd run --duration 1m --connections 50 --batches 3
# Extended duration testing for stability analysis
go run ./cmd run --duration 5m --connections 100 --batches 5
# Quick 1-minute validation across all scenarios
go run ./cmd run --duration 1m --connections 25 --batches 1 --scenarios simple,json,param
# Progressive connection scaling
go run ./cmd run --duration 30s --connections 10 --batches 3 # Light load
go run ./cmd run --duration 30s --connections 50 --batches 3 # Medium load
go run ./cmd run --duration 30s --connections 200 --batches 3 # Heavy load
go run ./cmd run --duration 30s --connections 500 --batches 3 # Extreme load
# Framework comparison under different loads
go run ./cmd run --requests 100000 --connections 50 --frameworks flash,gin,gofiber
go run ./cmd run --requests 100000 --connections 200 --frameworks flash,gin,gofiber
The benchmark suite supports resuming from failed runs:
# Resume from last failed run
./benchmark run --resume
Test specific frameworks only:
# Test only GoFlash and Gin
go run ./cmd run --frameworks flash,gin
# Compare top 3 performers
go run ./cmd run --duration 1m --frameworks flash,gin,gofiber --batches 5
Test specific scenarios only:
# Test only simple and JSON scenarios
go run ./cmd run --scenarios simple,json
# Focus on API-heavy scenarios
go run ./cmd run --duration 1m --scenarios json,param,context --batches 3
# Test routing performance
go run ./cmd run --requests 50000 --scenarios simple,param,wildcard,groups,deepgroups
Override configuration parameters:
# Use ApacheBench instead of wrk
./benchmark run --tool ab
# Custom test duration
./benchmark run --duration 60s
We welcome contributions to improve the benchmark suite! Here's how you can help:
- Bug Reports: Use the GitHub issue tracker
- Feature Requests: Suggest new frameworks or scenarios
- Performance Issues: Report unexpected results
- Create Framework Directory: Add implementation in `frameworks/`
- Update Configuration: Add framework to `config.yaml`
- Implement Endpoints: Ensure all test scenarios are covered
- Test Thoroughly: Run benchmarks to verify results
- Define Scenario: Add to the `config.yaml` scenarios section
- Implement Handlers: Add endpoints to all frameworks
- Update Documentation: Document the new scenario
- Test Validation: Ensure consistent behavior across frameworks
# Run all tests
go test ./...
# Run specific package tests
go test ./internal/config
go test ./internal/runner
- Follow Go conventions and best practices
- Add comprehensive documentation
- Include unit tests for new functionality
- Ensure atomic and deterministic behavior
This project is licensed under the MIT License - see the LICENSE file for details.
Made with ❤️ for the Go community
Accurate, reproducible, and meaningful performance benchmarks