
Commit 6339ce0

CodingAnarchy and claude committed
Add comprehensive job batching and bulk operations support
This implementation adds high-performance batch processing capabilities to Hammerwork, addressing the top priority roadmap item with significant performance improvements for high-throughput scenarios.

## Key Features Added

### Core Batch Infrastructure
- `JobBatch` struct with configurable batch settings and failure handling
- `PartialFailureMode` enum (`ContinueOnError`, `FailFast`, `CollectErrors`)
- `BatchStatus` enum for tracking batch progress
- `BatchResult` struct for comprehensive batch monitoring

### Database Backend Implementation
- PostgreSQL: Optimized bulk INSERT using UNNEST arrays for maximum performance
- MySQL: Multi-row VALUES syntax with chunking for MySQL compatibility
- New `hammerwork_batches` table for batch metadata tracking
- `batch_id` column added to `hammerwork_jobs` table

### API Enhancements
- `DatabaseQueue` trait extended with batch operations:
  - `enqueue_batch()` for bulk job insertion
  - `get_batch_status()` for progress monitoring
  - `get_batch_jobs()` for batch job retrieval
  - `delete_batch()` for cleanup operations

### Comprehensive Testing
- Full test suite in `tests/batch_tests.rs` covering all batch functionality
- Database-agnostic tests for `JobBatch` struct behavior
- PostgreSQL and MySQL specific integration tests
- Edge case testing for validation and error handling

### Documentation and Examples
- Complete `batch_example.rs` demonstrating real-world usage patterns
- Updated README.md highlighting batch operation capabilities
- Comprehensive inline documentation with doctests

## Technical Implementation

The batch system is designed for optimal performance:
- PostgreSQL uses UNNEST for true bulk operations
- MySQL uses prepared statements with chunking
- Atomic database transactions ensure consistency
- Configurable batch sizes prevent memory issues
- Extensive validation prevents common errors

## Performance Impact

This feature enables significant performance improvements:
- Single database transaction for entire batch
- Reduced network round-trips for large job sets
- Optimized database queries with bulk operations
- Configurable chunking for memory efficiency

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
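To make the batch API described above concrete, here is a minimal usage sketch. The type and method names (`JobBatch`, `PartialFailureMode`, `enqueue_batch()`, `get_batch_status()`, `delete_batch()`) come from the commit message, but the constructors, builder methods, module paths, and the `Job::new` signature below are assumptions made for illustration, not the verified Hammerwork API.

```rust
use hammerwork::{Job, JobBatch, PartialFailureMode};
use hammerwork::queue::DatabaseQueue;
use serde_json::json;

// Hypothetical usage sketch: the batch type and trait method names are taken
// from the commit message; constructors, builder methods, and module paths
// are assumptions for illustration only.
async fn enqueue_welcome_emails<Q: DatabaseQueue>(queue: &Q) {
    // Build the jobs that should be inserted in one bulk operation.
    let jobs: Vec<Job> = (0..1_000)
        .map(|i| Job::new("email_queue".to_string(), json!({ "user_id": i })))
        .collect();

    // Configure the batch: keep going past individual failures and report
    // them in the BatchResult instead of aborting the whole batch.
    let batch = JobBatch::new("welcome_emails")
        .with_jobs(jobs)
        .with_partial_failure_mode(PartialFailureMode::ContinueOnError);

    // Single bulk insert: UNNEST on PostgreSQL, chunked multi-row VALUES on
    // MySQL, wrapped in one transaction.
    let batch_id = queue
        .enqueue_batch(batch)
        .await
        .expect("bulk enqueue failed");

    // Progress is tracked via the new hammerwork_batches metadata table.
    let status = queue
        .get_batch_status(batch_id)
        .await
        .expect("status lookup failed");
    println!("batch {batch_id}: {status:?}");

    // Remove the batch metadata once processing is complete.
    queue
        .delete_batch(batch_id)
        .await
        .expect("batch cleanup failed");
}
```

The "UNNEST for true bulk operations" point refers to the PostgreSQL pattern of binding whole arrays and expanding them server-side, so any number of jobs becomes one INSERT statement and one network round-trip. A simplified sqlx sketch of that pattern follows; only the `hammerwork_jobs` table and `batch_id` column names are taken from the commit, while the remaining columns and the use of sqlx itself are assumptions.

```rust
use sqlx::PgPool;
use uuid::Uuid;

// Simplified illustration of the UNNEST bulk-insert pattern; the real
// hammerwork_jobs table has more columns than shown here.
async fn bulk_insert_jobs(
    pool: &PgPool,
    ids: Vec<Uuid>,
    batch_id: Uuid,
    queues: Vec<String>,
    payloads: Vec<serde_json::Value>,
) -> Result<(), sqlx::Error> {
    let batch_ids = vec![batch_id; ids.len()];

    // Each bound parameter is an array; UNNEST zips them into rows, so the
    // entire batch is inserted with a single statement.
    sqlx::query(
        "INSERT INTO hammerwork_jobs (id, batch_id, queue_name, payload)
         SELECT * FROM UNNEST($1::uuid[], $2::uuid[], $3::text[], $4::jsonb[])",
    )
    .bind(&ids)
    .bind(&batch_ids)
    .bind(&queues)
    .bind(&payloads)
    .execute(pool)
    .await?;

    Ok(())
}
```

Compared with the chunked multi-row VALUES approach used for MySQL, the array-binding form keeps the statement size constant regardless of batch size, which is why the two backends take different paths.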
1 parent 866a65d · commit 6339ce0

File tree

8 files changed: +1811 -7 lines changed


README.md

Lines changed: 1 addition & 0 deletions
@@ -6,6 +6,7 @@ A high-performance, database-driven job queue for Rust with comprehensive featur

- **Multi-database support**: PostgreSQL and MySQL backends
- **Job prioritization**: Five priority levels with weighted and strict scheduling algorithms
+ - **Batch operations**: High-performance bulk job enqueuing for improved throughput
- **Cron scheduling**: Full cron expression support with timezone awareness
- **Rate limiting**: Token bucket rate limiting with configurable burst limits
- **Monitoring**: Prometheus metrics and advanced alerting (enabled by default)
