
Commit 1f4aeab

CodingAnarchy and claude committed
Release v0.8.0: Job Result Storage and Simplified Migration System
### Major Features Added

- **Job Result Storage**: Store and retrieve job execution results with TTL support
  - `ResultStorage` enum (Database/Memory/None) for flexible storage strategies
  - `JobHandlerWithResult` type for handlers returning result data
  - Automatic result storage in worker processing loop
  - TTL support with automatic expiration and cleanup
  - Comprehensive test coverage and examples
- **Simplified Migration System**: Cargo subcommand-only approach
  - `cargo hammerwork migrate`/`status` commands for schema management
  - 6 versioned migrations covering Hammerwork evolution (v0.1.0-v0.8.0)
  - Database-specific optimizations (PostgreSQL JSONB vs MySQL JSON)
  - Migration tracking table for execution history

### Breaking Changes

- Removed `create_tables()` and `run_migrations()` methods from DatabaseQueue
- Removed standalone `migrate` binary
- All examples now require running `cargo hammerwork migrate` first
- Database setup exclusively via cargo subcommand for cleaner separation

### Enhanced Architecture

- Split monolithic queue.rs into separate PostgreSQL/MySQL modules
- Cleaner DatabaseQueue trait focused on job operations only
- Production-ready workflow: migrations during deployment, not startup
- Standard Rust ecosystem practices with cargo subcommand

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
1 parent 98ce0cf · commit 1f4aeab

36 files changed: +5445 −2561 lines
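The dual handler system this release describes (legacy handlers plus result-returning handlers) can be sketched as follows. This is a self-contained illustration only: the type and constructor names mirror the release notes, but the signatures and `String`-based payloads are assumptions, not Hammerwork's actual API.

```rust
use std::sync::Arc;

// Simplified stand-in for the `JobResult` type named in the release notes;
// the real crate stores structured (JSON) result data.
#[derive(Debug, Clone, PartialEq)]
struct JobResult {
    data: Option<String>,
}

impl JobResult {
    fn success() -> Self {
        JobResult { data: None }
    }
    fn with_data(data: &str) -> Self {
        JobResult { data: Some(data.to_string()) }
    }
}

// Legacy handlers report only success/failure; enhanced handlers
// additionally return result data the worker can store with a TTL.
type JobHandler = Arc<dyn Fn(&str) -> Result<(), String>>;
type JobHandlerWithResult = Arc<dyn Fn(&str) -> Result<JobResult, String>>;

fn main() {
    let legacy: JobHandler = Arc::new(|payload| {
        println!("processed {payload}");
        Ok(())
    });
    let enhanced: JobHandlerWithResult =
        Arc::new(|payload| Ok(JobResult::with_data(payload)));

    // Both handler shapes coexist, matching the "100% backward
    // compatibility" claim: existing handlers keep working unchanged.
    legacy("job-1").unwrap();
    let result = enhanced("job-2").unwrap();
    assert_eq!(result.data.as_deref(), Some("job-2"));
}
```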

CHANGELOG.md

Lines changed: 85 additions & 0 deletions
```diff
@@ -5,6 +5,91 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [0.8.0] - 2025-06-27
+
+### Added
+- **🔧 Cargo Subcommand for Database Migrations**
+  - `cargo hammerwork migrate` command for running database migrations
+  - `cargo hammerwork status` command for checking migration status
+  - Progressive schema evolution with versioned migrations (001-006)
+  - Database-specific optimizations (PostgreSQL JSONB vs MySQL JSON)
+  - Migration tracking table for execution history and rollback safety
+- **💾 Comprehensive Job Result Storage**
+  - `ResultStorage` enum with `Database`, `Memory`, and `None` options for flexible result storage strategies
+  - `ResultConfig` struct with TTL support, max size limits, and configurable storage backends
+  - `JobResult` struct for structured result data returned by enhanced job handlers
+  - Automatic result storage when jobs complete successfully with configurable expiration
+  - Result retrieval, deletion, and cleanup operations integrated into the DatabaseQueue trait
+
+- **🔧 Enhanced Job Handler System**
+  - `JobHandlerWithResult` type for handlers that return result data alongside success/failure status
+  - Dual handler system maintaining 100% backward compatibility with existing `JobHandler` implementations
+  - `JobResult::success()` and `JobResult::with_data()` constructors for flexible result creation
+  - Automatic result storage integration in worker processing loop
+
+- **🗄️ Database Schema and Implementation Enhancements**
+  - Added `result_data`, `result_stored_at`, `result_expires_at` columns to job tables
+  - PostgreSQL implementation using JSONB for efficient result storage and queries
+  - MySQL implementation using JSON with string-based UUID handling
+  - Split monolithic queue implementation into separate PostgreSQL and MySQL modules for better maintainability
+  - Optimized queries for result storage, retrieval, and expiration cleanup
+
+- **⚡ Advanced Result Management API**
+  - `store_job_result()` - Store result data with optional TTL expiration
+  - `get_job_result()` - Retrieve stored results with automatic expiration checking
+  - `delete_job_result()` - Manual result cleanup
+  - `cleanup_expired_results()` - Batch cleanup of expired results returning count
+  - TTL support with automatic expiration based on configurable time-to-live settings
+
+- **👷 Worker Integration and Compatibility**
+  - `Worker::new_with_result_handler()` constructor for enhanced result-storing workers
+  - Automatic result storage when jobs complete successfully (respects job configuration)
+  - Legacy handler compatibility - existing workers continue to work unchanged
+  - `JobHandlerType` enum for internal handler type management and routing
+
+- **🧪 Comprehensive Testing Suite**
+  - 8 new result storage tests covering all functionality across PostgreSQL and MySQL
+  - Worker integration tests using WorkerPool approach for realistic testing scenarios
+  - Result expiration and TTL testing with time-based validation
+  - Legacy handler compatibility testing ensuring backward compatibility
+  - Configuration testing for all result storage modes and settings
+
+- **📖 Documentation and Examples**
+  - Complete `result_storage_example.rs` demonstrating all result storage features
+  - Database-specific implementations to handle complex generic constraints
+  - Basic storage, enhanced workers, result expiration, and legacy compatibility examples
+  - Visual feedback using emoji indicators for clear demonstration output
+  - Comprehensive documentation for result storage configuration and usage
+
+### Enhanced
+- **Job Structure**: Added result configuration fields with builder methods
+  - `with_result_storage()` - Configure storage backend (Database, Memory, None)
+  - `with_result_ttl()` - Set time-to-live for result expiration
+  - `with_result_config()` - Apply complete result configuration
+- **Database Queue**: Extended trait with result storage operations
+- **Library Exports**: Added new result storage types to public API
+- **Architecture**: Modularized queue implementations for better code organization
+
+### Removed (Breaking Changes)
+- **BREAKING**: Removed `create_tables()` method from DatabaseQueue trait
+- **BREAKING**: Removed `run_migrations()` method from DatabaseQueue trait
+- **BREAKING**: Removed standalone `migrate` binary
+- **BREAKING**: All examples now require running `cargo hammerwork migrate` before use
+- Database setup is now exclusively handled by the cargo subcommand for better separation of concerns
+
+### Enhanced
+- **Simplified API**: Cleaner DatabaseQueue trait focused on job operations only
+- **Standard Workflow**: Database migrations follow Rust ecosystem conventions
+- **Production Ready**: Migrations run during deployment, not application startup
+- **Better Testing**: Database setup is external to application code
+
+### Technical Implementation
+- **Single Source of Truth**: Only one way to run migrations (`cargo hammerwork migrate`)
+- **Idempotent Operations**: Safe to run migrations multiple times
+- **Progressive Schema**: 6 versioned migrations covering Hammerwork's evolution (v0.1.0 to v0.8.0)
+- **Database Optimizations**: PostgreSQL JSONB vs MySQL JSON with appropriate indexing
+- **Migration Tracking**: Comprehensive execution history and rollback safety
+
 ## [0.7.1] - 2025-06-27
 
 ### Fixed
```
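The TTL semantics described in the changelog's result management API (store with expiration, retrieval that checks expiry, batch cleanup returning a count) can be modeled with a minimal in-memory sketch. This is not Hammerwork's implementation — the real system stores results in the job tables via `ResultStorage::Database` — and all names and signatures here are illustrative assumptions:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Toy model of TTL-based result storage: each result carries an
// expiration instant, and a batch cleanup pass drops expired entries.
struct ResultStore {
    results: HashMap<String, (String, Instant)>, // job id -> (result, expires_at)
}

impl ResultStore {
    fn new() -> Self {
        ResultStore { results: HashMap::new() }
    }

    // Mirrors the described `store_job_result()`: store with a TTL.
    fn store(&mut self, job_id: &str, result: &str, ttl: Duration) {
        self.results
            .insert(job_id.to_string(), (result.to_string(), Instant::now() + ttl));
    }

    // Mirrors `get_job_result()`: retrieval checks expiration automatically.
    fn get(&self, job_id: &str) -> Option<&str> {
        let entry = self.results.get(job_id)?;
        if entry.1 > Instant::now() {
            Some(entry.0.as_str())
        } else {
            None
        }
    }

    // Mirrors `cleanup_expired_results()`: returns how many were removed.
    fn cleanup_expired(&mut self) -> usize {
        let before = self.results.len();
        let now = Instant::now();
        self.results.retain(|_, value| value.1 > now);
        before - self.results.len()
    }
}

fn main() {
    let mut store = ResultStore::new();
    store.store("job-1", "{\"ok\":true}", Duration::from_secs(60));
    store.store("job-2", "{\"ok\":true}", Duration::from_secs(0));
    assert_eq!(store.get("job-1"), Some("{\"ok\":true}"));
    assert_eq!(store.get("job-2"), None); // zero TTL: already expired
    assert_eq!(store.cleanup_expired(), 1);
}
```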

CLAUDE.md

Lines changed: 118 additions & 1 deletion
```diff
@@ -103,4 +103,121 @@ Both PostgreSQL and MySQL implementations use a single table `hammerwork_jobs` w
 - **Transaction safety**: Uses database transactions for job state changes
 - **Type-safe job handling**: Job handlers return `Result<()>` for error handling
 - **Priority-aware scheduling**: Weighted and strict priority algorithms prevent starvation
-- **Comprehensive monitoring**: Statistics track priority distribution and performance
+- **Comprehensive monitoring**: Statistics track priority distribution and performance
+
+## Rust Library Development Best Practices
+
+### Pre-Commit Quality Checks (REQUIRED)
+
+**ALWAYS run these commands before committing and pushing any changes:**
+
+```bash
+# 1. Format code according to Rust standards
+cargo fmt
+
+# 2. Run all linting checks with strict warnings
+cargo clippy --all-features -- -D warnings
+
+# 3. Run complete test suite with all features
+cargo test --all-features
+
+# 4. Verify examples compile and work
+cargo check --examples --all-features
+
+# 5. Build release version to catch optimization issues
+cargo build --release --all-features
+```
+
+### Code Quality Standards
+
+1. **Code Formatting**
+   - Use `cargo fmt` for consistent formatting
+   - Follow Rust naming conventions (snake_case, PascalCase, SCREAMING_SNAKE_CASE)
+   - Keep line length reasonable (under 100 characters when possible)
+
+2. **Documentation**
+   - Document all public APIs with `///` doc comments
+   - Include examples in doc comments where helpful
+   - Update CHANGELOG.md for all user-facing changes
+   - Maintain CLAUDE.md for development guidance
+
+3. **Testing Strategy**
+   - Write unit tests for all core functionality
+   - Include integration tests for database operations
+   - Test with both PostgreSQL and MySQL features
+   - Cover edge cases and error conditions
+   - Use `#[ignore]` for tests requiring database connections
+
+4. **Error Handling**
+   - Use `thiserror` for structured error types
+   - Provide meaningful error messages
+   - Avoid `unwrap()` in library code
+   - Propagate errors using the `?` operator
+
+5. **Dependencies & Features**
+   - Keep dependencies minimal and well-justified
+   - Use feature flags for optional functionality
+   - Ensure backward compatibility when possible
+   - Document MSRV (Minimum Supported Rust Version)
+
+6. **Performance Considerations**
+   - Profile database queries for efficiency
+   - Use appropriate indexes in schema
+   - Minimize allocations in hot paths
+   - Test with realistic data volumes
+
+### Release Process
+
+1. **Version Bumping**
+   - Follow Semantic Versioning (semver)
+   - Update version in Cargo.toml
+   - Update CHANGELOG.md with release notes
+   - Tag releases with `git tag vX.Y.Z`
+
+2. **Testing Before Release**
+   - Run full test suite: `cargo test --all-features`
+   - Test examples: `cargo run --example <name> --features <db>`
+   - Verify documentation: `cargo doc --all-features`
+   - Check packaging: `cargo package`
+
+3. **Publishing**
+   - Commit all changes with descriptive messages
+   - Push to origin: `git push origin main && git push origin --tags`
+   - Publish to crates.io: `cargo publish`
+
+### Database Development Guidelines
+
+1. **Schema Changes**
+   - Always provide migration paths
+   - Test with both PostgreSQL and MySQL
+   - Consider backward compatibility
+   - Document schema changes in CHANGELOG
+
+2. **Query Optimization**
+   - Use proper indexes for job polling
+   - Leverage database-specific features appropriately
+   - Test query performance under load
+   - Use `EXPLAIN` to analyze query plans
+
+3. **Feature Parity**
+   - Maintain functional equivalence between database backends
+   - Handle database-specific limitations gracefully
+   - Test all features with both databases
+
+### Git Workflow
+
+1. **Commit Messages**
+   - Use conventional commit format when appropriate
+   - Include clear, descriptive titles
+   - Reference issues/PRs when relevant
+   - Add co-authorship for AI-assisted development
+
+2. **Branch Management**
+   - Use descriptive branch names
+   - Keep commits focused and atomic
+   - Squash commits before merging when appropriate
+
+3. **Code Reviews**
+   - Review for correctness, performance, and maintainability
+   - Ensure tests pass on all supported platforms
+   - Verify documentation is updated
```
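The error-handling guidelines added here (structured error types, meaningful messages, no `unwrap()` in library code, propagation with `?`) can be illustrated with a short sketch. The guidelines recommend `thiserror`; for a dependency-free example this implements the same shape by hand with std only, and the `QueueError` type and `load_job`/`process` functions are hypothetical, not part of Hammerwork:

```rust
use std::fmt;

// Structured error type of the kind the guidelines call for
// (with `thiserror`, the Display impl would be a derive attribute).
#[derive(Debug)]
enum QueueError {
    NotFound(String),
    Serialization(String),
}

impl fmt::Display for QueueError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            QueueError::NotFound(id) => write!(f, "job {id} not found"),
            QueueError::Serialization(msg) => write!(f, "serialization failed: {msg}"),
        }
    }
}

impl std::error::Error for QueueError {}

fn load_job(id: &str) -> Result<String, QueueError> {
    if id.is_empty() {
        return Err(QueueError::NotFound(id.to_string()));
    }
    Ok(format!("payload for {id}"))
}

// Errors propagate with `?` instead of unwrap(), per the guidelines.
fn process(id: &str) -> Result<String, QueueError> {
    let payload = load_job(id)?; // no unwrap() in library code
    Ok(payload)
}

fn main() {
    assert_eq!(process("42").unwrap(), "payload for 42");
    assert!(matches!(process(""), Err(QueueError::NotFound(_))));
}
```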

Cargo.toml

Lines changed: 8 additions & 1 deletion
```diff
@@ -25,10 +25,11 @@ rand = "0.8"
 prometheus = { version = "0.13" }
 reqwest = { version = "0.12", features = ["json"] }
 warp = { version = "0.3" }
+clap = { version = "4.0", features = ["derive"] }
 
 [package]
 name = "hammerwork"
-version = "0.7.1"
+version = "0.8.0"
 edition = "2024"
 description = "A high-performance, database-driven job queue for Rust with PostgreSQL and MySQL support, featuring job prioritization, cron scheduling, timeouts, rate limiting, Prometheus metrics, alerting, and comprehensive statistics collection"
 license = "MIT"
@@ -57,6 +58,8 @@ rand = { workspace = true }
 prometheus = { version = "0.13", optional = true }
 reqwest = { version = "0.12", features = ["json"], optional = true }
 warp = { version = "0.3", optional = true }
+clap = { workspace = true }
+tracing-subscriber = { workspace = true }
 
 [features]
 default = ["metrics", "alerting"]
@@ -84,3 +87,7 @@ required-features = ["postgres"]
 [[example]]
 name = "priority_example"
 required-features = ["postgres"]
+
+[[bin]]
+name = "cargo-hammerwork"
+path = "src/bin/cargo-hammerwork.rs"
```

README.md

Lines changed: 40 additions & 4 deletions
```diff
@@ -42,6 +42,7 @@ See the [Quick Start Guide](docs/quick-start.md) for complete examples with Post
 - **[Cron Scheduling](docs/cron-scheduling.md)** - Recurring jobs with timezone support
 - **[Priority System](docs/priority-system.md)** - Five-level priority system with weighted scheduling
 - **[Batch Operations](docs/batch-operations.md)** - High-performance bulk job processing
+- **[Database Migrations](docs/migrations.md)** - Progressive schema updates and database setup
 - **[Monitoring & Alerting](docs/monitoring.md)** - Prometheus metrics and notification systems
 
 ## Basic Example
@@ -53,10 +54,9 @@ use std::sync::Arc;
 
 #[tokio::main]
 async fn main() -> Result<(), Box<dyn std::error::Error>> {
-    // Setup database and queue
+    // Setup database and queue (migrations should already be run)
     let pool = sqlx::PgPool::connect("postgresql://localhost/mydb").await?;
     let queue = Arc::new(JobQueue::new(pool));
-    queue.create_tables().await?;
 
     // Create job handler
     let handler = Arc::new(|job: Job| {
@@ -83,9 +83,45 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
 
-## Database Schema
+## Database Setup
 
-Hammerwork uses a single `hammerwork_jobs` table with optimized indexes for performance. The schema supports all features including priorities, timeouts, cron scheduling, and comprehensive job lifecycle tracking.
+### Using Migrations (Recommended)
+
+Hammerwork provides a migration system for progressive schema updates:
+
+```bash
+# Build the migration tool
+cargo build --bin cargo-hammerwork --features postgres
+
+# Run migrations
+cargo hammerwork migrate --database-url postgresql://localhost/hammerwork
+
+# Check migration status
+cargo hammerwork status --database-url postgresql://localhost/hammerwork
+```
+
+### Application Usage
+
+Once migrations are run, your application can use the queue directly:
+
+```rust
+// In your application - no setup needed, just use the queue
+let pool = sqlx::PgPool::connect("postgresql://localhost/hammerwork").await?;
+let queue = Arc::new(JobQueue::new(pool));
+
+// Start enqueuing jobs immediately
+let job = Job::new("default".to_string(), json!({"task": "send_email"}));
+queue.enqueue(job).await?;
+```
+
+### Database Schema
+
+Hammerwork uses optimized tables with comprehensive indexing:
+
+- **`hammerwork_jobs`** - Main job table with priorities, timeouts, cron scheduling, and result storage
+- **`hammerwork_batches`** - Batch metadata and tracking (v0.7.0+)
+- **`hammerwork_migrations`** - Migration tracking for schema evolution
+
+The schema supports all features including job prioritization, timeouts, cron scheduling, batch processing, result storage, and comprehensive lifecycle tracking. See [Database Migrations](docs/migrations.md) for details.
 
 ## Development
```

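The "idempotent operations" property this release claims for migrations — safe to run `cargo hammerwork migrate` multiple times, with the `hammerwork_migrations` table recording what already ran — can be modeled with a minimal sketch. The `MigrationRunner` type and its API are illustrative assumptions; the real tool tracks applied migrations in the database, not in memory:

```rust
use std::collections::BTreeSet;

// Toy model of an idempotent migration runner: a tracking set stands in
// for the migration tracking table, so re-running applies nothing twice.
struct MigrationRunner {
    applied: BTreeSet<u32>,
}

impl MigrationRunner {
    fn new() -> Self {
        MigrationRunner { applied: BTreeSet::new() }
    }

    // Returns the versions applied on this run, skipping any already recorded.
    fn migrate(&mut self, available: &[u32]) -> Vec<u32> {
        let mut newly_applied = Vec::new();
        for &version in available {
            // `insert` returns false if the version was already tracked.
            if self.applied.insert(version) {
                newly_applied.push(version);
            }
        }
        newly_applied
    }
}

fn main() {
    let migrations = [1, 2, 3, 4, 5, 6]; // the six versioned migrations (001-006)
    let mut runner = MigrationRunner::new();

    // First run applies everything; a second run is a no-op.
    assert_eq!(runner.migrate(&migrations), vec![1, 2, 3, 4, 5, 6]);
    assert!(runner.migrate(&migrations).is_empty());
}
```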