
Commit 47232b9

Merge pull request #1 from CodingAnarchy/feature/web-dashboard
feat: Add comprehensive web dashboard for Hammerwork job queue monitoring
2 parents 6b720d3 + a68fa95 commit 47232b9


44 files changed: +7122 additions, −217 deletions

CHANGELOG.md

Lines changed: 21 additions & 0 deletions
@@ -5,6 +5,27 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [1.2.1] - 2025-06-30
+
+### Fixed
+- **🐛 MySQL Query Field Completeness**
+  - Fixed `Database(ColumnNotFound("trace_id"))` errors in MySQL dequeue operations
+  - Updated the MySQL `dequeue()` and `dequeue_with_priority_weights()` queries to include all tracing fields: `trace_id`, `correlation_id`, `parent_span_id`, `span_context`
+  - Ensures `JobRow` struct mapping works correctly with all database schema fields added in migration `009_add_tracing.mysql.sql`
+  - Fixed two failing tests: `test_mysql_dequeue_includes_all_fields` and `test_mysql_dequeue_with_priority_weights_includes_all_fields`
+
+### Enhanced
+- **🧪 Test Infrastructure Improvements**
+  - Improved test isolation by using unique queue names to prevent test interference
+  - Fixed race conditions in result storage tests by polling for job completion
+  - Enhanced test database setup to use a migration-based approach, ensuring schema consistency
+  - Fixed 6 failing doctests in `worker.rs` by correcting async/await usage in documentation examples
+
+### Technical Implementation
+- MySQL dequeue queries now SELECT every field required by the `JobRow` struct mapping
+- Complete field list: id, queue_name, payload, status, priority, attempts, max_attempts, timeout_seconds, created_at, scheduled_at, started_at, completed_at, failed_at, timed_out_at, error_message, cron_schedule, next_run_at, recurring, timezone, batch_id, result_data, result_stored_at, result_expires_at, result_storage_type, result_ttl_seconds, result_max_size_bytes, depends_on, dependents, dependency_status, workflow_id, workflow_name, trace_id, correlation_id, parent_span_id, span_context
+- All tests now passing: 228 unit tests, 135 doctests, 0 failures
+
 ## [1.2.0] - 2025-06-29
 
 ### Added
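The test-isolation entry above (unique queue names per test) can be sketched as follows. This is an illustrative helper, not Hammerwork's actual test code; the function name and naming scheme are assumptions.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Process-wide counter: each call yields a distinct suffix, so two tests
// running in the same process can never collide on a queue name.
static QUEUE_COUNTER: AtomicU64 = AtomicU64::new(0);

/// Hypothetical helper: derive a unique queue name from a base name.
fn unique_queue_name(base: &str) -> String {
    let n = QUEUE_COUNTER.fetch_add(1, Ordering::Relaxed);
    format!("{base}_test_{n}")
}

fn main() {
    let a = unique_queue_name("email");
    let b = unique_queue_name("email");
    // Same base, different queues: a test enqueueing into `a` never
    // observes jobs enqueued into `b`.
    assert_ne!(a, b);
    println!("{a} / {b}");
}
```

A counter is used rather than a timestamp so uniqueness holds even when two tests start within the same clock tick.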

Cargo.toml

Lines changed: 2 additions & 1 deletion
@@ -2,13 +2,14 @@
 members = [
     ".",
     "cargo-hammerwork",
+    "hammerwork-web",
     "integrations/postgres-integration",
     "integrations/mysql-integration",
 ]
 resolver = "2"
 
 [workspace.package]
-version = "1.2.0"
+version = "1.2.1"
 edition = "2024"
 license = "MIT"
 repository = "https://github.com/CodingAnarchy/hammerwork"

README.md

Lines changed: 56 additions & 0 deletions
@@ -4,6 +4,7 @@ A high-performance, database-driven job queue for Rust with comprehensive featur
 
 ## Features
 
+- **📊 Web Dashboard**: Modern real-time web interface for monitoring queues, managing jobs, and system administration with authentication and WebSocket updates
 - **🔍 Job Tracing & Correlation**: Comprehensive distributed tracing with OpenTelemetry integration, trace IDs, correlation IDs, and lifecycle event hooks
 - **🔗 Job Dependencies & Workflows**: Create complex data processing pipelines with job dependencies, sequential chains, and parallel processing with synchronization barriers
 - **Multi-database support**: PostgreSQL and MySQL backends with optimized dependency queries
@@ -22,6 +23,8 @@ A high-performance, database-driven job queue for Rust with comprehensive featur
 
 ## Installation
 
+### Core Library
+
 ```toml
 [dependencies]
 # Default features include metrics and alerting
@@ -38,13 +41,32 @@ hammerwork = { version = "1.2", features = ["postgres"], default-features = fals
 
 **Feature Flags**: `postgres`, `mysql`, `metrics` (default), `alerting` (default), `tracing` (optional)
 
+### Web Dashboard (Optional)
+
+```bash
+# Install the web dashboard
+cargo install hammerwork-web --features postgres
+
+# Or add to your project
+[dependencies]
+hammerwork-web = { version = "1.2", features = ["postgres"] }
+```
+
+Start the dashboard:
+
+```bash
+hammerwork-web --database-url postgresql://localhost/hammerwork
+# Dashboard available at http://localhost:8080
+```
+
 ## Quick Start
 
 See the [Quick Start Guide](docs/quick-start.md) for complete examples with PostgreSQL and MySQL.
 
 ## Documentation
 
 - **[Quick Start Guide](docs/quick-start.md)** - Get started with PostgreSQL and MySQL
+- **[Web Dashboard](hammerwork-web/README.md)** - Real-time web interface for queue monitoring and job management
 - **[Job Tracing & Correlation](docs/tracing.md)** - Distributed tracing, correlation IDs, and OpenTelemetry integration
 - **[Job Dependencies & Workflows](docs/workflows.md)** - Complex pipelines, job dependencies, and orchestration
 - **[Job Types & Configuration](docs/job-types.md)** - Job creation, priorities, timeouts, cron jobs
@@ -208,6 +230,37 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
 
 This enables end-to-end tracing across your entire job processing pipeline with automatic span creation, correlation tracking, and integration with observability platforms like Jaeger, Zipkin, or DataDog.
 
+## Web Dashboard
+
+Start the real-time web dashboard for monitoring and managing your job queues:
+
+```bash
+# Start with PostgreSQL
+hammerwork-web --database-url postgresql://localhost/hammerwork
+
+# Start with authentication
+hammerwork-web \
+  --database-url postgresql://localhost/hammerwork \
+  --auth \
+  --username admin \
+  --password mypassword
+
+# Start with custom configuration
+hammerwork-web --config dashboard.toml
+```
+
+The dashboard provides:
+
+- **Real-time Monitoring**: Live queue statistics, job counts, and throughput metrics
+- **Job Management**: View, retry, cancel, and inspect jobs with detailed payload information
+- **Queue Administration**: Clear queues, monitor performance, and manage priorities
+- **Interactive Charts**: Throughput graphs and job status distributions
+- **WebSocket Updates**: Real-time updates without page refresh
+- **REST API**: Complete programmatic access to all dashboard features
+- **Authentication**: Secure access with bcrypt password hashing and rate limiting
+
+Access the dashboard at `http://localhost:8080` after starting the server.
+
 ## Database Setup
 
 ### Using Migrations (Recommended)
@@ -223,6 +276,9 @@ cargo hammerwork migrate --database-url postgresql://localhost/hammerwork
 
 # Check migration status
 cargo hammerwork status --database-url postgresql://localhost/hammerwork
+
+# Start the web dashboard after migrations
+hammerwork-web --database-url postgresql://localhost/hammerwork
 ```
 
 ### Application Usage
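The README above passes `--config dashboard.toml` without showing the file. A sketch of what such a config might contain, mirroring the CLI flags; every key name here is an assumption, so consult the `hammerwork-web` README for the authoritative schema:

```toml
# Hypothetical dashboard.toml -- key names are illustrative only, not the
# authoritative hammerwork-web configuration schema.
[server]
bind_address = "127.0.0.1"
port = 8080

[database]
url = "postgresql://localhost/hammerwork"

[auth]
enabled = true
username = "admin"
# Prefer a bcrypt password hash over plaintext in real deployments.
```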

ROADMAP.md

Lines changed: 0 additions & 39 deletions
@@ -2,45 +2,6 @@
 
 This roadmap outlines planned features for Hammerwork, prioritized by impact level and implementation complexity. Features are organized into phases based on their value proposition to users and estimated development effort.
 
-## Phase 1: High Impact, Medium-High Complexity
-*Features that provide significant value but require more substantial implementation effort*
-
-### 🌐 Admin Dashboard & CLI Tools
-**Impact: High** | **Complexity: Medium-High** | **Priority: Medium**
-
-Critical for operational management and developer productivity.
-
-**✅ CLI Tools - COMPLETED v1.1.0+**
-The `cargo-hammerwork` CLI provides comprehensive job queue management:
-- Database migrations and setup
-- Job management (enqueue, list, retry, cancel, inspect)
-- Queue operations (list, clear, statistics)
-- Worker management and monitoring
-- Batch operations and cron scheduling
-- Workflow visualization and dependency management
-- Real-time monitoring and health checks
-
-```bash
-# Available CLI commands
-cargo hammerwork migration run
-cargo hammerwork job list --queue email
-cargo hammerwork queue stats
-cargo hammerwork worker start --queue default
-cargo hammerwork monitor dashboard
-```
-
-**🚧 Web Dashboard - REMAINING**
-Web-based admin interface for visual queue management:
-
-```rust
-// Future web-based admin interface
-let admin_server = AdminServer::new()
-    .with_queue_monitoring()
-    .with_job_management()
-    .with_worker_controls()
-    .bind("127.0.0.1:8080");
-```
-
 ## Phase 2: Medium Impact, Variable Complexity
 *Valuable features for specific use cases or operational efficiency*
 

cargo-hammerwork/src/commands/workflow.rs

Lines changed: 13 additions & 9 deletions
@@ -528,28 +528,27 @@ impl WorkflowCommand {
 
         for job in jobs {
             if !visited.contains(&job.id) {
-                self.calculate_job_level(job, &job_map, &mut levels, &mut visited, 0);
+                Self::calculate_job_level(job, &job_map, &mut levels, &mut visited, 0);
             }
         }
 
         // Group jobs by level
         let mut result: HashMap<usize, Vec<&JobNode>> = HashMap::new();
         for (job_id, level) in levels {
             if let Some(job) = job_map.get(&job_id) {
-                result.entry(level).or_insert_with(Vec::new).push(job);
+                result.entry(level).or_default().push(job);
             }
         }
 
         result.into_iter().collect()
     }
 
     fn calculate_job_level(
-        &self,
         job: &JobNode,
         job_map: &HashMap<String, &JobNode>,
         levels: &mut HashMap<String, usize>,
         visited: &mut HashSet<String>,
-        current_level: usize,
+        _current_level: usize,
     ) {
         if visited.contains(&job.id) {
             return;
@@ -564,7 +563,13 @@ impl WorkflowCommand {
             .filter_map(|dep_id| {
                 if let Some(dep_job) = job_map.get(dep_id) {
                     if !visited.contains(dep_id) {
-                        self.calculate_job_level(dep_job, job_map, levels, visited, current_level);
+                        Self::calculate_job_level(
+                            dep_job,
+                            job_map,
+                            levels,
+                            visited,
+                            _current_level,
+                        );
                     }
                     levels.get(dep_id).copied()
                 } else {
@@ -803,21 +808,20 @@ impl WorkflowCommand {
 
         for root in &roots {
             if !visited.contains(&root.id) {
-                self.print_job_tree_node(root, &job_map, &mut visited, 0, target_job_id);
+                Self::print_job_tree_node(root, &job_map, &mut visited, 0, target_job_id);
             }
         }
 
         // Handle any remaining unvisited jobs (cycles or disconnected components)
         for job in jobs {
             if !visited.contains(&job.id) {
                 println!("  [Disconnected]");
-                self.print_job_tree_node(job, &job_map, &mut visited, 0, target_job_id);
+                Self::print_job_tree_node(job, &job_map, &mut visited, 0, target_job_id);
             }
         }
     }
 
     fn print_job_tree_node(
-        &self,
         job: &JobNode,
         job_map: &HashMap<String, &JobNode>,
         visited: &mut HashSet<String>,
@@ -847,7 +851,7 @@ impl WorkflowCommand {
         for dependent_id in &job.dependents {
             if let Some(dependent_job) = job_map.get(dependent_id) {
                 if !visited.contains(dependent_id) {
-                    self.print_job_tree_node(
+                    Self::print_job_tree_node(
                         dependent_job,
                         job_map,
                         visited,
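Two recurring changes in the diff above are worth noting: `or_insert_with(Vec::new)` becomes the equivalent but more idiomatic `or_default()`, and methods that never touch `self` become associated functions invoked as `Self::name(...)`. A minimal standalone illustration of the `entry` pattern:

```rust
use std::collections::HashMap;

fn main() {
    // Group job names by dependency level, as the grouping code above does.
    let jobs = [("fetch", 0usize), ("validate", 0), ("aggregate", 1)];

    let mut levels: HashMap<usize, Vec<&str>> = HashMap::new();
    for (name, level) in jobs {
        // `or_default()` inserts Vec::default() (an empty Vec) when the key
        // is missing -- the same behavior as `or_insert_with(Vec::new)`.
        levels.entry(level).or_default().push(name);
    }

    assert_eq!(levels[&0], vec!["fetch", "validate"]);
    assert_eq!(levels[&1], vec!["aggregate"]);
}
```

Clippy's `unwrap_or_default` family of lints flags the longer form, which is why such cleanups often appear alongside functional fixes.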

cargo-hammerwork/src/config/mod.rs

Lines changed: 4 additions & 4 deletions
@@ -118,16 +118,16 @@ use std::path::PathBuf;
 pub struct Config {
     /// Database connection URL (e.g., "postgresql://localhost/hammerwork")
     pub database_url: Option<String>,
-    
+
     /// Default queue name for operations
     pub default_queue: Option<String>,
-    
+
     /// Default limit for list operations
     pub default_limit: Option<u32>,
-    
+
     /// Log level (error, warn, info, debug, trace)
     pub log_level: Option<String>,
-    
+
     /// Database connection pool size
     pub connection_pool_size: Option<u32>,
 }

cargo-hammerwork/src/utils/database.rs

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@
 //!
 //! // Connect to MySQL
 //! let mysql_pool = DatabasePool::connect(
-//! "mysql://localhost/hammerwork",
+//!     "mysql://localhost/hammerwork",
 //!     5
 //! ).await?;
 //! # Ok(())

cargo-hammerwork/src/utils/display.rs

Lines changed: 3 additions & 3 deletions
@@ -70,7 +70,7 @@ use std::fmt;
 /// use cargo_hammerwork::utils::display::JobTable;
 ///
 /// let mut table = JobTable::new();
-/// 
+///
 /// // Add multiple jobs
 /// table.add_job_row(
 ///     "job-id-1", "email", "pending", "normal", 0,
@@ -80,7 +80,7 @@ use std::fmt;
 ///     "job-id-2", "data-processing", "running", "high", 1,
 ///     "2024-01-01 09:55:00", "2024-01-01 10:00:00"
 /// );
-/// 
+///
 /// // The table will display with color-coded status and priority
 /// ```
 pub struct JobTable {
@@ -189,7 +189,7 @@ impl fmt::Display for JobTable {
 /// stats.add_stats_row("pending", "normal", 100);
 /// stats.add_stats_row("running", "high", 10);
 /// stats.add_stats_row("failed", "critical", 2);
-/// 
+///
 /// // Display shows icons: 🟡 pending, 🔵 running, 🔴 failed
 /// ```
 pub struct StatsTable {

cargo-hammerwork/tests/sql_query_tests.rs

Lines changed: 0 additions & 1 deletion
@@ -4,7 +4,6 @@ use sqlx::{MySqlPool, PgPool, Row};
 /// Tests for SQL query validation and correctness
 /// These tests validate that our dynamic SQL queries are syntactically correct
 /// and produce expected results
-
 #[cfg(test)]
 mod postgres_tests {
     use super::*;

examples/autoscaling_example.rs

Lines changed: 8 additions & 2 deletions
@@ -25,17 +25,23 @@ async fn main() -> Result<()> {
     // Uncomment the appropriate line below based on your database setup
 
     // For PostgreSQL:
-    #[cfg(feature = "postgres")]
+    #[cfg(all(feature = "postgres", not(feature = "mysql")))]
     let pool = sqlx::PgPool::connect("postgresql://localhost/hammerwork")
         .await
        .map_err(hammerwork::HammerworkError::Database)?;
 
     // For MySQL:
-    #[cfg(feature = "mysql")]
+    #[cfg(all(feature = "mysql", not(feature = "postgres")))]
     let pool = sqlx::MySqlPool::connect("mysql://localhost/hammerwork")
         .await
         .map_err(hammerwork::HammerworkError::Database)?;
 
+    // Default to PostgreSQL when both features are enabled
+    #[cfg(all(feature = "postgres", feature = "mysql"))]
+    let pool = sqlx::PgPool::connect("postgresql://localhost/hammerwork")
+        .await
+        .map_err(hammerwork::HammerworkError::Database)?;
+
     let queue = Arc::new(JobQueue::new(pool));
 
     // Initialize database tables
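The fix above makes the two `#[cfg]` gates mutually exclusive, so `pool` is bound exactly once however the `postgres` and `mysql` features are combined. The same pattern in a self-contained sketch, using a string-returning function in place of a database pool and adding a no-feature fallback arm (an assumption of this sketch, not part of the example) so it compiles without any cargo features:

```rust
// Each cfg "arm" is disjoint, so exactly one `backend` is ever defined.
#[cfg(all(feature = "postgres", not(feature = "mysql")))]
fn backend() -> &'static str {
    "postgres"
}

#[cfg(all(feature = "mysql", not(feature = "postgres")))]
fn backend() -> &'static str {
    "mysql"
}

// Both features enabled: choose PostgreSQL explicitly, as the example does.
#[cfg(all(feature = "postgres", feature = "mysql"))]
fn backend() -> &'static str {
    "postgres"
}

// Fallback so this sketch also compiles with no features enabled.
#[cfg(not(any(feature = "postgres", feature = "mysql")))]
fn backend() -> &'static str {
    "unconfigured"
}

fn main() {
    println!("selected backend: {}", backend());
}
```

With plain `#[cfg(feature = "postgres")]` / `#[cfg(feature = "mysql")]` gates, enabling both features would define `pool` twice and fail to compile, which is what the added `not(...)` conditions and the explicit "both enabled" branch prevent.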
