
feat(queue): add queue introspection for console dashboard#136

Open
rohitg00 wants to merge 5 commits into main from rohit/queue-dashboard-introspection

Conversation

@rohitg00 rohitg00 commented Feb 12, 2026

Summary

  • Add lrange and list_keys_with_prefix methods to QueueKvStore for paginated job listing and queue discovery
  • Add introspection methods to BuiltinQueue: list_queues, waiting_count, active_count, delayed_count, list_jobs_in_state, get_job_by_id
  • Extend QueueAdapter trait with 4 new methods (list_queues, queue_stats, list_jobs, get_job) with default not-supported fallbacks
  • Implement trait methods on BuiltinQueueAdapter delegating to BuiltinQueue
  • Register 4 engine functions on QueueCoreModule: list_queues, stats, jobs, job (callable as queue.list_queues, etc.)
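The "default not-supported fallbacks" above are what let non-builtin adapters compile without implementing introspection. A minimal, synchronous sketch of that pattern (the real QueueAdapter trait is async and its error type differs; the names and `String` error here are stand-ins):

```rust
// Simplified sketch: new trait methods default to a "not supported" error,
// so existing adapters keep compiling with no changes.
pub trait QueueAdapterSketch {
    fn list_queues(&self) -> Result<Vec<String>, String> {
        Err("list_queues not supported by this adapter".into())
    }
    // (waiting, active, delayed, dlq) counts for one queue.
    fn queue_stats(&self, _topic: &str) -> Result<(u64, u64, u64, u64), String> {
        Err("queue_stats not supported by this adapter".into())
    }
}

// A hypothetical adapter without introspection support: the defaults suffice.
pub struct LegacyAdapter;
impl QueueAdapterSketch for LegacyAdapter {}
```

Only the builtin adapter overrides these defaults; callers treat the error as "feature unavailable" rather than a failure.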

Test plan

  • cargo build compiles without errors
  • cargo test passes existing tests
  • Start engine, enqueue test jobs, verify queue.list_queues returns queue names with counts
  • Verify queue.stats returns correct waiting/active/delayed/dlq counts
  • Verify queue.jobs returns paginated job lists per state
  • Verify queue.job returns single job detail by ID
  • Verify non-builtin adapters (RabbitMQ, Redis) compile with default trait implementations

Summary by CodeRabbit

  • New Features
    • View all queues with statistics (waiting, active, delayed job counts)
    • Retrieve detailed statistics for a specific queue
    • List and paginate through jobs by state (waiting, active, delayed, dead letter)
    • Inspect individual job details by ID
    • Manage dead letter queue: check count and redrive jobs back to waiting state
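The dead-letter redrive in the last bullet can be pictured with a toy in-memory model. This is illustrative only: the real implementation moves job IDs between QueueKvStore lists and logs a tracing::warn before the destructive move.

```rust
use std::collections::VecDeque;

// Toy model of redriving dead-letter jobs back to the waiting state.
// `max` bounds how many jobs move in one call; returns the number moved.
fn redrive_dlq(dlq: &mut VecDeque<String>, waiting: &mut VecDeque<String>, max: usize) -> usize {
    let mut moved = 0;
    while moved < max {
        match dlq.pop_front() {
            Some(job) => {
                waiting.push_back(job); // job re-enters the waiting state
                moved += 1;
            }
            None => break, // DLQ drained before hitting the cap
        }
    }
    moved
}
```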

Add queue introspection methods to support the console queue
dashboard. Extends QueueKvStore with lrange, list_keys_with_prefix,
and zcount. Adds list_queues, queue_stats, list_jobs, and get_job
to the QueueAdapter trait with default implementations. Implements
them on BuiltinQueueAdapter and registers them as engine functions,
including redrive_dlq and dlq_count.
@rohitg00 rohitg00 force-pushed the rohit/queue-dashboard-introspection branch from 645bece to 8fc340b Compare February 12, 2026 14:44
rohitg00 and others added 3 commits February 12, 2026 14:44
Address review findings: validate queue names (alphanumeric + -_.:,
max 128 chars), job IDs (max 256 chars), and job states against
allowed values. Clamp limit to max 500. Add tracing::warn for
destructive redrive_dlq calls.
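A minimal sketch of the queue-name validation described in that commit, assuming only the rules quoted above (alphanumeric plus `-_.:`, at most 128 chars); the `String` error is a stand-in for the module's real error type:

```rust
const MAX_QUEUE_NAME_LEN: usize = 128;

// Sketch of validate_queue_name per the commit message: non-empty,
// at most 128 chars, alphanumeric plus the characters - _ . :
fn validate_queue_name(name: &str) -> Result<(), String> {
    if name.is_empty() || name.len() > MAX_QUEUE_NAME_LEN {
        return Err("queue name must be 1..=128 characters".into());
    }
    if !name
        .chars()
        .all(|c| c.is_ascii_alphanumeric() || "-_.:".contains(c))
    {
        return Err("queue name contains invalid characters".into());
    }
    Ok(())
}
```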
coderabbitai bot commented Feb 23, 2026

📝 Walkthrough

This PR extends the queue system with introspection and job retrieval capabilities across multiple architectural layers. It adds six new async methods to BuiltinQueue for listing queues and jobs by state with pagination and count operations. Supporting KV store methods for range queries, prefix-based key enumeration, and sorted set counting are introduced. A trait-level QueueAdapter abstraction defines four new optional methods with default "not supported" errors. These propagate through the adapter implementation and surface as six new public QueueCoreModule functions with input validation and error handling.

Changes

  • KV Store Layer (src/builtins/queue_kv.rs): Added lrange for list-range retrieval with negative-index support, list_keys_with_prefix for prefix-based key enumeration, and zcount for counting sorted-set members within a score range.
  • BuiltinQueue API (src/builtins/queue.rs): Added list_queues to retrieve all queue names; waiting_count, active_count, and delayed_count for job-state counts; list_jobs_in_state for paginated job retrieval by state with DLQ deserialization; and get_job_by_id for single-job lookup.
  • QueueAdapter Trait (src/modules/queue/mod.rs): Added four optional trait methods (list_queues, queue_stats, list_jobs, get_job) with default "not supported" error implementations.
  • BuiltinQueueAdapter Implementation (src/modules/queue/adapters/builtin/adapter.rs): Implemented the four trait methods, delegating to the underlying BuiltinQueue and aggregating per-queue metrics for stats operations.
  • QueueCoreModule Public API (src/modules/queue/queue.rs): Added validation helpers for queue names, job IDs, and job states; introduced three input structs (QueueStatsInput, QueueJobsInput, QueueJobInput) with deserialization; exposed six new public async functions (list_queues, stats, jobs, job, redrive_dlq, dlq_count) with error mapping.
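As one concrete example from the KV store entry above, Redis-style lrange semantics (inclusive start..stop, negative indices counting back from the end) can be sketched against a plain slice. This only approximates Redis clamping rules and does not reflect QueueKvStore internals:

```rust
// Approximate LRANGE semantics: inclusive range, negative indices count
// from the end, out-of-range bounds are clamped, and an inverted or fully
// out-of-range window yields an empty Vec.
fn lrange(list: &[String], start: i64, stop: i64) -> Vec<String> {
    let len = list.len() as i64;
    let norm = |i: i64| if i < 0 { (len + i).max(0) } else { i };
    let (s, e) = (norm(start), norm(stop).min(len - 1));
    if len == 0 || s > e || s >= len {
        return vec![];
    }
    list[s as usize..=e as usize].to_vec()
}
```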

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • guibeira

Poem

🐰 New queues lined up, all neat and bright,
With lists and stats, we've got insight,
Jobs by the state, from first to last,
Through layers of code, the details fast! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 21.21%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately reflects the main objective: adding queue introspection capabilities for console dashboard support, which aligns with the implemented changes across multiple modules.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 1

🧹 Nitpick comments (2)
src/modules/queue/adapters/builtin/adapter.rs (1)

197-219: Consider potential performance impact with many queues.

The list_queues implementation makes 4 async calls per queue (waiting_count, active_count, delayed_count, dlq_count). For dashboards with many queues, this could introduce latency. This is acceptable for console dashboard use cases, but worth noting if queue counts grow significantly.

If performance becomes a concern in the future, consider adding a batch stats method to BuiltinQueue that returns all counts in a single operation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/queue/adapters/builtin/adapter.rs` around lines 197 - 219,
list_queues currently awaits four separate async calls per queue (waiting_count,
active_count, delayed_count, dlq_count) which will scale poorly; add a batch
stats API on the BuiltinQueue (e.g., a method like queue_stats or counts_for
that returns a struct with waiting/active/delayed/dlq totals) and update
list_queues to call that single async method per queue (replace the four awaits
with one await for queue.queue_stats(&name).await and map fields into the JSON),
so counts are fetched in one operation; alternatively, if you cannot change
BuiltinQueue now, run the four count futures concurrently using futures::join or
join_all and then assemble the result in list_queues.
src/builtins/queue.rs (1)

695-700: Delayed job pagination is inefficient for large queues.

The current implementation fetches all delayed jobs with zrangebyscore and then performs in-memory pagination with skip(offset).take(limit). For queues with many delayed jobs, this loads all job IDs into memory before pagination.

Consider adding a zrange method with offset/count support to QueueKvStore for more efficient pagination, similar to how lrange works for lists. This would allow fetching only the needed slice directly from the sorted set.

💡 Suggested improvement for delayed job pagination

Add a zrange method to QueueKvStore that supports offset-based pagination:

// In queue_kv.rs
pub async fn zrange(&self, key: &str, start: usize, stop: usize) -> Vec<String> {
    // Guard against an inverted range so `stop - start + 1` cannot underflow.
    if stop < start {
        return vec![];
    }
    let sorted_sets = self.sorted_sets.read().await;
    let Some(set) = sorted_sets.get(key) else {
        return vec![];
    };

    set.iter()
        .flat_map(|(_, members)| members.iter().cloned())
        .skip(start)
        .take(stop - start + 1)
        .collect()
}

Then use it in list_jobs_in_state:

 "delayed" => {
-    let all = self
-        .kv_store
-        .zrangebyscore(&self.delayed_key(queue), 0, i64::MAX)
-        .await;
-    all.into_iter().skip(offset).take(limit).collect()
+    self.kv_store
+        .zrange(&self.delayed_key(queue), offset, offset + limit - 1)
+        .await
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/builtins/queue.rs` around lines 695 - 700, The delayed-job branch in
list_jobs_in_state currently calls
self.kv_store.zrangebyscore(&self.delayed_key(queue), 0, i64::MAX) and then does
in-memory pagination with skip/take, which loads all IDs; add a zrange(start,
count) method to QueueKvStore (analogous to lrange) and replace the
zrangebyscore call in list_jobs_in_state's "delayed" arm to call
self.kv_store.zrange(&self.delayed_key(queue), offset, limit) so only the
requested slice is fetched; ensure the new zrange honors ordering used by the
sorted set and returns Vec<String> of job IDs to keep compatibility with the
rest of the code.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 43ad6cd and f3d5f57.

📒 Files selected for processing (5)
  • src/builtins/queue.rs
  • src/builtins/queue_kv.rs
  • src/modules/queue/adapters/builtin/adapter.rs
  • src/modules/queue/mod.rs
  • src/modules/queue/queue.rs

Comment on lines +197 to +224
#[function(id = "jobs", description = "List jobs by state")]
pub async fn jobs(&self, input: QueueJobsInput) -> FunctionResult<Option<Value>, ErrorBody> {
    if let Err(e) = validate_queue_name(&input.topic) {
        return FunctionResult::Failure(e);
    }
    if let Err(e) = validate_job_state(&input.state) {
        return FunctionResult::Failure(e);
    }

    let limit = input.limit.min(MAX_LIMIT);

    match self
        .adapter
        .list_jobs(&input.topic, &input.state, input.offset, limit)
        .await
    {
        Ok(jobs) => FunctionResult::Success(Some(serde_json::json!({
            "jobs": jobs,
            "count": jobs.len(),
            "offset": input.offset,
            "limit": input.limit,
        }))),
        Err(e) => FunctionResult::Failure(ErrorBody {
            code: "jobs_failed".into(),
            message: format!("{:?}", e),
        }),
    }
}

⚠️ Potential issue | 🟡 Minor

Minor: Response includes original limit instead of clamped value.

At line 217, the response includes "limit": input.limit (the original user-provided value) rather than the clamped limit variable from line 206. This could be confusing to API consumers who might not understand why they received fewer items than their requested limit.

🔧 Suggested fix
         Ok(jobs) => FunctionResult::Success(Some(serde_json::json!({
             "jobs": jobs,
             "count": jobs.len(),
             "offset": input.offset,
-            "limit": input.limit,
+            "limit": limit,
         }))),
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (function with the clamped limit returned):

#[function(id = "jobs", description = "List jobs by state")]
pub async fn jobs(&self, input: QueueJobsInput) -> FunctionResult<Option<Value>, ErrorBody> {
    if let Err(e) = validate_queue_name(&input.topic) {
        return FunctionResult::Failure(e);
    }
    if let Err(e) = validate_job_state(&input.state) {
        return FunctionResult::Failure(e);
    }

    let limit = input.limit.min(MAX_LIMIT);

    match self
        .adapter
        .list_jobs(&input.topic, &input.state, input.offset, limit)
        .await
    {
        Ok(jobs) => FunctionResult::Success(Some(serde_json::json!({
            "jobs": jobs,
            "count": jobs.len(),
            "offset": input.offset,
            "limit": limit,
        }))),
        Err(e) => FunctionResult::Failure(ErrorBody {
            code: "jobs_failed".into(),
            message: format!("{:?}", e),
        }),
    }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/modules/queue/queue.rs` around lines 197 - 224, The response currently
returns the original user-provided limit (input.limit) instead of the clamped
value; in the jobs function update the success payload to use the clamped limit
variable (computed with input.limit.min(MAX_LIMIT)) when setting the "limit"
field in the JSON response so API consumers see the actual effective limit
returned by list_jobs.
