
Welcome to the $\text{\color{gray}Shard.\color{orange}Requests}$ Wiki

This comprehensive documentation covers everything you need to know about the Requests library, a feature-rich async task management framework for .NET applications.

Table of Contents

  • What is Requests?
  • Core Concepts
  • Installation
  • Quick Start Guide
  • Key Features
  • Architecture Overview
  • Use Cases
  • Performance Characteristics
  • Thread Safety
  • Next Steps
  • Support

What is Requests?

$\text{\color{gray}Shard.\color{orange}Requests}$ is a sophisticated async task management library that transforms complex, asynchronous workflows into manageable, priority-driven operations. It provides control over async operations without the complexity typically associated with advanced task scheduling systems.

Supports: .NET 9, .NET 10

The Problem It Solves

Modern applications often face challenges with:

  • Managing hundreds or thousands of concurrent async operations
  • Prioritizing critical tasks over routine background work
  • Handling failures with intelligent retry logic
  • Pausing and resuming long-running operations
  • Tracking progress across multiple parallel tasks
  • Controlling parallelism to prevent resource exhaustion

$\text{\color{orange}Requests}$ provides a unified solution to all these challenges with a clean API.


Core Concepts

$\text{\color{lightblue}Requests}$

A Request is the fundamental unit of work in the library. It encapsulates:

  • The async operation to execute
  • Retry logic and failure handling
  • Priority level for scheduling
  • State management (Idle → Running → Completed/Failed/Cancelled)
  • Lifecycle callbacks (started, completed, failed, cancelled)
  • Pause/resume capabilities

$\text{\color{orange}Request Handlers}$

Request Handlers are execution engines that process requests. Two types are available:

  • $\text{\color{green}ParallelRequestHandler}$: Executes multiple requests concurrently with configurable parallelism
  • $\text{\color{blue}SequentialRequestHandler}$: Processes requests one at a time in priority order
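
Both handler types can be used side by side; which one executes a given request is chosen per request. The snippet below is a minimal sketch that reuses only names shown elsewhere on this page (OwnRequest, the Handler and StaticDegreeOfParallelism options) and assumes both handlers have parameterless constructors:

var parallelHandler = new ParallelRequestHandler { StaticDegreeOfParallelism = 4 };
var sequentialHandler = new SequentialRequestHandler();

// Route each request to a specific handler via its options.
var bulkWork = new OwnRequest(async token => { await Task.Delay(500, token); return true; },
    new() { Handler = parallelHandler });
var orderedWork = new OwnRequest(async token => { await Task.Delay(500, token); return true; },
    new() { Handler = sequentialHandler });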

$\text{\color{purple}Containers}$

Containers group multiple requests into a single logical unit:

  • $\text{\color{orange}RequestContainer}$: Aggregates requests with unified control (start, pause, cancel all)
  • $\text{\color{green}ProgressableContainer}$: Extends containers with aggregated progress tracking

$\text{\color{red}Priority Scheduling}$

Requests are scheduled based on priority using an efficient quaternary min-heap:

  • $\text{\color{red}High Priority}$ (0): Processed first
  • $\text{\color{orange}Normal Priority}$ (1): Standard processing
  • $\text{\color{blue}Low Priority}$ (2): Processed last

Within the same priority level, requests are processed in FIFO order.
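
The same ordering rule (priority first, FIFO within a level) can be reproduced with the standard .NET PriorityQueue by pairing each priority with an ever-increasing sequence number. The sketch below illustrates the scheduling idea only, not the library's internal quaternary heap:

using System;
using System.Collections.Generic;

// Priority-then-FIFO ordering: lower priority value wins; ties break on an
// ever-increasing sequence number, so equal-priority items stay FIFO.
var queue = new PriorityQueue<string, (int Priority, long Sequence)>();
long sequence = 0;

void Enqueue(string work, int priority) => queue.Enqueue(work, (priority, sequence++));

Enqueue("routine sync", 1);   // Normal
Enqueue("flush cache", 2);    // Low
Enqueue("user login", 0);     // High
Enqueue("routine backup", 1); // Normal, queued after "routine sync"

while (queue.TryDequeue(out var work, out _))
    Console.WriteLine(work);  // user login, routine sync, routine backup, flush cache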


Installation

Via NuGet Package Manager

dotnet add package Shard.Requests

Via Package Manager Console

Install-Package Shard.Requests


Quick Start Guide

Basic Usage with OwnRequest

using Shard.Requests;

// Simple request wrapper
var request = new OwnRequest(async token =>
{
    // Your async logic here
    await Task.Delay(1000, token);
    Console.WriteLine("Request completed!");
    return true; // Return true for success, false to trigger retry
});

// Wait for completion
await request.Task;

Configuring Request Options

var request = new OwnRequest(async token =>
{
    var response = await httpClient.GetAsync(url, token);
    return response.IsSuccessStatusCode;
}, new RequestOptions<VoidStruct, VoidStruct>
{
    Priority = RequestPriority.High,
    NumberOfAttempts = 5,
    DelayBetweenAttempts = TimeSpan.FromSeconds(2),
    RequestStarted = req => Console.WriteLine("Starting..."),
    RequestCompleted = (req, result) => Console.WriteLine("Success!"),
    RequestFailed = (req, result) => Console.WriteLine("Failed after retries")
});

Using Request Containers

// Create multiple requests
var request1 = new OwnRequest(async token => { /* work */ return true; });
var request2 = new OwnRequest(async token => { /* work */ return true; });
var request3 = new OwnRequest(async token => { /* work */ return true; });

// Group them in a container
var container = new RequestContainer<OwnRequest>(request1, request2, request3);

// Control all at once
container.Start();
await container.Task; // Wait for all to complete

// Or pause/cancel all
// container.Pause();
// container.Cancel();

Progress Tracking

// Create progressable requests (implement IProgressableRequest)
var downloads = new List<ProgressableRequest>();
for (int i = 0; i < 10; i++)
{
    int fileIndex = i; // capture a copy so each closure downloads its own file
    downloads.Add(new ProgressableRequest(() => DownloadFile(fileIndex)));
}

// Track aggregated progress
var container = new ProgressableContainer<ProgressableRequest>(downloads.ToArray());
container.Progress.ProgressChanged += (sender, progress) =>
{
    Console.WriteLine($"Overall progress: {progress:P}");
};

await container.Task;

Key Features

$\text{\color{orange}1. Priority-Based Scheduling}$

Requests are processed based on priority levels. High-priority requests automatically jump ahead in the queue:

var urgentRequest = new OwnRequest(async token => { /* critical work */ return true; },
    new() { Priority = RequestPriority.High });

var normalRequest = new OwnRequest(async token => { /* routine work */ return true; },
    new() { Priority = RequestPriority.Normal });

Implementation: Uses a quaternary min-heap for O(log n) insertion and removal with FIFO ordering within priority levels.
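
For reference, the parent/child relationship in a 4-ary heap is pure index arithmetic over a flat array, which is what keeps sift operations logarithmic. This is a generic illustration, not the library's source:

static class QuaternaryHeap
{
    // Parent of node i is (i - 1) / 4; its children occupy indices 4i + 1 .. 4i + 4.
    public static int Parent(int i) => (i - 1) / 4;
    public static int FirstChild(int i) => 4 * i + 1;

    // Sift-up after appending a new key at index i (minimal illustration).
    public static void SiftUp(int[] keys, int i)
    {
        while (i > 0 && keys[i] < keys[Parent(i)])
        {
            (keys[i], keys[Parent(i)]) = (keys[Parent(i)], keys[i]);
            i = Parent(i);
        }
    }
}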

$\text{\color{green}2. Automatic Retry Logic}$

Configure intelligent retry behavior with delays:

new RequestOptions<VoidStruct, VoidStruct>
{
    NumberOfAttempts = 5,           // Retry up to 5 times
    DelayBetweenAttempts = TimeSpan.FromSeconds(3)  // Wait 3s between retries
}

$\text{\color{blue}3. Pause and Resume}$

Long-running requests can be paused and resumed without losing progress:

protected override async Task<RequestReturn> RunRequestAsync()
{
    for (int i = 0; i < 1000; i++)
    {
        await Request.Yield(); // Cooperative yield point - checks for pause/cancel
        DoWorkUnit(i);
    }
    return new RequestReturn { Successful = true };
}

// From outside
request.Pause();  // Pauses at next Yield() point
// Later...
request.Start();  // Resume execution

$\text{\color{purple}4. Dynamic Parallelism Control}$

Handlers automatically adjust concurrency based on system load:

var handler = new ParallelRequestHandler
{
    AutoParallelism = () => Environment.ProcessorCount, // Dynamic calculation
    MaxParallelism = 50  // Upper limit
};

Or set static parallelism:

handler.StaticDegreeOfParallelism = 10; // Always 10 concurrent requests

$\text{\color{red}5. State Management}$

Requests flow through a well-defined state machine:

Paused → Waiting → Idle → Running → Completed/Failed/Cancelled

(Waiting applies only when a DeployDelay is set; otherwise a started request moves straight to Idle.)

Track state changes:

request.StateChanged += (sender, newState) =>
{
    Console.WriteLine($"State changed to: {newState}");
};

$\text{\color{orange}6. Progress Tracking}$

ProgressableContainer aggregates progress from multiple requests using incremental averaging (O(1) updates):

var container = new ProgressableContainer<MyProgressableRequest>(...);
container.Progress.ProgressChanged += (s, progress) =>
{
    progressBar.Value = progress; // 0.0 to 1.0
};
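
The O(1) update works by maintaining a running mean instead of rescanning every child on each report. A generic sketch of that update, using a hypothetical OnChildProgress helper rather than the container's actual code:

double aggregate = 0;   // current mean progress across all children (0.0 to 1.0)
int childCount = 10;    // number of child requests in the container

// When one child moves from oldValue to newValue, shift the mean by delta / n.
void OnChildProgress(double oldValue, double newValue)
    => aggregate += (newValue - oldValue) / childCount;

OnChildProgress(0.0, 0.25); // first child reaches 25% → aggregate is now 0.025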

$\text{\color{green}7. Request Chaining}$

Chain requests to execute sequentially without re-queuing:

var downloadRequest = new OwnRequest(async token => { /* download */ return true; });
var processRequest = new OwnRequest(async token => { /* process */ return true; });

downloadRequest.Options.SubsequentRequest = processRequest;
// When downloadRequest completes, processRequest starts immediately

$\text{\color{blue}8. Lifecycle Callbacks}$

Hook into request lifecycle events:

new RequestOptions<VoidStruct, VoidStruct>
{
    RequestStarted = req => Log("Started"),
    RequestCompleted = (req, result) => Log("Completed successfully"),
    RequestFailed = (req, result) => Log("Failed after all retries"),
    RequestCancelled = req => Log("Cancelled by user"),
    RequestExceptionOccurred = (req, ex) => Log($"Exception: {ex.Message}")
}

$\text{\color{purple}9. Deploy Delays}$

Schedule requests to start after a delay:

var request = new OwnRequest(async token => { /* work */ return true; },
    new() { DeployDelay = TimeSpan.FromMinutes(5) });

request.Start(); // Will wait 5 minutes before transitioning to Idle and queuing

$\text{\color{red}10. Thread-Safe Operations}$

All state transitions use lock-free atomic operations (CompareExchange) for maximum performance and safety.
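
The underlying pattern is a single Interlocked.CompareExchange: the new state is written only if the current state is still the expected one, otherwise the transition is rejected. A generic sketch of the idea, with illustrative names rather than the library's source:

using System.Threading;

enum State { Idle = 0, Running = 1, Completed = 2 }

class AtomicStateMachine
{
    private int _state = (int)State.Idle;

    // Returns true only if the state was still `expected` at the moment of the swap.
    public bool TryTransition(State expected, State next)
        => Interlocked.CompareExchange(ref _state, (int)next, (int)expected) == (int)expected;
}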


Architecture Overview

Type Hierarchy

IRequest (interface)
├── Request<TOptions, TCompleted, TFailed> (abstract)
│   └── OwnRequest (concrete)
│
IRequestContainer<TRequest> (interface)
├── RequestContainer<TRequest>
│   └── ProgressableContainer<TRequest>
│
IRequestHandler (interface)
├── ParallelRequestHandler
└── SequentialRequestHandler

Request Execution Flow

1. Request created with options
2. AutoStart triggers (optional) → State: Idle
3. Added to RequestHandler's priority queue
4. Handler dequeues based on priority
5. State: Running → Executes RunRequestAsync()
6. On success: State: Completed
7. On failure: Retry if attempts remain → State: Idle → back to queue
8. On final failure: State: Failed
9. Callbacks invoked on SynchronizationContext
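
Step 9 is what lets UI code update controls directly from callbacks: they are posted back to the SynchronizationContext that created the request. The general mechanism looks like this (an illustrative sketch, not the library's code):

using System;
using System.Threading;

// Capture the caller's context up front, then marshal callbacks back to it.
var context = SynchronizationContext.Current;

void InvokeCallback(Action callback)
{
    if (context != null)
        context.Post(_ => callback(), null); // e.g. back onto the UI thread
    else
        callback();                          // no context captured: run inline
}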

State Machine

Individual Request States:

  • $\text{\color{gray}Idle}$: Ready to be processed
  • $\text{\color{blue}Waiting}$: Delayed start (DeployDelay)
  • $\text{\color{green}Running}$: Currently executing
  • $\text{\color{orange}Paused}$: Temporarily stopped (can resume)
  • $\text{\color{green}Completed}$: Successfully finished (terminal)
  • $\text{\color{red}Failed}$: Exhausted all retries (terminal)
  • $\text{\color{red}Cancelled}$: User-cancelled (terminal)

Valid Transitions:

Paused   → Idle | Waiting | Cancelled
Idle     → Running | Cancelled
Waiting  → Idle | Cancelled
Running  → Paused | Completed | Failed | Cancelled
Terminal → (no transitions allowed)
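
Encoded as data, the table above is a small adjacency map; the sketch below simply restates those transitions with illustrative names:

using System.Collections.Generic;

enum RequestState { Paused, Idle, Waiting, Running, Completed, Failed, Cancelled }

static class Transitions
{
    // The valid transitions from the table above, expressed as an adjacency map.
    public static readonly Dictionary<RequestState, RequestState[]> Allowed = new()
    {
        [RequestState.Paused]  = new[] { RequestState.Idle, RequestState.Waiting, RequestState.Cancelled },
        [RequestState.Idle]    = new[] { RequestState.Running, RequestState.Cancelled },
        [RequestState.Waiting] = new[] { RequestState.Idle, RequestState.Cancelled },
        [RequestState.Running] = new[] { RequestState.Paused, RequestState.Completed, RequestState.Failed, RequestState.Cancelled },
        // Completed, Failed, and Cancelled are terminal: no outgoing transitions.
    };
}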

Use Cases

$\text{\color{orange}Batch Processing}$

Download 1000 files with controlled parallelism and automatic retry:

var handler = new ParallelRequestHandler { StaticDegreeOfParallelism = 10 };
var requests = urls.Select(url => new OwnRequest(async token =>
{
    await DownloadFileAsync(url, token);
    return true;
}, new() {
    Handler = handler,
    NumberOfAttempts = 3,
    DelayBetweenAttempts = TimeSpan.FromSeconds(2)
})).ToList();

await handler.Task; // Wait for all downloads

$\text{\color{green}Game Development}$

Priority-based asset loading with pause when game is backgrounded:

var criticalAssets = new OwnRequest(LoadCriticalAssets,
    new() { Priority = RequestPriority.High });
var backgroundAssets = new OwnRequest(LoadBackgroundAssets,
    new() { Priority = RequestPriority.Low });

// When game loses focus:
ParallelRequestHandler.MainRequestHandler.Pause();

// When game regains focus:
ParallelRequestHandler.MainRequestHandler.Start();

$\text{\color{blue}API Rate Limiting}$

Throttle API calls with dynamic parallelism:

var handler = new ParallelRequestHandler
{
    AutoParallelism = () => apiClient.GetRateLimitRemaining() / 10,
    MaxParallelism = 20
};

foreach (var dataItem in dataToSync)
{
    handler.Add(new OwnRequest(async token =>
    {
        await apiClient.SyncAsync(dataItem, token);
        return true;
    }));
}

$\text{\color{purple}Long-Running Operations}$

Multi-hour data processing with save/resume:

protected override async Task<RequestReturn> RunRequestAsync()
{
    for (int i = checkpointIndex; i < totalItems; i++)
    {
        await Request.Yield(); // Allows pause
        ProcessItem(i);

        if (i % 100 == 0)
            SaveCheckpoint(i); // Persist progress
    }
    return new RequestReturn { Successful = true };
}

// User can pause/resume at any Yield() point

Performance Characteristics

| Operation | Complexity | Notes |
|---|---|---|
| Enqueue Request | $\text{\color{orange}O(log n)}$ | Quaternary heap insertion |
| Dequeue Request | $\text{\color{orange}O(log n)}$ | Extract min from heap |
| Priority Change | $\text{\color{orange}O(log n)}$ | Remove and re-insert |
| Yield Check | $\text{\color{green}O(1)}$ | Fast path when not paused |
| Progress Update | $\text{\color{green}O(1)}$ | Incremental averaging |
| Container State | $\text{\color{blue}O(m)}$ | m = number of child requests |

Thread Safety

All components are thread-safe:

  • State transitions: Lock-free atomic operations (CompareExchange)
  • Priority queue: Lock-based synchronization
  • Containers: Spin-lock for write operations
  • Callbacks: Marshaled to original SynchronizationContext

Next Steps

Explore the detailed documentation on the other pages of this wiki.


Support


License: MIT — Free for commercial and personal use

Built for developers who need industrial-strength async control
