# Aiko Services Framework Architecture

This document provides a comprehensive overview of the Aiko Services framework architecture, showing how the key components (Pipeline, PipelineElement, DataScheme, and Actor) work together.

## Architecture Diagram
```mermaid
graph TB
    %% Core Framework Architecture
    subgraph "Aiko Services Framework"
        direction TB

        %% Actor Model Layer
        subgraph "Actor Model Layer"
            Service[Service<br/>Base distributed component]
            Actor[Actor<br/>Message-driven processing]
            Service --> Actor
        end

        %% Pipeline Layer
        subgraph "Pipeline Execution Layer"
            Pipeline[Pipeline<br/>Orchestrates processing workflow]
            PipelineElement[PipelineElement<br/>Abstract processing unit]
            PipelineElementImpl[PipelineElementImpl<br/>Concrete implementation]

            Pipeline --> PipelineElement
            Actor --> PipelineElement
            PipelineElement --> PipelineElementImpl
        end

        %% Data Layer
        subgraph "Data Management Layer"
            DataScheme[DataScheme<br/>Abstract data access interface]
            DataSource[DataSource<br/>Input pipeline element]
            DataTarget[DataTarget<br/>Output pipeline element]

            PipelineElementImpl --> DataSource
            PipelineElementImpl --> DataTarget
            DataSource --> DataScheme
            DataTarget --> DataScheme
        end

        %% Concrete Implementations
        subgraph "Data Scheme Implementations"
            FileScheme[DataSchemeFile<br/>file://]
            ZMQScheme[DataSchemeZMQ<br/>zmq://]
            RTSPScheme[DataSchemeRTSP<br/>rtsp://]
            TTYScheme[DataSchemeTTY<br/>tty://]

            DataScheme --> FileScheme
            DataScheme --> ZMQScheme
            DataScheme --> RTSPScheme
            DataScheme --> TTYScheme
        end
    end

    %% External Systems
    subgraph "External Systems"
        MQTT[MQTT Broker<br/>Message transport]
        FileSystem[File System<br/>Local/network storage]
        NetworkStreams[Network Streams<br/>RTSP, ZMQ, etc.]
        Terminal[Terminal/Console<br/>Interactive I/O]
    end

    %% Processing Flow
    subgraph "Processing Flow"
        direction LR
        Stream[Stream<br/>stream_id, frame_id]
        Frame[Frame<br/>Data payload + metadata]
        StreamEvent[StreamEvent<br/>OKAY, ERROR, STOP]
        Stream --> Frame
        Frame --> StreamEvent
    end

    %% Pipeline Definition
    subgraph "Pipeline Configuration"
        PipelineJSON[Pipeline Definition<br/>JSON configuration]
        Graph[Graph Structure<br/>Element connections]
        Parameters[Parameters<br/>Runtime configuration]
        Elements[Element Definitions<br/>Input/output schemas]

        PipelineJSON --> Graph
        PipelineJSON --> Parameters
        PipelineJSON --> Elements
    end

    %% Connections
    Actor -.->|"MQTT Messages"| MQTT
    FileScheme -.->|"Read/Write"| FileSystem
    ZMQScheme -.->|"ZMQ Sockets"| NetworkStreams
    RTSPScheme -.->|"Video Streams"| NetworkStreams
    TTYScheme -.->|"Interactive I/O"| Terminal

    Pipeline -.->|"Loads from"| PipelineJSON
    PipelineElement -.->|"Processes"| Stream

    %% Data Flow Example
    subgraph "Example Data Flow"
        direction LR
        DS[DataSource<br/>file://input.txt]
        PE1[ProcessingElement<br/>Transform data]
        PE2[ProcessingElement<br/>Filter results]
        DT[DataTarget<br/>zmq://output:5555]

        DS --> PE1
        PE1 --> PE2
        PE2 --> DT
    end

    %% Styling
    classDef actor fill:#e1f5fe
    classDef pipeline fill:#f3e5f5
    classDef data fill:#e8f5e8
    classDef external fill:#fff3e0
    classDef flow fill:#fce4ec

    class Service,Actor actor
    class Pipeline,PipelineElement,PipelineElementImpl pipeline
    class DataScheme,DataSource,DataTarget,FileScheme,ZMQScheme,RTSPScheme,TTYScheme data
    class MQTT,FileSystem,NetworkStreams,Terminal external
    class Stream,Frame,StreamEvent,DS,PE1,PE2,DT flow
```

## Key Architecture Components

### 1. Actor Model Layer
- **Service**: Provides the base distributed component functionality with service discovery and registration
- **Actor**: Extends Service with message-driven processing capabilities, implementing the Actor Model for distributed computation
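
The Actor Model idea above can be sketched with a plain mailbox and worker thread. This is an illustrative simplification, not the actual aiko_services API; the class and method names here are hypothetical:

```python
import queue
import threading

class Actor:
    """Minimal Actor Model illustration: each actor owns a mailbox
    and processes one message at a time on its own thread."""

    def __init__(self):
        self._mailbox = queue.Queue()
        self._results = []
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        self._mailbox.put(message)   # asynchronous: never blocks the sender

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message is None:      # sentinel: shut the actor down
                break
            self._results.append(self.on_message(message))

    def on_message(self, message):   # override in subclasses
        return message

    def stop(self):
        self._mailbox.put(None)
        self._thread.join()

class Doubler(Actor):
    def on_message(self, message):
        return message * 2

actor = Doubler()
for n in (1, 2, 3):
    actor.send(n)       # senders return immediately
actor.stop()            # drain the mailbox and join the thread
print(actor._results)   # [2, 4, 6]
```

Because each message is handled sequentially on the actor's own thread, no locking is needed around the actor's internal state, which is the core benefit of the pattern.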

### 2. Pipeline Execution Layer
- **Pipeline**: Orchestrates workflow execution and manages collections of PipelineElements
- **PipelineElement**: Abstract base class defining the interface for all processing units
- **PipelineElementImpl**: Concrete implementation providing parameter management, logging, and lifecycle methods
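
The relationship between Pipeline and PipelineElement can be illustrated with a simplified stand-in. The real aiko_services interface differs (it carries stream context and returns StreamEvent values); all names below are hypothetical:

```python
from abc import ABC, abstractmethod

class PipelineElement(ABC):
    """Simplified stand-in: a processing unit with one entry point."""

    @abstractmethod
    def process_frame(self, context, **inputs):
        """Return (okay, outputs); okay=False stops the stream."""

class Pipeline:
    """Runs elements in sequence, feeding each element's outputs
    into the next element's inputs."""

    def __init__(self, elements):
        self.elements = elements

    def process(self, context, **inputs):
        for element in self.elements:
            okay, inputs = element.process_frame(context, **inputs)
            if not okay:
                break
        return inputs

class Upper(PipelineElement):
    def process_frame(self, context, text):
        return True, {"text": text.upper()}

class Exclaim(PipelineElement):
    def process_frame(self, context, text):
        return True, {"text": text + "!"}

pipeline = Pipeline([Upper(), Exclaim()])
result = pipeline.process({"stream_id": 0}, text="hello")
print(result)  # {'text': 'HELLO!'}
```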

### 3. Data Management Layer
- **DataScheme**: Abstract interface for data access patterns, supporting URL-based routing
- **DataSource**: Specialized pipeline element for reading data from various sources
- **DataTarget**: Specialized pipeline element for writing data to various targets

### 4. Data Scheme Implementations
The framework supports multiple data access protocols through concrete DataScheme implementations:
- **DataSchemeFile** (`file://`): File system access with glob patterns and template naming
- **DataSchemeZMQ** (`zmq://`): ZeroMQ socket communication for distributed messaging
- **DataSchemeRTSP** (`rtsp://`): RTSP video stream processing using GStreamer
- **DataSchemeTTY** (`tty://`): Terminal/console interactive I/O
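
URL-based routing to a scheme implementation can be sketched as a registry keyed on the URL scheme. This is an illustration of the idea, not the framework's actual registration mechanism; the decorator and class internals are assumptions:

```python
from urllib.parse import urlparse

DATA_SCHEME_LOOKUP = {}  # URL scheme name --> handler class

def data_scheme(name):
    """Class decorator that registers a handler for a URL scheme."""
    def register(cls):
        DATA_SCHEME_LOOKUP[name] = cls
        return cls
    return register

@data_scheme("file")
class DataSchemeFile:
    def __init__(self, url):
        parsed = urlparse(url)
        self.path = parsed.path or parsed.netloc

@data_scheme("zmq")
class DataSchemeZMQ:
    def __init__(self, url):
        parsed = urlparse(url)
        self.host, self.port = parsed.hostname, parsed.port

def create_data_scheme(url):
    """Pick the handler class from the URL's scheme prefix."""
    scheme = urlparse(url).scheme
    try:
        return DATA_SCHEME_LOOKUP[scheme](url)
    except KeyError:
        raise ValueError(f"Unsupported data scheme: {scheme}")

source = create_data_scheme("zmq://output:5555")
print(type(source).__name__, source.host, source.port)
```

The decorator-registry pattern keeps the dispatcher closed for modification: adding a new scheme (say `rtsp://`) means writing one new class, not editing the routing code.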

### 5. Processing Flow
- **Stream**: Contains stream_id, frame_id, and processing state
- **Frame**: Data payload with metadata passed between pipeline elements
- **StreamEvent**: Processing result states (OKAY, ERROR, STOP) that control flow
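
How StreamEvent values control frame-by-frame execution can be sketched as follows. This is a simplification: the real Stream fields, Frame structure, and StreamEvent members may differ:

```python
from dataclasses import dataclass, field
from enum import Enum

class StreamEvent(Enum):
    OKAY = 0    # continue processing
    ERROR = 1   # abort the stream with a diagnostic
    STOP = 2    # normal end of stream

@dataclass
class Frame:
    data: dict
    metadata: dict = field(default_factory=dict)

@dataclass
class Stream:
    stream_id: int
    frame_id: int = 0

def run_stream(stream, frames, process_frame):
    """Feed frames through process_frame until STOP or ERROR."""
    outputs = []
    for frame in frames:
        event, output = process_frame(stream, frame)
        if event is not StreamEvent.OKAY:
            break
        outputs.append(output)
        stream.frame_id += 1
    return outputs

def limit_two(stream, frame):
    """Example element: process at most two frames, then STOP."""
    if stream.frame_id >= 2:
        return StreamEvent.STOP, None
    return StreamEvent.OKAY, frame.data["value"] * 10

stream = Stream(stream_id=1)
frames = [Frame({"value": n}) for n in range(5)]
results = run_stream(stream, frames, limit_two)
print(results)  # [0, 10]
```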

### 6. Pipeline Configuration
Pipelines are defined through JSON configuration files containing:
- **Graph Structure**: Defines how elements are connected using S-expression syntax
- **Parameters**: Runtime configuration values
- **Element Definitions**: Input/output schemas and deployment information
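
The S-expression graph syntax can be read with a few lines of code. The sketch below is a generic S-expression parser, not the framework's own (which likely handles more cases, such as fan-out):

```python
def parse_sexpr(text):
    """Parse one S-expression into nested Python lists of tokens."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def read(index):
        token = tokens[index]
        if token == "(":
            node, index = [], index + 1
            while tokens[index] != ")":
                child, index = read(index)
                node.append(child)
            return node, index + 1    # skip the closing ")"
        return token, index + 1

    node, _ = read(0)
    return node

graph = parse_sexpr("(PE_RandomIntegers PE_Add (random: i))")
print(graph)  # ['PE_RandomIntegers', 'PE_Add', ['random:', 'i']]
```

Read this way, the nested list says: PE_RandomIntegers feeds PE_Add, with the trailing mapping connecting PE_RandomIntegers' `random` output to PE_Add's `i` input.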

## Example Pipeline Definition

```json
{
  "version": 0,
  "name": "p_example",
  "runtime": "python",
  "graph": ["(PE_RandomIntegers PE_Add (random: i))"],
  "parameters": { "constant": 1, "delay": 0.0, "limit": 2, "rate": 1.0 },
  "elements": [
    {
      "name": "PE_RandomIntegers",
      "input": [{ "name": "random", "type": "int" }],
      "output": [{ "name": "random", "type": "int" }],
      "deploy": {
        "local": { "module": "aiko_services.examples.pipeline.elements" }
      }
    },
    {
      "name": "PE_Add",
      "input": [{ "name": "i", "type": "int" }],
      "output": [{ "name": "i", "type": "int" }],
      "deploy": {
        "local": { "module": "aiko_services.examples.pipeline.elements" }
      }
    }
  ]
}
```
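
The behavior of the two elements in this definition might look like the following sketch. The real implementations in `aiko_services.examples.pipeline.elements` are built on the framework's PipelineElement base class; this version strips that away, and the function names and value range are assumptions:

```python
import random

def pe_random_integers(parameters):
    """Emit `limit` random integers (range chosen for illustration)."""
    limit = parameters.get("limit", 2)
    return [random.randint(0, 10) for _ in range(limit)]

def pe_add(values, parameters):
    """Add the `constant` parameter to each input value."""
    constant = parameters.get("constant", 1)
    return [value + constant for value in values]

# Parameters as declared in the pipeline definition above
parameters = {"constant": 1, "delay": 0.0, "limit": 2, "rate": 1.0}

random.seed(0)  # deterministic output for the example
randoms = pe_random_integers(parameters)
result = pe_add(randoms, parameters)
print(randoms, "->", result)
```

This mirrors the graph wiring: PE_RandomIntegers' `random` output becomes PE_Add's `i` input, and PE_Add applies the shared `constant` parameter.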

## Key Features

1. **Distributed Processing**: Components can run locally or across different processes and hosts
2. **Asynchronous Message Passing**: MQTT-based communication between distributed actors
3. **Flexible Data Sources**: URL-based routing to different data access schemes
4. **Graph-based Workflows**: Pipeline elements connected through directed graph structures
5. **Extensible Architecture**: Plugin-based system for adding new element types and data schemes
6. **Real-time Processing**: Low-latency stream processing with frame-by-frame execution

## Integration with External Systems

- **MQTT Broker**: Handles distributed message passing between actors
- **File Systems**: Local and network file access through DataSchemeFile
- **Network Streams**: RTSP video streams and ZMQ sockets for distributed communication
- **Interactive Terminals**: Console I/O for debugging and interactive processing

This architecture enables building complex, distributed data processing workflows for AIoT, machine learning, media streaming, and robotics applications, while maintaining consistent interfaces and message-passing semantics.