Commit 106d459

mkdocs
1 parent ece445f commit 106d459

File tree

13 files changed: +1034 −0 lines changed

docs/api/cli.md

Whitespace-only changes.

docs/api/messages.md

Lines changed: 3 additions & 0 deletions
::: aact.Message

::: aact.messages.DataModel

docs/api/nodes.md

Lines changed: 11 additions & 0 deletions
AAct nodes are simply classes which inherit from `Node` and implement different ways of handling and sending messages.

::: aact.Node
    options:
      show_root_heading: true
      merge_init_into_class: false
      group_by_category: false

::: aact.NodeFactory
    options:
      show_root_heading: true

docs/assets/aact.svg

Lines changed: 2 additions & 0 deletions

docs/assets/favicon.svg

Lines changed: 2 additions & 0 deletions

docs/index.md

Lines changed: 19 additions & 0 deletions
# What is AAct?

AAct is designed for connecting sensors, neural networks, agents, users, and environments, and letting them communicate with each other.

<details>
<summary>Can you expand on that?</summary>

AAct is a Python library for building asynchronous, actor-based, concurrent systems.
Specifically, it is designed for building systems whose components communicate with each other but don't block each other.
</details>

## How does AAct work?

AAct is built around the concepts of nodes and dataflow: nodes are self-contained units
which receive messages from input channels, process them, and send messages to output channels.
Nodes are connected to each other to form a dataflow graph, in which messages flow from one node to another.
Each node runs in its own event loop, and nodes communicate with each other using Redis Pub/Sub.
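The node-and-channel structure described above can be pictured as a minimal dataflow graph (the node and channel names here are illustrative, not part of AAct):

```mermaid
graph TD
    sensor[Sensor Node] -->|sensor/readings| nn[Neural Network Node]
    nn -->|nn/actions| env[Environment Node]
```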

docs/install.md

Lines changed: 66 additions & 0 deletions
# Quickstart

## Installation

System requirements:

1. Python 3.10 or higher
2. Redis server

<details>
<summary>Redis installation</summary>

The easiest way to install Redis is to use Docker:

```bash
docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
```

Depending on your system, you can also install Redis from the official website: https://redis.io/download

Note: this library only requires a standard Redis server (without RedisJSON / RedisSearch).

</details>

```bash
pip install aact
```

<details>
<summary>From source</summary>

```bash
git clone https://github.com/ProKil/aact.git
cd aact
pip install .
```

For power users, we recommend `uv` for package management.
</details>

## Quick Start Example

Assuming your Redis server is running on `localhost:6379` (e.g., via Docker),
you can create a `dataflow.toml` file:

```toml
redis_url = "redis://localhost:6379/0" # required

[[nodes]]
node_name = "print"
node_class = "print"

[nodes.node_args.print_channel_types]
"tick/secs/1" = "tick"

[[nodes]]
node_name = "tick"
node_class = "tick"
```

To run the dataflow:

```bash
aact run-dataflow dataflow.toml
```

This will start the `tick` node and the `print` node. The `tick` node sends a message every second to the `print` node, which prints the message to the console.
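Conceptually, the tick-to-print flow resembles the following plain-asyncio sketch. This is only an illustration of two concurrent units exchanging messages over a channel; it uses no aact or Redis, and `tick_node`/`print_node` are stand-ins, not aact classes:

```python
import asyncio


async def tick_node(channel: asyncio.Queue, n: int) -> None:
    # Publish n messages to the channel, one per interval (shortened here).
    for i in range(n):
        await channel.put(f"tick {i}")
        await asyncio.sleep(0.01)  # stand-in for the 1-second tick interval
    await channel.put(None)  # sentinel: stream finished


async def print_node(channel: asyncio.Queue, received: list) -> None:
    # Consume messages from the channel; aact's print node would print them.
    while (msg := await channel.get()) is not None:
        received.append(msg)


async def main() -> list:
    channel: asyncio.Queue = asyncio.Queue()
    received: list = []
    # Both nodes run concurrently without blocking each other.
    await asyncio.gather(tick_node(channel, 3), print_node(channel, received))
    return received


messages = asyncio.run(main())
print(messages)  # ['tick 0', 'tick 1', 'tick 2']
```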

docs/plugins/griffe_doclinks.py

Lines changed: 106 additions & 0 deletions
```python
from __future__ import annotations

import ast
import re
from functools import partial
from pathlib import Path
from typing import Any

from griffe import Extension, Inspector, ObjectNode, Visitor, get_logger
from griffe import Object as GriffeObject
from pymdownx.slugs import slugify

DOCS_PATH = Path(__file__).parent.parent
slugifier = slugify(case="lower")
logger = get_logger("griffe_docklinks")


def find_heading(content: str, slug: str, file_path: Path) -> tuple[str, int]:
    for m in re.finditer("^#+ (.+)", content, flags=re.M):
        heading = m.group(1)
        h_slug = slugifier(heading, "-")
        if h_slug == slug:
            return heading, m.end()
    raise ValueError(f"heading with slug {slug!r} not found in {file_path}")


def insert_at_top(path: str, api_link: str) -> str:
    rel_file = path.rstrip("/") + ".md"
    file_path = DOCS_PATH / rel_file
    content = file_path.read_text()
    second_heading = re.search("^#+ ", content, flags=re.M)
    assert second_heading, "unable to find second heading in file"
    first_section = content[: second_heading.start()]

    if f"[{api_link}]" not in first_section:
        logger.debug(
            'inserting API link "%s" at the top of %s',
            api_link,
            file_path.relative_to(DOCS_PATH),
        )
        file_path.write_text(
            '??? api "API Documentation"\n'
            f"    [`{api_link}`][{api_link}]<br>\n\n"
            f"{content}"
        )

    heading = file_path.stem.replace("_", " ").title()
    return f'!!! abstract "Usage Documentation"\n    [{heading}](../{rel_file})\n'


def replace_links(m: re.Match[str], *, api_link: str) -> str:
    path_group = m.group(1)
    if "#" not in path_group:
        # no heading id, put the content at the top of the page
        return insert_at_top(path_group, api_link)

    usage_path, slug = path_group.split("#", 1)
    rel_file = usage_path.rstrip("/") + ".md"
    file_path = DOCS_PATH / rel_file
    content = file_path.read_text()
    heading, heading_end = find_heading(content, slug, file_path)

    next_heading = re.search("^#+ ", content[heading_end:], flags=re.M)
    if next_heading:
        next_section = content[heading_end : heading_end + next_heading.start()]
    else:
        next_section = content[heading_end:]

    if f"[{api_link}]" not in next_section:
        logger.debug(
            'inserting API link "%s" into %s',
            api_link,
            file_path.relative_to(DOCS_PATH),
        )
        file_path.write_text(
            f"{content[:heading_end]}\n\n"
            '??? api "API Documentation"\n'
            f"    [`{api_link}`][{api_link}]<br>"
            f"{content[heading_end:]}"
        )

    return (
        f'!!! abstract "Usage Documentation"\n    [{heading}](../{rel_file}#{slug})\n'
    )


def update_docstring(obj: GriffeObject) -> str:
    return re.sub(
        r"usage[\- ]docs: ?https://docs\.pydantic\.dev/.+?/(\S+)",
        partial(replace_links, api_link=obj.path),
        obj.docstring.value,
        flags=re.I,
    )


class UpdateDocstringsExtension(Extension):
    def on_instance(
        self,
        *,
        node: ast.AST | ObjectNode,
        obj: GriffeObject,
        agent: Visitor | Inspector,
        **kwargs: Any,
    ) -> None:
        if not obj.is_alias and obj.docstring is not None:
            obj.docstring.value = update_docstring(obj)
```
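The `usage docs:` link pattern that `update_docstring` rewrites can be exercised on its own with the standard library. This sketch reuses the regex from the plugin above; the example docstring and URL are made up for illustration:

```python
import re

# The same pattern used by update_docstring in the plugin above.
pattern = r"usage[\- ]docs: ?https://docs\.pydantic\.dev/.+?/(\S+)"

docstring = (
    "Something.\n\n"
    "usage docs: https://docs.pydantic.dev/2.8/concepts/models#field-ordering\n"
)

# The single capture group holds the path (and optional heading slug)
# relative to the docs version prefix.
m = re.search(pattern, docstring, flags=re.I)
path = m.group(1)
print(path)  # concepts/models#field-ordering
```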

docs/usage.md

Lines changed: 95 additions & 0 deletions
## Usage

### CLI

You can start from the CLI and progress to more advanced usage.

1. `aact --help` to see all commands.
2. `aact run-dataflow <dataflow_name.toml>` to run a dataflow. Check [Dataflow.toml syntax](#dataflowtoml-syntax).
3. `aact run-node` to run one node in a dataflow.
4. `aact draw-dataflow <dataflow_name_1.toml> <dataflow_name_2.toml> --svg-path <output.svg>` to draw a dataflow.

### Customized Node

Here is the minimal knowledge you would need to implement a customized node.

```python
from aact import Node, NodeFactory, Message

@NodeFactory.register("node_name")
class YourNode(Node[your_input_type, your_output_type]):

    # event_handler is the only function you **have** to implement
    async def event_handler(
        self, input_channel: str, input_message: Message[your_input_type]
    ) -> AsyncIterator[tuple[str, Message[your_output_type]]]:
        match input_channel:
            case input_channel_1:
                <do_your_stuff>
                yield output_channel_1, Message[your_output_type](data=your_output_message)
            case input_channel_2:
                ...

    # other functions you can implement: __init__, _wait_for_input, event_loop, __aenter__, __aexit__

# To run a node without the CLI
async with NodeFactory.make("node_name", arg_1, arg_2) as node:
    await node.event_loop()
```
## Concepts

There are three important concepts for understanding aact.

```mermaid
graph TD
    n1[Node 1] -->|channel_1| n2[Node 2]
```

### Nodes

Nodes (`aact.Node` subclasses) are designed to run in parallel asynchronously. This design is especially useful for deploying nodes onto different machines.
A node should inherit from the `aact.Node` class, which extends `pydantic.BaseModel`.

### Channels

A channel is a concept inherited from Redis Pub/Sub. You can think of it as a radio channel:
multiple publishers (nodes) can publish messages to the same channel, and multiple subscribers (nodes) can subscribe to the same channel.

### Messages

Messages are the data sent through the channels. Each message type is a class of the form `Message[T]`, where `T` is a subclass, or a union of subclasses, of `DataModel`.

#### Customized Message Type

If you want to create a new message type, you can create a new class that inherits from `DataModel`.

```python
@DataModelFactory.register("new_type")
class NewType(DataModel):
    new_type_data: ... = ...


# For example
@DataModelFactory.register("integer")
class Integer(DataModel):
    integer_data: int = Field(default=0)
```
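The decorator-based registration used here can be sketched in plain Python. This is a hypothetical minimal registry for illustration, not aact's actual `DataModelFactory` implementation, and it uses a dataclass in place of `DataModel`:

```python
from dataclasses import dataclass


class DataModelFactory:
    # Hypothetical minimal registry; aact's real factory may differ.
    registry: dict = {}

    @classmethod
    def register(cls, name: str):
        # Decorator: record the class under `name` and return it unchanged,
        # so registered classes can later be looked up by string name.
        def decorator(klass: type) -> type:
            cls.registry[name] = klass
            return klass
        return decorator


@DataModelFactory.register("integer")
@dataclass
class Integer:
    integer_data: int = 0


made = DataModelFactory.registry["integer"](integer_data=7)
print(made.integer_data)  # 7
```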
## Dataflow.toml syntax

```toml
redis_url = "redis://..." # required
extra_modules = ["package1.module1", "package2.module2"] # optional

[[nodes]]
node_name = "node_name_1" # A unique name in the dataflow
node_class = "node_class_1" # Should match the name passed into NodeFactory.register

[nodes.node_args]
node_arg_1 = "value_1"

[[nodes]]
node_name = "node_name_2"
node_class = "node_class_2"

# ...
```

docs/why.md

Lines changed: 8 additions & 0 deletions
## Why should I use AAct?

1. Non-blocking: the nodes are relatively independent of each other, so if you are waiting for a user's input,
   you can still process sensor data in the background.
2. Scalable: you can run a large number of nodes on one machine or distribute them across multiple machines.
3. Hackable: you can easily design your own nodes and connect them to the existing nodes.
4. Zero-code configuration: `dataflow.toml` allows you to design the dataflow graph without writing any
   Python code.
