Merged
65 changes: 53 additions & 12 deletions README.md
@@ -1,5 +1,6 @@
# Ratio1 Edge Node


Welcome to the **Ratio1 Edge Node** repository, formerly known as the **Naeural Edge Protocol Edge Node**. As a pivotal component of the Ratio1 ecosystem, this Edge Node software empowers a decentralized, privacy-preserving, and secure edge computing network. By enabling a collaborative network of edge nodes, Ratio1 facilitates the secure sharing of resources and the seamless execution of computation tasks across diverse devices.

Documentation sections:
@@ -16,12 +17,12 @@ Documentation sections:

## Introduction

The Ratio1 Edge Node is a meta Operating System designed to operate on edge devices, providing them the essential functionality required to join and thrive within the Ratio1 network. Each Edge Node manages the device’s resources, executes computation tasks efficiently, and communicates securely with other nodes in the network. Leveraging the powerful Ratio1 core libraries (formely known as Naeural Edge Protocol libraries) `naeural_core` and `ratio1` the Ratio1 Edge Node offers out-of-the-box usability starting in 2025. Users can deploy the Edge Node and SDK (`ratio1`) effortlessly without the need for intricate configurations, local subscriptions, tenants, user accounts, passwords, or broker setups.
The Ratio1 Edge Node is a meta Operating System designed to operate on edge devices, providing them the essential functionality required to join and thrive within the Ratio1 network. Each Edge Node manages the device’s resources, executes computation tasks efficiently, and communicates securely with other nodes in the network. Leveraging the powerful Ratio1 core libraries (formerly known as Naeural Edge Protocol libraries) `naeural_core` and the Ratio1 SDK (`ratio1_sdk`, published on PyPI as `ratio1`), the Ratio1 Edge Node offers out-of-the-box usability starting in 2025 without intricate configurations, local subscriptions, tenants, user accounts, passwords, or broker setups.

## Related Repositories

- [ratio1/naeural_core](https://github.com/ratio1/naeural_core) provides the modular pipeline engine that powers data ingestion, processing, and serving inside this node. Extend or troubleshoot runtime behavior by mirroring the folder layout in `extensions/` against the upstream modules.
- [Ratio1/ratio1_sdk](https://github.com/Ratio1/ratio1_sdk) is the client toolkit for building and dispatching jobs to Ratio1 nodes. Its tutorials pair with the workflows in `plugins/business/tutorials/` and are the best place to validate end-to-end scenarios.
- [Ratio1/ratio1_sdk](https://github.com/Ratio1/ratio1_sdk) is the client toolkit for building and dispatching jobs to Ratio1 nodes (published on PyPI as `ratio1`). Its tutorials pair with the workflows in `plugins/business/tutorials/` and are the best place to validate end-to-end scenarios.

When developing custom logic, install the three repositories in the same virtual environment (`pip install -e . ../naeural_core ../ratio1_sdk`) so interface changes remain consistent across the stack.

@@ -33,28 +34,30 @@ When developing custom logic, install the three repositories in the same virtual
Deploying a Ratio1 Edge Node within a development network is straightforward. Execute the following Docker command to launch the node, making sure you mount a persistent volume to the container so node data is preserved between restarts:

```bash
docker run -d --rm --name r1node --pull=always -v r1vol:/edge_node/_local_cache/ ratio1/edge_node:develop
docker run -d --rm --name r1node --pull=always -v r1vol:/edge_node/_local_cache/ ratio1/edge_node:devnet
```

- `-d`: Runs the container in the background.
- `--rm`: Removes the container upon stopping.
- `--name r1node`: Assigns the name `r1node` to the container.
- `--pull=always`: Ensures the latest image version is always pulled.
- `ratio1/edge_node:develop`: Specifies the Docker image to run.
- `ratio1/edge_node:devnet`: Specifies the devnet image; use `:mainnet` or `:testnet` for those networks.
- `-v r1vol:/edge_node/_local_cache/`: Mounts the `r1vol` volume to the `/edge_node/_local_cache/` directory within the container.

Architecture-specific variants (for example `:devnet-arm64`, `:devnet-tegra`, `:devnet-amd64-cpu`) will follow; pick the tag that matches your hardware once available.

This command initializes the Ratio1 Edge Node in development mode, automatically connecting it to the Ratio1 development network and preparing it to receive computation tasks while ensuring that all node data is stored in `r1vol`, preserving it between container restarts.


If you encounter issues when running the Edge Node, try adding the `--platform linux/amd64` flag to ensure the container runs on a supported platform.

```bash
docker run -d --rm --name r1node --platform linux/amd64 --pull=always -v r1vol:/edge_node/_local_cache/ ratio1/edge_node:develop
docker run -d --rm --name r1node --platform linux/amd64 --pull=always -v r1vol:/edge_node/_local_cache/ ratio1/edge_node:devnet
```
If your machine has GPU(s), you can enable GPU support by adding the `--gpus all` flag to the Docker command, allowing the Edge Node to use the GPU(s) for computation tasks.

```bash
docker run -d --rm --name r1node --gpus all --pull=always -v r1vol:/edge_node/_local_cache/ ratio1/edge_node:develop
docker run -d --rm --name r1node --gpus all --pull=always -v r1vol:/edge_node/_local_cache/ ratio1/edge_node:devnet
```

This ensures your node can utilize the GPU(s) for computation tasks and will accept training and inference jobs that require GPU acceleration.
@@ -64,12 +67,12 @@ This will ensure that your node will be able to utilize the GPU(s) for computati
If you want to run multiple Edge Nodes on the same machine, give each container a different name and, more importantly, a different volume, so the nodes do not conflict. Create a new volume for each node and mount it as follows:

```bash
docker run -d --rm --name r1node1 --pull=always -v r1vol1:/edge_node/_local_cache/ ratio1/edge_node:develop
docker run -d --rm --name r1node2 --pull=always -v r1vol2:/edge_node/_local_cache/ ratio1/edge_node:develop
docker run -d --rm --name r1node1 --pull=always -v r1vol1:/edge_node/_local_cache/ ratio1/edge_node:devnet
docker run -d --rm --name r1node2 --pull=always -v r1vol2:/edge_node/_local_cache/ ratio1/edge_node:devnet
```

Now you can run multiple Edge Nodes on the same machine without any conflicts between them.
>NOTE: If you are running multiple nodes on the same machine it is recommended to use docker-compose to manage the nodes. You can find an example of how to run multiple nodes on the same machine using docker-compose in the [Running multiple nodes on the same machine](#running-multiple-nodes-on-the-same-machine) section.
>NOTE: If you are running multiple nodes on the same machine it is recommended to use docker-compose to manage the nodes. You can find a docker-compose example in the section below.


## Inspecting the Edge Node
@@ -145,6 +148,8 @@ The [Ratio1 SDK](https://github.com/Ratio1/ratio1_sdk) is the recommended way to
pip install -e ../ratio1_sdk
```

If you prefer the published package, install from PyPI via `pip install ratio1`.

- Use the `nepctl` (formerly `r1ctl`) CLI that ships with the SDK to inspect the network, configure clients, and dispatch jobs.
- Explore `ratio1_sdk/tutorials/` for end-to-end examples; most have matching runtime counterparts in `plugins/business/tutorials/` inside this repository.
- SDK releases 2.6+ perform automatic dAuth configuration. After whitelisting your client, you can submit jobs without additional secrets.
@@ -226,6 +231,7 @@ Let's suppose you have the following node data:
"whitelist": [
"0xai_AthDPWc_k3BKJLLYTQMw--Rjhe3B6_7w76jlRpT6nDeX"
]
}
}
```

@@ -250,6 +256,7 @@ docker exec r1node get_node_info
"whitelist": [
"0xai_AthDPWc_k3BKJLLYTQMw--Rjhe3B6_7w76jlRpT6nDeX"
]
}
}
```
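
For scripting against this output, the emitted JSON can be parsed directly. A minimal sketch follows; note that only the nested `whitelist` key is confirmed by the excerpt above, and the surrounding key names are assumptions for illustration:

```python
import json

# Hypothetical payload mirroring the excerpt above; only the nested
# "whitelist" key is confirmed by the docs, the wrapper key is assumed.
raw = '''
{
  "result": {
    "whitelist": [
      "0xai_AthDPWc_k3BKJLLYTQMw--Rjhe3B6_7w76jlRpT6nDeX"
    ]
  }
}
'''
info = json.loads(raw)
whitelist = info["result"]["whitelist"]
print(len(whitelist), whitelist[0].startswith("0xai_"))  # → 1 True
```

In practice you would capture `docker exec r1node get_node_info` output and feed it to `json.loads` the same way.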

@@ -286,7 +293,7 @@ If you want to run multiple nodes on the same machine the best option is to use
```yaml
services:
r1node1:
image: ratio1/edge_node:testnet
image: ratio1/edge_node:devnet
container_name: r1node1
platform: linux/amd64
restart: always
@@ -297,7 +304,7 @@ services:
- "com.centurylinklabs.watchtower.stop-signal=SIGINT"

r1node2:
image: ratio1/edge_node:testnet
image: ratio1/edge_node:devnet
container_name: r1node2
platform: linux/amd64
restart: always
@@ -350,7 +357,7 @@ docker-compose down

Now, let's dissect the `docker-compose.yml` file:
- we have a variable number of nodes - in our case 2 nodes - `r1node1` and `r1node2` as services (we commented out the third node for simplicity)
- each node is using the `ratio1/edge_node:testnet` image
- each node is using the `ratio1/edge_node:devnet` image (swap the tag for `:mainnet` or `:testnet` as needed; architecture-specific variants such as `-arm64`, `-tegra`, `-amd64-cpu` will follow)
- each node has its own unique volume mounted to it
- we have a watchtower service that will check for new images every 1 minute and will update the nodes if a new image is available
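
A minimal sketch of such a watchtower service is shown below; the image name and flag are the common ones for `containrrr/watchtower`, and the project's actual compose file may differ:

```yaml
  watchtower:
    image: containrrr/watchtower
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    # poll every 60 seconds, matching the 1-minute interval mentioned above
    command: --interval 60
```

The `com.centurylinklabs.watchtower.stop-signal=SIGINT` label on the node services tells watchtower to stop them gracefully before swapping in the new image.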

@@ -375,6 +382,7 @@ For inquiries regarding the funding and its impact on this project, please conta

## Citation


If you use the Ratio1 Edge Node in your research or projects, please cite it as follows:

```bibtex
@@ -385,3 +393,36 @@ If you use the Ratio1 Edge Node in your research or projects, please cite it as
howpublished = {\url{https://github.com/Ratio1/edge_node}},
}
```


Additional publications and references:

```bibtex
@inproceedings{Damian2025CSCS,
author = {Damian, Andrei Ionut and Bleotiu, Cristian and Grigoras, Marius and
Butusina, Petrica and De Franceschi, Alessandro and Toderian, Vitalii and
Tapus, Nicolae},
title = {Ratio1 meta-{OS} -- decentralized {MLOps} and beyond},
booktitle = {2025 25th International Conference on Control Systems and Computer Science (CSCS)},
year = {2025},
pages = {258--265},
address = {Bucharest, Romania},
month = {May 27--30},
doi = {10.1109/CSCS66924.2025.00046},
isbn = {979-8-3315-7343-0},
issn = {2379-0482},
publisher = {IEEE}
}

@misc{Damian2025arXiv,
title = {Ratio1 -- AI meta-OS},
author = {Damian, Andrei and Butusina, Petrica and De Franceschi, Alessandro and
Toderian, Vitalii and Grigoras, Marius and Bleotiu, Cristian},
year = {2025},
month = {September},
eprint = {2509.12223},
archivePrefix = {arXiv},
primaryClass = {cs.OS},
doi = {10.48550/arXiv.2509.12223}
}
```
2 changes: 1 addition & 1 deletion extensions/business/container_apps/container_app_runner.py
@@ -2065,7 +2065,7 @@ def start_container(self):

self.P(log_str)

nano_cpu_limit = self._cpu_limit * 1_000_000_000
nano_cpu_limit = int(self._cpu_limit * 1_000_000_000)
mem_reservation = f"{parse_memory_to_mb(self._mem_limit, 0.9)}m"

run_kwargs = dict(
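
The `int(...)` cast introduced above matters because Docker's API expects `NanoCpus` as an integer, while a fractional CPU limit (e.g. `0.5` cores, now allowed by the `float(...)` change in `container_utils.py`) makes the multiplication produce a float. A small sketch of the conversion:

```python
def to_nano_cpus(cpu_limit: float) -> int:
    # Docker's NanoCpus field is an integer count of 1e-9 CPUs,
    # so a fractional core limit must be scaled and truncated.
    return int(cpu_limit * 1_000_000_000)

print(to_nano_cpus(0.5))  # → 500000000
print(to_nano_cpus(2))    # → 2000000000
```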
4 changes: 2 additions & 2 deletions extensions/business/container_apps/container_utils.py
@@ -307,7 +307,7 @@ def _setup_resource_limits_and_ports(self):

container_resources = self.cfg_container_resources
if isinstance(container_resources, dict) and len(container_resources) > 0:
self._cpu_limit = int(container_resources.get("cpu", DEFAULT_CPU_LIMIT))
self._cpu_limit = float(container_resources.get("cpu", DEFAULT_CPU_LIMIT))
self._gpu_limit = container_resources.get("gpu", DEFAULT_GPU_LIMIT)
self._mem_limit = container_resources.get("memory", DEFAULT_MEM_LIMIT)

@@ -417,7 +417,7 @@ def _setup_resource_limits_and_ports(self):
# endif main_port_mapped
else:
# No container resources specified, use defaults
self._cpu_limit = DEFAULT_CPU_LIMIT
self._cpu_limit = float(DEFAULT_CPU_LIMIT)
self._gpu_limit = DEFAULT_GPU_LIMIT
self._mem_limit = DEFAULT_MEM_LIMIT

42 changes: 41 additions & 1 deletion extensions/business/cybersec/README.md
@@ -3,4 +3,44 @@
## RedMesh
- folder: extensions/business/cybersec/red_mesh
- description: A framework for distributed orchestrated penetration testing and vulnerability assessment.
- version: v1 (Alpha) as of 2025-09-30
- version: v1 (Alpha) as of 2025-09-30

### Features

**Distributed Scanning**
- Port scanning distributed across heterogeneous network workers
- Distribution strategies: `SLICE` (divide ports across workers) or `MIRROR` (full redundancy)
- Port ordering: `SHUFFLE` (randomized for stealth) or `SEQUENTIAL`
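
The two distribution strategies can be sketched as follows; this is an illustrative model only, not the actual RedMesh implementation:

```python
import random

def distribute_ports(ports, n_workers, strategy="SLICE", order="SEQUENTIAL", seed=None):
    """Illustrative sketch: assign a port list to workers."""
    ports = list(ports)
    if order == "SHUFFLE":
        # randomized scan order for stealth
        random.Random(seed).shuffle(ports)
    if strategy == "MIRROR":
        # full redundancy: every worker scans every port
        return [ports[:] for _ in range(n_workers)]
    # SLICE: divide the ports round-robin across workers
    return [ports[i::n_workers] for i in range(n_workers)]

slices = distribute_ports(range(1, 9), 2)            # SLICE
mirrors = distribute_ports(range(1, 9), 2, "MIRROR")
print(slices)  # → [[1, 3, 5, 7], [2, 4, 6, 8]]
```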

**Service Detection**
- Banner grabbing and protocol identification
- Detection modules for FTP, SSH, HTTP, and other common services

**Web Vulnerability Testing**
- SQL injection detection
- Cross-site scripting (XSS) testing
- Directory traversal checks
- Security header analysis

**Run Modes**
- `SINGLEPASS`: One-time scan with aggregated report
- `CONTINUOUS_MONITORING`: Repeated scans at configurable intervals for change detection

**Stealth Capabilities**
- "Dune sand walking": Random delays between operations for IDS evasion
- Configurable `scan_min_delay` and `scan_max_delay` parameters
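
The "sand walking" behavior can be modeled as a jittered pause drawn from the configured window; a sketch only, with parameter names following the `scan_min_delay`/`scan_max_delay` settings above:

```python
import random
import time

def sand_walk(scan_min_delay: float, scan_max_delay: float, seed=None) -> float:
    # pick a random pause inside the configured window, then sleep,
    # so consecutive probes do not form a regular, IDS-friendly rhythm
    delay = random.Random(seed).uniform(scan_min_delay, scan_max_delay)
    time.sleep(delay)
    return delay

d = sand_walk(0.0, 0.01)
print(0.0 <= d <= 0.01)  # → True
```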

**Distributed Architecture**
- Job coordination via CStore (distributed state)
- Report storage in R1FS (IPFS-based content-addressed storage)
- Network-wide job tracking and worker status monitoring

### API Endpoints
- `POST /launch_test` - Start a new pentest job
- `GET /get_job_status` - Check job progress or retrieve results
- `GET /list_features` - List available scanning/testing features
- `GET /list_network_jobs` - List jobs across the network
- `GET /list_local_jobs` - List jobs on current node
- `GET /stop_and_delete_job` - Stop and remove a job
- `POST /stop_monitoring` - Stop continuous monitoring (SOFT/HARD)
- `GET /get_report` - Retrieve report by CID from R1FS
99 changes: 99 additions & 0 deletions extensions/business/cybersec/red_mesh/constants.py
@@ -0,0 +1,99 @@
"""
RedMesh constants and feature catalog definitions.
"""

FEATURE_CATALOG = [
{
"id": "service_info_common",
"label": "Service fingerprinting",
"description": "Collect banner and version data for common network services.",
"category": "service",
"methods": [
"_service_info_80",
"_service_info_443",
"_service_info_8080",
"_service_info_21",
"_service_info_22",
"_service_info_23",
"_service_info_25",
"_service_info_53",
"_service_info_161",
"_service_info_445",
"_service_info_generic"
]
},
{
"id": "service_info_advanced",
"label": "TLS/SSL & database diagnostics",
"description": "Evaluate TLS configuration, database services, and industrial protocols.",
"category": "service",
"methods": [
"_service_info_tls",
"_service_info_1433",
"_service_info_3306",
"_service_info_3389",
"_service_info_5432",
"_service_info_5900",
"_service_info_6379",
"_service_info_9200",
"_service_info_11211",
"_service_info_27017",
"_service_info_502"
]
},
{
"id": "web_test_common",
"label": "Common exposure scan",
"description": "Probe default admin panels, disclosed files, and common misconfigurations.",
"category": "web",
"methods": [
"_web_test_common",
"_web_test_homepage",
"_web_test_flags",
"_web_test_graphql_introspection",
"_web_test_metadata_endpoints"
]
},
{
"id": "web_test_security_headers",
"label": "Security headers audit",
"description": "Check HSTS, CSP, X-Frame-Options, and other critical response headers.",
"category": "web",
"methods": [
"_web_test_security_headers",
"_web_test_cors_misconfiguration",
"_web_test_open_redirect",
"_web_test_http_methods"
]
},
{
"id": "web_test_vulnerability",
"label": "Vulnerability probes",
"description": "Non-destructive probes for common web vulnerabilities.",
"category": "web",
"methods": [
"_web_test_path_traversal",
"_web_test_xss",
"_web_test_sql_injection",
"_web_test_api_auth_bypass"
]
}
]

# Job status constants
JOB_STATUS_RUNNING = "RUNNING"
JOB_STATUS_SCHEDULED_FOR_STOP = "SCHEDULED_FOR_STOP"
JOB_STATUS_STOPPED = "STOPPED"
JOB_STATUS_FINALIZED = "FINALIZED"

# Run mode constants
RUN_MODE_SINGLEPASS = "SINGLEPASS"
RUN_MODE_CONTINUOUS_MONITORING = "CONTINUOUS_MONITORING"

# Distribution strategy constants
DISTRIBUTION_SLICE = "SLICE"
DISTRIBUTION_MIRROR = "MIRROR"

# Port order constants
PORT_ORDER_SHUFFLE = "SHUFFLE"
PORT_ORDER_SEQUENTIAL = "SEQUENTIAL"