An AI-powered agent daemon for Tizen OS
Control your Tizen device through natural language – powered by multi-provider LLMs, containerized skill execution, and a web-based admin dashboard.
TizenClaw is a native C++ system daemon that brings LLM-based AI agent capabilities to Tizen devices. It receives natural language commands via multiple communication channels, interprets them through configurable LLM backends, and executes device-level actions using sandboxed Python skills inside OCI containers and the Tizen Action Framework.
TizenClaw is part of the Claw family of AI agent runtimes, each targeting different environments:
| | TizenClaw | OpenClaw | NanoClaw | ZeroClaw |
|---|---|---|---|---|
| Language | C++20 | TypeScript | TypeScript | Rust |
| Target | Tizen embedded | Cloud / Desktop | Container hosts | Edge hardware |
| Binary | ~812KB binary | Node.js runtime | Node.js runtime | ~8.8MB single binary |
| Channels | 7 | 22+ | 5 | 17 |
| LLM Backends | 5+ (extensible) | 4+ | 1 (Claude) | 5+ |
| Sandboxing | OCI (crun) | Docker | Docker | Docker |
| Unique | Tizen C-API, MCP | Canvas/A2UI, ClawHub | SKILL.md, AI-native | <5MB RAM, traits |
What makes TizenClaw different:
- Native C++ Performance – Lower memory/CPU than TypeScript/Node.js runtimes; ~812KB stripped native executable (armv7l), well suited to embedded devices
- OCI Container Isolation – crun-based seccomp + namespaces, finer syscall control than app-level sandboxing
- Direct Tizen C-API – ctypes wrappers for device hardware (battery, Wi-Fi, BT, display, volume, sensors, notifications, alarm, app management)
- Tizen Action Framework – Native integration with per-action LLM tools, MD schema caching, event-driven updates
- Extensible LLM Backends – 5 built-in backends (Gemini, OpenAI, Anthropic, xAI, Ollama) with unified priority-based switching; add further backends via RPK plugins with no daemon recompilation
- RPK Tool Distribution – Extend the skill ecosystem dynamically with Tizen Resource Packages (RPKs) bundling Python skills, again without daemon recompilation; platform-signed RPK packages are automatically discovered and symlinked into the skills directory
- CLI Tool Plugins – Extend agent capabilities with native CLI tools packaged as TPKs; CLI tools run directly on the host for Tizen C-API access, with rich Markdown descriptors (`.tool.md`) for LLM tool discovery
- Lightweight Deployment – systemd + RPM; standalone device execution without Node.js or Docker
- Native MCP Server – C++ MCP server integrated into the daemon; Claude Desktop controls Tizen via sdb
- Health Monitoring – Built-in Prometheus-style metrics endpoint plus a live dashboard panel
- OTA Updates – Over-the-air skill updates with version checking and rollback
To build TizenClaw, you need the Git Build System (GBS) and MIC. For Ubuntu, you can set up the apt repository and install the tools with the following commands:
```sh
echo "deb [trusted=yes] http://download.tizen.org/tools/latest-release/Ubuntu_$(lsb_release -rs)/ /" | \
  sudo tee /etc/apt/sources.list.d/tizen.list > /dev/null
sudo apt update
sudo apt install gbs mic
```
For detailed installation guides, refer to the official Tizen documentation.
The recommended way to build and deploy TizenClaw is using the included deploy.sh script. This script automatically handles building the core daemon, the RAG knowledge base, and deploying them to your connected device.
```sh
# Automated build + deploy to device
./deploy.sh

# To build and deploy with the secure tunnel dependency (ngrok):
./deploy.sh --with-ngrok
```
Once deployed, the `deploy.sh` script automatically fetches the secure public URL from the local ngrok API (127.0.0.1:4040) and prints the URL for accessing the Web Dashboard.
The build process generates the core RPM package:
1. `tizenclaw`: The core AI daemon, Action Framework bridge, and built-in skills.

A companion project, `tizenclaw-assets`, produces a separate RPM:

2. `tizenclaw-assets`: Consolidated ML/AI asset package – ONNX Runtime, pre-built RAG knowledge databases, embedding model (all-MiniLM-L6-v2), and the PaddleOCR PP-OCRv3 on-device OCR engine with CLI tool. Highly recommended for full AI capabilities.
> Note: `deploy.sh` automatically detects and builds `tizenclaw-assets` if it exists at `../tizenclaw-assets`.
On-Device Dashboard (tizenclaw-webview)
TizenClaw also includes a companion Tizen web app (tizenclaw-webview) that provides direct on-device access to the Web Admin Dashboard.
If installed, you can launch the dashboard directly on the device screen:
```sh
# Launch the dashboard UI on the device
sdb shell app_launcher -s org.tizen.tizenclaw-webview __APP_SVC_URI__ "http://localhost:9090"
```
Alternatively, access it from your development machine:
```sh
# Forward port and access dashboard from host
sdb forward tcp:9090 tcp:9090
open http://localhost:9090
```
If you prefer to build and deploy manually, use the following commands:
```sh
# Build
gbs build -A x86_64 --include-all

# Manual deployment to device
sdb root on && sdb shell mount -o remount,rw /
sdb push ~/GBS-ROOT/local/repos/tizen/x86_64/RPMS/tizenclaw-1.0.0-1.x86_64.rpm /tmp/
sdb push ~/GBS-ROOT/local/repos/tizen/x86_64/RPMS/tizenclaw-assets-1.0.0-1.x86_64.rpm /tmp/
sdb shell rpm -Uvh --force /tmp/tizenclaw-1.0.0-1.x86_64.rpm /tmp/tizenclaw-assets-1.0.0-1.x86_64.rpm

# Start daemon
sdb shell systemctl daemon-reload
sdb shell systemctl restart tizenclaw
```
- Standardized IPC (JSON-RPC 2.0) – Communicates with `tizenclaw-cli` and external clients over Unix Domain Sockets using standard JSON-RPC 2.0.
- Aggressive Edge Memory Management – Monitors daemon idle states locally and dynamically flushes SQLite caches (`sqlite3_release_memory(50MB)`) while aggressively reclaiming heap space back to Tizen OS (`malloc_trim(0)`), guided by PSS profiling.
- Unified LLM Priority Routing – Supports Gemini, OpenAI, Anthropic, xAI, Ollama, and runtime RPK plugins via a unified queue, falling back strictly by assigned priority values (`1` = baseline).
- 7 Communication Channels – Telegram, Slack, Discord, MCP (Claude Desktop), Webhook, Voice (TTS/STT), and Web Dashboard, all managed through a `Channel` abstraction.
- Function Calling / Tool Use – The LLM autonomously invokes device skills through an iterative Agentic Loop with streaming responses.
- Tizen Action Framework – Native device actions via `ActionBridge` with per-action typed LLM tools, MD schema caching, and live updates via `action_event_cb`.
- OCI Container Isolation – Skills run inside a `crun` container with namespace isolation, limiting access to host resources.
- Semantic Search (RAG) – On-device embedding (all-MiniLM-L6-v2 via ONNX Runtime) with an SQLite vector store for LLM-independent knowledge retrieval. See ML/AI Assets.
- On-Device OCR – PaddleOCR PP-OCRv3 text detection and recognition via ONNX Runtime. Korean+English (lite, ~13MB) or CJK (full, ~84MB) model selectable at build time.
- Task Scheduler – Cron/interval/one-shot/weekly scheduled tasks with LLM integration and retry logic.
- Security – Encrypted API keys, tool execution policies with risk levels, structured audit logging, HMAC-SHA256 webhook auth.
- Web Admin Dashboard – Dark glassmorphism SPA on port 9090 with session monitoring, chat interface, config editor, and admin authentication.
- Multi-Agent – Supervisor agent pattern, skill pipelines, A2A protocol for cross-device agent collaboration.
- Session Persistence – Conversation history stored as Markdown with YAML frontmatter, surviving daemon restarts.
- Persistent Memory – Long-term, episodic, and session-scoped short-term memory with LLM tools (`remember`, `recall`, `forget`). Configurable retention via `memory_config.json`, idle-time summary regeneration, and automatic skill execution tracking.
- Tool Schema Discovery – Embedded tool and Action Framework schemas stored as Markdown files under `/opt/usr/share/tizenclaw/tools/`, automatically loaded into the LLM system prompt for precise tool invocation.
- Health Monitoring – Prometheus-style `/api/metrics` endpoint with a live dashboard health panel (CPU, memory, uptime, request counts).
- OTA Updates – Over-the-air skill updates via HTTP pull, version checking against a remote manifest, and automatic rollback on failure.
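The JSON-RPC 2.0 IPC mentioned above can be sketched as a minimal client frame. Note that the socket path (`/run/tizenclaw/ipc.sock`) and the method name (`agent.query`) are assumptions for illustration, not the daemon's documented API:

```python
# Build a JSON-RPC 2.0 request frame for a Unix-Domain-Socket IPC server.
# The socket path and method name below are hypothetical.
import json
import socket

def rpc_request(method: str, params: dict, req_id: int = 1) -> bytes:
    """Encode one JSON-RPC 2.0 request as a UTF-8 byte frame."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    }).encode()

payload = rpc_request("agent.query", {"text": "What is the battery level?"})
print(payload.decode())

# Sending it to the daemon would then look roughly like:
# sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
# sock.connect("/run/tizenclaw/ipc.sock")  # assumed path
# sock.sendall(payload)
```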
TizenClaw uses a dual-container architecture powered by OCI-compliant runtimes (crun):
```mermaid
graph TB
  subgraph Channels["Communication Channels"]
    Telegram["Telegram"]
    Slack["Slack"]
    Discord["Discord"]
    MCP["MCP (Claude Desktop)"]
    Webhook["Webhook"]
    Voice["Voice (STT/TTS)"]
    WebUI["Web Dashboard"]
  end
  subgraph Daemon["TizenClaw Daemon (systemd)"]
    ChannelReg["ChannelRegistry"]
    IPC["IPC Server<br/>(JSON-RPC 2.0 via UDS)"]
    AgentCore["AgentCore<br/>(Agentic Loop)"]
    ContainerEngine["ContainerEngine"]
    ActionBridge["ActionBridge<br/>(Tizen Action Framework Worker)"]
    SessionStore["SessionStore"]
    TaskScheduler["TaskScheduler"]
    EmbeddingStore["EmbeddingStore (RAG)"]
    WebDashboard["WebDashboard<br/>(libsoup, port 9090)"]
    subgraph LLM["LlmBackend"]
      Gemini["Gemini"]
      OpenAI["OpenAI / xAI"]
      Anthropic["Anthropic"]
      Ollama["Ollama"]
    end
    ChannelReg --> IPC
    IPC --> AgentCore
    AgentCore --> LLM
    AgentCore --> ContainerEngine
    AgentCore --> ActionBridge
    AgentCore --> SessionStore
    AgentCore --> TaskScheduler
    AgentCore --> EmbeddingStore
  end
  subgraph Secure["Secure Container (crun)"]
    Skills["Python Skills<br/>(sandboxed)"]
    SkillList["35+ Skills via Tizen C-API<br/>App Β· Device Β· Network Β· Media<br/>Display Β· Sensor Β· System Control<br/>+ Runtime Custom Skills (LLM-generated)<br/>Async support via tizen-core"]
    Skills --- SkillList
  end
  subgraph ActionFW["Tizen Action Framework"]
    ActionService["Action Service<br/>(on-demand)"]
    ActionApps["Device-specific actions<br/>(auto-discovered)"]
    ActionService --- ActionApps
  end
  Telegram & Slack & Discord & Voice --> ChannelReg
  MCP --> IPC
  Webhook & WebUI --> WebDashboard
  ContainerEngine -- "crun exec" --> Skills
  ActionBridge -- "action C API" --> ActionService
```
TizenClaw ships with 35 container skills (Python, OCI sandbox) and 10+ built-in tools (native C++). Async skills use the tizen-core event loop for callback-based APIs.
| Category | Skills | Examples |
|---|---|---|
| App Management | 5 | send_app_control, list_apps, terminate_app, get_package_info |
| Device Info & Sensors | 7 | get_device_info, get_sensor_data, get_thermal_info, get_runtime_info |
| Network & Connectivity | 6 | get_wifi_info, scan_wifi_networks ⚡, scan_bluetooth_devices ⚡, get_data_usage |
| Display & Hardware | 6 | control_display, control_volume, control_haptic, control_led |
| Media & Content | 5 | get_metadata, get_media_content, get_mime_type, get_sound_devices |
| System Actions | 6 | download_file ⚡, send_notification, schedule_alarm, play_tone, web_search |

⚡ = Async skill using tizen-core event loop
- Built-in Tools: `execute_code`, `file_manager`, `create_task`, `list_tasks`, `cancel_task`, `create_session`, `list_sessions`, `send_to_session`, `ingest_document`, `search_knowledge`, `execute_action`, `action_<name>` (per-action tools), `remember`, `recall`, `forget` (persistent memory), `execute_cli` (CLI tool plugins)
- Tool Dispatch: `std::unordered_map<string, ToolHandler>` for O(1) dispatch with a `starts_with` fallback for dynamically named tools (e.g., `action_*`)
Full reference: Tools Reference
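The dispatch scheme described above (exact-name hash lookup first, then a prefix fallback for dynamically named tools) can be mimicked in Python; the daemon's real implementation is the C++ `std::unordered_map` mentioned in the bullet, so this is purely illustrative:

```python
# Toy Python version of the tool-dispatch pattern: O(1) exact-name
# lookup, then a starts_with-style fallback for dynamic action_* tools.
handlers = {
    "execute_code": lambda args: "ran code",
    "file_manager": lambda args: "managed files",
}

def dispatch(name: str, args: dict) -> str:
    if name in handlers:                 # O(1) exact match
        return handlers[name](args)
    if name.startswith("action_"):       # fallback for per-action tools
        return f"forwarded {name} to ActionBridge"
    raise KeyError(f"unknown tool: {name}")

print(dispatch("execute_code", {}))
print(dispatch("action_set_volume", {"level": 5}))
```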
Actions registered via the Tizen Action Framework are automatically discovered and exposed as per-action LLM tools (e.g., `action_<name>`). Schema files are cached as Markdown and kept in sync via `action_event_cb` events. Available actions vary by device.
TizenClaw supports dynamically injecting Python skills via platform-signed RPK (Resource Package) packages. When an RPK with the skill metadata key is installed through the Tizen package manager, `SkillPluginManager` automatically creates symbolic links from the RPK's `lib/<skill_name>/` directories into the TizenClaw skills directory, triggering a hot-reload.
RPK Structure:
```
lib/
├── get_sample_info/
│   ├── manifest.json   # Skill schema (name, description, parameters)
│   └── skill.py        # Python implementation
└── get_sample_status/
    ├── manifest.json
    └── skill.py
```
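Under this layout, a minimal `skill.py` body might look like the following sketch. The entry-point contract here (JSON arguments in, JSON on stdout) is an assumption for illustration, not the documented TizenClaw skill ABI; consult the sample project for the real shape:

```python
# skill.py -- hypothetical minimal body for the get_sample_info skill.
# The run(params) entry point and JSON-in/JSON-out contract are assumptions.
import json
import sys

def run(params: dict) -> dict:
    """Return a sample payload; `params` comes from the LLM tool call."""
    name = params.get("name", "unknown")
    return {"status": "ok", "sample": name}

if __name__ == "__main__":
    # Assumed invocation: JSON arguments passed as the first argv entry.
    args = json.loads(sys.argv[1]) if len(sys.argv) > 1 else {}
    print(json.dumps(run(args)))
```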
Metadata Declaration (`tizen-manifest.xml`):
```xml
<metadata key="http://tizen.org/metadata/tizenclaw/skill"
          value="get_sample_info|get_sample_status"/>
```
> Note: Only packages signed with a platform-level certificate are allowed to register skills. Multiple skills can be declared using `|` as a delimiter or via multiple `<metadata>` entries.
Sample project: tizenclaw-skill-plugin-sample
TizenClaw also supports native CLI tool plugins packaged as TPKs (Tizen Packages). Unlike containerized Python skills, CLI tools run directly on the host to access Tizen C-APIs without sandbox overhead. Each CLI tool includes a rich Markdown descriptor (`.tool.md`) that enables the LLM to understand commands, subcommands, arguments, and output formats.
TPK Structure:
```
bin/
├── get_package_info          # Native CLI executable
└── get_app_info              # Native CLI executable
res/
├── get_package_info.tool.md  # LLM tool descriptor
└── get_app_info.tool.md      # LLM tool descriptor
```
Metadata Declaration (`tizen-manifest.xml`):
```xml
<service-application appid="org.tizen.sample.get_package_info"
                     exec="get_package_info" type="capp">
  <metadata key="http://tizen.org/metadata/tizenclaw/cli"
            value="get_package_info"/>
</service-application>
```
`CliPluginManager` discovers installed CLI TPKs via `pkgmgrinfo_appinfo_metadata_filter`, creates symlinks for the executable and the `.tool.md` descriptor into `/opt/usr/share/tizenclaw/tools/cli/`, and injects the tool documentation into the LLM system prompt. The LLM invokes CLI tools through the `execute_cli` built-in tool.
> Note: Only packages signed with a platform-level certificate are allowed to register CLI tools. The `tizenclaw-metadata-cli-plugin.so` parser plugin enforces this at install time.
Sample project: tizenclaw-cli-plugin-sample
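The `execute_cli` flow described above could be approximated in Python as the sketch below: resolve the tool under the documented CLI directory, then run it on the host. The daemon's real dispatcher is native C++; only the directory path comes from the README, the rest is illustrative:

```python
# Illustrative execute_cli-style dispatcher: look up a plugged-in CLI
# tool under the TizenClaw CLI directory and run it with parsed arguments.
import shlex
import subprocess
from pathlib import Path

CLI_DIR = Path("/opt/usr/share/tizenclaw/tools/cli")  # from the README

def execute_cli(tool: str, arg_string: str) -> str:
    """Run a registered CLI tool and return its stdout."""
    exe = CLI_DIR / tool
    if not exe.is_file():
        raise FileNotFoundError(f"unknown CLI tool: {tool}")
    result = subprocess.run(
        [str(exe), *shlex.split(arg_string)],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout
```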
TizenClaw includes a default multi-agent system designed to evolve from a single monolithic agent into a decentralized set of 11 MVP agents for robust device operation:
| Category | MVP Agent | Role |
|---|---|---|
| Understanding | Input Understanding | Parses raw user input across channels to determine intent |
| Perception | Environment Perception | Consolidates device/system/sensor schemas via event bus |
| Memory | Session / Context | Manages short-term, long-term, and episodic memory retrieval |
| Planning | Planning / Decision | Orchestrator: Analyzes requests, decomposes goals, plans steps |
| Execution | Action Execution | Executes planned skills via ContainerEngine & Action Framework |
| Protection | Policy / Safety | Enforces tool policy, risk levels, and system safeguards |
| Monitoring | Health Monitoring | Tracks metrics (CPU, Memory, uptime, RPK states) |
| Recovery | | Recovers from failures, missing context, and rate limits |
| | Logging / Trace | Manages structured audit logs and execution traces |
| Utility | Knowledge Retrieval | Manages RAG embeddings and semantic document search |
| | Skill Manager | (Legacy) Creates/updates custom Python skills at runtime |
Agents communicate via `create_session` / `send_to_session` and are defined in `config/agent_roles.json`.
To transition towards this robust multi-agent ecosystem, TizenClaw utilizes a dedicated perception layer focusing on:
- Common State Schemas: Strict JSON structures (`DeviceState`, `UserState`, `TaskState`, etc.)
- Capability Registries: Defined boundaries of what loaded RPK tools and built-in skills can achieve.
- Event-Driven Bus: Overcoming polling limits by reacting to `sensor.changed`, `app.started`, etc.
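A common state schema such as `DeviceState` could, illustratively, be modeled as a typed structure serialized to strict JSON. The field names below are assumptions; the README names only the schema types:

```python
# Illustrative shape of a "Common State Schema" like DeviceState.
# Field names are hypothetical -- only the type name comes from the docs.
from dataclasses import dataclass, asdict
import json

@dataclass
class DeviceState:
    battery_pct: int
    wifi_connected: bool
    thermal_c: float

state = DeviceState(battery_pct=87, wifi_connected=True, thermal_c=36.5)
print(json.dumps(asdict(state)))
```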
- Tizen 10.0 or later target device / emulator
- crun OCI runtime (built from source during RPM packaging)
- Required Tizen packages: `tizen-core`, `glib-2.0`, `dlog`, `libcurl`, `libsoup-3.0`, `libwebsockets`, `sqlite3`, `capi-appfw-tizen-action`
TizenClaw uses the Tizen GBS build system. The default target is x86_64 (emulator), but it also supports armv7l and aarch64 for real devices:
```sh
# x86_64 (emulator, default)
gbs build -A x86_64 --include-all

# armv7l (32-bit ARM devices)
gbs build -A armv7l --include-all

# aarch64 (64-bit ARM devices)
gbs build -A aarch64 --include-all
```
For subsequent builds (after the initial one):
```sh
gbs build -A x86_64 --include-all --noinit
```
The build system automatically selects the correct rootfs image from `data/img/<arch>/rootfs.tar.gz` based on the target architecture.
RPM output:
```
~/GBS-ROOT/local/repos/tizen/<arch>/RPMS/tizenclaw-1.0.0-1.<arch>.rpm
```
For the ML/AI assets companion package, build separately from `../tizenclaw-assets`:
```sh
cd ../tizenclaw-assets && gbs build -A x86_64 --include-all

# For the CJK full OCR model (default is lite Korean+English):
cd ../tizenclaw-assets && gbs build -A x86_64 --include-all --define "ocr_model full"
```
Output:
```
~/GBS-ROOT/local/repos/tizen/<arch>/RPMS/tizenclaw-assets-1.0.0-1.<arch>.rpm
```
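Conceptually, the semantic search these assets enable reduces to nearest-neighbor lookup over embedding vectors. The toy vectors below are made up; on device the embeddings come from all-MiniLM-L6-v2 via ONNX Runtime and live in the SQLite vector store:

```python
# Toy sketch of RAG retrieval: cosine similarity between a query
# embedding and stored document embeddings, picking the best match.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical 3-dimensional embeddings (real ones have 384 dims).
docs = {"wifi-guide": [0.9, 0.1, 0.0], "ocr-notes": [0.1, 0.8, 0.2]}
query = [0.85, 0.15, 0.05]

best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # wifi-guide
```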
Unit tests are automatically executed during the build via `%check`.
Deploy to a Tizen emulator or device over sdb:
```sh
# Enable root and remount filesystem
sdb root on
sdb shell mount -o remount,rw /

# Push and install TizenClaw
sdb push ~/GBS-ROOT/local/repos/tizen/x86_64/RPMS/tizenclaw-1.0.0-1.x86_64.rpm /tmp/
sdb shell rpm -Uvh --force /tmp/tizenclaw-1.0.0-1.x86_64.rpm

# Push and install the optional assets package (built from the tizenclaw-assets project)
sdb push ~/GBS-ROOT/local/repos/tizen/x86_64/RPMS/tizenclaw-assets-1.0.0-1.x86_64.rpm /tmp/
sdb shell rpm -Uvh --force /tmp/tizenclaw-assets-1.0.0-1.x86_64.rpm

# Restart the daemon
sdb shell systemctl daemon-reload
sdb shell systemctl restart tizenclaw
sdb shell systemctl status tizenclaw -l
```
TizenClaw reads its configuration from `/opt/usr/share/tizenclaw/` on the device. All configuration files can be edited via the Web Admin Dashboard (port 9090).
| Config File | Purpose |
|---|---|
| `llm_config.json` | LLM backend selection, API keys, model settings, fallback order |
| `telegram_config.json` | Telegram bot token and allowed chat IDs |
| `slack_config.json` | Slack app/bot tokens and channel lists |
| `discord_config.json` | Discord bot token and guild/channel allowlists |
| `webhook_config.json` | Webhook route mapping and HMAC secrets |
| `tool_policy.json` | Tool execution policy (max iterations, blocked skills, risk overrides) |
| `agent_roles.json` | Agent roles and specialized system prompts |
| `memory_config.json` | Memory retention periods, size limits, and summary parameters |
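The HMAC secrets referenced in `webhook_config.json` imply request signing roughly along these lines; the exact header name and encoding on the wire are assumptions, but the HMAC-SHA256 primitive itself is named in the feature list:

```python
# Sketch of HMAC-SHA256 webhook authentication: sign the raw body with a
# shared secret and compare signatures in constant time on receipt.
import hashlib
import hmac

def sign(secret: bytes, body: bytes) -> str:
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify(secret: bytes, body: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(secret, body), signature)

sig = sign(b"s3cret", b'{"event": "ping"}')
print(verify(b"s3cret", b'{"event": "ping"}', sig))  # True
print(verify(b"wrong-secret", b'{"event": "ping"}', sig))  # False
```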
```json
{
  "active_backend": "gemini",
  "fallback_backends": ["openai", "ollama"],
  "backends": {
    "gemini": {
      "api_key": "YOUR_API_KEY",
      "model": "gemini-2.5-flash"
    },
    "openai": {
      "api_key": "YOUR_API_KEY",
      "model": "gpt-4o",
      "endpoint": "https://api.openai.com/v1"
    },
    "anthropic": {
      "api_key": "YOUR_API_KEY",
      "model": "claude-sonnet-4-20250514"
    },
    "ollama": {
      "model": "llama3",
      "endpoint": "http://localhost:11434"
    }
  }
}
```
Sample configuration files are included in `data/`.
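Given a config shaped like the one above, resolving the backend fallback chain is straightforward: the active backend runs first, then the listed fallbacks in order. The helper below is an illustration of that rule, not daemon code:

```python
# Resolve the LLM fallback chain from an llm_config.json-shaped document:
# active backend first, then fallbacks that are actually configured.
import json

config = json.loads("""
{
  "active_backend": "gemini",
  "fallback_backends": ["openai", "ollama"],
  "backends": {"gemini": {}, "openai": {}, "ollama": {}}
}
""")

def resolve_chain(cfg: dict) -> list[str]:
    chain = [cfg["active_backend"]]
    chain += [b for b in cfg["fallback_backends"] if b in cfg["backends"]]
    return chain

print(resolve_chain(config))  # ['gemini', 'openai', 'ollama']
```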
```
tizenclaw/
├── src/
│   ├── common/                      # Logging, shared utilities
│   └── tizenclaw/                   # Daemon core
│       ├── core/                    # Main daemon, agent loop, tool policy
│       │   ├── tizenclaw.cc         # Daemon entry, IPC server
│       │   ├── agent_core.cc        # Agentic Loop, streaming
│       │   ├── action_bridge.cc     # Tizen Action Framework bridge
│       │   ├── tool_policy.cc       # Risk-level tool policy
│       │   └── skill_watcher.cc     # inotify skill hot-reload
│       ├── llm/                     # LLM backend providers
│       │   ├── llm_backend.hh       # Unified LLM interface
│       │   ├── gemini_backend.cc    # Google Gemini
│       │   ├── openai_backend.cc    # OpenAI / xAI
│       │   ├── anthropic_backend.cc # Anthropic
│       │   └── ollama_backend.cc    # Ollama (local)
│       ├── channel/                 # Communication channels
│       │   ├── channel.hh           # Channel interface
│       │   ├── channel_registry.cc  # Lifecycle management
│       │   ├── telegram_client.cc   # Telegram Bot API
│       │   ├── slack_channel.cc     # Slack (WebSocket)
│       │   ├── discord_channel.cc   # Discord (WebSocket)
│       │   ├── mcp_server.cc        # MCP (JSON-RPC 2.0)
│       │   ├── webhook_channel.cc   # Webhook HTTP
│       │   ├── voice_channel.cc     # Tizen STT/TTS
│       │   └── web_dashboard.cc     # Admin SPA (port 9090)
│       ├── storage/                 # Data persistence
│       │   ├── session_store.cc     # Markdown sessions
│       │   ├── memory_store.cc      # Persistent memory (long-term, episodic, short-term)
│       │   ├── embedding_store.cc   # SQLite RAG vectors
│       │   └── audit_logger.cc      # Audit logging
│       ├── infra/                   # Infrastructure
│       │   ├── http_client.cc       # libcurl HTTP wrapper
│       │   ├── key_store.cc         # Encrypted API keys
│       │   ├── container_engine.cc  # OCI container (crun)
│       │   ├── health_monitor.cc    # Prometheus-style metrics
│       │   └── ota_updater.cc       # OTA skill updates
│       ├── orchestrator/            # Multi-agent orchestration
│       │   ├── supervisor_engine.cc # Supervisor agent pattern
│       │   ├── pipeline_executor.cc # Skill pipeline engine
│       │   └── a2a_handler.cc       # A2A protocol
│       └── scheduler/               # Task automation
│           └── task_scheduler.cc    # Cron/interval tasks
├── tools/skills/                    # Python skill scripts
├── tools/embedded/                  # Embedded tool MD schemas (13 files)
├── tools/cli/                       # CLI tools (aurum-cli + plugin symlinks from TPKs)
├── scripts/                         # Container setup, CI, hooks
├── test/unit_tests/                 # Google Test unit tests
├── data/                            # Config samples, rootfs, web SPA
├── packaging/                       # RPM spec, systemd services
├── docs/                            # Design, Analysis, Roadmap
├── LICENSE                          # Apache License 2.0
└── CMakeLists.txt
```
- tizenclaw-assets: Consolidated ML/AI asset package – ONNX Runtime, pre-built RAG knowledge databases, embedding model, and the PaddleOCR PP-OCRv3 on-device OCR engine with a CLI tool for Korean+English/CJK text recognition.
- tizenclaw-webview: A companion Tizen web application that provides an on-device Web Admin Dashboard for TizenClaw.
- tizenclaw-llm-plugin-sample: A sample project demonstrating how to build an RPK (Resource Package) plugin to dynamically inject new customized LLM backends into TizenClaw at runtime.
- tizenclaw-skill-plugin-sample: A sample project demonstrating how to build an RPK skill plugin to dynamically inject Python skills into TizenClaw via platform-signed packages.
- tizenclaw-cli-plugin-sample: A sample project demonstrating how to build a TPK CLI tool plugin providing native device query tools (`get_package_info`, `get_app_info`) with LLM-readable Markdown descriptors.
This project is licensed under the Apache License 2.0.
Copyright 2024-2026 Samsung Electronics Co., Ltd.