39 changes: 23 additions & 16 deletions README.md
@@ -9,7 +9,7 @@

**Example applications for [dstack](https://github.com/Dstack-TEE/dstack) - Deploy containerized apps to TEEs with end-to-end security in minutes**

[Getting Started](#getting-started) • [Use Cases](#use-cases) • [Core Patterns](#core-patterns) • [Dev Tools](#dev-scaffolding) • [Starter Packs](#starter-packs) • [Other Use Cases](#other-use-cases)
[Getting Started](#getting-started) • [Confidential AI](#confidential-ai) • [Tutorials](#tutorials) • [Use Cases](#use-cases) • [Core Patterns](#core-patterns) • [Dev Tools](#dev-scaffolding) • [Starter Packs](#starter-packs)

</div>

@@ -44,7 +44,7 @@ phala simulator start
### Run an Example Locally

```bash
cd tutorial/01-attestation-oracle
cd tutorial/01-attestation
docker compose run --rm \
-v ~/.phala-cloud/simulator/0.5.3/dstack.sock:/var/run/dstack.sock \
app
@@ -57,7 +57,23 @@ phala auth login
phala deploy -n my-app -c docker-compose.yaml
```

See [Phala Cloud](https://cloud.phala.network) for production TEE deployment.
See [Phala Cloud](https://cloud.phala.com) for production TEE deployment.

---

## Confidential AI

Run AI workloads where prompts, model weights, and inference data stay encrypted in hardware-protected memory.

| Example | Description |
|---------|-------------|
| [confidential-ai/inference](./confidential-ai/inference) | Private LLM inference with vLLM on Confidential GPU |
| [confidential-ai/training](./confidential-ai/training) | Confidential fine-tuning on sensitive data using Unsloth |
| [confidential-ai/agents](./confidential-ai/agents) | Secure AI agent with TEE-derived wallet keys using LangChain and Confidential AI models |

GPU deployments require: `--instance-type h200.small --region US-EAST-1 --image dstack-nvidia-dev-0.5.4.1`
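
Putting those flags together, a full GPU deployment command would look like the sketch below (the app name `my-llm` and the compose file path are placeholders; the flags are the GPU requirements listed above):

```shell
# Deploy to a Confidential GPU instance on Phala Cloud.
# App name and compose file are placeholders -- adjust for your project.
phala deploy -n my-llm -c docker-compose.yaml \
  --instance-type h200.small \
  --region US-EAST-1 \
  --image dstack-nvidia-dev-0.5.4.1
```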

See [Confidential AI Guide](https://github.com/Dstack-TEE/dstack/blob/master/docs/confidential-ai.md) for concepts and security model.

---

@@ -67,10 +83,10 @@ Step-by-step guides covering core dstack concepts.

| Tutorial | Description |
|----------|-------------|
| [01-attestation-oracle](./tutorial/01-attestation-oracle) | Use the guest SDK to work with attestations directly — build an oracle, bind data to TDX quotes via `report_data`, verify with local scripts |
| [02-persistence-and-kms](./tutorial/02-persistence-and-kms) | Use `getKey()` for deterministic key derivation from a KMS — persistent wallets, same key across restarts |
| [03-gateway-and-ingress](./tutorial/03-gateway-and-ingress) | Custom domains with automatic SSL, certificate evidence chain |
| [04-upgrades](./tutorial/04-upgrades) | Extend `AppAuth.sol` with custom authorization logic — NFT-gated clusters, on-chain governance |
| [01-attestation](./tutorial/01-attestation) | Build an oracle, bind data to TDX quotes via `report_data`, verify with local scripts |
| [02-kms-and-signing](./tutorial/02-kms-and-signing) | Deterministic key derivation from KMS — persistent wallets, same key across restarts |
| [03-gateway-and-tls](./tutorial/03-gateway-and-tls) | Custom domains with automatic SSL, certificate evidence chain |
| [04-onchain-oracle](./tutorial/04-onchain-oracle) | AppAuth contracts, on-chain signature verification, multi-device deployment |

---

@@ -120,15 +136,6 @@ TLS termination, custom domains, external connectivity.
| Example | Description |
|---------|-------------|
| [dstack-ingress](./custom-domain/dstack-ingress) | **Complete ingress solution** — auto SSL via Let's Encrypt, multi-domain, DNS validation, evidence generation with TDX quote chain |
| [custom-domain](./custom-domain/custom-domain) | Simpler custom domain setup via zt-https |

### Keys & Persistence

Persistent keys across deployments via KMS.

| Example | Description | Status |
|---------|-------------|--------|
| [get-key-basic](./get-key-basic) | `dstack.get_key()` — same key identity across machines | Coming Soon |

### On-Chain Interaction

23 changes: 23 additions & 0 deletions confidential-ai/README.md
@@ -0,0 +1,23 @@
# Confidential AI Examples

Run AI workloads with hardware-enforced privacy. Your prompts, model weights, and computations stay encrypted in memory.

| Example | Description | Status |
|---------|-------------|--------|
| [inference](./inference) | Private LLM with response signing | Ready to deploy |
| [training](./training) | Fine-tuning on sensitive data | Requires local build |
| [agents](./agents) | AI agent with TEE-derived keys | Requires local build |

Start with inference—it deploys in one command and shows the full attestation flow.

```bash
cd inference
phala auth login
phala deploy -n my-llm -c docker-compose.yaml \
--instance-type h200.small \
-e TOKEN=your-secret-token
```

First deployment takes 10-15 minutes (large images + model loading). Check progress with `phala cvms serial-logs <app_id> --tail 100`.

See the [Confidential AI Guide](https://github.com/Dstack-TEE/dstack/blob/master/docs/confidential-ai.md) for how the security model works.
12 changes: 12 additions & 0 deletions confidential-ai/agents/Dockerfile
@@ -0,0 +1,12 @@
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY agent.py .

EXPOSE 8080

CMD ["python", "agent.py"]
91 changes: 91 additions & 0 deletions confidential-ai/agents/README.md
@@ -0,0 +1,91 @@
# Secure AI Agent

Run AI agents with TEE-derived wallet keys. The agent calls a confidential LLM (redpill.ai), so prompts never leave encrypted memory.

## Quick Start

```bash
phala auth login
phala deploy -n my-agent -c docker-compose.yaml \
-e LLM_API_KEY=your-redpill-key
```

Your API key is encrypted client-side and only decrypted inside the TEE.

Test it:

```bash
# Get agent info and wallet address
curl https://<endpoint>/

# Chat with the agent
curl -X POST https://<endpoint>/chat \
-H "Content-Type: application/json" \
-d '{"message": "What is your wallet address?"}'

# Sign a message
curl -X POST https://<endpoint>/sign \
-H "Content-Type: application/json" \
-d '{"message": "Hello from TEE"}'
```

## How It Works

```mermaid
graph TB
User -->|TLS| Agent
subgraph TEE1[Agent CVM]
Agent[Agent Code]
Agent --> Wallet[TEE-derived wallet]
end
Agent -->|TLS| LLM
subgraph TEE2[LLM CVM]
LLM[redpill.ai]
end
```

The agent derives an Ethereum wallet from TEE keys:

```python
from dstack_sdk import DstackClient
from dstack_sdk.ethereum import to_account

client = DstackClient()
eth_key = client.get_key("agent/wallet", "mainnet")
account = to_account(eth_key)
# Same path = same key, even across restarts
```
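
The "same path, same key" property can be illustrated with a plain HMAC-based derivation. This is a conceptual sketch only, not dstack's actual KMS mechanism: in dstack the root secret lives in the KMS and never leaves the TEE, so `MASTER_SECRET` below is a stand-in.

```python
import hashlib
import hmac

# Stand-in for the KMS root secret; in dstack this never leaves the TEE.
MASTER_SECRET = b"example-master-secret"

def derive_key(path: str, purpose: str) -> bytes:
    """Deterministically derive a 32-byte key from (path, purpose)."""
    info = f"{path}/{purpose}".encode()
    return hmac.new(MASTER_SECRET, info, hashlib.sha256).digest()

# Same inputs always yield the same key across restarts...
assert derive_key("agent/wallet", "mainnet") == derive_key("agent/wallet", "mainnet")
# ...while a different path yields an unrelated key.
assert derive_key("agent/other", "mainnet") != derive_key("agent/wallet", "mainnet")
```

Because derivation is deterministic, the agent's wallet address is stable across redeployments as long as the path and root secret are unchanged.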

Both the agent and the LLM run in separate TEEs. User queries stay encrypted from browser to agent to LLM and back.

## API

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/` | GET | Agent info, wallet address, TCB info |
| `/attestation` | GET | TEE attestation quote |
| `/chat` | POST | Chat with the agent |
| `/sign` | POST | Sign a message with agent's wallet |

## Using a Different LLM

The agent uses redpill.ai by default for end-to-end confidentiality. To use a different OpenAI-compatible endpoint:

```bash
phala deploy -n my-agent -c docker-compose.yaml \
-e LLM_BASE_URL=https://api.openai.com/v1 \
-e LLM_API_KEY=sk-xxxxx
```

Note: Using a non-confidential LLM means prompts leave the encrypted environment.
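
Inside the agent, those two environment variables are all that is needed to target any OpenAI-compatible endpoint. A minimal sketch of how such a request could be assembled (the env var names match the flags above; the default URL and model name are placeholder assumptions, and the body follows the standard chat-completions format):

```python
import json
import os
import urllib.request

# Set via `phala deploy -e ...`; the fallbacks here are placeholders.
BASE_URL = os.environ.get("LLM_BASE_URL", "https://api.example.com/v1")
API_KEY = os.environ.get("LLM_API_KEY", "placeholder-key")

def build_chat_request(message: str) -> urllib.request.Request:
    """Assemble (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": "example-model",  # hypothetical model name
        "messages": [{"role": "user", "content": message}],
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("What is your wallet address?")
# The bearer token and prompt travel only over TLS to the target endpoint.
```

With a confidential endpoint the decrypted prompt exists only inside the two CVMs; with a non-confidential endpoint it is exposed to the provider once TLS terminates.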

## Cleanup

```bash
phala cvms delete my-agent --force
```

## Further Reading

- [Confidential AI Guide](https://github.com/Dstack-TEE/dstack/blob/master/docs/confidential-ai.md)
- [dstack Python SDK](https://github.com/Dstack-TEE/dstack/tree/master/sdk/python)