
DigitalOcean Gradient™ Agent Development Kit (ADK)


The DigitalOcean Gradient™ Agent Development Kit (ADK) is a Python toolkit designed to help you build, deploy, and operate production-grade AI agents with zero infrastructure overhead.

Building AI agents is challenging enough without worrying about observability, evaluations, and deployment infrastructure. We built the Gradient™ ADK with one simple aim: you bring your agent code, and we handle the rest—the same simplicity you love about DigitalOcean, applied to AI agents.

Why Use DigitalOcean Gradient™ ADK?

  • Framework Agnostic: Bring your existing agent code—whether built with LangGraph, LangChain, CrewAI, PydanticAI, or any Python framework. No rewrites, no lock-in.

  • Pay Per Use: Only pay for what you use with serverless agent hosting. Currently provided at no compute cost during Public Preview!

  • Any LLM Provider: Use OpenAI, Anthropic, Google, or DigitalOcean's own Gradient™ AI serverless inference—your choice, your keys.

  • Built-in Observability: Get automatic traces, evaluations, and insights out of the box. No OpenTelemetry setup, no third-party integrations required.

  • Production Ready from Day One: Deploy with a single command to DigitalOcean's managed infrastructure. Focus on building your agent, not managing servers.

  • Seamless DigitalOcean Integration: Connect effortlessly to the DigitalOcean ecosystem—Knowledge Bases for RAG, Serverless Inference for LLMs, built-in Evaluations, and more.

Features

🛠️ CLI (Command Line Interface)

  • Local Development: Run and test your agents locally with hot-reload support
  • Seamless Deployment: Deploy agents to DigitalOcean with a single command
  • Evaluation Framework: Run comprehensive evaluations with custom metrics and datasets
  • Observability: View traces and runtime logs directly from the CLI

🚀 Runtime Environment

  • Framework Agnostic: Works with any Python framework for building AI agents
  • Automatic LangGraph Integration: Built-in trace capture for LangGraph nodes and state transitions
  • Custom Decorators: Capture traces from any framework using @trace decorators
  • Streaming Support: Full support for streaming responses with trace capture
  • Production Ready: Designed for seamless deployment to DigitalOcean infrastructure

Installation

pip install gradient-adk

Quick Start

🎥 Watch the Getting Started Video for a complete walkthrough

1. Initialize a New Agent Project

gradient agent init

This creates a new agent project with:

  • main.py - Agent entrypoint with example code
  • agents/ - Directory for agent implementations
  • tools/ - Directory for custom tools
  • config.yaml - Agent configuration
  • requirements.txt - Python dependencies

2. Run Locally

gradient agent run

Your agent will be available at http://localhost:8080 with automatic trace capture enabled.

3. Deploy to DigitalOcean

export DIGITALOCEAN_API_TOKEN=your_token_here
gradient agent deploy

4. Evaluate Your Agent

gradient agent evaluate \
  --test-case-name "my-evaluation" \
  --dataset-file evaluation_dataset.csv \
  --categories correctness,context_quality

Usage Examples

Using LangGraph (Automatic Trace Capture)

LangGraph agents automatically capture traces for all nodes and state transitions:

from gradient_adk import entrypoint, RequestContext
from langgraph.graph import StateGraph
from typing import TypedDict

class State(TypedDict):
    input: str
    output: str

async def llm_call(state: State) -> State:
    # This node execution is automatically traced
    response = await llm.ainvoke(state["input"])
    state["output"] = response
    return state

@entrypoint
async def main(input: dict, context: RequestContext):
    graph = StateGraph(State)
    graph.add_node("llm_call", llm_call)
    graph.set_entry_point("llm_call")

    graph = graph.compile()
    result = await graph.ainvoke({"input": input.get("query")})
    return result["output"]

Using Custom Decorators (Any Framework)

For frameworks beyond LangGraph, use trace decorators to capture custom spans:

from gradient_adk import entrypoint, trace_llm, trace_tool, trace_retriever, RequestContext

@trace_retriever("vector_search")
async def search_knowledge_base(query: str):
    # Retriever spans capture search/lookup operations
    results = await vector_db.search(query)
    return results

@trace_llm("generate_response")
async def generate_response(prompt: str):
    # LLM spans capture model calls with token usage
    response = await llm.generate(prompt)
    return response

@trace_tool("calculate")
async def calculate(x: int, y: int):
    # Tool spans capture function execution
    return x + y

@entrypoint
async def main(input: dict, context: RequestContext):
    docs = await search_knowledge_base(input["query"])
    result = await calculate(5, 10)
    response = await generate_response(f"Context: {docs}")
    return response

Streaming Responses

The runtime supports streaming responses with automatic trace capture:

from gradient_adk import entrypoint, RequestContext

@entrypoint
async def main(input: dict, context: RequestContext):
    # Define an async generator that yields text chunks
    async def generate_chunks():
        async for chunk in llm.stream(input["query"]):
            yield chunk

    # Return the generator; the runtime streams each chunk with trace capture
    return generate_chunks()

CLI Commands

Agent Management

# Initialize new project
gradient agent init

# Configure existing project
gradient agent configure

# Run locally with hot-reload
gradient agent run --dev

# Deploy to DigitalOcean
gradient agent deploy

# View runtime logs
gradient agent logs

# Open traces UI
gradient agent traces

Evaluation

You can evaluate your deployed agent with a number of useful evaluation metrics. See the DigitalOcean docs for details on what belongs in a dataset.

# Run evaluation (interactive)
gradient agent evaluate

# Run evaluation (non-interactive)
gradient agent evaluate \
  --test-case-name "my-test" \
  --dataset-file data.csv \
  --categories correctness,safety_and_security \
  --star-metric-name "Correctness (general hallucinations)" \
  --success-threshold 80.0

Tracing

The ADK provides comprehensive tracing capabilities to capture and analyze your agent's execution. You can use decorators to wrap your own functions, or call the programmatic span functions to create spans manually.

What Gets Traced Automatically

  • LangGraph Nodes: All node executions, state transitions, and edges (including LLM calls, tool calls, and DigitalOcean Knowledge Base calls)
  • HTTP Requests: Request/response payloads for LLM API calls
  • Errors: Full exception details and stack traces
  • Streaming Responses: Individual chunks and aggregated outputs

Tracing Decorators

Use decorators to automatically trace function executions:

from gradient_adk import entrypoint, trace_llm, trace_tool, trace_retriever, RequestContext

@trace_llm("model_call")
async def call_model(prompt: str):
    """LLM spans capture model calls with token usage."""
    response = await llm.generate(prompt)
    return response

@trace_tool("calculator")
async def calculate(x: int, y: int):
    """Tool spans capture function/tool execution."""
    return x + y

@trace_retriever("vector_search")
async def search_docs(query: str):
    """Retriever spans capture search/lookup operations."""
    results = await vector_db.search(query)
    return results

@entrypoint
async def main(input: dict, context: RequestContext):
    docs = await search_docs(input["query"])
    result = await calculate(5, 10)
    response = await call_model(f"Context: {docs}")
    return response

Programmatic Span Functions

For more control over span creation, use the programmatic functions. These are useful when you can't use decorators or need to add spans for code you don't control:

from gradient_adk import entrypoint, add_llm_span, add_tool_span, add_agent_span, RequestContext

@entrypoint
async def main(input: dict, context: RequestContext):
    # Add an LLM span with detailed metadata
    response = await external_llm_call(input["query"])
    add_llm_span(
        name="external_llm_call",
        input={"messages": [{"role": "user", "content": input["query"]}]},
        output={"response": response},
        model="gpt-4",
        num_input_tokens=100,
        num_output_tokens=50,
        temperature=0.7,
    )

    # Add a tool span
    tool_result = await run_tool(input["data"])
    add_tool_span(
        name="data_processor",
        input={"data": input["data"]},
        output={"result": tool_result},
        tool_call_id="call_abc123",
        metadata={"tool_version": "1.0"},
    )

    # Add an agent span for sub-agent calls
    agent_result = await call_sub_agent(input["task"])
    add_agent_span(
        name="research_agent",
        input={"task": input["task"]},
        output={"result": agent_result},
        metadata={"agent_type": "research"},
        tags=["sub-agent", "research"],
    )

    return {"response": response, "tool_result": tool_result, "agent_result": agent_result}

Available Span Functions

  • add_llm_span(): records LLM/model calls. Key optional fields: model, temperature, num_input_tokens, num_output_tokens, total_tokens, tools, time_to_first_token_ns
  • add_tool_span(): records tool/function executions. Key optional field: tool_call_id
  • add_agent_span(): records agent/sub-agent executions (no span-specific optional fields)

Common optional fields for all span functions: duration_ns, metadata, tags, status_code

Viewing Traces

Traces are:

  • Automatically sent to DigitalOcean's Gradient Platform
  • Available in real-time through the web console
  • Accessible via gradient agent traces command

Environment Variables

# Required for deployment and evaluations
export DIGITALOCEAN_API_TOKEN=your_do_api_token

# Required for Gradient serverless inference (if using)
export GRADIENT_MODEL_ACCESS_KEY=your_gradient_key

# Optional: Enable verbose trace logging
export GRADIENT_VERBOSE=1

# Optional: A2A protocol — base URL for AgentCard discovery
export A2A_BASE_URL=https://your-app.ondigitalocean.app

Project Structure

my-agent/
├── main.py                       # Agent entrypoint with @entrypoint decorator
├── .gradient/
│   ├── agent.yml                 # Agent configuration (auto-generated)
│   └── .gradientignore           # Controls which files are excluded from deployment
├── requirements.txt              # Python dependencies
├── .env                          # Environment variables (not committed)
├── agents/                       # Agent implementations
│   └── my_agent.py
└── tools/                        # Custom tools
    └── my_tool.py

Controlling Deployment Contents (.gradientignore)

When you deploy with gradient agent deploy, the CLI zips your project directory and uploads it. The file .gradient/.gradientignore controls which files and directories are excluded from that zip. It is created automatically with sensible defaults when you run gradient agent init.

The syntax is one pattern per line:

# Comments start with #
dir_name/     # Exclude directories with this name anywhere in the tree
*.ext         # Exclude files matching this extension
exact_name    # Exclude exact file or directory name matches

The default .gradientignore excludes virtual environments (env/, venv/, .venv/), Python caches (__pycache__/, *.pyc), version control (.git/), build artifacts (dist/, build/, *.egg-info), test caches (.pytest_cache/, .mypy_cache/), and zip files (*.zip).

To customize, edit .gradient/.gradientignore directly. For example, to also exclude a local test data directory:

# ... existing patterns ...
test_data/
scripts/
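As a rough illustration of how these patterns behave, here is a simplified matcher sketch. This is an assumption for illustration only, not the CLI's actual implementation: directory patterns (ending in /) exclude a matching component anywhere in the tree, while glob patterns match individual file names.

```python
# Illustrative only: a simplified matcher for .gradientignore-style patterns.
# The CLI's real matcher may differ in edge cases.
import fnmatch
from pathlib import PurePosixPath

def is_excluded(path: str, patterns: list[str]) -> bool:
    parts = PurePosixPath(path).parts
    for raw in patterns:
        pattern = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not pattern:
            continue
        if pattern.endswith("/"):
            # Directory pattern: exclude if any path component matches the name
            if pattern.rstrip("/") in parts:
                return True
        elif any(fnmatch.fnmatch(part, pattern) for part in parts):
            # Glob pattern: exclude if any component matches (e.g. *.pyc)
            return True
    return False

patterns = ["__pycache__/", "*.pyc", ".git/", "test_data/"]
print(is_excluded("agents/__pycache__/my_agent.cpython-311.pyc", patterns))  # True
print(is_excluded("agents/my_agent.py", patterns))  # False
```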

This is intentionally separate from .gitignore because files you track in git (like setup scripts or test fixtures) may not be needed in your deployed agent.

Framework Compatibility

The Gradient ADK is designed to work with any Python-based AI agent framework:

  • LangGraph - Automatic trace capture (zero configuration)
  • LangChain - Use trace decorators (@trace_llm, @trace_tool, @trace_retriever) for custom spans
  • CrewAI - Use trace decorators for agent and task execution
  • Custom Frameworks - Use trace decorators for any function

A2A Protocol Support

The Gradient ADK supports the Agent-to-Agent (A2A) protocol v0.3.0, enabling any @entrypoint agent to communicate with A2A-compatible clients. Install with pip install gradient-adk[a2a].

Wrapping an Agent with A2A

Any @entrypoint agent can be exposed as an A2A server with no code changes:

from gradient_adk import entrypoint
from gradient_adk.a2a import create_a2a_server

@entrypoint
async def my_agent(data: dict, context) -> dict:
    return {"output": f"You said: {data.get('prompt', '')}"}

app = create_a2a_server(my_agent)

Run with uvicorn my_module:app --host 0.0.0.0 --port 8000. The agent is discoverable at /.well-known/agent-card.json and accepts JSON-RPC calls (message/send, tasks/get, tasks/cancel).

How the Protocol Works

A2A uses a discover-then-call pattern over JSON-RPC. Here is the full client-server flow:

  1. Discover — The client fetches the AgentCard at GET /.well-known/agent-card.json. This returns the agent's name, transport URL, supported capabilities, and input/output modes. The client uses this to decide whether it can talk to this agent.

  2. Send — The client sends a message via POST / with JSON-RPC method message/send. The server validates the message (text-only in MVP), creates a task, executes the agent, and returns a Task object with a taskId and current status.

  3. Poll — The client checks task progress via tasks/get with the taskId. Once the task reaches a terminal state (completed, failed, or canceled), the response includes the agent's output in the task artifacts. The historyLength parameter controls how much conversation history is returned.

  4. Cancel (optional) — The client can request cancellation via tasks/cancel. This is best-effort and idempotent — if the agent already finished, the cancel is a no-op.

Client                                 Server
  │                                      │
  ├── GET /.well-known/agent-card.json ──►  AgentCard (capabilities, URL)
  │                                      │
  ├── POST / message/send ──────────────►  Create task → Execute agent
  │◄─────────────────── Task {id, status} │
  │                                      │
  ├── POST / tasks/get ─────────────────►  Return task state + artifacts
  │◄──────────── Task {id, status, result} │
  │                                      │
  └── POST / tasks/cancel ──────────────►  Best-effort cancellation
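The send step above reduces to posting a JSON-RPC 2.0 envelope. A minimal sketch of building that payload (field names follow the client example later in this README; the helper name is ours):

```python
import json

def build_message_send(text: str, message_id: str, request_id: str = "1") -> dict:
    """Build a JSON-RPC 2.0 envelope for the A2A message/send method."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],  # text-only in MVP
                "message_id": message_id,
                "kind": "message",
            }
        },
    }

payload = build_message_send("Hello from another agent!", "msg-1")
print(json.dumps(payload, indent=2))
```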

Deploying to DigitalOcean App Platform

When you deploy to App Platform, the public URL is assigned after deployment. The A2A server needs this URL for the AgentCard so that clients know where to send requests. The workflow is:

  1. Deploy your agent to App Platform as usual with gradient agent deploy
  2. Get your app's public URL from the App Platform dashboard (e.g., https://your-agent-abc123.ondigitalocean.app)
  3. Set the environment variable in your app's settings:
    A2A_BASE_URL=https://your-agent-abc123.ondigitalocean.app
  4. Redeploy — the agent restarts and the AgentCard now advertises the correct public URL

For local development, no configuration is needed — it defaults to http://localhost:8000.
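The base-URL resolution described above can be mimicked with a small helper. This is a hypothetical sketch of the behavior, not the ADK's internal code:

```python
import os

def resolve_a2a_base_url(default: str = "http://localhost:8000") -> str:
    """Return A2A_BASE_URL if set, otherwise the local-development default."""
    return os.environ.get("A2A_BASE_URL", default)

os.environ["A2A_BASE_URL"] = "https://your-agent-abc123.ondigitalocean.app"
print(resolve_a2a_base_url())  # deployed public URL

del os.environ["A2A_BASE_URL"]
print(resolve_a2a_base_url())  # http://localhost:8000
```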

Calling a Remote A2A Agent from Another Agent

Once deployed, any A2A-compatible agent or client can call your agent:

import httpx

# Discover the remote agent
card = httpx.get("https://your-agent.ondigitalocean.app/.well-known/agent-card.json").json()
rpc_url = card["url"]

# Send a message
response = httpx.post(rpc_url, json={
    "jsonrpc": "2.0", "id": "1",
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Hello from another agent!"}],
            "message_id": "msg-1",
            "kind": "message",
        }
    },
})
task = response.json()["result"]

# Poll until the task reaches a terminal state (completed, failed, or canceled)
import time

while True:
    result = httpx.post(rpc_url, json={
        "jsonrpc": "2.0", "id": "2",
        "method": "tasks/get",
        "params": {"id": task["id"]},
    }).json()["result"]
    if result["status"]["state"] in ("completed", "failed", "canceled"):
        break
    time.sleep(1)

See examples/a2a/client.py for a complete async client with discovery, send, poll, and cancel.

Supported Operations

  • message/send: Send a message to the agent, creates or continues a task
  • tasks/get: Poll task state and retrieve results (supports historyLength)
  • tasks/cancel: Best-effort task cancellation (idempotent)
  • Agent Discovery: GET /.well-known/agent-card.json for capabilities and transport URL

The current release supports text-only input/output (text/plain). Streaming, push notifications, and authenticated extended cards are explicitly disabled via AgentCard capability flags.

License

Licensed under the Apache License 2.0. See LICENSE
