NTG.Adk - Agent Development Kit for .NET


Production-ready C# implementation of Google's Agent Development Kit with 99% feature parity

NTG.Adk is a complete C# port of Google ADK Python, following strict Abstract Driven Development (A.D.D) V3 architecture principles for enterprise-grade agent systems.

✨ Key Features

  • 🏗️ A.D.D V3 Architecture - Five-layer fractal design with zero coupling
  • 🤖 Multi-Agent Orchestration - Sequential, parallel, and loop workflows
  • 🔄 Session Management - Multi-user with app/user/session state hierarchy
  • 💾 Artifact & Memory Services - File storage and long-term agent memory
  • ⚙️ RunConfig - Configurable limits and streaming (MaxLlmCalls: 500, StreamingMode)
  • 📡 Token Streaming - Real-time SSE streaming with partial event detection
  • 🌐 A2A Protocol - Seamless interoperability with the Google Agent ecosystem
  • 🔌 MCP Protocol - Connect to MCP servers and use their tools (stdio, SSE, HTTP)
  • 🌐 OpenAPI Toolset - Auto-generate tools from any REST API (JSON/YAML specs)
  • 🚀 Runner Pattern - Production-ready orchestration with integrated services
  • 🧩 LLM Adapters - Gemini, OpenAI, and OpenAI-compatible endpoints (Ollama, LocalAI, vLLM)
  • 🛠️ Tool Ecosystem - Function calling, custom tools, and built-in tools (Google Search, Code Execution)

📊 Status

  • Version: 1.8.17
  • Target Framework: .NET 10.0
  • Production Readiness: 100% ✅
  • Core Feature Parity with Python ADK: 99% ✅ (Retrieval/RAG: 50%)
  • A2A Interoperability: 100% ✅
  • MCP Protocol Support: 100% ✅
  • OpenAPI Toolset: 100% ✅

See docs/STATUS.md for detailed metrics.

⚡ Quick Start

Basic Agent

using NTG.Adk.Implementations.Models;
using NTG.Adk.Operators.Agents;
using NTG.Adk.Operators.Runners;

// Create agent with LLM
var llm = new GeminiLlm("gemini-2.0-flash-exp");
var agent = new LlmAgent(llm, "gemini-2.0-flash-exp")
{
    Name = "Assistant",
    Instruction = "You are a helpful assistant"
};

// Run with InMemoryRunner
var runner = new InMemoryRunner(agent, appName: "MyApp");

await foreach (var evt in runner.RunAsync("user001", "session001", "Hello!"))
{
    if (evt.Content?.Parts != null)
    {
        foreach (var part in evt.Content.Parts)
        {
            if (part.Text != null)
                Console.WriteLine($"[{evt.Author}] {part.Text}");
        }
    }
}

OpenAI-Compatible Endpoints (Ollama, LocalAI, vLLM)

using NTG.Adk.Implementations.Models;
using NTG.Adk.Operators.Agents;
using NTG.Adk.Operators.Runners;

// Use Ollama local models
var llm = new OpenAILlm(
    modelName: "llama3",
    apiKey: "ollama",  // Any string works
    endpoint: new Uri("http://localhost:11434/v1")
);

var agent = new LlmAgent(llm, "llama3")
{
    Name = "LocalAssistant",
    Instruction = "You are a helpful assistant running locally"
};

var runner = new InMemoryRunner(agent, appName: "LocalApp");
await foreach (var evt in runner.RunAsync("user001", "session001", "Hello!"))
{
    // Handle events
}

Supports: Ollama, LocalAI, vLLM, LM Studio, and any OpenAI-compatible endpoint.

Multi-Agent Workflow

using NTG.Adk.Operators.Workflows;

// Define agents
var validator = new LlmAgent(llm, "gemini-2.0-flash-exp")
{
    Name = "Validator",
    Instruction = "Validate input data",
    OutputKey = "validation"
};

var processor = new LlmAgent(llm, "gemini-2.0-flash-exp")
{
    Name = "Processor",
    Instruction = "Process validated data",
    OutputKey = "result"
};

// Sequential pipeline
var pipeline = new SequentialAgent("DataPipeline", [validator, processor]);

var runner = new InMemoryRunner(pipeline, appName: "PipelineApp");
await foreach (var evt in runner.RunAsync("user001", "session001", "Process this data"))
{
    // Handle events
}

A2A Interoperability

using NTG.Adk.Operators.A2A;

// Create ADK agent
var agent = new LlmAgent(llm, "gemini-2.0-flash-exp")
{
    Name = "A2AAgent",
    Instruction = "Answer questions via A2A protocol"
};

var runner = new InMemoryRunner(agent, appName: "A2AApp");

// Wrap with A2A executor
var a2aExecutor = new A2aAgentExecutor(runner);

// Handle A2A messages
var a2aMessage = new A2A.AgentMessage
{
    MessageId = Guid.NewGuid().ToString(),
    Role = A2A.MessageRole.User,
    Parts = [new A2A.TextPart { Text = "Hello from A2A!" }]
};

await foreach (var a2aEvent in a2aExecutor.ExecuteAsync(
    a2aMessage,
    taskId: Guid.NewGuid().ToString(),
    contextId: "ADK/A2AApp/user001/session001"))
{
    // Handle A2A events (TaskStatusUpdateEvent, TaskArtifactUpdateEvent)
}

MCP Protocol Integration

using NTG.Adk.Boundary.Mcp;
using NTG.Adk.Implementations.Mcp;

// Connect to MCP server via stdio
var connectionParams = new StdioConnectionParams
{
    Command = "npx",
    Arguments = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
};

var mcpToolset = new McpToolset(connectionParams);

// Connect and get tools
await mcpToolset.ConnectAsync();
var tools = await mcpToolset.GetToolsAsync();

// Use MCP tools with agent
var agent = new LlmAgent(llm, "gemini-2.0-flash-exp")
{
    Name = "McpAssistant",
    Instruction = "You have access to MCP tools",
    Tools = tools.ToList()
};

var runner = new InMemoryRunner(agent, appName: "McpApp");
await foreach (var evt in runner.RunAsync("user001", "session001", "List files"))
{
    // Handle events
}

OpenAPI Toolset

using NTG.Adk.Implementations.Tools.OpenApi;
using NTG.Adk.Boundary.Tools.Auth;

// Load OpenAPI spec (JSON or YAML)
var openApiSpec = File.ReadAllText("petstore-openapi.json");

// Create toolset from spec
var toolset = new OpenAPIToolset(openApiSpec, "json");

// Optional: Add authentication
var authScheme = new ApiKeyAuthScheme
{
    In = "header",
    Name = "X-API-Key"
};
var authCredential = new ApiKeyCredential
{
    ApiKey = Environment.GetEnvironmentVariable("API_KEY")!
};
toolset = new OpenAPIToolset(openApiSpec, "json", authScheme, authCredential);

// Get all tools from spec
var tools = toolset.GetTools();

// Use with agent
var agent = new LlmAgent(llm, "gemini-2.0-flash-exp")
{
    Name = "ApiAgent",
    Instruction = "You can interact with the API using these tools",
    Tools = tools
};

var runner = new InMemoryRunner(agent, appName: "ApiApp");
await foreach (var evt in runner.RunAsync("user001", "session001", "List all pets"))
{
    // Handle events
}

βš™οΈ Advanced Configuration

RunConfig - Execution Control

Configure agent execution limits and streaming (matches Python ADK):

using NTG.Adk.CoreAbstractions.Agents;

// Default configuration (matches Python ADK defaults)
var runConfig = new RunConfig
{
    MaxLlmCalls = 500,                  // Max LLM calls per invocation (prevents infinite loops)
    StreamingMode = StreamingMode.None  // No streaming by default
};

var runner = new Runner(agent, "MyApp", sessionService, runConfig: runConfig);

Token Streaming

Enable real-time token-by-token streaming:

// Enable SSE streaming
var runConfig = new RunConfig
{
    StreamingMode = StreamingMode.Sse  // Server-sent events streaming
};

var runner = new Runner(agent, "MyApp", sessionService, runConfig: runConfig);

await foreach (var evt in runner.RunAsync("user001", "session001", "Hello"))
{
    if (evt.Partial)
    {
        // Streaming chunk - arrives in real-time as tokens are generated
        Console.Write(evt.Content?.Parts?.FirstOrDefault()?.Text ?? "");
    }
    else
    {
        // Complete response
        Console.WriteLine("\n[Complete]");
    }
}

Streaming Modes:

  • StreamingMode.None - Buffer complete response (default, matches Python ADK)
  • StreamingMode.Sse - Server-sent events, token-by-token streaming
  • StreamingMode.Bidi - Bidirectional streaming (reserved for future use)
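The partial/complete contract above can be exercised without a live model. The stdlib-only sketch below uses stand-ins (StreamEvent and FakeRunAsync are hypothetical, not NTG.Adk types — the real runner yields Event objects with Content.Parts): under Sse mode, token chunks arrive as Partial events, then one final non-partial event carries the full text.

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;

// Illustrative stand-in only; the real NTG.Adk Event exposes Partial
// and Content.Parts rather than a flat Text property.
public sealed record StreamEvent(bool Partial, string Text);

public static class StreamingDemo
{
    // Mimics runner.RunAsync under StreamingMode.Sse: token chunks arrive
    // as Partial events, then one final complete event with the full text.
    public static async IAsyncEnumerable<StreamEvent> FakeRunAsync()
    {
        foreach (var token in new[] { "Hel", "lo, ", "world!" })
        {
            yield return new StreamEvent(Partial: true, Text: token);
            await Task.Yield(); // simulate tokens trickling in
        }
        yield return new StreamEvent(Partial: false, Text: "Hello, world!");
    }

    public static async Task<string> CollectAsync()
    {
        var buffer = new StringBuilder();
        await foreach (var evt in FakeRunAsync())
        {
            if (evt.Partial)
                buffer.Append(evt.Text);  // render each chunk as it arrives
            else
                return evt.Text;          // final event repeats the full text
        }
        return buffer.ToString();
    }

    public static async Task Main()
        => Console.WriteLine(await CollectAsync()); // prints "Hello, world!"
}
```

Because the final event carries the complete response, a consumer can either accumulate partials for live rendering or simply wait for the non-partial event.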

LLM Call Limits

Prevent infinite loops with configurable limits:

var runConfig = new RunConfig
{
    MaxLlmCalls = 100  // Custom limit
};

try
{
    await foreach (var evt in runner.RunAsync("user001", "session001", "Complex task"))
    {
        // Process events
    }
}
catch (LlmCallsLimitExceededError ex)
{
    Console.WriteLine($"Limit exceeded: {ex.Message}");
    // Output: "Max number of LLM calls limit of 100 exceeded"
}

Default: 500 calls per invocation (matches Python ADK)

📚 Documentation

🏗️ Architecture Overview

NTG.Adk follows the strict A.D.D V3 five-layer architecture:

NTG.Adk/
├── Boundary/              # Layer 1: DTOs, Events (no dependencies)
├── CoreAbstractions/      # Layer 2: Interfaces/Ports (no dependencies)
├── Implementations/       # Layer 3: Adapters (depends on CoreAbstractions)
├── Operators/             # Layer 4: Business Logic (depends on CoreAbstractions + Boundary)
└── Bootstrap/             # Layer 5: Composition Root (depends on all)

Key Principles:

  • ✅ Operators call ports (interfaces), never implementations
  • ✅ Zero coupling between layers (except explicit dependencies)
  • ✅ Dependency inversion at all boundaries
  • ✅ Technology-agnostic core abstractions
See docs/ARCHITECTURE.md for details.

🔗 Python ADK Compatibility

NTG.Adk maintains 99% feature parity with Google ADK Python:

Python ADK → C# NTG.Adk (Layer):

  • google.adk.agents.BaseAgent → IAgent (Port, CoreAbstractions)
  • google.adk.agents.LlmAgent → LlmAgent (Operator)
  • google.adk.runners.Runner → Runner (Operator)
  • google.adk.events.Event → Event (Boundary DTO)
  • google.adk.tools.BaseTool → ITool (Port, CoreAbstractions)

See docs/COMPATIBILITY.md for complete mapping.

📦 Project Structure

adk-csharp/
├── src/
│   ├── NTG.Adk.Boundary/           # Layer 1: DTOs
│   ├── NTG.Adk.CoreAbstractions/   # Layer 2: Ports
│   ├── NTG.Adk.Implementations/    # Layer 3: Adapters
│   ├── NTG.Adk.Operators/          # Layer 4: Orchestration
│   └── NTG.Adk.Bootstrap/          # Layer 5: DI/Entry
├── samples/
│   ├── HelloWorldAgent/            # Basic agent demo
│   ├── GeminiAgent/                # Gemini LLM integration
│   ├── OpenAIAgent/                # OpenAI integration
│   ├── AutoFlowAgent/              # AutoFlow orchestration
│   ├── StoryFlowAgent/             # Multi-agent workflow
│   ├── A2AInteropSample/           # A2A protocol demo
│   ├── McpToolsSample/             # MCP Protocol integration
│   ├── OpenApiToolsSample/         # OpenAPI Toolset demo
│   └── BuiltInToolsSample/         # Built-in tools demo
├── docs/                           # Documentation
└── README.md                       # This file

🧪 Samples

Explore working examples in the samples/ directory:

  1. HelloWorldAgent - Simple echo agent with InMemoryRunner
  2. GeminiAgent - Google Gemini 2.0 Flash integration
  3. OpenAIAgent - OpenAI GPT-4 integration
  4. AutoFlowAgent - Dynamic multi-agent routing
  5. StoryFlowAgent - Sequential story generation workflow
  6. A2AInteropSample - A2A protocol interoperability
  7. McpToolsSample - MCP Protocol integration (stdio, SSE, HTTP transports)
  8. OpenApiToolsSample - OpenAPI Toolset with REST API integration
  9. BuiltInToolsSample - Built-in tools (Google Search, Code Execution)

Run a sample:

cd samples/HelloWorldAgent
dotnet run

🔧 Requirements

  • .NET 10.0
  • C# 12 language features
  • Visual Studio 2022 or VS Code with C# Dev Kit

πŸ› οΈ Build

# Clone repository
git clone <repository-url>
cd adk-csharp

# Restore packages
dotnet restore

# Build solution
dotnet build

# Run tests (if available)
dotnet test

# Run a sample
cd samples/HelloWorldAgent
dotnet run

πŸ—ΊοΈ Roadmap

Phase 1 - Core Infrastructure βœ… COMPLETE

  • βœ… Multi-Agent Orchestration (Sequential, Parallel, Loop)
  • βœ… Session/Artifact/Memory Services (In-Memory)
  • βœ… A2A Protocol Integration
  • βœ… MCP Protocol Support (stdio, SSE, HTTP)
  • βœ… OpenAPI Toolset (REST API auto-integration)
  • βœ… Built-in Tools (Google Search, Code Execution)
  • βœ… LLM Adapters (Gemini, OpenAI)
  • βœ… .NET 10.0 Migration

Phase 2 - Persistent Storage (In Progress)

  • ⏳ DatabaseSessionService - SQL persistence for sessions
    • PostgreSQL, MySQL, SQLite support
    • Production-ready session storage
    • Multi-instance deployment support
  • ⏳ FileArtifactService - Local file system storage
    • File-based artifact persistence
    • Blob storage support (Azure, AWS S3, GCP)
    • Artifact versioning and cleanup

Phase 3 - Advanced Features (Planned)

  • πŸ“‹ Planner System - Agent reasoning and planning
    • BuiltInPlanner with extended thinking
    • PlanReActPlanner for ReAct pattern
    • Custom planner support
  • πŸ“‹ FilesRetrieval - RAG and document search
    • Directory-based retrieval
    • Semantic search capabilities
    • LlamaIndex integration
  • πŸ“‹ AgentEvaluator - Testing and quality assurance
    • Automated agent testing
    • Benchmark framework
    • Performance metrics

Phase 4 - Cloud Integration (Future)

  • πŸ“‹ Cloud Storage Services
    • GCS Artifact Service (Google Cloud Storage)
    • Azure Blob Artifact Service
    • AWS S3 Artifact Service
  • πŸ“‹ Vertex AI Integration
    • Vertex AI RAG Retrieval
    • Vertex AI Search Tool
    • Vertex AI Example Store
  • πŸ“‹ Enterprise Features
    • Authentication & Authorization
    • Rate limiting and quotas
    • Audit logging and monitoring

Community Contributions Welcome! 🀝

We welcome contributions for:

  • New LLM adapters (Anthropic Claude, Cohere, etc.)
  • Additional built-in tools
  • Cloud provider integrations
  • Performance optimizations
  • Documentation improvements

See CONTRIBUTING.md for guidelines.

🔖 Version Update Checklist

When bumping version, update these files:

  1. README.md - Version field in Status section
  2. docs/CHANGELOG.md - Add new version entry
  3. docs/FEATURES.md - Version + Last Updated
  4. docs/STATUS.md - Version + Last Updated
  5. llms-full.txt - Version (line 12) + VERSION INFORMATION section
  6. src/NTG.Adk.*/NTG.Adk.*.csproj - All 4 package Version properties

📄 License

Apache 2.0 License - see LICENSE file.

πŸ™ Credits


Built with Abstract Driven Development (A.D.D) V3 πŸš€

About

A complete port of google/adk-python. Agent Development Kit (ADK) is a flexible and modular framework for developing and deploying AI agents.
