Add stdio Transport Support for Desktop LLM Applications #8

Problem Statement

The csla-mcp-server currently exposes the MCP server only via HTTP transport, which works well for containerized deployments but is incompatible with many desktop LLM applications that support MCP servers only via stdio (standard input/output).

The underlying ModelContextProtocol NuGet package (v0.3.0-preview.4) that this project is built on supports both HTTP and stdio transports, but the current implementation only configures HTTP.

Use Case

Desktop LLM applications like Claude Desktop, Cursor, and others typically launch MCP servers as child processes and communicate via stdio. These applications cannot connect to HTTP-based MCP servers. Supporting stdio would enable users to run the CSLA MCP server locally in these tools without requiring Docker or a separate web server process.

Proposed Solution

Add a new command-line switch to enable stdio transport mode alongside the existing HTTP mode.

Implementation Approach

  1. Add a new command: Create a stdio command (or add a --transport option to the existing run command) that configures stdio transport instead of HTTP

  2. Modify Program.cs: Update the service configuration to support both transports

    Current code (HTTP only):

    var builder = WebApplication.CreateBuilder();
    builder.Services.AddMcpServer()
        .WithHttpTransport()
        .WithTools<CslaCodeTool>();

    Proposed code (with stdio support):

    // Option 1: Separate command
    public sealed class StdioCommand : Command<AppSettings>
    {
        public override int Execute(CommandContext context, AppSettings settings)
        {
            // Initialize code samples path (same logic as RunCommand)
            InitializeCodeSamplesPath(settings);
            
            // Initialize vector store (same logic as RunCommand)
            InitializeVectorStore();
            
            // Configure for stdio instead of HTTP
            var builder = Host.CreateApplicationBuilder();
            builder.Services.AddMcpServer()
                .WithStdioServerTransport()  // Use stdio instead of HTTP
                .WithTools<CslaCodeTool>();
            
            builder.Services.AddCsla();
            
            var app = builder.Build();
            app.Run();
            
            return 0;
        }
    }

    Or Option 2: add a --transport option to the existing run command:

    public sealed class AppSettings : CommandSettings
    {
        [CommandOption("-f|--folder <FOLDER>")]
        public string? Folder { get; set; }
        
        [CommandOption("-t|--transport <TRANSPORT>")]
        [DefaultValue("http")]
        public string Transport { get; set; } = "http";
    }
  3. Update command registration:

    var app = new CommandApp();
    app.Configure(config =>
    {
        config.SetApplicationName("csla-mcp-server");
        config.AddCommand<RunCommand>("run");
        config.AddCommand<StdioCommand>("stdio");  // New command
    });
  4. Extract common initialization logic: Refactor the code samples path initialization and vector store setup into shared methods to avoid duplication between HTTP and stdio modes (see the sketch below)
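
A rough sketch of how Option 2 could be wired into the existing run command, reusing the shared initialization from step 4. The ServerBootstrap helper class and the RunHttpServer wrapper around today's HTTP setup are hypothetical names used only for illustration:

    public sealed class RunCommand : Command<AppSettings>
    {
        public override int Execute(CommandContext context, AppSettings settings)
        {
            // Shared setup used by both transports (step 4)
            ServerBootstrap.InitializeCodeSamplesPath(settings);
            ServerBootstrap.InitializeVectorStore();

            if (string.Equals(settings.Transport, "stdio", StringComparison.OrdinalIgnoreCase))
            {
                // Plain generic host; no Kestrel or port binding needed for stdio
                var builder = Host.CreateApplicationBuilder();
                builder.Services.AddMcpServer()
                    .WithStdioServerTransport()
                    .WithTools<CslaCodeTool>();
                builder.Services.AddCsla();
                builder.Build().Run();
            }
            else
            {
                // Existing HTTP configuration, unchanged from today's Program.cs
                RunHttpServer(settings);
            }

            return 0;
        }
    }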

CLI Usage Examples

Option 1 (separate command):

# HTTP mode (current behavior)
dotnet run --project csla-mcp-server -- run

# Stdio mode (new)
dotnet run --project csla-mcp-server -- stdio

# With custom folder
dotnet run --project csla-mcp-server -- stdio -f /path/to/examples

Option 2 (transport flag):

# HTTP mode (default)
dotnet run --project csla-mcp-server -- run --transport http

# Stdio mode
dotnet run --project csla-mcp-server -- run --transport stdio

# With custom folder
dotnet run --project csla-mcp-server -- run --transport stdio -f /path/to/examples

Desktop LLM Configuration Example

After implementation, users could configure desktop LLM apps like this:

Claude Desktop (claude_desktop_config.json):

{
  "mcpServers": {
    "csla": {
      "command": "dotnet",
      "args": [
        "run",
        "--project",
        "s:/src/rdl/csla-mcp/csla-mcp-server",
        "--",
        "stdio"
      ],
      "env": {
        "AZURE_OPENAI_ENDPOINT": "https://your-resource.openai.azure.com/",
        "AZURE_OPENAI_API_KEY": "your-api-key-here"
      }
    }
  }
}
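
If Option 2 (the --transport flag) were implemented instead, only the trailing arguments in that configuration would change, mirroring the CLI example above:

      "args": [
        "run",
        "--project",
        "s:/src/rdl/csla-mcp/csla-mcp-server",
        "--",
        "run",
        "--transport",
        "stdio"
      ]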

Technical Considerations

  1. Shared initialization: The code samples path validation, vector store initialization, and embeddings loading logic should be extracted into shared methods used by both HTTP and stdio modes

  2. Logging: stdio mode should ensure all logging goes to stderr (not stdout) to avoid interfering with the MCP protocol communication on stdout (see the sketch after this list)

  3. Health checks: HTTP-specific features like health check endpoints (/health) would not be available in stdio mode

  4. OpenTelemetry: The OTLP exporter configuration may need to be adjusted or made optional for stdio mode

  5. Environment variables: Both modes should support the same environment variables:

    • CSLA_CODE_SAMPLES_PATH
    • AZURE_OPENAI_ENDPOINT
    • AZURE_OPENAI_API_KEY
    • AZURE_OPENAI_EMBEDDING_MODEL
    • AZURE_OPENAI_API_VERSION
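
For the logging consideration above, one minimal approach (assuming the standard Microsoft.Extensions.Logging console provider is in use) is to route all console log output to stderr so stdout carries only MCP protocol traffic:

    // Send every console log message to stderr; stdout stays reserved for
    // the MCP protocol when running in stdio mode.
    var builder = Host.CreateApplicationBuilder();
    builder.Logging.AddConsole(options =>
    {
        options.LogToStandardErrorThreshold = LogLevel.Trace;
    });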

Benefits

  • Enables use of CSLA MCP server in desktop LLM applications (Claude, Cursor, etc.)
  • No Docker or containerization required for local development
  • Simpler setup for individual developers
  • Broadens the potential user base

Alternative Considered

A separate project/executable dedicated to stdio mode was considered and rejected because:

  • Would duplicate code and maintenance effort
  • The underlying ModelContextProtocol package already supports both transports
  • A simple command-line switch is more user-friendly
