
Conversation

@ericcurtin ericcurtin commented Oct 14, 2025

  • Add detection for NVIDIA NIM images (nvcr.io/nim/ prefix; a short sketch follows this list)
  • Create NIM-specific container lifecycle management
  • Configure NIM containers with GPU support, shared memory, and NGC API key
  • Proxy chat requests to NIM container's OpenAI-compatible API
  • Add tests for NIM detection and container naming
  • Support both single prompt and interactive chat modes for NIM
  • Add comprehensive NIM support documentation to README
  • Improve user feedback for NIM initialization status
  • Add GPU detection status messages
  • Improve timeout error message with troubleshooting tip
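
For reference, the detection from the first bullet reduces to a prefix check. A minimal Go sketch (the constant and function names match those referenced in the review below, but the PR's actual code may differ slightly):

package commands

import "strings"

// nimPrefix marks images that are routed to the NIM-specific code path.
const nimPrefix = "nvcr.io/nim/"

// isNIMImage reports whether a model reference points at an NVIDIA NIM image.
func isNIMImage(model string) bool {
	return strings.HasPrefix(model, nimPrefix)
}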

Summary by Sourcery

Add full NVIDIA NIM support to Docker Model Runner, enabling detection and lifecycle management of NIM containers with GPU, memory, and API key configuration, and proxying chat requests to their OpenAI-compatible API.

New Features:

  • Detect and run NVIDIA NIM images as Docker containers, including pull, creation, start, and readiness checks
  • Proxy chat interactions through the NIM container’s OpenAI-compatible API in both single-prompt and interactive modes
  • Configure NIM containers with GPU acceleration, 16GB shared memory, shared caching, and optional NGC API key authentication (see the configuration sketch below)
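
A hedged sketch of the container configuration described in the last bullet above, using the Docker Engine SDK. The cache target path and exact option set are assumptions for illustration, not the PR's actual nim.go:

package commands

import (
	"os"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/mount"
)

// nimHostConfig sketches the host-side settings listed above: all available
// GPUs, a 16GB /dev/shm, and a bind-mounted cache directory. The cache
// target path is an assumption.
func nimHostConfig(cacheDir string) *container.HostConfig {
	return &container.HostConfig{
		ShmSize: 16 * 1024 * 1024 * 1024, // 16GB shared memory
		Resources: container.Resources{
			DeviceRequests: []container.DeviceRequest{{
				Count:        -1,                  // all available GPUs
				Capabilities: [][]string{{"gpu"}}, // matches the diff fragment quoted later
			}},
		},
		Mounts: []mount.Mount{{
			Type:   mount.TypeBind,
			Source: cacheDir,
			Target: "/opt/nim/.cache", // assumed NIM cache location
		}},
	}
}

// nimEnv forwards the optional NGC API key to the container when present.
func nimEnv() []string {
	if key := os.Getenv("NGC_API_KEY"); key != "" {
		return []string{"NGC_API_KEY=" + key}
	}
	return nil
}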

Enhancements:

  • Display GPU detection status messages and enhance the timeout error message with a troubleshooting tip during NIM initialization

Documentation:

  • Add NVIDIA NIM support section to README with prerequisites, quick start, configuration, and usage examples

Tests:

  • Add unit tests for NIM image detection and container name generation


Signed-off-by: Eric Curtin <[email protected]>
Copilot AI review requested due to automatic review settings October 14, 2025 11:33

sourcery-ai bot commented Oct 14, 2025

Reviewer's Guide

Extends the Docker Model Runner CLI to detect NVIDIA NIM images and manage their full container lifecycle—including pull, creation, GPU and cache configuration, readiness checks, and OpenAI-compatible chat proxying—while adding tests and comprehensive documentation.

Sequence diagram for NIM container lifecycle management

sequenceDiagram
    participant User as actor User
    participant CLI as Docker Model Runner CLI
    participant Docker as Docker Engine
    participant NIM as NIM Container
    participant API as OpenAI-compatible API

    User->>CLI: Run command with NIM image
    CLI->>CLI: isNIMImage(model)
    CLI->>Docker: Check for existing NIM container
    alt Container exists and running
        CLI->>Docker: Use existing container
    else Container exists but not running
        CLI->>Docker: Start container
    else No container exists
        CLI->>Docker: Pull NIM image
        CLI->>Docker: Create and start container
    end
    CLI->>NIM: Wait for readiness (poll /v1/models)
    NIM-->>CLI: Ready
    User->>CLI: Provide prompt or start chat
    CLI->>API: Proxy chat request to NIM OpenAI API
    API-->>CLI: Stream response
    CLI-->>User: Display response
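
The readiness step in the diagram (polling /v1/models) could look roughly like this sketch; the signature, base URL handling, and timeout are illustrative assumptions rather than the PR's actual waitForNIMReady:

package commands

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitForNIMReady polls the container's OpenAI-compatible endpoint until it
// responds, as in the sequence diagram above. Timeout values are illustrative.
func waitForNIMReady(ctx context.Context, baseURL string) error {
	deadline := time.After(5 * time.Minute)
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-deadline:
			return fmt.Errorf("NIM failed to become ready within timeout")
		case <-tick.C:
			resp, err := http.Get(baseURL + "/v1/models")
			if err != nil {
				continue // container still starting; keep polling
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // server is up and listing models
			}
		}
	}
}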

Class diagram for NIM container management types

classDiagram
    class NIMManager {
        +isNIMImage(model string) bool
        +nimContainerName(model string) string
        +pullNIMImage(ctx, dockerClient, model, cmd) error
        +findNIMContainer(ctx, dockerClient, model) (string, error)
        +createNIMContainer(ctx, dockerClient, model, cmd) (string, error)
        +waitForNIMReady(ctx, cmd) error
        +runNIMModel(ctx, dockerClient, model, cmd) error
        +chatWithNIM(cmd, model, prompt) error
    }
    class gpupkg {
        +ProbeGPUSupport(ctx, dockerClient) (GPUSupport, error)
        +HasNVIDIARuntime(ctx, dockerClient) (bool, error)
        GPUSupportNone
        GPUSupportCUDA
    }
    NIMManager -- gpupkg: uses

File-Level Changes

Add NIM image detection and CLI command branching (cmd/cli/commands/run.go)
  • Added isNIMImage check in run.go to route NIM images to specialized handling
  • Integrated single-prompt and interactive chat loops within newRunCmd

Implement NIM-specific container lifecycle management and chat proxy (cmd/cli/commands/nim.go)
  • Created runNIMModel with pull, create, start, and readiness check functions
  • Configured GPU support, shared memory, cache mounts, and NGC API key handling
  • Added chatWithNIM to forward user prompts to the NIM container’s OpenAI-compatible API

Add NVIDIA NIM support documentation to README (README.md)
  • Added prerequisites, quick start guide, features, and technical details for NIM support

Add tests for NIM image detection and container naming (cmd/cli/commands/nim_test.go)
  • Implemented TestIsNIMImage cases for detection logic
  • Added TestNIMContainerName to verify container naming conventions

@gemini-code-assist

Summary of Changes

Hello @ericcurtin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly extends the Docker Model Runner's capabilities by introducing native support for NVIDIA Inference Microservices (NIM). It streamlines the process of deploying and interacting with NVIDIA's optimized AI models by automatically managing NIM container lifecycles, configuring GPU resources, and providing a seamless chat interface. This integration aims to simplify the user experience for leveraging high-performance inference models.

Highlights

  • NVIDIA NIM Integration: Adds full support for running NVIDIA Inference Microservices (NIM) containers directly within the Docker Model Runner, simplifying the deployment of NVIDIA's optimized inference models.
  • Automated Container Management: Implements robust logic for detecting NIM images, pulling them, creating, starting, and reusing containers, along with automatic configuration for GPU support, shared memory, and NGC API key handling.
  • Interactive and Single Prompt Chat: Enables seamless interaction with NIM models through both interactive chat sessions and single-prompt execution, leveraging their OpenAI-compatible API.
  • Enhanced User Feedback: Provides improved user feedback during NIM initialization, clearer GPU detection status messages, and more helpful troubleshooting tips for timeout errors.
  • Comprehensive Documentation: Updates the README.md with a dedicated and detailed section for NVIDIA NIM support, covering prerequisites, quick start instructions, features, example usage, configuration options, and technical details.

@sourcery-ai sourcery-ai bot left a comment

Hey there - I've reviewed your changes - here's some feedback:

  • Consider making nimDefaultPort, nimDefaultShmSize and cache directory configurable via CLI flags or environment variables rather than hardcoding values to allow greater flexibility.
  • The SSE parsing in chatWithNIM relies on manual string operations—consider using a proper JSON streaming decoder or SSE client library to handle edge cases and keep the code more robust (a sketch follows this list).
  • The newRunCmd interactive loop and runNIMModel setup share overlapping logic—factoring out common behaviors could reduce duplication and make future maintenance easier.
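
As a sketch of the second point, an SSE reader that decodes each data: line with encoding/json instead of manual string operations might look like this (the function name and print callback are hypothetical):

package commands

import (
	"bufio"
	"encoding/json"
	"io"
	"strings"
)

// streamChunk mirrors the delta shape of an OpenAI-style streaming response.
type streamChunk struct {
	Choices []struct {
		Delta struct {
			Content string `json:"content"`
		} `json:"delta"`
	} `json:"choices"`
}

// printSSE decodes "data: {...}" lines with encoding/json rather than
// string surgery. Illustrative sketch only.
func printSSE(body io.Reader, print func(string)) error {
	scanner := bufio.NewScanner(body)
	for scanner.Scan() {
		line := scanner.Text()
		if !strings.HasPrefix(line, "data: ") {
			continue
		}
		data := strings.TrimPrefix(line, "data: ")
		if data == "[DONE]" {
			return nil
		}
		var chunk streamChunk
		if err := json.Unmarshal([]byte(data), &chunk); err != nil {
			continue // skip malformed or keep-alive lines
		}
		if len(chunk.Choices) > 0 {
			print(chunk.Choices[0].Delta.Content)
		}
	}
	return scanner.Err()
}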


Copilot AI left a comment


Pull Request Overview

This PR adds comprehensive NVIDIA NIM (NVIDIA Inference Microservices) support to Docker Model Runner, enabling users to run NVIDIA's optimized inference containers directly through the existing CLI interface.

  • Automatic detection of NIM images based on the nvcr.io/nim/ registry prefix
  • Complete container lifecycle management with GPU support, shared memory configuration, and NGC API key handling
  • OpenAI-compatible API integration for both single prompt and interactive chat modes

Reviewed Changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.

cmd/cli/commands/run.go: Adds NIM image detection and routing logic to the main run command
cmd/cli/commands/nim.go: Implements core NIM functionality including container management and chat API integration
cmd/cli/commands/nim_test.go: Provides unit tests for NIM image detection and container naming functions
README.md: Documents NIM support with setup instructions, usage examples, and technical details


Comment on lines +326 to +344
// Parse the JSON and extract the content
// For simplicity, we'll use basic string parsing
// In production, we'd use proper JSON parsing
if strings.Contains(data, `"content"`) {
	// Extract content field - simple approach
	contentStart := strings.Index(data, `"content":"`)
	if contentStart != -1 {
		contentStart += len(`"content":"`)
		contentEnd := strings.Index(data[contentStart:], `"`)
		if contentEnd != -1 {
			content := data[contentStart : contentStart+contentEnd]
			// Unescape basic JSON escapes
			content = strings.ReplaceAll(content, `\n`, "\n")
			content = strings.ReplaceAll(content, `\t`, "\t")
			content = strings.ReplaceAll(content, `\"`, `"`)
			cmd.Print(content)
		}
	}
}

Copilot AI Oct 14, 2025


The manual JSON parsing approach is fragile and may fail with complex content that contains escaped quotes or nested JSON structures. This could result in incorrect content extraction or parsing failures. Use proper JSON unmarshaling with a struct to handle the SSE response data safely.

}
}

return fmt.Errorf("NIM failed to become ready within timeout. Check container logs with: docker logs $(docker ps -q --filter name=docker-model-nim-)")

Copilot AI Oct 14, 2025


The error message uses a complex shell command that may not work correctly in all environments. The docker ps -q --filter name=docker-model-nim- command could return multiple container IDs if multiple NIM containers exist. Consider providing a more specific container name or a simpler troubleshooting command.

Comment on lines +338 to +340
content = strings.ReplaceAll(content, `\n`, "\n")
content = strings.ReplaceAll(content, `\t`, "\t")
content = strings.ReplaceAll(content, `\"`, `"`)

Copilot AI Oct 14, 2025


The JSON unescaping is incomplete and may not handle all valid JSON escape sequences. Missing handling for \\ (backslash), \/ (forward slash), \b (backspace), \f (form feed), \r (carriage return), and Unicode escapes (\uXXXX). This could result in incorrectly displayed content.
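
For reference, handing the raw JSON string literal to encoding/json handles every escape sequence listed above; a minimal, self-contained illustration:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A JSON string literal containing escapes the manual approach would miss.
	raw := `"tab:\t backslash:\\ quote:\" unicode:\u00e9 newline:\n"`
	var s string
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", s) // all escapes, including \u00e9, decode correctly
}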


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces significant new functionality by adding support for NVIDIA NIM containers. The implementation is comprehensive, covering container lifecycle management, GPU configuration, and proxying chat requests. The code is generally well-structured, and the inclusion of documentation and tests is commendable. I have identified a few areas for improvement, primarily concerning robustness, maintainability, and correctness. Specifically, there are some issues with JSON handling and container name generation that should be addressed. I've also suggested refactoring to reduce code duplication. Overall, this is a solid contribution that will be even better with a few adjustments.

Comment on lines +40 to +52
func nimContainerName(model string) string {
	// Extract the model name from the reference (e.g., nvcr.io/nim/google/gemma-3-1b-it:latest -> google-gemma-3-1b-it)
	parts := strings.Split(strings.TrimPrefix(model, nimPrefix), "/")
	name := strings.Join(parts, "-")
	// Remove tag if present
	if idx := strings.Index(name, ":"); idx != -1 {
		name = name[:idx]
	}
	// Replace any remaining special characters
	name = strings.ReplaceAll(name, ":", "-")
	name = strings.ReplaceAll(name, "/", "-")
	return nimContainerPrefix + name
}

Severity: high

The current implementation for generating a container name is not robust. It can produce incorrect names for models that have a colon in their path segments (e.g., nvcr.io/nim/a:b/c:latest would incorrectly result in a container name based on a). The order of operations should be changed to first sanitize the path by replacing slashes, and then remove the tag. Using strings.LastIndex is also safer for finding the tag separator.

The current implementation also has redundant ReplaceAll calls.

func nimContainerName(model string) string {
	// Extract the model name from the reference (e.g., nvcr.io/nim/google/gemma-3-1b-it:latest -> google-gemma-3-1b-it)
	name := strings.TrimPrefix(model, nimPrefix)
	// Replace path separators with dashes
	name = strings.ReplaceAll(name, "/", "-")
	// Remove tag, which is separated by the last colon.
	if idx := strings.LastIndex(name, ":"); idx != -1 {
		name = name[:idx]
	}
	return nimContainerPrefix + name
}

Comment on lines +286 to +292
reqBody := fmt.Sprintf(`{
"model": "%s",
"messages": [
{"role": "user", "content": %q}
],
"stream": true
}`, modelName, prompt)

Severity: high

Using fmt.Sprintf to construct a JSON string is not robust and can be error-prone, especially when dealing with user input that might contain special characters. The %q format verb is for Go string literals and doesn't guarantee a valid JSON string for all inputs. It's much safer and more idiomatic to define a struct for the request body and use json.Marshal.

You will need to import the bytes and encoding/json packages.

	type chatMessage struct {
		Role    string `json:"role"`
		Content string `json:"content"`
	}
	type chatRequest struct {
		Model    string        `json:"model"`
		Messages []chatMessage `json:"messages"`
		Stream   bool          `json:"stream"`
	}
	payload := chatRequest{
		Model: modelName,
		Messages: []chatMessage{
			{Role: "user", Content: prompt},
		},
		Stream: true,
	}
	reqBodyBytes, err := json.Marshal(payload)
	if err != nil {
		return fmt.Errorf("failed to marshal request body: %w", err)
	}

Comment on lines +326 to +344
// Parse the JSON and extract the content
// For simplicity, we'll use basic string parsing
// In production, we'd use proper JSON parsing
if strings.Contains(data, `"content"`) {
	// Extract content field - simple approach
	contentStart := strings.Index(data, `"content":"`)
	if contentStart != -1 {
		contentStart += len(`"content":"`)
		contentEnd := strings.Index(data[contentStart:], `"`)
		if contentEnd != -1 {
			content := data[contentStart : contentStart+contentEnd]
			// Unescape basic JSON escapes
			content = strings.ReplaceAll(content, `\n`, "\n")
			content = strings.ReplaceAll(content, `\t`, "\t")
			content = strings.ReplaceAll(content, `\"`, `"`)
			cmd.Print(content)
		}
	}
}

Severity: high

Parsing JSON with string manipulation is fragile and error-prone. The comment acknowledges this is for simplicity, but it should be replaced with proper JSON parsing for robustness. The current implementation only unescapes a few characters (\n, \t, \") and will fail on others. Using json.Unmarshal into a struct is the correct and safer approach.

			// Parse the JSON and extract the content
			var streamResp struct {
				Choices []struct {
					Delta struct {
						Content string `json:"content"`
					} `json:"delta"`
				} `json:"choices"`
			}
			if err := json.Unmarshal([]byte(data), &streamResp); err == nil {
				if len(streamResp.Choices) > 0 {
					cmd.Print(streamResp.Choices[0].Delta.Content)
				}
			}

if err != nil {
	return "", fmt.Errorf("failed to get home directory: %w", err)
}
cacheDir = homeDir + "/.cache/nim"

Severity: medium

Using + to concatenate path components is not platform-agnostic and can lead to issues on operating systems like Windows. It's better to use filepath.Join to construct file paths. You will need to import the path/filepath package.

		cacheDir = filepath.Join(homeDir, ".cache", "nim")

Comment on lines +263 to +265
if err := waitForNIMReady(ctx, cmd); err != nil {
	return err
}

Severity: medium

To help users debug issues, it's useful to wrap the error from waitForNIMReady with more specific context, including the container name and a hint on how to check its logs. The current error message in waitForNIMReady contains a shell command that is not robust and may fail if multiple NIM containers are running. It would be better to simplify the error in waitForNIMReady to just report the timeout, and add the specific debugging instructions here in runNIMModel.

	if err := waitForNIMReady(ctx, cmd); err != nil {
		containerName := nimContainerName(model)
		return fmt.Errorf("failed to wait for NIM container %q to be ready: %w. You can check its logs with: 'docker logs %s'", containerName, err, containerName)
	}

Comment on lines +381 to +413
// If no prompt provided, enter interactive mode
if prompt == "" {
	scanner := bufio.NewScanner(os.Stdin)
	cmd.Println("Interactive chat mode started. Type '/bye' to exit.")

	for {
		userInput, err := readMultilineInput(cmd, scanner)
		if err != nil {
			if err.Error() == "EOF" {
				cmd.Println("\nChat session ended.")
				break
			}
			return fmt.Errorf("Error reading input: %v", err)
		}

		if strings.ToLower(strings.TrimSpace(userInput)) == "/bye" {
			cmd.Println("Chat session ended.")
			break
		}

		if strings.TrimSpace(userInput) == "" {
			continue
		}

		if err := chatWithNIM(cmd, model, userInput); err != nil {
			cmd.PrintErr(fmt.Errorf("failed to chat with NIM: %w", err))
			continue
		}

		cmd.Println()
	}
	return nil
}

Severity: medium

This interactive chat loop is very similar to the one for non-NIM models later in the file (lines 452-477). This code duplication can be avoided by refactoring the loop into a helper function that takes a chat function as an argument. This would improve maintainability and reduce the chance of bugs when one loop is updated but the other is not.

Additionally, checking for EOF by string comparison (err.Error() == "EOF") is not robust. It's better to use errors.Is(err, io.EOF). This would require readMultilineInput to return io.EOF directly instead of a formatted error string.
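
A sketch of the suggested refactor, with the model-specific chat call injected and EOF detected via errors.Is (all names here are hypothetical):

package commands

import (
	"errors"
	"fmt"
	"io"
	"strings"
)

// runChatLoop factors the shared interactive loop out of the NIM and non-NIM
// paths; chatFn is the model-specific send function, echo prints to the CLI.
func runChatLoop(readInput func() (string, error), chatFn func(string) error, echo func(string)) error {
	echo("Interactive chat mode started. Type '/bye' to exit.")
	for {
		userInput, err := readInput()
		if err != nil {
			if errors.Is(err, io.EOF) { // robust EOF check instead of err.Error() == "EOF"
				echo("\nChat session ended.")
				return nil
			}
			return fmt.Errorf("error reading input: %w", err)
		}
		trimmed := strings.TrimSpace(userInput)
		if strings.ToLower(trimmed) == "/bye" {
			echo("Chat session ended.")
			return nil
		}
		if trimmed == "" {
			continue
		}
		if err := chatFn(userInput); err != nil {
			echo(fmt.Sprintf("failed to chat: %v", err)) // report and keep the session alive
		}
	}
}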

@ericcurtin ericcurtin marked this pull request as draft October 14, 2025 12:06
@ericcurtin (Contributor, Author)

This will need a contributor with Nvidia hardware to own this

@ericcurtin ericcurtin mentioned this pull request Oct 14, 2025
@ericcurtin (Contributor, Author)

Plenty of the AI bot comments here make sense to address

Count: -1,
Capabilities: [][]string{{"gpu"}},
}}
}

nit: If someone is running NIM, they likely want NVIDIA GPU support. Maybe we should warn the user if we could not detect one?
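
A sketch of such a warning; the GPUSupport type and probe signature are assumptions taken from the class diagram in the Reviewer's Guide:

package commands

import "context"

// gpuSupport mirrors the GPUSupport values shown in the Reviewer's Guide
// class diagram; the type and constant here are assumptions for illustration.
type gpuSupport int

const gpuSupportNone gpuSupport = 0

// warnIfNoGPU implements the suggestion above: probe is expected to wrap
// gpupkg.ProbeGPUSupport (signature taken from the class diagram), and warn
// writes to the CLI's error stream.
func warnIfNoGPU(ctx context.Context, probe func(context.Context) (gpuSupport, error), warn func(string)) {
	support, err := probe(ctx)
	if err != nil || support == gpuSupportNone {
		warn("no NVIDIA GPU support detected; NIM images generally require an NVIDIA GPU")
	}
}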

@ericcurtin ericcurtin closed this pull request by merging all changes into main in 5b56415 Oct 22, 2025
@ericcurtin ericcurtin deleted the add-nim-support branch October 22, 2025 14:38