
Troubleshooting

alexheifetz edited this page Sep 22, 2025 · 1 revision

Troubleshooting Guide

Common issues and solutions for Embabel Agent Framework development and deployment.

🚨 Quick Fixes for Common Problems

Issue: Agent Not Starting

// Check if your main class has the right annotation
@SpringBootApplication
@EnableAgents  // Make sure this is present
class MyAgentApplication

Issue: No Models Available

  • Local models: Install Ollama and pull a model
  • Cloud models: Set API key environment variables (optional)
  • Check logs: Look for model discovery messages

Issue: Property Configuration Not Working

  • Check property names: Use new namespace (embabel.agent.* not embabel.agent-platform.*)
  • Restart application: Properties are loaded at startup
  • Use environment variables: EMBABEL_AGENT_LOGGING_PERSONALITY=starwars
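
Spring Boot's relaxed binding derives the environment-variable form of a property name mechanically: uppercase it and replace dots and dashes with underscores. A minimal sketch of that conversion:

```kotlin
// Convert a Spring-style property name to its environment-variable form
// (Spring Boot relaxed binding: uppercase; dots and dashes become underscores).
fun toEnvVar(propertyName: String): String =
    propertyName.uppercase().replace('.', '_').replace('-', '_')

fun main() {
    println(toEnvVar("embabel.agent.logging.personality"))
    // EMBABEL_AGENT_LOGGING_PERSONALITY
}
```

This is handy when you need to override any property from the table below in a container or CI environment.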

🔧 Local Model Issues

Ollama Timeout Problems

Symptoms:

ReadTimeoutException: null
LLM invocation rank-Agents: Retry attempt 1 of 10 due to: I/O error on POST request

Root Cause: Ollama models running on CPU can be slow, causing HTTP client timeouts.

Solutions:

Option 1: Increase Timeout Settings

# application.properties
embabel.agent.platform.llm-operations.data-binding.fixed-backoff-millis=60000
embabel.agent.platform.ranking.backoff-millis=60000
embabel.agent.platform.ranking.backoff-max-interval=180000

# Also increase HTTP client timeout
spring.ai.ollama.chat.options.timeout=120s
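
To see what the backoff properties above control, here is a simplified sketch of a fixed-backoff retry loop. This is illustrative only (the function and parameter names are hypothetical), not Embabel's internal retry code:

```kotlin
// Illustrative fixed-backoff retry: wait a constant backoffMillis between
// attempts, up to maxAttempts. The sleep function is injectable for testing.
fun <T> retryWithBackoff(
    maxAttempts: Int,
    backoffMillis: Long,
    sleep: (Long) -> Unit = { Thread.sleep(it) },
    operation: (attempt: Int) -> T,
): T {
    var lastError: Exception? = null
    for (attempt in 1..maxAttempts) {
        try {
            return operation(attempt)
        } catch (e: Exception) {
            lastError = e
            if (attempt < maxAttempts) sleep(backoffMillis)
        }
    }
    throw lastError ?: IllegalStateException("no attempts made")
}
```

With `backoff-millis=60000` and 10 attempts, a persistently slow model can stall a request for many minutes, which is why reducing attempts (see Performance Issues below) helps during development.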

Option 2: Use Lighter Models

# Instead of large models, use smaller ones
ollama pull llama3.2:1b          # 1B parameters - much faster
ollama pull phi3:mini            # Microsoft's efficient model
ollama pull gemma2:2b           # Google's compact model

Option 3: Optimize Ollama Settings

# Set Ollama environment variables for better performance
export OLLAMA_NUM_PARALLEL=1     # Limit concurrent requests
export OLLAMA_MAX_LOADED_MODELS=1  # Keep only one model loaded
export OLLAMA_HOST=127.0.0.1:11434 # Ensure correct host

Ollama Model Not Discovered

Symptoms:

No Ollama models discovered. Check Ollama server configuration.

Solutions:

  1. Verify Ollama is Running:
# Check if Ollama is running
curl http://localhost:11434/api/tags

# Should return JSON with your models
# If not, start Ollama:
ollama serve
  2. Check Model Installation:
# List installed models
ollama list

# Pull a model if none exist
ollama pull llama3.2:1b
  3. Verify Base URL Configuration:
# application.properties
spring.ai.ollama.base-url=http://localhost:11434

Model Selection Issues

Problem: Framework not using the model you want.

Solution: Explicitly specify model in actions:

@Action
fun processWithSpecificModel(input: String, context: OperationContext): Result {
    return context.ai()
        .withLlm(LlmOptions.withModel("llama3.2:1b"))  // Specify exact model
        .createObject("Process: $input")
}

☁️ Cloud Model Issues

API Key Not Recognized

Symptoms:

OpenAI models not available
No valid API key found

Solutions:

  1. Verify Environment Variable:
# Check if set
echo $OPENAI_API_KEY

# Set properly (no quotes around value)
export OPENAI_API_KEY=sk-your-actual-key-here
export ANTHROPIC_API_KEY=sk-ant-your-key-here
  2. Restart Application: Environment variables are read at startup; restart after setting them.

  3. Check Dependency:

<!-- Make sure you have the right starter -->
<dependency>
    <groupId>com.embabel.agent</groupId>
    <artifactId>embabel-agent-starter-openai</artifactId>
    <version>${embabel-agent.version}</version>
</dependency>
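
As a hedged illustration of the "verify your key" step, a startup sanity check might look like the sketch below. The `sk-` and `sk-ant-` prefixes are OpenAI and Anthropic key conventions; adjust for your provider:

```kotlin
// Illustrative startup sanity check for API key environment variables.
// Returns a human-readable diagnostic rather than throwing, so it can be
// logged at boot. This is a sketch, not framework code.
fun checkApiKey(name: String, value: String?, expectedPrefix: String): String {
    if (value.isNullOrBlank()) return "$name is not set"
    if (!value.startsWith(expectedPrefix)) return "$name does not start with '$expectedPrefix'"
    return "$name looks OK"
}

fun main() {
    println(checkApiKey("OPENAI_API_KEY", System.getenv("OPENAI_API_KEY"), "sk-"))
    println(checkApiKey("ANTHROPIC_API_KEY", System.getenv("ANTHROPIC_API_KEY"), "sk-ant-"))
}
```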

OpenAI-Compatible API Setup

Problem: Want to use other providers (Azure OpenAI, Groq, etc.)

Solution:

# Environment variables (these are env vars, not application.properties keys)
export OPENAI_BASE_URL=https://your-provider-endpoint.com/v1
export OPENAI_API_KEY=your-provider-api-key
export OPENAI_COMPLETIONS_PATH=/custom/completions  # If needed

Supported OpenAI-Compatible Providers:

  • Azure OpenAI Service
  • Groq
  • Together AI
  • DeepSeek
  • Local OpenAI-compatible servers
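
To illustrate how the base URL and completions path combine into the request URL, here is a minimal sketch. The default path `/v1/chat/completions` is an assumption modeled on the OpenAI API; the actual resolution is done by the Spring AI client:

```kotlin
// Sketch of base-URL + completions-path composition, normalizing slashes.
// Defaults mirror the OpenAI API convention; real resolution happens in
// the Spring AI HTTP client.
fun completionsUrl(
    baseUrl: String,
    completionsPath: String = "/v1/chat/completions",
): String = baseUrl.trimEnd('/') + "/" + completionsPath.trimStart('/')
```

This also shows why providers that already include `/v1` in their base URL usually need a custom completions path.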

🏗️ Configuration Issues

Properties Not Being Applied

Common Property Name Issues:

| ❌ Old/Wrong | ✅ Correct |
|---|---|
| `embabel.agent-platform.*` | `embabel.agent.platform.*` |
| `embabel.autonomy.*` | `embabel.agent.platform.autonomy.*` |
| `embabel.llm-operations.*` | `embabel.agent.platform.llm-operations.*` |

Migration Detection:

# Enable migration warnings to find deprecated usage
export EMBABEL_AGENT_PLATFORM_MIGRATION_SCANNING_ENABLED=true
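
The renames follow a pattern: the old top-level prefixes moved under `embabel.agent.platform.*`. A small sketch of that mapping, built from the table above (the framework's migration scanner remains the authoritative check):

```kotlin
// Deprecated property prefixes and their replacements, per the table above.
val prefixMigrations = mapOf(
    "embabel.agent-platform." to "embabel.agent.platform.",
    "embabel.autonomy." to "embabel.agent.platform.autonomy.",
    "embabel.llm-operations." to "embabel.agent.platform.llm-operations.",
)

// Rewrite a deprecated property name to its new form; unknown names pass through.
fun migrateProperty(name: String): String {
    for ((old, new) in prefixMigrations) {
        if (name.startsWith(old)) return new + name.removePrefix(old)
    }
    return name
}
```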

IDE Auto-completion Not Working

Problem: No auto-completion for Embabel properties in IntelliJ/VS Code.

Solution: Add Spring Boot configuration processor:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-configuration-processor</artifactId>
    <optional>true</optional>
</dependency>

Then restart your IDE after rebuilding the project.

Profile Migration Issues

Symptoms:

DEPRECATED: Profile 'starwars' is deprecated
DEPRECATED: Profile 'neo' is deprecated

Migration Guide:

| Old Profile | New Property |
|---|---|
| `spring.profiles.active=starwars` | `embabel.agent.logging.personality=starwars` |
| `spring.profiles.active=shell` | `embabel.shell.enabled=true` |
| `spring.profiles.active=neo` | `embabel.agent.infrastructure.neo4j.enabled=true` |

🤖 Agent Execution Issues

Multi-Goal Agent Problems

Symptoms:

  • Agent stops at intermediate goals randomly
  • Final goal not always achieved
  • Execution varies between restarts

Root Cause: Multiple @AchievesGoal annotations in single agent causing planning confusion.

Solutions:

Option 1: Single Goal Per Agent (Recommended)

@Agent
class OrderProcessingAgent {
    
    @Action
    fun validateOrder(): OrderValidation { /* ... */ }
    
    @Action  
    fun processPayment(): PaymentResult { /* ... */ }
    
    @AchievesGoal("Complete order processing")  // Single goal only
    @Action
    fun completeOrder(): ProcessedOrder { /* ... */ }
}

Option 2: Explicit Result Type

// When invoking, specify the exact return type you want
val invocation = AgentInvocation.create(agentPlatform, FinalResult::class.java)
val result = invocation.invokeAsync(request).get()

Option 3: Separate Agents

@Agent class ValidationAgent {
    @AchievesGoal("Validate order") 
    @Action fun validate(): OrderValidation
}

@Agent class ProcessingAgent {
    @AchievesGoal("Process complete order")
    @Action fun process(): ProcessedOrder
}

Agent Not Found/Selected

Symptoms:

No appropriate agent found for user input
Agent ranking returned no suitable candidates

Solutions:

  1. Improve Agent Descriptions:
@Agent(description = "Processes customer orders including validation, payment, and fulfillment")
class OrderProcessingAgent {
    
    @Action(description = "Validates customer order data and checks for issues")
    fun validateOrder(): OrderValidation
}
  2. Check Agent Registration:
// Make sure agents are in scanned packages
@SpringBootApplication
@ComponentScan(basePackages = ["com.mycompany.agents"])  // Include agent packages
class MyApplication
  3. Use Explicit Agent Invocation:
// Direct agent usage instead of autonomous selection
val result = agentPlatform.run(OrderProcessingAgent::class, request)

🔗 Integration Issues

MCP Tools Not Available

Symptoms:

Tool group WEB not available
No MCP servers configured

Solutions:

  1. Enable Docker Desktop MCP:
@EnableAgents(mcpServers = [McpServers.DOCKER_DESKTOP])
class MyApplication
  2. Verify Docker Desktop Setup:
  • Install Docker Desktop with MCP extension
  • Enable these tools: Brave Search, Fetch, Puppeteer, Wikipedia
  3. Manual MCP Configuration:
# application.yml
embabel:
  agent:
    infrastructure:
      mcp:
        enabled: true
        servers:
          brave-search:
            command: docker
            args: ["run", "-i", "--rm", "-e", "BRAVE_API_KEY", "mcp/brave-search"]
            env:
              BRAVE_API_KEY: ${BRAVE_API_KEY:}

Neo4j Connection Issues

Symptoms:

Failed to connect to Neo4j
Authentication failed

Solutions:

  1. Enable Neo4j Integration:
embabel.agent.infrastructure.neo4j.enabled=true
embabel.agent.infrastructure.neo4j.uri=bolt://localhost:7687
  2. Set Required Credentials:
# Never hardcode these!
export NEO4J_USERNAME=your_username
export NEO4J_PASSWORD=your_password
  3. Verify Neo4j is Running:
# Start Neo4j via Docker if it's not already running
docker run --name neo4j -p 7474:7474 -p 7687:7687 neo4j:latest

🧪 Testing Issues

Test Configuration Problems

Common Test Setup:

@SpringBootTest
@TestPropertySource(properties = [
    "embabel.agent.platform.test.mockMode=true",  // Use mocks
    "embabel.agent.logging.personality=default"    // Clean output
])
class AgentTest {
    
    @Autowired
    private lateinit var agentPlatform: AgentPlatform
    
    @Test
    fun `should process request correctly`() {
        val context = FakeOperationContext()
        context.expectResponse(ExpectedResult("test"))
        
        // Your test logic
    }
}

Mock LLM Responses

Problem: Tests calling real LLM APIs.

Solution: Use FakeOperationContext:

@Test
fun `should generate correct prompt`() {
    val context = FakeOperationContext()
    context.expectResponse(MyExpectedResponse("result"))
    
    val result = myAgent.processRequest("input", context)
    
    // Verify the prompt sent to LLM
    val prompt = context.llmInvocations.first().prompt
    assertThat(prompt).contains("expected content")
}

📊 Performance Issues

Slow Agent Execution

Common Causes & Solutions:

  1. Large Model Usage:
// Use smaller models for simple tasks
@Action
fun quickTask(input: String, context: OperationContext): Result {
    return context.ai()
        .withLlm(LlmOptions.withModel("llama3.2:1b"))  // Faster model
        .createObject("Quick processing: $input")
}
  2. Excessive Retries:
# Reduce retry attempts for development
embabel.agent.platform.llm-operations.data-binding.max-attempts=3
embabel.agent.platform.ranking.max-attempts=3
  3. Tool Usage Overhead:
// Only request tools when needed
@Action(toolGroups = [ToolGroup.WEB])  // Only when web access needed
fun researchTask(): ResearchResult

Memory Usage Issues

Solutions:

  1. Limit Concurrent Model Loading:
# Ollama settings
export OLLAMA_MAX_LOADED_MODELS=1
export OLLAMA_NUM_PARALLEL=1
  2. Use Streaming for Large Responses:
// Future feature - streaming responses
@Action
fun generateLargeContent(): Flux<String> {
    // Streaming implementation
}

🔍 Debugging Tips

Enable Debug Logging

# application.properties
logging.level.com.embabel=DEBUG
logging.level.com.embabel.agent.spi=DEBUG
logging.level.com.embabel.agent.config=DEBUG

# For Spring AI integration
logging.level.org.springframework.ai=DEBUG

Verbose Shell Commands

# In Embabel shell
execute "your request" -p -r -v
# -p = show prompts
# -r = show responses  
# -v = verbose logging

Check Auto-Configuration

# See what auto-configuration happened
java -jar app.jar --debug

# Check beans
curl http://localhost:8080/actuator/beans | grep -i embabel

Monitor LLM Calls

@Component
class LlmCallListener {

    private val logger = org.slf4j.LoggerFactory.getLogger(LlmCallListener::class.java)

    @EventListener
    fun onLlmCall(event: LlmInvocationEvent) {
        logger.info("LLM Call: model=${event.model}, prompt=${event.prompt.length} chars")
    }
}

🆘 Getting Help

Before Asking for Help

  1. Check this troubleshooting guide first
  2. Enable debug logging and include relevant logs
  3. Try with a minimal example to isolate the issue
  4. Verify your configuration against working examples

Where to Get Help

Creating Good Issue Reports

Include:

  • Embabel version
  • Java version
  • Complete error logs (with debug enabled)
  • Minimal reproducible example
  • Configuration files (remove API keys!)
  • Steps you've already tried

Template:

## Environment
- Embabel Version: 0.1.3-SNAPSHOT
- Java Version: 21
- OS: Ubuntu 22.04

## Issue Description
Brief description of the problem...

## Steps to Reproduce
1. Configure application with...
2. Run agent with...
3. Error occurs...

## Expected Behavior
What should happen...

## Actual Behavior  
What actually happens...

## Logs

[Include relevant logs with debug enabled]


## Configuration
```yaml
# Your application.yml (remove API keys)
```
This troubleshooting guide addresses the most common issues reported in the project's GitHub issues, with practical solutions and clear debugging steps.