98 changes: 98 additions & 0 deletions agentops/instrumentation/CONTRIBUTING.md
@@ -0,0 +1,98 @@
# Contributing to AgentOps Instrumentation

## Adding a New Instrumentor

### 1. Determine the Category

- **Providers**: LLM API providers (OpenAI, Anthropic, etc.)
- **Frameworks**: Agent frameworks (CrewAI, AutoGen, etc.)
- **Utilities**: Infrastructure/utility modules (threading, logging, etc.)

### 2. Create Module Structure

```
category_name/
└── your_module/
├── __init__.py
├── instrumentor.py # Main instrumentor class
├── attributes/ # Attribute extraction functions
│ ├── __init__.py
│ └── common.py
└── stream_wrapper.py # If streaming is supported
```

### 3. Implement the Instrumentor

```python
from agentops.instrumentation.common.base_instrumentor import AgentOpsBaseInstrumentor
from agentops.instrumentation.common.wrappers import WrapConfig

class YourInstrumentor(AgentOpsBaseInstrumentor):
def instrumentation_dependencies(self):
return ["your-package >= 1.0.0"]

def _init_wrapped_methods(self):
return [
WrapConfig(
trace_name="your_module.operation",
package="your_package.module",
class_name="YourClass",
method_name="method",
handler=your_attribute_handler,
),
]
```

### 4. Implement Attribute Handlers

```python
# In attributes/common.py
def your_attribute_handler(args, kwargs, return_value=None):
attributes = {}
# Extract relevant attributes
return attributes
```

### 5. Add Streaming Support (if applicable)

```python
from agentops.instrumentation.common.streaming import StreamingResponseWrapper

class YourStreamWrapper(StreamingResponseWrapper):
def _process_chunk(self, chunk):
# Process streaming chunks
pass
```

### 6. Write Tests

Add tests in `tests/instrumentation/test_your_module.py`.
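
A possible starting point is sketched below; the import paths use the hypothetical `YourInstrumentor` and `your_attribute_handler` names from steps 3 and 4, so adjust them to your actual module:

```python
# tests/instrumentation/test_your_module.py
# Sketch only: the import paths are placeholders for your module.
from agentops.instrumentation.category_name.your_module.instrumentor import YourInstrumentor
from agentops.instrumentation.category_name.your_module.attributes.common import (
    your_attribute_handler,
)


def test_attribute_handler_returns_dict():
    # Handlers should always return a dict, even for minimal input.
    attributes = your_attribute_handler(args=(), kwargs={"param": "value"})
    assert isinstance(attributes, dict)


def test_instrumentation_dependencies_declare_target_package():
    # The dependency spec from step 3 should name the instrumented package.
    deps = YourInstrumentor().instrumentation_dependencies()
    assert any("your-package" in dep for dep in deps)
```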

### 7. Update Documentation

- Add your module to the main README.md
- Create a README.md in your module directory
- Document any special features or requirements

## Code Standards

- Use type hints
- Follow PEP 8
- Add docstrings to all public methods
- Handle errors gracefully
- Log at appropriate levels
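
Taken together, these standards applied to a small attribute handler might look like the following sketch (the function and attribute names are illustrative, not part of the AgentOps API):

```python
import logging
from typing import Any, Dict, Optional, Tuple

logger = logging.getLogger(__name__)


def chat_attribute_handler(
    args: Tuple[Any, ...],
    kwargs: Dict[str, Any],
    return_value: Optional[Any] = None,
) -> Dict[str, Any]:
    """Extract span attributes from a chat call without ever raising into user code."""
    attributes: Dict[str, Any] = {}
    try:
        if "model" in kwargs:
            attributes["example.request.model"] = kwargs["model"]
    except Exception:
        # Handle errors gracefully: instrumentation must never break the instrumented call.
        logger.debug("Failed to extract chat attributes", exc_info=True)
    return attributes
```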

## Testing

Run tests before submitting:
```bash
pytest tests/instrumentation/test_your_module.py
```

## Submitting

1. Create a feature branch
2. Make your changes
3. Add tests
4. Update documentation
5. Submit a pull request
186 changes: 171 additions & 15 deletions agentops/instrumentation/README.md
@@ -1,32 +1,188 @@
# AgentOps Instrumentation

This package provides OpenTelemetry instrumentation for various LLM providers and related services.
This directory contains OpenTelemetry-based instrumentation for various LLM providers, agent frameworks, and utilities.

## Available Instrumentors
## Directory Structure

- OpenAI (`v0.27.0+` and `v1.0.0+`)
```
instrumentation/
├── common/ # Shared modules for all instrumentors
│ ├── base_instrumentor.py # Base class with common functionality
│ ├── config.py # Shared configuration
│ ├── streaming.py # Base streaming wrapper
│ ├── metrics.py # Metrics management
│ ├── wrappers.py # Method wrapping utilities
│ └── attributes.py # Common attribute extractors
├── providers/ # LLM Provider Instrumentors
│ ├── openai/ # OpenAI API
│ ├── anthropic/ # Anthropic Claude
│ ├── google_genai/ # Google Generative AI
│ └── ibm_watsonx_ai/ # IBM watsonx.ai
├── frameworks/ # Agent Framework Instrumentors
│ ├── ag2/ # AG2 (AutoGen)
│ ├── agno/ # Agno
│ ├── crewai/ # CrewAI
│ ├── openai_agents/ # OpenAI Agents SDK
│ └── smolagents/ # SmoLAgents
└── utilities/ # Utility Instrumentors
└── concurrent_futures/ # Thread pool context propagation
```

## Quick Start

### Using an Instrumentor

```python
from agentops import AgentOps

# Initialize AgentOps with automatic instrumentation
agentops = AgentOps(api_key="your-api-key")

# Or manually instrument specific libraries
from agentops.instrumentation.providers.openai import OpenAIInstrumentor

instrumentor = OpenAIInstrumentor()
instrumentor.instrument()
```

### Common Module Usage

All instrumentors inherit from `AgentOpsBaseInstrumentor`, which provides:

## Usage
- Automatic tracer and meter initialization
- Standard metric creation
- Method wrapping/unwrapping infrastructure
- Error handling and logging

### OpenAI Instrumentation
Example implementation:

```python
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
from agentops.instrumentation.common.base_instrumentor import AgentOpsBaseInstrumentor
from agentops.instrumentation.common.wrappers import WrapConfig

class MyInstrumentor(AgentOpsBaseInstrumentor):
def instrumentation_dependencies(self):
return ["my-package >= 1.0.0"]

def _init_wrapped_methods(self):
return [
WrapConfig(
trace_name="my_service.operation",
package="my_package.module",
class_name="MyClass",
method_name="my_method",
handler=self._get_attributes,
),
]

def _get_attributes(self, args, kwargs, return_value=None):
"""Extract attributes from method arguments and return value."""
return {
"my.attribute": kwargs.get("param", "default"),
# Add more attributes as needed
}
```

### Streaming Support

For providers with streaming responses, use the common `StreamingResponseWrapper`:

```python
from agentops.instrumentation.common.streaming import StreamingResponseWrapper

class MyStreamWrapper(StreamingResponseWrapper):
def _process_chunk(self, chunk):
"""Process individual streaming chunks."""
# Extract content from chunk
content = chunk.get("content", "")

# Accumulate for span attributes
self._accumulated_content.append(content)

# Return processed chunk
return chunk
```

### Metrics

Common metrics are automatically initialized:

from agentops.telemetry import get_tracer_provider()
- `llm.operation.duration` - Operation duration histogram
- `llm.token.usage` - Token usage histogram
- `llm.completions.exceptions` - Exception counter

# Initialize and instrument
instrumentor = OpenAIInstrumentor(
enrich_assistant=True, # Include assistant messages in spans
enrich_token_usage=True, # Include token usage in spans
enable_trace_context_propagation=True, # Enable trace context propagation
)
instrumentor.instrument(tracer_provider=tracer_provider) # <-- Uses the global AgentOps TracerProvider
Access metrics through the `MetricsManager`:

```python
from agentops.instrumentation.common.metrics import MetricsManager

# In your instrumentor
metrics = MetricsManager.init_metrics(meter, prefix="my_provider")
```
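
The exact objects returned by `init_metrics` are not documented here; as a rough sketch, recording the equivalent histograms directly through the OpenTelemetry metrics API (reusing the metric names listed above, with illustrative attribute keys) would look like:

```python
from opentelemetry import metrics

meter = metrics.get_meter("agentops.instrumentation.example")

# Histograms named after the metrics listed above.
duration_histogram = meter.create_histogram(
    "llm.operation.duration", unit="s", description="Operation duration"
)
token_histogram = meter.create_histogram(
    "llm.token.usage", unit="token", description="Token usage"
)

# Record one completed call.
duration_histogram.record(1.42, attributes={"gen_ai.system": "example"})
token_histogram.record(256, attributes={"gen_ai.token.type": "output"})
```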

## Module Categories

### Providers

LLM API provider instrumentors capture:
- Model parameters (temperature, max_tokens, etc.)
- Request/response content
- Token usage
- Streaming responses
- Tool/function calls
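
For illustration, a provider handler for a chat-completion call might return attributes along these lines (the keys below follow the upstream OpenTelemetry GenAI conventions; the AgentOps-specific keys are defined in `agentops/semconv`):

```python
def chat_completion_attributes(args, kwargs, return_value=None):
    """Illustrative extraction of request parameters and token usage."""
    attributes = {
        "gen_ai.system": "example-provider",
        "gen_ai.request.model": kwargs.get("model"),
        "gen_ai.request.temperature": kwargs.get("temperature"),
        "gen_ai.request.max_tokens": kwargs.get("max_tokens"),
    }
    usage = getattr(return_value, "usage", None)
    if usage is not None:
        attributes["gen_ai.usage.input_tokens"] = getattr(usage, "prompt_tokens", None)
        attributes["gen_ai.usage.output_tokens"] = getattr(usage, "completion_tokens", None)
    # Drop parameters that were not supplied on this particular call.
    return {key: value for key, value in attributes.items() if value is not None}
```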

### Frameworks

Agent framework instrumentors capture:
- Agent initialization and configuration
- Agent-to-agent communication
- Tool usage
- Workflow execution
- Team/crew coordination

### Utilities

Infrastructure instrumentors provide:
- Context propagation across threads
- Performance monitoring
- System resource tracking

## Best Practices

1. **Use the Common Base Class**: Inherit from `AgentOpsBaseInstrumentor` for consistency
2. **Separate Attribute Logic**: Keep attribute extraction in separate functions or modules
3. **Handle Errors Gracefully**: Always fall back to the original behavior on errors (see the sketch after this list)
4. **Log Appropriately**: Use debug logging for instrumentation details
5. **Test Thoroughly**: Include unit tests for all wrapped methods
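
Practice 3 in particular benefits from an example. Below is a minimal sketch of a wrapper that never lets instrumentation failures reach the caller; the real wrapping logic lives in `common/wrappers.py`, and this is only a simplified stand-in:

```python
import logging

from opentelemetry import trace

logger = logging.getLogger(__name__)
tracer = trace.get_tracer("agentops.instrumentation.example")


def wrap_method(wrapped, handler):
    """Return a wrapper that records a span but never blocks the original call."""

    def wrapper(*args, **kwargs):
        try:
            span = tracer.start_span("example.operation")
        except Exception:
            # If instrumentation itself fails, call the method completely unwrapped.
            logger.debug("Could not start span; falling back to original call", exc_info=True)
            return wrapped(*args, **kwargs)

        with trace.use_span(span, end_on_exit=True):
            result = wrapped(*args, **kwargs)
            try:
                span.set_attributes(handler(args, kwargs, return_value=result))
            except Exception:
                # Attribute extraction problems are logged, never raised to the caller.
                logger.debug("Attribute handler failed", exc_info=True)
            return result

    return wrapper
```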

## Adding New Instrumentors

See [CONTRIBUTING.md](./CONTRIBUTING.md) for detailed guidelines on adding new instrumentors.

## Semantic Conventions

All instrumentors follow OpenTelemetry semantic conventions. See [agentops/semconv](../semconv/README.md) for available attributes.

## Troubleshooting

### Debug Logging

Enable debug logging to see instrumentation details:

```python
import logging
logging.getLogger("agentops").setLevel(logging.DEBUG)
```

> To add custom instrumentation, please do so in the `third_party/opentelemetry` directory.
### Common Issues

1. **Import Errors**: Ensure the target library is installed
2. **Method Not Found**: Check if the method signature has changed
3. **Context Loss**: For async/threading, ensure proper context propagation
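
For issue 3, the `concurrent_futures` utility instrumentor handles thread pools automatically; when propagating context by hand, a minimal sketch using the standard OpenTelemetry context API looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

from opentelemetry import context, trace

tracer = trace.get_tracer("agentops.instrumentation.example")


def run_with_context(ctx, fn, *args, **kwargs):
    """Attach the captured context inside the worker thread, then detach it."""
    token = context.attach(ctx)
    try:
        return fn(*args, **kwargs)
    finally:
        context.detach(token)


def worker():
    # Spans started here now parent correctly under "submit-side-span".
    with tracer.start_as_current_span("worker-span"):
        return "done"


with tracer.start_as_current_span("submit-side-span"):
    ctx = context.get_current()  # capture context on the submitting thread
    with ThreadPoolExecutor() as pool:
        result = pool.submit(run_with_context, ctx, worker).result()
```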

## License

See individual module directories for specific license information.
34 changes: 19 additions & 15 deletions agentops/instrumentation/__init__.py
@@ -47,38 +47,32 @@ class InstrumentorConfig(TypedDict):
# Configuration for supported LLM providers
PROVIDERS: dict[str, InstrumentorConfig] = {
"openai": {
"module_name": "agentops.instrumentation.openai",
"module_name": "agentops.instrumentation.providers.openai",
"class_name": "OpenAIInstrumentor",
"min_version": "1.0.0",
},
"anthropic": {
"module_name": "agentops.instrumentation.anthropic",
"module_name": "agentops.instrumentation.providers.anthropic",
"class_name": "AnthropicInstrumentor",
"min_version": "0.32.0",
},
"ibm_watsonx_ai": {
"module_name": "agentops.instrumentation.ibm_watsonx_ai",
"module_name": "agentops.instrumentation.providers.ibm_watsonx_ai",
"class_name": "IBMWatsonXInstrumentor",
"min_version": "0.1.0",
},
"google.genai": {
"module_name": "agentops.instrumentation.google_genai",
"module_name": "agentops.instrumentation.providers.google_genai",
"class_name": "GoogleGenAIInstrumentor",
"min_version": "0.1.0",
"package_name": "google-genai", # Actual pip package name
},
"mem0": {
"module_name": "agentops.instrumentation.mem0",
"class_name": "Mem0Instrumentor",
"min_version": "0.1.0",
"package_name": "mem0ai",
},
}

# Configuration for utility instrumentors
UTILITY_INSTRUMENTORS: dict[str, InstrumentorConfig] = {
"concurrent.futures": {
"module_name": "agentops.instrumentation.concurrent_futures",
"module_name": "agentops.instrumentation.utilities.concurrent_futures",
"class_name": "ConcurrentFuturesInstrumentor",
"min_version": "3.7.0", # Python 3.7+ (concurrent.futures is stdlib)
"package_name": "python", # Special case for stdlib modules
@@ -88,21 +82,31 @@ class InstrumentorConfig(TypedDict):
# Configuration for supported agentic libraries
AGENTIC_LIBRARIES: dict[str, InstrumentorConfig] = {
"crewai": {
"module_name": "agentops.instrumentation.crewai",
"module_name": "agentops.instrumentation.frameworks.crewai",
"class_name": "CrewAIInstrumentor",
"min_version": "0.56.0",
},
"autogen": {"module_name": "agentops.instrumentation.ag2", "class_name": "AG2Instrumentor", "min_version": "0.1.0"},
"autogen": {
"module_name": "agentops.instrumentation.frameworks.ag2",
"class_name": "AG2Instrumentor",
"min_version": "0.1.0",
},
"agents": {
"module_name": "agentops.instrumentation.openai_agents",
"module_name": "agentops.instrumentation.frameworks.openai_agents",
"class_name": "OpenAIAgentsInstrumentor",
"min_version": "0.0.1",
},
"google.adk": {
"module_name": "agentops.instrumentation.google_adk",
"module_name": "agentops.instrumentation.frameworks.google_adk",
"class_name": "GoogleADKInstrumentor",
"min_version": "0.1.0",
},
"mem0": {
"module_name": "agentops.instrumentation.frameworks.mem0",
"class_name": "Mem0Instrumentor",
"min_version": "0.1.0",
"package_name": "mem0ai",
},
}

# Combine all target packages for monitoring