## OpenTelemetry Instrumentation in Mellea

Mellea provides built-in OpenTelemetry instrumentation with two independent trace scopes that can be enabled separately. The instrumentation follows the [OpenTelemetry Gen-AI Semantic Conventions](https://opentelemetry.io/docs/specs/semconv/gen-ai/) for standardized observability across LLM applications.

1. **Application Trace** (`mellea.application`) - Tracks user-facing operations
2. **Backend Trace** (`mellea.backend`) - Tracks LLM backend interactions with Gen-AI semantic conventions

**Note**: OpenTelemetry is an optional dependency. If not installed, telemetry features are automatically disabled with no impact on functionality.

### Installation

To use telemetry features, install Mellea with OpenTelemetry support:

```bash
pip install mellea[telemetry]
# or
uv pip install mellea[telemetry]
```

Without the `[telemetry]` extra, Mellea works normally but telemetry features are disabled.

### Configuration

Telemetry is configured via environment variables:

| Variable | Description | Default |
|----------|-------------|---------|
| `MELLEA_TRACE_APPLICATION` | Enable application-level tracing | `false` |
| `MELLEA_TRACE_BACKEND` | Enable backend-level tracing | `false` |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP endpoint for trace export | None |
| `OTEL_SERVICE_NAME` | Service name for traces | `mellea` |
| `MELLEA_TRACE_CONSOLE` | Print traces to console (debugging) | `false` |

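
In a notebook or one-off script, the same variables can also be set from Python. The sketch below assumes Mellea reads these values when telemetry is initialized (for example at import or session start), so they are set before `mellea` is imported:

```python
import os

# Configure telemetry before importing mellea so the values are visible
# when Mellea initializes its tracer providers (assumed initialization order).
os.environ["MELLEA_TRACE_APPLICATION"] = "true"
os.environ["MELLEA_TRACE_BACKEND"] = "true"
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4317"
os.environ["OTEL_SERVICE_NAME"] = "my-mellea-app"

import mellea  # noqa: E402  (import deliberately placed after configuration)
```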

### Application Trace Scope

The application tracer (`mellea.application`) instruments:

- **Session lifecycle**: `start_session()`, session context manager entry/exit
- **`@generative` functions**: Execution of functions decorated with `@generative`
- **`mfuncs.aact()`**: Action execution with requirements and sampling strategies
- **Sampling strategies**: Rejection sampling, budget forcing, etc.
- **Requirement validation**: Validation of requirements and constraints

**Span attributes include:**
- `backend`: Backend class name
- `model_id`: Model identifier
- `context_type`: Context class name
- `action_type`: Component type being executed
- `has_requirements`: Whether requirements are specified
- `has_strategy`: Whether a sampling strategy is used
- `strategy_type`: Sampling strategy class name
- `num_generate_logs`: Number of generation attempts
- `sampling_success`: Whether sampling succeeded
- `response`: Model response (truncated to 500 chars)
- `response_length`: Full length of model response

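
For orientation, a minimal traced flow might look like the sketch below. `start_session()` and `@generative` come from this document; the decorator import path and the call signature (passing the session as the first argument) are assumptions based on Mellea's usual usage pattern and may differ in your version.

```python
# Run with: MELLEA_TRACE_APPLICATION=true python app.py
from mellea import generative, start_session


@generative
def classify_sentiment(text: str) -> str:
    """Classify the sentiment of the text as 'positive' or 'negative'."""


# Entering the session context opens a session-lifecycle span; the
# @generative call below is expected to appear as a nested application span
# with attributes such as action_type and sampling_success.
with start_session() as m:
    sentiment = classify_sentiment(m, text="I love this library!")
    print(sentiment)
```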

### Backend Trace Scope

The backend tracer (`mellea.backend`) instruments LLM interactions following the [OpenTelemetry Gen-AI Semantic Conventions](https://opentelemetry.io/docs/specs/semconv/gen-ai/):

- **`Backend.generate_from_context()`**: Context-based generation (chat operations)
- **`Backend.generate_from_raw()`**: Raw generation without context (text completions)
- **Backend-specific implementations**: Ollama, OpenAI, HuggingFace, Watsonx, LiteLLM

**Gen-AI Semantic Convention Attributes:**
- `gen_ai.system`: LLM system name (e.g., `openai`, `ollama`, `huggingface`)
- `gen_ai.request.model`: Model identifier used for the request
- `gen_ai.response.model`: Actual model used in the response (may differ from request)
- `gen_ai.operation.name`: Operation type (`chat` or `text_completion`)
- `gen_ai.usage.input_tokens`: Number of input tokens consumed
- `gen_ai.usage.output_tokens`: Number of output tokens generated
- `gen_ai.usage.total_tokens`: Total tokens consumed
- `gen_ai.response.id`: Response ID from the LLM provider
- `gen_ai.response.finish_reasons`: List of finish reasons (e.g., `["stop"]`, `["length"]`)

**Mellea-Specific Attributes:**
- `mellea.backend`: Backend class name (e.g., `OpenAIBackend`)
- `mellea.action_type`: Component type being executed
- `mellea.context_size`: Number of items in context
- `mellea.has_format`: Whether structured output format is specified
- `mellea.format_type`: Response format class name
- `mellea.tool_calls_enabled`: Whether tool calling is enabled
- `mellea.num_actions`: Number of actions in batch (for `generate_from_raw`)

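
As a rough illustration, a single backend `chat` span might carry an attribute set like the following. The keys come from the lists above; the values are hypothetical examples, not captured output:

```python
# Hypothetical attribute set for one backend `chat` span (illustrative values).
example_chat_span_attributes = {
    "gen_ai.system": "openai",
    "gen_ai.operation.name": "chat",
    "gen_ai.request.model": "gpt-4",
    "gen_ai.response.model": "gpt-4",
    "gen_ai.usage.input_tokens": 200,
    "gen_ai.usage.output_tokens": 75,
    "gen_ai.usage.total_tokens": 275,
    "gen_ai.response.finish_reasons": ["stop"],
    "mellea.backend": "OpenAIBackend",
    "mellea.action_type": "Instruction",  # illustrative component type
    "mellea.context_size": 3,             # illustrative context length
    "mellea.has_format": False,
    "mellea.tool_calls_enabled": False,
}
```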

### Usage Examples

#### Enable Application Tracing Only

```bash
export MELLEA_TRACE_APPLICATION=true
export MELLEA_TRACE_BACKEND=false
python docs/examples/instruct_validate_repair/101_email.py
```

This traces user-facing operations like `@generative` function calls, session lifecycle, and sampling strategies, but not the underlying LLM API calls.

#### Enable Backend Tracing Only

```bash
export MELLEA_TRACE_APPLICATION=false
export MELLEA_TRACE_BACKEND=true
python docs/examples/instruct_validate_repair/101_email.py
```

This traces only the LLM backend interactions, showing model calls, token usage, and API latency.

#### Enable Both Traces

```bash
export MELLEA_TRACE_APPLICATION=true
export MELLEA_TRACE_BACKEND=true
python docs/examples/instruct_validate_repair/101_email.py
```

This provides complete observability across both application logic and backend interactions.

#### Export to Jaeger

```bash
# Start Jaeger (example using Docker)
docker run -d --name jaeger \
  -p 4317:4317 \
  -p 16686:16686 \
  jaegertracing/all-in-one:latest

# Configure Mellea to export traces
export MELLEA_TRACE_APPLICATION=true
export MELLEA_TRACE_BACKEND=true
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_SERVICE_NAME=my-mellea-app

python docs/examples/instruct_validate_repair/101_email.py

# View traces at http://localhost:16686
```

#### Console Output for Debugging

```bash
export MELLEA_TRACE_APPLICATION=true
export MELLEA_TRACE_CONSOLE=true
python docs/examples/instruct_validate_repair/101_email.py
```

This prints trace spans to the console, useful for local debugging without setting up a trace backend.

### Programmatic Access

You can check if tracing is enabled in your code:

```python
from mellea.telemetry import (
    is_application_tracing_enabled,
    is_backend_tracing_enabled,
)

if is_application_tracing_enabled():
    print("Application tracing is enabled")

if is_backend_tracing_enabled():
    print("Backend tracing is enabled")
```
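
Because the instrumentation is standard OpenTelemetry, you can also wrap Mellea calls in your own spans using the regular OpenTelemetry API. Whether Mellea's spans nest under yours depends on Mellea sharing the active tracer provider and context, which is an assumption here rather than a guarantee:

```python
from opentelemetry import trace

from mellea.telemetry import is_application_tracing_enabled

tracer = trace.get_tracer("my-app")

# Open a custom parent span around a unit of work. If Mellea uses the same
# OpenTelemetry context, its application and backend spans should appear as
# children of "draft-email" in your tracing backend (assumption).
with tracer.start_as_current_span("draft-email"):
    if is_application_tracing_enabled():
        print("Application tracing is on")
    # ... call into Mellea here ...
```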

### Performance Considerations

- **Negligible overhead when disabled**: When tracing is disabled (the default), the instrumentation adds only minimal performance impact
- **Async-friendly**: Tracing works seamlessly with async operations
- **Batched export**: Traces are exported in batches to minimize network overhead
- **Separate scopes**: Enable only the tracing you need to reduce overhead

### Integration with Observability Tools

Mellea's OpenTelemetry instrumentation works with any OTLP-compatible backend:

- **Jaeger**: Distributed tracing
- **Zipkin**: Distributed tracing
- **Grafana Tempo**: Distributed tracing
- **Honeycomb**: Observability platform
- **Datadog**: APM and observability
- **New Relic**: APM and observability
- **AWS X-Ray**: Distributed tracing (via OTLP)
- **Google Cloud Trace**: Distributed tracing (via OTLP)

### Example Trace Hierarchy

When both traces are enabled, you'll see a hierarchy like:

```
session_context (application)
├── aact (application)
│   ├── chat (backend) [gen_ai.system=ollama, gen_ai.request.model=llama3.2]
│   │   └── [gen_ai.usage.input_tokens=150, gen_ai.usage.output_tokens=50]
│   └── requirement_validation (application)
├── aact (application)
│   └── chat (backend) [gen_ai.system=openai, gen_ai.request.model=gpt-4]
│       └── [gen_ai.usage.input_tokens=200, gen_ai.usage.output_tokens=75]
```

The Gen-AI semantic conventions make it easy to:
- Track token usage across different LLM providers
- Compare performance between models
- Monitor costs based on token consumption
- Identify which operations consume the most tokens

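
For example, a small custom `SpanProcessor` can total token usage locally from the `gen_ai.usage.*` attributes. This is a sketch built on the standard OpenTelemetry SDK; it only observes Mellea's spans if they flow through a tracer provider you control (such as the globally registered one), which is an assumption here:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import ReadableSpan, SpanProcessor, TracerProvider


class TokenUsageAggregator(SpanProcessor):
    """Sums Gen-AI token counts from every span that ends."""

    def __init__(self) -> None:
        self.total_tokens = 0

    def on_end(self, span: ReadableSpan) -> None:
        # Only backend spans carry gen_ai.usage.* attributes; others add 0.
        self.total_tokens += (span.attributes or {}).get("gen_ai.usage.total_tokens", 0)


aggregator = TokenUsageAggregator()
provider = TracerProvider()
provider.add_span_processor(aggregator)
trace.set_tracer_provider(provider)  # must run before any spans are created

# ... run your Mellea workload here ...
print(f"Total tokens observed: {aggregator.total_tokens}")
```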

### Troubleshooting

**Traces not appearing:**
1. Verify environment variables are set correctly
2. Check that the OTLP endpoint is reachable
3. Enable console output to verify traces are being created
4. Check firewall/network settings

**High overhead:**
1. Disable application tracing if you only need backend metrics
2. Reduce sampling rate (future feature)
3. Use a local OTLP collector to batch exports

**Missing spans:**
1. Ensure you're using the `with start_session()` context manager
2. Check that async operations are properly awaited
3. Verify the backend implementation has instrumentation

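
The first two points correspond to patterns like the following minimal sketch (the context manager is assumed to yield the session object, and the `instruct()` method name is assumed for illustration):

```python
from mellea import start_session

# Run work inside the session context manager so the session entry/exit
# span exists and nested operations have a parent span.
with start_session() as m:
    result = m.instruct("Draft a short status update.")  # method name assumed

# In async code, make sure coroutines are awaited: a coroutine that is
# created but never awaited never runs, and therefore never emits a span.
```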

### Future Enhancements

Planned improvements to telemetry:

- Sampling rate configuration
- Custom span attributes via decorators
- Metrics export (token counts, latency percentiles)
- Trace context propagation for distributed systems
- Integration with LangSmith and other LLM observability tools