# OpenTelemetry LangChain Instrumentation

This package provides OpenTelemetry instrumentation for LangChain applications, allowing you to automatically trace and monitor your LangChain workflows. For details on the usage and installation of LoongSuite and Jaeger, please refer to the [LoongSuite Documentation](https://github.com/alibaba/loongsuite-python-agent/blob/main/README.md).

## Installation

```bash
git clone https://github.com/alibaba/loongsuite-python-agent.git
cd loongsuite-python-agent
pip install ./instrumentation-genai/opentelemetry-instrumentation-langchain
```

## Run

### Build the Example

Follow the official [LangChain Documentation](https://python.langchain.com/docs/introduction/) to create a sample file named `demo.py`. The example below uses the Tongyi model; see the [Tongyi integration guide](https://python.langchain.com/docs/integrations/llms/tongyi/) for setup details.

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_community.llms.tongyi import Tongyi

# Tongyi reads the DashScope API key from the DASHSCOPE_API_KEY
# environment variable.
chatLLM = Tongyi(model="qwen-turbo")
messages = [
    SystemMessage(
        content="You are a helpful assistant that translates English to French."
    ),
    HumanMessage(
        content="Translate this sentence from English to French. I love programming."
    ),
]
res = chatLLM.invoke(messages)
print(res)
```

## Quick Start

You can automatically instrument your LangChain application using the `opentelemetry-instrument` command:

```bash
export OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
opentelemetry-instrument \
    --traces_exporter console \
    --metrics_exporter console \
    python your_langchain_app.py
```

If everything is working correctly, you should see span output similar to the following:

```json
{
    "name": "Tongyi",
    "context": {
        "trace_id": "0x61d2c954558c3988f42770a946ea877e",
        "span_id": "0x7bb229d6f75e52ad",
        "trace_state": "[]"
    },
    "kind": "SpanKind.INTERNAL",
    "parent_id": null,
    "start_time": "2025-08-14T07:30:38.783413Z",
    "end_time": "2025-08-14T07:30:39.321573Z",
    "status": {
        "status_code": "OK"
    },
    "attributes": {
        "gen_ai.span.kind": "llm",
        "input.value": "{\"prompts\": [\"System: You are a helpful assistant that translates English to French.\\nHuman: Translate this sentence from English to French. I love programming.\"]}",
        "input.mime_type": "application/json",
        "output.value": "{\"generations\": [[{\"text\": \"J'adore la programmation.\", \"generation_info\": {\"finish_reason\": \"stop\", \"request_id\": \"463d2249-6424-9eef-8665-6ef88d4fcc7a\", \"token_usage\": {\"input_tokens\": 39, \"output_tokens\": 8, \"total_tokens\": 47, \"prompt_tokens_details\": {\"cached_tokens\": 0}}}, \"type\": \"Generation\"}]], \"llm_output\": {\"model_name\": \"qwen-turbo\"}, \"run\": null, \"type\": \"LLMResult\"}",
        "output.mime_type": "application/json",
        "gen_ai.prompt.0.content": "System: You are a helpful assistant that translates English to French.\nHuman: Translate this sentence from English to French. I love programming.",
        "gen_ai.response.finish_reasons": "stop",
        "gen_ai.usage.prompt_tokens": 39,
        "gen_ai.usage.completion_tokens": 8,
        "gen_ai.usage.total_tokens": 47,
        "gen_ai.completion": [
            "J'adore la programmation."
        ],
        "gen_ai.response.model": "qwen-turbo",
        "gen_ai.request.model": "qwen-turbo",
        "metadata": "{\"ls_provider\": \"tongyi\", \"ls_model_type\": \"llm\", \"ls_model_name\": \"qwen-turbo\"}"
    },
    "events": [],
    "links": [],
    "resource": {
        "attributes": {
            "telemetry.sdk.language": "python",
            "telemetry.sdk.name": "opentelemetry",
            "telemetry.sdk.version": "1.35.0",
            "service.name": "langchain_loon",
            "telemetry.auto.version": "0.56b0"
        },
        "schema_url": ""
    }
}
```
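
When inspecting console output like the span above, it can be handy to pull out the `gen_ai.usage.*` attributes programmatically. The snippet below is a standalone illustration (not part of this package) that parses a span dict shaped like the example:

```python
import json

# A fragment of the console span shown above (attribute names follow
# the gen_ai.* convention emitted by this instrumentation).
span_json = """
{
  "name": "Tongyi",
  "attributes": {
    "gen_ai.usage.prompt_tokens": 39,
    "gen_ai.usage.completion_tokens": 8,
    "gen_ai.usage.total_tokens": 47,
    "gen_ai.response.model": "qwen-turbo"
  }
}
"""

def token_usage(span):
    """Collect gen_ai.usage.* attributes from an exported span dict."""
    attrs = span.get("attributes", {})
    return {
        key.rsplit(".", 1)[-1]: value
        for key, value in attrs.items()
        if key.startswith("gen_ai.usage.")
    }

span = json.loads(span_json)
print(token_usage(span))
# {'prompt_tokens': 39, 'completion_tokens': 8, 'total_tokens': 47}
```

The same pattern works on spans received by an OTLP backend, as long as they are serialized to the JSON shape shown above.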

## Forwarding OTLP Data to the Backend

```shell
export OTEL_SERVICE_NAME=<service_name>
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=<trace_endpoint>
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT=<metrics_endpoint>

export OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true

opentelemetry-instrument <your_run_command>
```
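
Per the OpenTelemetry specification, these variables resolve with a defined precedence: a signal-specific endpoint such as `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` is used as-is, while the generic `OTEL_EXPORTER_OTLP_ENDPOINT` has the signal path (`/v1/traces`, `/v1/metrics`) appended for `http/protobuf`. The sketch below is illustrative only (not this package's code) and mirrors that rule:

```python
import os
from typing import Optional

def resolve_otlp_endpoint(signal: str) -> Optional[str]:
    """Mirror the OTLP http/protobuf endpoint precedence:
    a signal-specific variable wins; otherwise the generic endpoint
    is used with "/v1/<signal>" appended."""
    specific = os.environ.get(f"OTEL_EXPORTER_OTLP_{signal.upper()}_ENDPOINT")
    if specific:
        return specific
    generic = os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT")
    if generic:
        return generic.rstrip("/") + f"/v1/{signal}"
    return None

# Generic endpoint only: the signal path is appended.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"
print(resolve_otlp_endpoint("metrics"))  # http://localhost:4318/v1/metrics

# A signal-specific endpoint takes precedence and is used verbatim.
os.environ["OTEL_EXPORTER_OTLP_TRACES_ENDPOINT"] = "http://collector:4318/v1/traces"
print(resolve_otlp_endpoint("traces"))   # http://collector:4318/v1/traces
```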

## Requirements

- Python >= 3.8
- LangChain >= 0.1.0
- OpenTelemetry >= 1.20.0

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## Acknowledgments

This instrumentation was inspired by and builds upon the excellent work done by the [OpenInference](https://github.com/Arize-ai/openinference) project. We acknowledge their contributions to the OpenTelemetry instrumentation ecosystem for AI/ML frameworks.

## License

This project is licensed under the Apache License 2.0.