
Commit 20b8c07

nagkumar91 (Nagkumar Arkalgud)
authored
Task/add tracing samples (#279)
* Update evaluate-models-target.ipynb
* Update sample to include instructions for getting data from a search index and use model_config for the latest version of the simulator
* Remove azure_ai_project
* Fix to param name
* Remove redundant file
* Change the azure_ai_project
* Samples for tracing
* Rename tracer and add env samples
* Remove scope and endpoint

---------

Co-authored-by: Nagkumar Arkalgud <[email protected]>
Co-authored-by: Nagkumar Arkalgud <[email protected]>
Co-authored-by: Nagkumar Arkalgud <[email protected]>
1 parent 62050be commit 20b8c07

File tree

18 files changed (+976, -5 lines)

scenarios/agent-tracing/README.md

Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
# Agent Tracing

Reasoning about agent executions is critical for troubleshooting and debugging. Complex agents can involve many nested steps, variable execution paths, and long inputs/outputs, which makes it hard to pinpoint issues. Tracing provides a clear, chronological view of the inputs and outputs for each primitive involved in a run.

This scenario sets up a structure for agent tracing using OpenTelemetry. It supports:

- Local tracing via console or any OTLP-compatible backend (e.g., Aspire Dashboard); see the sketch after this list.
- Cloud tracing via Azure Monitor when Application Insights is enabled for your Azure AI Studio project.
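For orientation, here is a minimal sketch of the local setup with the OpenTelemetry SDK. It is not a file in this commit; the package names match the sample `requirements.txt` files, and the endpoint value is only an example.

```
# Minimal local tracing sketch (illustrative, not part of this commit).
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()

# Print spans to stdout for quick local debugging.
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))

# Also export to an OTLP backend (e.g., Aspire Dashboard) when an endpoint is configured.
endpoint = os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT")  # e.g. http://localhost:4317
if endpoint:
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(endpoint=endpoint)))

trace.set_tracer_provider(provider)
```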
## Structure

- `langchain/`: Tracing patterns for LangChain flows.
- `langgraph/`: Tracing patterns for LangGraph agents/graphs.
- `openai-agents/`: Tracing for OpenAI Agents with Azure OpenAI.

Each subfolder contains its own `requirements.txt`, optional `dev-requirements.txt`, and `.env.sample` tailored for that sample.

## Prerequisites

- Python 3.10+ recommended.
- An Azure AI Studio project (optional, for Azure Monitor tracing).
- If using Azure Monitor, enable the Tracing tab in your AI Studio project to provision Application Insights and retrieve the connection string.
## Installation

Navigate to a subfolder and install its `requirements.txt`. Use `dev-requirements.txt` if you want Azure Monitor integrations.

## Configuration

- Local OTLP exporter:
  - Set `OTEL_EXPORTER_OTLP_ENDPOINT` to your backend (e.g., Aspire Dashboard default: `http://localhost:4317` for gRPC, or `http://localhost:4318` for HTTP).
- Azure Monitor:
  - Copy the subfolder `.env.sample` to `.env` and set the required values (see the sketch after this list).
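If you install `dev-requirements.txt`, one way to route spans to Azure Monitor is the `azure-monitor-opentelemetry` distro. This is a hedged sketch, not code from this commit; the LangChain sample instead attaches a `langchain-azure-ai` tracer directly (see `weather.py`).

```
# Azure Monitor tracing sketch (illustrative; assumes azure-monitor-opentelemetry
# from dev-requirements.txt and a populated .env).
import os

from azure.monitor.opentelemetry import configure_azure_monitor

conn = os.getenv("APPLICATION_INSIGHTS_CONNECTION_STRING")
if conn:
    # Route OpenTelemetry traces to the Application Insights resource behind your project.
    configure_azure_monitor(connection_string=conn)
```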
## Notes

- The initial release of Azure AI Projects had a known issue where agent function tool call details might be included in traces even when content recording is disabled. Be cautious with sensitive data and review the Tracing settings in AI Studio.
Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
# Azure OpenAI configuration (required)
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_ENDPOINT=
AZURE_OPENAI_DEPLOYMENT=
AZURE_OPENAI_API_VERSION=2024-02-15-preview

# Optional: Enable Azure Monitor tracing via Application Insights
APPLICATION_INSIGHTS_CONNECTION_STRING=
Lines changed: 62 additions & 0 deletions
@@ -0,0 +1,62 @@
---
page_type: sample
languages:
- python
products:
- ai-services
- azure-openai
description: Pure LangChain weather assistant with Azure tracing and manual tool-calling loop.
---
## Pure LangChain Weather (Tracing)

### Overview

This sample demonstrates a pure LangChain agent that uses a manual tool-calling loop, instrumented with Azure Application Insights via `langchain-azure-ai`. It calls Azure OpenAI chat models and a simple `get_weather` tool, and emits OpenTelemetry traces locally or to Azure Monitor.

### Objective

- Use `langchain` and `langchain-openai` with Azure OpenAI chat models.
- Add tracing via `langchain-azure-ai` and OpenTelemetry.
- Implement a manual tool-calling loop for clarity and control.

### Programming Languages

- Python

### Estimated Runtime: 10 mins
## Set up

Create and activate a local virtual environment, then install dependencies:

```
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -r requirements.txt
```

Copy the environment template and set required variables:

```
cp .env.sample .env
```

Required:

- `AZURE_OPENAI_API_KEY`
- `AZURE_OPENAI_ENDPOINT`
- `AZURE_OPENAI_DEPLOYMENT`
- `AZURE_OPENAI_API_VERSION` (default `2024-02-15-preview`)

Optional for tracing:

- `APPLICATION_INSIGHTS_CONNECTION_STRING`
## Run

```
python weather.py
```

You can send traces to an OTLP-compatible backend by setting `OTEL_EXPORTER_OTLP_ENDPOINT`, or to Azure Monitor by setting `APPLICATION_INSIGHTS_CONNECTION_STRING`. The tracing hookup inside `weather.py` condenses to the sketch below.
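As a rough sketch (condensed from `weather.py` in this commit, assuming the required env vars are set), the tracer attaches to the model as a LangChain callback:

```
# Condensed tracing hookup, adapted from weather.py (illustrative).
import os

from langchain_azure_ai.callbacks.tracers import AzureAIOpenTelemetryTracer
from langchain_openai import AzureChatOpenAI

tracer = AzureAIOpenTelemetryTracer(
    connection_string=os.environ["APPLICATION_INSIGHTS_CONNECTION_STRING"],
    enable_content_recording=True,  # set False to keep prompts/responses out of traces
    name="langchain_weather",
)

llm = AzureChatOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT"],
    api_version=os.getenv("AZURE_OPENAI_API_VERSION", "2024-02-15-preview"),
    callbacks=[tracer],  # every model call now emits OpenTelemetry spans
)

print(llm.invoke("Will it rain in Seattle tomorrow?").content)
```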
Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
# Full dev setup (optional)
langchain
langchain-openai
langchain-azure-ai
opentelemetry-api
opentelemetry-sdk
opentelemetry-exporter-otlp
python-dotenv
azure-monitor-opentelemetry
Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
langchain
langchain-openai
langchain-azure-ai
opentelemetry-api
opentelemetry-sdk
opentelemetry-exporter-otlp
python-dotenv
Lines changed: 215 additions & 0 deletions
@@ -0,0 +1,215 @@
"""
LangChain Weather Assistant with manual tool-calling loop + Azure tracing.

Env vars required:
    AZURE_OPENAI_API_KEY=...
    AZURE_OPENAI_ENDPOINT=https://YOUR-RESOURCE.openai.azure.com
    AZURE_OPENAI_DEPLOYMENT=yourDeploymentName
    AZURE_OPENAI_API_VERSION=2024-02-15-preview (or compatible)

Optional tracing:
    APPLICATION_INSIGHTS_CONNECTION_STRING=InstrumentationKey=...;IngestionEndpoint=...

Run:
    python weather.py
"""

import os
import json
import logging
from datetime import datetime
from typing import List, Any, Optional, Dict

from dotenv import load_dotenv
from langchain_core.tools import tool
from langchain_core.messages import (
    SystemMessage,
    HumanMessage,
    AIMessage,
    ToolMessage,
    BaseMessage,
)
from langchain_openai import AzureChatOpenAI

try:
    from langchain_azure_ai.callbacks.tracers import AzureAIOpenTelemetryTracer
except ImportError:
    AzureAIOpenTelemetryTracer = None

# Load variables from a local .env if present (python-dotenv is listed in
# requirements.txt; the README has you copy .env.sample to .env).
load_dotenv()

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("langchain_weather")


# -----------------------------------------------------------------------------
# Tracing Setup (cached)
# -----------------------------------------------------------------------------
_TRACERS: Optional[List[Any]] = None


def setup_tracing() -> List[Any]:
    global _TRACERS
    if _TRACERS is not None:
        return _TRACERS
    tracers: List[Any] = []
    conn = os.getenv("APPLICATION_INSIGHTS_CONNECTION_STRING")
    if conn and AzureAIOpenTelemetryTracer:
        try:
            tracer = AzureAIOpenTelemetryTracer(
                connection_string=conn,
                enable_content_recording=True,
                name="langchain_weather",
                id="weather_agent",
            )
            tracers.append(tracer)
            logger.info("Azure tracing enabled.")
        except Exception as e:
            logger.warning(f"Failed to init tracer: {e}")
    else:
        logger.info("Tracing not enabled (missing APPLICATION_INSIGHTS_CONNECTION_STRING or dependency).")
    _TRACERS = tracers
    return tracers


def trace_config(agent_name: str, session_id: str) -> Dict[str, Any]:
    tracers = setup_tracing()
    return {
        "callbacks": tracers,
        "tags": [f"agent:{agent_name}", agent_name, "weather-langchain"],
        "metadata": {
            "agent_name": agent_name,
            "agent_type": agent_name,
            "langgraph_node": agent_name,  # kept for parity
            "session_id": session_id,
            "thread_id": session_id,
            "system": "langchain-weather",
        },
    }


# -----------------------------------------------------------------------------
# Tool
# -----------------------------------------------------------------------------
@tool
def get_weather(location: str, date: Optional[str] = None) -> str:
    """
    Return a mock weather forecast as JSON.
    """
    if not date:
        date = datetime.utcnow().strftime("%Y-%m-%d")
    seed = sum(ord(c) for c in location.lower()) % 5
    conditions = ["Sunny", "Partly Cloudy", "Light Rain", "Overcast", "Showers"]
    cond = conditions[seed]
    forecast = {
        "location": location,
        "date": date,
        "condition": cond,
        "temp_high_c": 24 + seed,
        "temp_low_c": 14 + seed,
        "advice": "Great day outside!" if cond == "Sunny" else "Plan for changing conditions.",
    }
    return json.dumps(forecast, indent=2)


TOOLS = [get_weather]
TOOLS_BY_NAME = {t.name: t for t in TOOLS}


# -----------------------------------------------------------------------------
# LLM Factory
# -----------------------------------------------------------------------------
def build_llm(session_id: str) -> AzureChatOpenAI:
    required = [
        "AZURE_OPENAI_API_KEY",
        "AZURE_OPENAI_ENDPOINT",
        "AZURE_OPENAI_DEPLOYMENT",
    ]
    missing = [v for v in required if not os.getenv(v)]
    if missing:
        raise RuntimeError(f"Missing Azure OpenAI env vars: {', '.join(missing)}")
    return AzureChatOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        api_version=os.getenv("AZURE_OPENAI_API_VERSION", "2024-02-15-preview"),
        temperature=0.2,
        callbacks=setup_tracing(),
        tags=["weather_agent", "weather-langchain"],
        metadata={
            "agent_type": "weather_agent",
            "agent_name": "weather_agent",
            "system": "langchain-weather",
            "session_id": session_id,
            "thread_id": session_id,
        },
    )


SYSTEM_PROMPT = """You are a weather assistant.
If user asks about weather, call the get_weather tool with (location, date if given).
If ambiguous date, assume tomorrow.
After tool output, summarize succinctly for the user.
"""


# -----------------------------------------------------------------------------
# Agent Loop (manual)
# -----------------------------------------------------------------------------
def run_weather_conversation(user_query: str, session_id: str) -> str:
    llm = build_llm(session_id)
    # Bind tools for tool-calling (function-calling) capability
    tool_llm = llm.bind_tools(TOOLS)

    messages: List[BaseMessage] = [
        SystemMessage(content=SYSTEM_PROMPT),
        HumanMessage(content=user_query),
    ]

    # We allow up to N reasoning/tool steps (simple guard)
    for step in range(5):
        logger.info(f"LLM step {step + 1}")
        response: AIMessage = tool_llm.invoke(messages, config=trace_config("weather_agent", session_id))
        messages.append(response)

        # If the model decided not to call any tools, we stop
        tool_calls = getattr(response, "tool_calls", None)
        if not tool_calls:
            logger.info("No tool calls; finishing.")
            break

        # Execute each tool call and append ToolMessage
        for tc in tool_calls:
            name = tc["name"]
            args = tc.get("args", {})
            tool_obj = TOOLS_BY_NAME.get(name)
            if not tool_obj:
                tool_output = f"Tool '{name}' not found."
            else:
                try:
                    tool_output = tool_obj.invoke(args)
                except Exception as e:
                    tool_output = f"Error executing tool '{name}': {e}"
            messages.append(
                ToolMessage(
                    content=tool_output,
                    name=name,
                    tool_call_id=tc["id"],
                )
            )

    # Final answer: last AI message with no tool calls OR last AI message overall
    final_ai = next((m for m in reversed(messages) if isinstance(m, AIMessage)), None)
    return final_ai.content if final_ai else "No AI response."


def main() -> None:
    print("Pure LangChain Weather (Instrumented)")
    q = input("Ask a weather question (e.g. 'Weather in Tokyo tomorrow'): ").strip()
    if not q:
        q = "Weather in Paris"
    session_id = f"lc-session-{datetime.utcnow().strftime('%Y%m%d%H%M%S')}"
    answer = run_weather_conversation(q, session_id)
    print("\n--- Answer ---")
    print(answer)


if __name__ == "__main__":
    main()
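For a quick, non-interactive smoke test you could also import the loop directly; this is a hypothetical snippet, not a file in this commit:

```
# Hypothetical smoke test: call the agent loop from weather.py without main()'s input() prompt.
from datetime import datetime

from weather import run_weather_conversation

session_id = f"lc-smoke-{datetime.utcnow().strftime('%Y%m%d%H%M%S')}"
print(run_weather_conversation("Weather in Tokyo tomorrow", session_id))
```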
Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
# Azure OpenAI configuration (required)
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_ENDPOINT=
AZURE_OPENAI_DEPLOYMENT=
AZURE_OPENAI_API_VERSION=2024-02-15-preview

# Optional: Enable Azure Monitor tracing via Application Insights
APPLICATION_INSIGHTS_CONNECTION_STRING=
