Commit f51036e: added tags to langchain
1 parent: d1c376f. 2 files changed: +418, -0 lines.

docs/my-website/docs/langchain/langchain.md (318 additions, 0 deletions)

Get more details [here](../observability/lunary_integration.md)

## Use LangChain ChatLiteLLM + Langfuse

Check out this section [here](../observability/langfuse_integration#use-langchain-chatlitellm--langfuse) for more details on how to integrate Langfuse with ChatLiteLLM.

## Using Tags with LangChain and LiteLLM

Tags let you categorize, filter, and track your LLM requests in LiteLLM. When using LangChain with LiteLLM, pass tags inside the `metadata` key of the `extra_body` parameter.

### Basic Tag Usage

<Tabs>
<TabItem value="openai" label="OpenAI">

```python
import os
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

os.environ['OPENAI_API_KEY'] = "sk-your-key-here"

chat = ChatOpenAI(
    model="gpt-4o",
    temperature=0.7,
    extra_body={
        "metadata": {
            "tags": ["production", "customer-support", "high-priority"]
        }
    }
)

messages = [
    SystemMessage(content="You are a helpful customer support assistant."),
    HumanMessage(content="How do I reset my password?")
]

response = chat.invoke(messages)
print(response)
```

</TabItem>

<TabItem value="anthropic" label="Anthropic">

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

# Anthropic models are reached through a LiteLLM deployment (e.g. the proxy),
# which holds the ANTHROPIC_API_KEY; ChatOpenAI cannot call Anthropic directly.
chat = ChatOpenAI(
    openai_api_base="http://localhost:4000",  # Your LiteLLM proxy URL
    api_key="anything",  # provider auth is handled by the proxy
    model="claude-3-sonnet-20240229",
    temperature=0.7,
    extra_body={
        "metadata": {
            "tags": ["research", "analysis", "claude-model"]
        }
    }
)

messages = [
    SystemMessage(content="You are a research analyst."),
    HumanMessage(content="Analyze this market trend...")
]

response = chat.invoke(messages)
print(response)
```

</TabItem>

<TabItem value="litellm-proxy" label="LiteLLM Proxy">

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

# Provider API keys live on the proxy; the client only needs a placeholder
# (or a LiteLLM virtual key, if your proxy requires one).
chat = ChatOpenAI(
    openai_api_base="http://localhost:4000",  # Your proxy URL
    api_key="anything",
    model="gpt-4o",
    temperature=0.7,
    extra_body={
        "metadata": {
            "tags": ["proxy", "team-alpha", "feature-flagged"],
            "generation_name": "customer-onboarding",
            "trace_user_id": "user-12345"
        }
    }
)

messages = [
    SystemMessage(content="You are an onboarding assistant."),
    HumanMessage(content="Welcome our new customer!")
]

response = chat.invoke(messages)
print(response)
```

</TabItem>
</Tabs>

### Advanced Tag Patterns

#### Dynamic Tags Based on Context

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

def create_chat_with_tags(user_type: str, feature: str):
    """Create a chat instance with dynamic tags based on context"""

    # Build tags dynamically
    tags = ["langchain-integration"]

    if user_type == "premium":
        tags.extend(["premium-user", "high-priority"])
    elif user_type == "enterprise":
        tags.extend(["enterprise", "custom-sla"])
    else:
        tags.append("standard-user")

    # Add feature-specific tags
    if feature == "code-review":
        tags.extend(["development", "code-analysis"])
    elif feature == "content-gen":
        tags.extend(["marketing", "content-creation"])

    return ChatOpenAI(
        openai_api_base="http://localhost:4000",
        model="gpt-4o",
        temperature=0.7,
        extra_body={
            "metadata": {
                "tags": tags,
                "user_type": user_type,
                "feature": feature,
                "trace_user_id": f"user-{user_type}-{feature}"
            }
        }
    )

# Usage examples
premium_chat = create_chat_with_tags("premium", "code-review")
enterprise_chat = create_chat_with_tags("enterprise", "content-gen")

messages = [HumanMessage(content="Help me with this task")]
response = premium_chat.invoke(messages)
```

#### Tags for Cost Tracking and Analytics

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

# Tags for cost tracking
cost_tracking_chat = ChatOpenAI(
    openai_api_base="http://localhost:4000",
    model="gpt-4o",
    temperature=0.7,
    extra_body={
        "metadata": {
            "tags": [
                "cost-center-marketing",
                "budget-q4-2024",
                "project-launch-campaign",
                "high-cost-model"  # Flag for expensive models
            ],
            "department": "marketing",
            "project_id": "campaign-2024-q4",
            "cost_threshold": "high"
        }
    }
)

messages = [
    SystemMessage(content="You are a marketing copywriter."),
    HumanMessage(content="Create compelling ad copy for our new product launch.")
]

response = cost_tracking_chat.invoke(messages)
```

#### Tags for A/B Testing

```python
import random
from typing import Optional

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

def create_ab_test_chat(test_variant: Optional[str] = None):
    """Create chat instance for A/B testing with appropriate tags"""

    if test_variant is None:
        test_variant = random.choice(["variant-a", "variant-b"])

    return ChatOpenAI(
        openai_api_base="http://localhost:4000",
        model="gpt-4o",
        temperature=0.7 if test_variant == "variant-a" else 0.9,  # Different temp for variants
        extra_body={
            "metadata": {
                "tags": [
                    "ab-test-experiment-1",
                    test_variant,  # "variant-a" or "variant-b"
                    "temperature-test",
                    "user-experience"
                ],
                "experiment_id": "ab-test-001",
                "variant": test_variant,
                "test_group": "temperature-optimization"
            }
        }
    )

# Run A/B test
variant_a_chat = create_ab_test_chat("variant-a")
variant_b_chat = create_ab_test_chat("variant-b")

test_message = [HumanMessage(content="Explain quantum computing in simple terms")]

response_a = variant_a_chat.invoke(test_message)
response_b = variant_b_chat.invoke(test_message)
```

### Tag Best Practices

#### 1. **Consistent Naming Convention**

```python
# ✅ Good: Consistent, descriptive tags
tags = ["production", "api-v2", "customer-support", "urgent"]

# ❌ Avoid: Inconsistent or unclear tags
tags = ["prod", "v2", "support", "urgent123"]
```
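
A small helper can enforce the convention automatically. A minimal sketch (the `normalize_tag` function is illustrative, not a LiteLLM API):

```python
import re

# Hypothetical helper: coerce free-form labels into the consistent,
# hyphenated lowercase style shown above.
def normalize_tag(label: str) -> str:
    tag = label.strip().lower()
    tag = re.sub(r"[\s_]+", "-", tag)     # spaces/underscores -> hyphens
    tag = re.sub(r"[^a-z0-9-]", "", tag)  # drop any other punctuation
    return tag

print(normalize_tag("Customer Support"))  # customer-support
print(normalize_tag("API_v2 (beta)"))     # api-v2-beta
```

Running tags through one normalizer at the call site keeps dashboards from splitting the same tag across spellings.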

#### 2. **Hierarchical Tags**

```python
# ✅ Good: Hierarchical structure
tags = ["env:production", "team:backend", "service:api", "priority:high"]

# This allows for easy filtering and grouping
```
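
One payoff of `key:value` tags is that your own reporting code can split them back into categories. A sketch (the `parse_tags` helper is hypothetical, not part of LiteLLM):

```python
def parse_tags(tags):
    """Split hierarchical tags into a {category: value} dict.

    Bare tags (no colon) map to None so they are still visible.
    """
    parsed = {}
    for tag in tags:
        category, sep, value = tag.partition(":")
        parsed[category] = value if sep else None
    return parsed

tags = ["env:production", "team:backend", "service:api", "priority:high"]
print(parse_tags(tags))
# {'env': 'production', 'team': 'backend', 'service': 'api', 'priority': 'high'}
```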

#### 3. **Include Context Information**

```python
extra_body={
    "metadata": {
        "tags": ["production", "user-onboarding"],
        "user_id": "user-12345",
        "session_id": "session-abc123",
        "feature_flag": "new-onboarding-flow",
        "environment": "production"
    }
}
```

#### 4. **Tag Categories**

Consider organizing tags into categories:

- **Environment**: `production`, `staging`, `development`
- **Team/Service**: `backend`, `frontend`, `api`, `worker`
- **Feature**: `authentication`, `payment`, `notification`
- **Priority**: `critical`, `high`, `medium`, `low`
- **User Type**: `premium`, `enterprise`, `free`
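
Once categories are agreed on, a small check can keep them honest. A sketch (the allow-list and `validate_tags` helper are illustrative, not part of LiteLLM):

```python
# Hypothetical allow-list mirroring the categories above.
ALLOWED_TAG_VALUES = {
    "env": {"production", "staging", "development"},
    "team": {"backend", "frontend", "api", "worker"},
    "feature": {"authentication", "payment", "notification"},
    "priority": {"critical", "high", "medium", "low"},
    "user": {"premium", "enterprise", "free"},
}

def validate_tags(tags):
    """Return the tags that use an unknown category or value."""
    invalid = []
    for tag in tags:
        category, sep, value = tag.partition(":")
        if not sep or value not in ALLOWED_TAG_VALUES.get(category, set()):
            invalid.append(tag)
    return invalid

print(validate_tags(["env:production", "priority:urgent", "team:backend"]))
# ['priority:urgent']  ("urgent" is not an allowed priority value)
```

Calling this in tests or a pre-commit hook catches tag typos before they pollute analytics.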

### Using Tags with LiteLLM Proxy

When using tags with LiteLLM Proxy, you can:

1. **Filter requests** based on tags
2. **Track costs** by tags in spend reports
3. **Apply routing rules** based on tags
4. **Monitor usage** with tag-based analytics
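
The `extra_body` fields from the LangChain examples are merged into the JSON body that reaches the proxy. A sketch of the equivalent raw `/chat/completions` payload (the proxy URL and key are placeholders):

```python
import json

# Raw request body for the proxy's /chat/completions endpoint.
# LangChain's extra_body fields land at the top level of this JSON.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Welcome our new customer!"}],
    "metadata": {
        "tags": ["proxy", "team-alpha", "feature-flagged"],
    },
}

print(json.dumps(payload, indent=2))

# To actually send it (requires a running proxy at http://localhost:4000):
# import requests
# requests.post(
#     "http://localhost:4000/chat/completions",
#     headers={"Authorization": "Bearer sk-your-key"},
#     json=payload,
# )
```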
#### Example Proxy Configuration with Tags

With tag-based routing, the proxy sends a request only to deployments whose `tags` match the request's `metadata.tags` (see the LiteLLM tag-routing docs for the full schema):

```yaml
# config.yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: gpt-4o
      api_key: os.environ/OPENAI_API_KEY
      tags: ["premium", "high-priority"]
  - model_name: claude-3-opus
    litellm_params:
      model: claude-3-opus-20240229
      api_key: os.environ/ANTHROPIC_API_KEY
      tags: ["premium", "high-priority"]
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY
      tags: ["standard"]

router_settings:
  enable_tag_filtering: True
```

### Monitoring and Analytics

Tags enable powerful analytics capabilities:

```python
# Example: Get spend reports by tags
import requests

response = requests.get(
    "http://localhost:4000/global/spend/report",
    headers={"Authorization": "Bearer sk-your-key"},
    params={
        "start_date": "2024-01-01",
        "end_date": "2024-12-31",
        "group_by": "tags"
    }
)

spend_by_tags = response.json()
```
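
Once you have the report, a few lines can roll spend up per tag. The row shape below is an assumption for illustration (the exact response schema depends on your proxy version); the aggregation logic is the point:

```python
# Assumed row shape (check your proxy's actual response schema):
example_rows = [
    {"tag": "production", "spend": 12.50},
    {"tag": "customer-support", "spend": 7.25},
    {"tag": "production", "spend": 3.10},
]

def total_spend_by_tag(rows):
    """Sum spend per tag across report rows."""
    totals = {}
    for row in rows:
        totals[row["tag"]] = totals.get(row["tag"], 0.0) + row["spend"]
    return totals

print(total_spend_by_tag(example_rows))
# {'production': 15.6, 'customer-support': 7.25}
```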

This documentation covers the essential patterns for using tags effectively with LangChain and LiteLLM, enabling better organization, tracking, and analytics of your LLM requests.
