
Commit 4920395

Merge branch 'development' into release

2 parents 32fac05 + 79d51d8

File tree

22 files changed: +458 −59 lines changed


README.md

Lines changed: 30 additions & 24 deletions
@@ -148,6 +148,10 @@ langtrace.init(custom_remote_exporter=<your_exporter>, batch=<True or False>)
 | `api_host` | `Optional[str]` | `https://langtrace.ai/` | The API host for the remote exporter. |
 | `disable_instrumentations` | `Optional[DisableInstrumentations]` | `None` | You can pass an object to disable instrumentation for specific vendors, e.g. `{'only': ['openai']}` or `{'all_except': ['openai']}` |

+### Error Reporting to Langtrace
+
+By default, all SDK errors are reported to Langtrace via Sentry. This can be disabled by setting the following environment variable to `False`, like so: `LANGTRACE_ERROR_REPORTING=False`.
+
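A minimal sketch of the opt-out, assuming the variable is read when the SDK initializes, so it must be set before `langtrace.init()` runs (the surrounding setup is illustrative):

```python
import os

# LANGTRACE_ERROR_REPORTING is the documented flag; set it before the SDK starts.
os.environ["LANGTRACE_ERROR_REPORTING"] = "False"

from langtrace_python_sdk import langtrace

langtrace.init()  # SDK-internal errors are no longer reported via Sentry
```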
 ### Additional Customization

 - `@with_langtrace_root_span` - this decorator is designed to organize and relate different spans in a hierarchical manner. When you're performing multiple operations that you want to monitor together as a unit, it helps by establishing a "parent" span (`LangtraceRootSpan`, or whatever is passed to `name`). Any calls to the LLM APIs made within the decorated function (`fn`) are then considered "children" of this parent span. This setup is especially useful for tracking the performance or behavior of a group of operations collectively, rather than individually.
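For instance, a sketch of grouping two LLM calls under one root span; the decorator and `langtrace.init()` come from the SDK, while the OpenAI client usage here is an illustrative assumption:

```python
from langtrace_python_sdk import langtrace, with_langtrace_root_span
from openai import OpenAI

langtrace.init()
client = OpenAI()


@with_langtrace_root_span("summarize_and_translate")
def summarize_and_translate(text: str) -> str:
    # Both completions below are traced as children of the
    # "summarize_and_translate" root span.
    summary = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
    ).choices[0].message.content
    translated = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Translate to French: {summary}"}],
    ).choices[0].message.content
    return translated
```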
@@ -229,6 +233,7 @@ prompt = get_prompt_from_registry(<Registry ID>, options={"prompt_version": 1, "
 ```

 ### Opt out of tracing prompt and completion data
+
 By default, prompt and completion data are captured. If you would like to opt out of it, set the following env var:

 `TRACE_PROMPT_COMPLETION_DATA=false`
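As with error reporting, a short sketch, assuming the flag is read at initialization time:

```python
import os

os.environ["TRACE_PROMPT_COMPLETION_DATA"] = "false"  # documented opt-out flag

from langtrace_python_sdk import langtrace

langtrace.init()  # spans still carry timing and metadata, minus prompt/completion text
```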
@@ -237,30 +242,31 @@ By default, prompt and completion data are captured. If you would like to opt ou

 Langtrace automatically captures traces from the following vendors:

-| Vendor       | Type            | Typescript SDK     | Python SDK                      |
-| ------------ | --------------- | ------------------ | ------------------------------- |
-| OpenAI       | LLM             | :white_check_mark: | :white_check_mark:              |
-| Anthropic    | LLM             | :white_check_mark: | :white_check_mark:              |
-| Azure OpenAI | LLM             | :white_check_mark: | :white_check_mark:              |
-| Cohere       | LLM             | :white_check_mark: | :white_check_mark:              |
-| Groq         | LLM             | :x:                | :white_check_mark:              |
-| Perplexity   | LLM             | :white_check_mark: | :white_check_mark:              |
-| Gemini       | LLM             | :x:                | :white_check_mark:              |
-| Mistral      | LLM             | :x:                | :white_check_mark:              |
-| Langchain    | Framework       | :x:                | :white_check_mark:              |
-| LlamaIndex   | Framework       | :white_check_mark: | :white_check_mark:              |
-| Langgraph    | Framework       | :x:                | :white_check_mark:              |
-| DSPy         | Framework       | :x:                | :white_check_mark:              |
-| CrewAI       | Framework       | :x:                | :white_check_mark:              |
-| Ollama       | Framework       | :x:                | :white_check_mark:              |
-| VertexAI     | Framework       | :x:                | :white_check_mark:              |
-| Vercel AI SDK| Framework       | :white_check_mark: | :x:                             |
-| EmbedChain   | Framework       | :x:                | :white_check_mark:              |
-| Pinecone     | Vector Database | :white_check_mark: | :white_check_mark:              |
-| ChromaDB     | Vector Database | :white_check_mark: | :white_check_mark:              |
-| QDrant       | Vector Database | :white_check_mark: | :white_check_mark:              |
-| Weaviate     | Vector Database | :white_check_mark: | :white_check_mark:              |
-| PGVector     | Vector Database | :white_check_mark: | :white_check_mark: (SQLAlchemy) |
+| Vendor        | Type            | Typescript SDK     | Python SDK                      |
+| ------------- | --------------- | ------------------ | ------------------------------- |
+| OpenAI        | LLM             | :white_check_mark: | :white_check_mark:              |
+| Anthropic     | LLM             | :white_check_mark: | :white_check_mark:              |
+| Azure OpenAI  | LLM             | :white_check_mark: | :white_check_mark:              |
+| Cohere        | LLM             | :white_check_mark: | :white_check_mark:              |
+| Groq          | LLM             | :x:                | :white_check_mark:              |
+| Perplexity    | LLM             | :white_check_mark: | :white_check_mark:              |
+| Gemini        | LLM             | :x:                | :white_check_mark:              |
+| Mistral       | LLM             | :x:                | :white_check_mark:              |
+| Langchain     | Framework       | :x:                | :white_check_mark:              |
+| LlamaIndex    | Framework       | :white_check_mark: | :white_check_mark:              |
+| Langgraph     | Framework       | :x:                | :white_check_mark:              |
+| DSPy          | Framework       | :x:                | :white_check_mark:              |
+| CrewAI        | Framework       | :x:                | :white_check_mark:              |
+| Ollama        | Framework       | :x:                | :white_check_mark:              |
+| VertexAI      | Framework       | :x:                | :white_check_mark:              |
+| Vercel AI SDK | Framework       | :white_check_mark: | :x:                             |
+| EmbedChain    | Framework       | :x:                | :white_check_mark:              |
+| Autogen       | Framework       | :x:                | :white_check_mark:              |
+| Pinecone      | Vector Database | :white_check_mark: | :white_check_mark:              |
+| ChromaDB      | Vector Database | :white_check_mark: | :white_check_mark:              |
+| QDrant        | Vector Database | :white_check_mark: | :white_check_mark:              |
+| Weaviate      | Vector Database | :white_check_mark: | :white_check_mark:              |
+| PGVector      | Vector Database | :white_check_mark: | :white_check_mark: (SQLAlchemy) |

 ---

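Because this list is long, tracing can also be scoped per vendor via the `disable_instrumentations` option from the initialization table above; a sketch using the dict form the README shows (the exact accepted shape is an assumption):

```python
from langtrace_python_sdk import langtrace

# Disable every instrumentation except OpenAI's, i.e. trace only OpenAI calls,
# following the README's `{'all_except': ['openai']}` example.
langtrace.init(disable_instrumentations={"all_except": ["openai"]})
```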
pyproject.toml

Lines changed: 1 addition & 0 deletions
@@ -30,6 +30,7 @@ dependencies = [
   'sqlalchemy',
   'fsspec>=2024.6.0',
   "transformers>=4.11.3",
+  "sentry-sdk>=2.14.0",
 ]

 requires-python = ">=3.9"
src/examples/autogen_example/__init__.py

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
+from .main import main as autogen_main
+from .main import comedy_show
+
+
+class AutoGenRunner:
+    def run(self):
+        # autogen_main()
+        comedy_show()
src/examples/autogen_example/main.py

Lines changed: 72 additions & 0 deletions
@@ -0,0 +1,72 @@
+from langtrace_python_sdk import langtrace
+from autogen import ConversableAgent
+from dotenv import load_dotenv
+from autogen.coding import LocalCommandLineCodeExecutor
+import tempfile
+
+
+load_dotenv()
+langtrace.init(write_spans_to_console=False)
+# agentops.init(api_key=os.getenv("AGENTOPS_API_KEY"))
+# Create a temporary directory to store the code files.
+temp_dir = tempfile.TemporaryDirectory()
+
+
+# Create a local command line code executor.
+executor = LocalCommandLineCodeExecutor(
+    timeout=10,  # Timeout for each code execution in seconds.
+    work_dir=temp_dir.name,  # Use the temporary directory to store the code files.
+)
+
+
+def main():
+    agent = ConversableAgent(
+        "chatbot",
+        llm_config={"config_list": [{"model": "gpt-4"}], "cache_seed": None},
+        code_execution_config=False,  # Turn off code execution; by default it is off.
+        function_map=None,  # No registered functions; by default it is None.
+        human_input_mode="NEVER",  # Never ask for human input.
+    )
+
+    reply = agent.generate_reply(
+        messages=[{"content": "Tell me a joke.", "role": "user"}]
+    )
+    return reply
+
+
+def comedy_show():
+    cathy = ConversableAgent(
+        name="cathy",
+        system_message="Your name is Cathy and you are a part of a duo of comedians.",
+        llm_config={
+            "config_list": [{"model": "gpt-4o-mini", "temperature": 0.9}],
+            "cache_seed": None,
+        },
+        description="Cathy is a comedian",
+        max_consecutive_auto_reply=10,
+        code_execution_config={
+            "executor": executor
+        },  # Use the local command line code executor.
+        function_map=None,
+        chat_messages=None,
+        silent=True,
+        default_auto_reply="Sorry, I don't know what to say.",
+        human_input_mode="NEVER",  # Never ask for human input.
+    )
+
+    joe = ConversableAgent(
+        "joe",
+        system_message="Your name is Joe and you are a part of a duo of comedians.",
+        llm_config={
+            "config_list": [{"model": "gpt-4o-mini", "temperature": 0.7}],
+            "cache_seed": None,
+        },
+        human_input_mode="NEVER",  # Never ask for human input.
+    )
+
+    result = joe.initiate_chat(
+        recipient=cathy, message="Cathy, tell me a joke.", max_turns=2
+    )
+
+    return result
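The committed example never invokes these functions itself (the `AutoGenRunner` above does); a hypothetical direct entry point, assuming an `OPENAI_API_KEY` in the loaded `.env`:

```python
if __name__ == "__main__":
    result = comedy_show()  # runs the two-agent exchange under Langtrace tracing
    print(result)
```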

src/examples/langchain_example/__init__.py

Lines changed: 2 additions & 0 deletions
@@ -1,3 +1,4 @@
+from examples.langchain_example.langchain_google_genai import basic_google_genai
 from .basic import basic_app, rag, load_and_split
 from langtrace_python_sdk import with_langtrace_root_span

@@ -12,6 +13,7 @@ def run(self):
         rag()
         load_and_split()
         basic_graph_tools()
+        basic_google_genai()


 class GroqRunner:
src/examples/langchain_example/langchain_google_genai.py

Lines changed: 29 additions & 0 deletions
@@ -0,0 +1,29 @@
+from langchain_core.messages import HumanMessage
+from langchain_google_genai import ChatGoogleGenerativeAI
+from langtrace_python_sdk.utils.with_root_span import with_langtrace_root_span
+from dotenv import find_dotenv, load_dotenv
+from langtrace_python_sdk import langtrace
+
+_ = load_dotenv(find_dotenv())
+
+langtrace.init()
+
+
+@with_langtrace_root_span("basic_google_genai")
+def basic_google_genai():
+    llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
+    # example: a text-only message followed by an image-URL message
+    message = HumanMessage(
+        content=[
+            {
+                "type": "text",
+                "text": "What's in this image?",
+            },
+        ]
+    )
+    message_image = HumanMessage(content="https://picsum.photos/seed/picsum/200/300")
+
+    res = llm.invoke([message, message_image])
+    # print(res)
+
+
+basic_google_genai()
Lines changed: 1 addition & 0 deletions
@@ -1 +1,2 @@
 LANGTRACE_SDK_NAME = "langtrace-python-sdk"
+SENTRY_DSN = "https://[email protected]/4507929133056000"

src/langtrace_python_sdk/constants/instrumentation/common.py

Lines changed: 1 addition & 0 deletions
@@ -31,6 +31,7 @@
     "GEMINI": "Gemini",
     "MISTRAL": "Mistral",
     "EMBEDCHAIN": "Embedchain",
+    "AUTOGEN": "Autogen",
 }

 LANGTRACE_ADDITIONAL_SPAN_ATTRIBUTES_KEY = "langtrace_additional_attributes"

src/langtrace_python_sdk/instrumentation/__init__.py

Lines changed: 2 additions & 0 deletions
@@ -14,6 +14,7 @@
 from .weaviate import WeaviateInstrumentation
 from .ollama import OllamaInstrumentor
 from .dspy import DspyInstrumentation
+from .autogen import AutogenInstrumentation
 from .vertexai import VertexAIInstrumentation
 from .gemini import GeminiInstrumentation
 from .mistral import MistralInstrumentation
@@ -37,6 +38,7 @@
     "WeaviateInstrumentation",
     "OllamaInstrumentor",
     "DspyInstrumentation",
+    "AutogenInstrumentation",
     "VertexAIInstrumentation",
     "GeminiInstrumentation",
     "MistralInstrumentation",
src/langtrace_python_sdk/instrumentation/autogen/__init__.py

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+from .instrumentation import AutogenInstrumentation
+
+__all__ = ["AutogenInstrumentation"]
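For reference, a hypothetical manual wiring of the new instrumentation, assuming it follows the OpenTelemetry `BaseInstrumentor` interface like the SDK's other instrumentation classes; normally `langtrace.init()` registers it automatically:

```python
from langtrace_python_sdk.instrumentation import AutogenInstrumentation

# Assumption: standard BaseInstrumentor entry point.
AutogenInstrumentation().instrument()
```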
