Commit 96cc1c3

Merge branch 'main' into feat/sso-oidc
2 parents f07148c + 6d9b9be commit 96cc1c3

File tree

59 files changed: +2174 −1287 lines


api/pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 [project]
 name = "api"
-version = "0.74.0"
+version = "0.76.0"
 description = "Agenta API"
 authors = [
     { name = "Mahmoud Mabrouk", email = "[email protected]" },

docs/docs/evaluation/configure-evaluators/07-custom-evaluator.mdx

Lines changed: 1 addition & 5 deletions

@@ -2,11 +2,7 @@
 title: "Custom Code Evaluator"
 ---
 
-Sometimes, the default evaluators in **Agenta** may not be sufficient for your specific use case. In such cases, you can create a custom evaluator to suit your specific needs. Custom evaluators are written in Python.
-
-:::info
-For the moment, there are limitation on the code that can be written in the custom evaluator. Our backend uses `RestrictedPython` to execute the code which limits the libraries that can be used.
-:::
+Sometimes, the default evaluators in **Agenta** may not be sufficient for your specific use case. In such cases, you can create a custom evaluator to suit your specific needs. Custom evaluators are written in Python, JavaScript, or TypeScript.
 
 ## Evaluation code
 
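The doc change above says custom evaluators can now be written in Python, JavaScript, or TypeScript. As a rough illustration of the idea, a Python custom evaluator is just a function that scores an output against a reference; the function name and signature below are assumptions for illustration, not taken from this commit.

```python
# Hypothetical custom evaluator sketch -- the signature is an assumption
# for illustration, not Agenta's confirmed API.
def evaluate(app_params: dict, inputs: dict, output: str, correct_answer: str) -> float:
    # Score 1.0 for an exact (case-insensitive) match, 0.0 otherwise.
    return 1.0 if output.strip().lower() == correct_answer.strip().lower() else 0.0

print(evaluate({}, {"question": "Capital of France?"}, "Paris", "paris"))
# -> 1.0
```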

docs/docs/observability/integrations/02-langchain.mdx

Lines changed: 36 additions & 37 deletions

@@ -19,7 +19,7 @@ This guide shows you how to instrument LangChain applications using Agenta's obs
 Install the required packages:
 
 ```bash
-pip install -U agenta openai opentelemetry-instrumentation-langchain langchain langchain_community
+pip install -U agenta openai opentelemetry-instrumentation-langchain langchain langchain-openai
 ```
 
@@ -48,15 +48,17 @@ os.environ["AGENTA_HOST"] = "http://localhost"
 
 ## Code Example
 
+This example uses [LangChain Expression Language (LCEL)](https://python.langchain.com/docs/concepts/lcel/) to build a multi-step workflow that generates a joke and then translates it.
+
 ```python
 # highlight-next-line
 import agenta as ag
 # highlight-next-line
 from opentelemetry.instrumentation.langchain import LangchainInstrumentor
-from langchain.schema import SystemMessage, HumanMessage
-from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
-from langchain_community.chat_models import ChatOpenAI
-from langchain.chains import LLMChain, SequentialChain, TransformChain
+from langchain_core.prompts import ChatPromptTemplate
+from langchain_core.output_parsers import StrOutputParser
+from langchain_core.runnables import RunnablePassthrough, RunnableLambda
+from langchain_openai import ChatOpenAI
 
 # highlight-next-line
 ag.init()
@@ -66,43 +68,39 @@ LangchainInstrumentor().instrument()
 
 def langchain_app():
     # Initialize the chat model
-    chat = ChatOpenAI(temperature=0)
-
-    # Define a transformation chain to create the prompt
-    transform = TransformChain(
-        input_variables=["subject"],
-        output_variables=["prompt"],
-        transform=lambda inputs: {"prompt": f"Tell me a joke about {inputs['subject']}."},
-    )
-
-    # Define the first LLM chain to generate a joke
-    first_prompt_messages = [
-        SystemMessage(content="You are a funny sarcastic nerd."),
-        HumanMessage(content="{prompt}"),
-    ]
-    first_prompt_template = ChatPromptTemplate.from_messages(first_prompt_messages)
-    first_chain = LLMChain(llm=chat, prompt=first_prompt_template, output_key="joke")
-
-    # Define the second LLM chain to translate the joke
-    second_prompt_messages = [
-        SystemMessage(content="You are an Elf."),
-        HumanMessagePromptTemplate.from_template(
-            "Translate the joke below into Sindarin language:\n{joke}"
-        ),
-    ]
-    second_prompt_template = ChatPromptTemplate.from_messages(second_prompt_messages)
-    second_chain = LLMChain(llm=chat, prompt=second_prompt_template)
-
-    # Chain everything together in a sequential workflow
-    workflow = SequentialChain(
-        chains=[transform, first_chain, second_chain],
-        input_variables=["subject"],
+    llm = ChatOpenAI(temperature=0)
+
+    # Create prompt for joke generation
+    joke_prompt = ChatPromptTemplate.from_messages([
+        ("system", "You are a funny sarcastic nerd."),
+        ("human", "Tell me a joke about {subject}."),
+    ])
+
+    # Create prompt for translation
+    translate_prompt = ChatPromptTemplate.from_messages([
+        ("system", "You are an Elf."),
+        ("human", "Translate the joke below into Sindarin language:\n{joke}"),
+    ])
+
+    # Build the chain using LCEL (LangChain Expression Language)
+    # First chain: generate a joke
+    joke_chain = joke_prompt | llm | StrOutputParser()
+
+    # Second chain: translate the joke
+    translate_chain = translate_prompt | llm | StrOutputParser()
+
+    # Combine the chains: generate joke, then translate it
+    full_chain = (
+        {"subject": RunnablePassthrough()}
+        | RunnableLambda(lambda x: {"joke": joke_chain.invoke(x["subject"])})
+        | translate_chain
     )
 
     # Execute the workflow and print the result
-    result = workflow({"subject": "OpenTelemetry"})
+    result = full_chain.invoke("OpenTelemetry")
     print(result)
 
+
 # Run the LangChain application
 langchain_app()
 ```
@@ -111,6 +109,7 @@ langchain_app()
 
 - **Initialize Agenta**: `ag.init()` sets up the Agenta SDK.
 - **Instrument LangChain**: `LangchainInstrumentor().instrument()` instruments LangChain for tracing. This must be called **before** running your application to ensure all components are traced.
+- **LCEL Chains**: The pipe operator (`|`) chains components together. Each step's output becomes the next step's input, making it easy to compose complex workflows.
 
 ## Using Workflows
 
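The pipe-operator composition described in the new docs text can be mimicked in plain Python to see what LCEL is doing conceptually: each stage implements `__or__` so that `a | b` returns a new stage that feeds `a`'s output into `b`. This is a simplified sketch of the idea, not LangChain's actual classes.

```python
# Minimal sketch of LCEL-style chaining -- NOT LangChain's real
# implementation, just the composition idea behind the `|` operator.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # a | b: run a, then feed its output into b
        return Step(lambda value: other.invoke(self.invoke(value)))

prompt = Step(lambda subject: f"Tell me a joke about {subject}.")
fake_llm = Step(lambda text: text.upper())  # stand-in for a chat model
parser = Step(lambda text: text.strip())

chain = prompt | fake_llm | parser
print(chain.invoke("OpenTelemetry"))
# -> TELL ME A JOKE ABOUT OPENTELEMETRY.
```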

docs/docs/tutorials/cookbooks/02-observability_langchain.mdx

Lines changed: 0 additions & 2 deletions

@@ -73,9 +73,7 @@ This Langchain RAG application:
 ```python
 from langchain_openai import ChatOpenAI
 
-
 import bs4
-from langchain import hub
 from langchain_chroma import Chroma
 from langchain_community.document_loaders import WebBaseLoader
 from langchain_core.output_parsers import StrOutputParser

examples/jupyter/observability/observability_langchain.ipynb

Lines changed: 3 additions & 73 deletions

@@ -135,78 +135,8 @@
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/plain": [
-       "'To save a new version of a prompt in Agenta, you need to create a variant, which acts like a branch in git for versioning. After making your changes, commit them to the variant. Finally, you can deploy the specific version of your variant to the desired environment.'"
-      ]
-     },
-     "execution_count": 16,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
-   "source": [
-    "from langchain_openai import ChatOpenAI\n",
-    "\n",
-    "\n",
-    "import bs4\n",
-    "from langchain import hub\n",
-    "from langchain_chroma import Chroma\n",
-    "from langchain_community.document_loaders import WebBaseLoader\n",
-    "from langchain_core.output_parsers import StrOutputParser\n",
-    "from langchain_core.runnables import RunnablePassthrough\n",
-    "from langchain_openai import OpenAIEmbeddings\n",
-    "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
-    "from langchain_core.prompts import ChatPromptTemplate\n",
-    "\n",
-    "prompt = \"\"\"\n",
-    "You are an assistant for question-answering tasks.\n",
-    "Use the following pieces of retrieved context to answer the question.\n",
-    "If you don't know the answer, just say that you don't know.\n",
-    "Use three sentences maximum and keep the answer concise and to the point.\n",
-    "\n",
-    "Question: {question} \n",
-    "\n",
-    "Context: {context} \n",
-    "\n",
-    "Answer:\n",
-    "\"\"\"\n",
-    "\n",
-    "prompt_template = ChatPromptTemplate(\n",
-    "    [\n",
-    "        (\"human\", prompt),\n",
-    "    ]\n",
-    ")\n",
-    "\n",
-    "llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
-    "\n",
-    "loader = WebBaseLoader(\n",
-    "    web_paths=(\n",
-    "        \"https://agenta.ai/docs/prompt-engineering/managing-prompts-programatically/create-and-commit\",\n",
-    "    ),\n",
-    "    bs_kwargs=dict(parse_only=bs4.SoupStrainer(\"article\")),  # Only parse the core\n",
-    ")\n",
-    "docs = loader.load()\n",
-    "\n",
-    "text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n",
-    "splits = text_splitter.split_documents(docs)\n",
-    "vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())\n",
-    "\n",
-    "# Retrieve and generate using the relevant snippets of the blog.\n",
-    "retriever = vectorstore.as_retriever()\n",
-    "\n",
-    "\n",
-    "rag_chain = (\n",
-    "    {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
-    "    | prompt_template\n",
-    "    | llm\n",
-    "    | StrOutputParser()\n",
-    ")\n",
-    "\n",
-    "rag_chain.invoke(\"How can I save a new version of a prompt in Agenta?\")"
-   ]
+   "outputs": [],
+   "source": "from langchain_openai import ChatOpenAI\n\nimport bs4\nfrom langchain_chroma import Chroma\nfrom langchain_community.document_loaders import WebBaseLoader\nfrom langchain_core.output_parsers import StrOutputParser\nfrom langchain_core.runnables import RunnablePassthrough\nfrom langchain_openai import OpenAIEmbeddings\nfrom langchain_text_splitters import RecursiveCharacterTextSplitter\nfrom langchain_core.prompts import ChatPromptTemplate\n\nprompt = \"\"\"\nYou are an assistant for question-answering tasks.\nUse the following pieces of retrieved context to answer the question.\nIf you don't know the answer, just say that you don't know.\nUse three sentences maximum and keep the answer concise and to the point.\n\nQuestion: {question} \n\nContext: {context} \n\nAnswer:\n\"\"\"\n\nprompt_template = ChatPromptTemplate(\n    [\n        (\"human\", prompt),\n    ]\n)\n\nllm = ChatOpenAI(model=\"gpt-4o-mini\")\n\nloader = WebBaseLoader(\n    web_paths=(\n        \"https://agenta.ai/docs/prompt-engineering/managing-prompts-programatically/create-and-commit\",\n    ),\n    bs_kwargs=dict(parse_only=bs4.SoupStrainer(\"article\")), # Only parse the core\n)\ndocs = loader.load()\n\ntext_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\nsplits = text_splitter.split_documents(docs)\nvectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())\n\n# Retrieve and generate using the relevant snippets of the blog.\nretriever = vectorstore.as_retriever()\n\n\nrag_chain = (\n    {\"context\": retriever, \"question\": RunnablePassthrough()}\n    | prompt_template\n    | llm\n    | StrOutputParser()\n)\n\nrag_chain.invoke(\"How can I save a new version of a prompt in Agenta?\")"
   }
  ],
  "metadata": {

@@ -230,4 +160,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 2
-}
+}

sdk/agenta/sdk/workflows/runners/registry.py

Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@ def get_runner() -> CodeRunner:
     Registry to get the appropriate code runner based on environment configuration.
 
     Uses AGENTA_SERVICES_SANDBOX_RUNNER environment variable:
-    - "local" (default): Uses RestrictedPython for local execution
+    - "local" (default): Uses current container for local execution
     - "daytona": Uses Daytona remote sandbox
 
     Returns:
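The docstring change above describes a runner registry dispatched off the AGENTA_SERVICES_SANDBOX_RUNNER environment variable. A minimal sketch of that env-var dispatch pattern follows; the class names are illustrative stand-ins, not Agenta's actual runner classes.

```python
import os

# Illustrative stand-ins for the real runner classes in this commit.
class LocalRunner:
    name = "local"      # runs code in the current container

class DaytonaRunner:
    name = "daytona"    # runs code in a Daytona remote sandbox

_RUNNERS = {"local": LocalRunner, "daytona": DaytonaRunner}

def get_runner():
    # Default to "local" when the variable is unset.
    kind = os.environ.get("AGENTA_SERVICES_SANDBOX_RUNNER", "local")
    try:
        return _RUNNERS[kind]()
    except KeyError:
        raise ValueError(f"Unknown sandbox runner: {kind!r}")

print(get_runner().name)
```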

sdk/agenta/sdk/workflows/templates.py

Lines changed: 2 additions & 2 deletions

@@ -48,7 +48,7 @@
 // Ensure result is a number
 result = Number(result);
 if (!Number.isFinite(result)) {{
-    result = 0.0;
+    result = null;
 }}
 
 // Print result for capture

@@ -71,7 +71,7 @@
 // Ensure result is a number
 result = Number(result);
 if (!Number.isFinite(result)) {{
-    result = 0.0;
+    result = null;
 }}
 
 // Print result for capture
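The template change above returns null instead of 0.0 for non-finite results, which keeps a failed evaluation distinguishable from a legitimate zero score. The same coercion logic can be sketched in Python, purely as an illustration of the behavior, not the SDK's actual code:

```python
import math

def coerce_score(raw):
    # Mirror of the template's logic: coerce to a number, and use None
    # (the JS null) rather than 0.0 when the value is not a finite number.
    try:
        value = float(raw)
    except (TypeError, ValueError):
        return None
    return value if math.isfinite(value) else None

print(coerce_score("0.75"))        # -> 0.75
print(coerce_score(float("nan")))  # -> None
print(coerce_score("oops"))        # -> None
```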

sdk/poetry.lock

Lines changed: 7 additions & 7 deletions
Generated file; diff not rendered.

sdk/pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "agenta"
-version = "0.74.0"
+version = "0.76.0"
 description = "The SDK for agenta is an open-source LLMOps platform."
 readme = "README.md"
 authors = [

web/ee/package.json

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 {
   "name": "@agenta/ee",
-  "version": "0.74.0",
+  "version": "0.76.0",
   "private": true,
   "engines": {
     "node": ">=18"
