Commit 6828315

Merge pull request #140 from ks6088ts-labs/feature/issue-135_langgraph-agent

add agents with LangGraph

2 parents 4ce0be4 + 5dee5db

33 files changed: +2204 −18 lines
.env.template

Lines changed: 8 additions & 0 deletions

@@ -31,3 +31,11 @@ AZURE_BLOB_CONTAINER_NAME="audio"
 # Azure AI Speech
 AZURE_AI_SPEECH_API_ENDPOINT="https://<speech-api-name>.cognitiveservices.azure.com/"
 AZURE_AI_SPEECH_API_SUBSCRIPTION_KEY="<speech-api-subscription-key>"
+
+# Bing search resource
+BING_SUBSCRIPTION_KEY="<bing-subscription-key>"
+BING_SEARCH_URL="https://api.bing.microsoft.com/v7.0/search"
+
+# LangSmith
+LANGCHAIN_TRACING_V2="true"
+LANGCHAIN_API_KEY="<langchain-api-key>"

.gitignore

Lines changed: 1 addition & 0 deletions

@@ -166,3 +166,4 @@ generated/
 *.pt
 *.jpg
 *.jpeg
+.chroma

README.md

Lines changed: 1 addition & 0 deletions

@@ -41,6 +41,7 @@ Here are the preferred tools for development.
 | [9_streamlit_azure_document_intelligence](./apps/9_streamlit_azure_document_intelligence/README.md) | Call Azure AI Document Intelligence API with Streamlit | ![9_streamlit_azure_document_intelligence](./docs/images/9_streamlit_azure_document_intelligence.main.png) |
 | [10_streamlit_batch_transcription](./apps/10_streamlit_batch_transcription/README.md) | Call Batch Transcription API with Streamlit | ![10_streamlit_batch_transcription](./docs/images/10_streamlit_batch_transcription.main.png) |
 | [11_promptflow](./apps/11_promptflow/README.md) | Get started with Prompt flow | No Image |
+| [12_langgraph_agent](./apps/12_langgraph_agent/README.md) | Create agents with LangGraph | No Image |
 | [99_streamlit_examples](./apps/99_streamlit_examples/README.md) | Code samples for Streamlit | ![99_streamlit_examples](./docs/images/99_streamlit_examples.explaindata.png) |

 ## How to run

apps/12_langgraph_agent/README.md

Lines changed: 62 additions & 0 deletions (new file)

# Create agents with LangGraph

This app demonstrates how to implement agents with LangGraph.

## Prerequisites

- Python 3.10 or later
- Azure OpenAI Service

## Overview

**What is [LangGraph](https://langchain-ai.github.io/langgraph/)?**

LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows.

This chapter provides a practical example of how to use LangGraph to create an agent that can interact with users and external tools.

## Usage

1. Get Azure OpenAI Service API key
1. Copy [.env.template](../../.env.template) to `.env` in the same directory
1. Set credentials in `.env`
1. Run `main.py`

```shell
# Create a virtual environment
$ python -m venv .venv

# Activate the virtual environment
$ source .venv/bin/activate

# Install dependencies
$ pip install -r requirements.txt
```

### Examples

#### [reflection_agent](./reflection_agent/main.py)

#### [react_agent](./react_agent/main.py)

#### [advanced_rag_flows](./advanced_rag_flows/main.py)

```shell
# Create the vector store
python apps/12_langgraph_agent/advanced_rag_flows/ingestion.py

# Run main.py
python apps/12_langgraph_agent/advanced_rag_flows/main.py
```

![Advanced RAG Flows](../../docs/images/12_langgraph_agent_graph.png)

## References

- [LangGraph](https://langchain-ai.github.io/langgraph/)
- [Udemy > LangGraph- Develop LLM powered agents with LangGraph](https://www.udemy.com/course/langgraph)
- [emarco177/langgaph-course](https://github.com/emarco177/langgaph-course)
- [Prompt flow > Tracing](https://microsoft.github.io/promptflow/how-to-guides/tracing/index.html)
- [Reflection Agents](https://blog.langchain.dev/reflection-agents/)
- [LangChain > Reflexion](https://langchain-ai.github.io/langgraph/tutorials/reflexion/reflexion/)
- [LangChain > Bing Search](https://python.langchain.com/docs/integrations/tools/bing_search/)
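The graph pictured above is built from nodes that update a shared state and conditional edges that pick the next node. As a rough pure-Python sketch of that idea (an illustration of the concept only, not the LangGraph API; the node names "retrieve", "grade", and "generate" here are hypothetical):

```python
END = "END"

def retrieve(state):
    # stand-in for a vector-store lookup
    state["documents"] = ["doc about LangGraph"]
    return state

def grade(state):
    # keep only documents mentioning the topic keyword
    state["documents"] = [d for d in state["documents"] if "LangGraph" in d]
    return state

def generate(state):
    state["generation"] = f"Answer based on {len(state['documents'])} document(s)"
    return state

NODES = {"retrieve": retrieve, "grade": grade, "generate": generate}

def router(current, state):
    # conditional edges: choose the next node from the current node and state
    if current == "retrieve":
        return "grade"
    if current == "grade":
        return "generate" if state["documents"] else END
    return END

def run(state, entry="retrieve"):
    node = entry
    while node != END:
        state = NODES[node](state)
        node = router(node, state)
    return state

print(run({"question": "What is LangGraph?"}))
```

LangGraph itself adds the pieces this sketch omits: typed state schemas, checkpointing, streaming, and multi-actor coordination.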

apps/12_langgraph_agent/advanced_rag_flows/graph/__init__.py

Whitespace-only changes.

apps/12_langgraph_agent/advanced_rag_flows/graph/chains/__init__.py

Whitespace-only changes.
Lines changed: 33 additions & 0 deletions (new file)

```python
from os import getenv

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableSequence
from langchain_openai import AzureChatOpenAI
from pydantic import BaseModel, Field


class GradeAnswer(BaseModel):
    binary_score: bool = Field(description="Answer addresses the question, 'yes' or 'no'")


llm = AzureChatOpenAI(
    temperature=0,
    api_key=getenv("AZURE_OPENAI_API_KEY"),
    api_version=getenv("AZURE_OPENAI_API_VERSION"),
    azure_endpoint=getenv("AZURE_OPENAI_ENDPOINT"),
    model=getenv("AZURE_OPENAI_GPT_MODEL"),
)

structured_llm_grader = llm.with_structured_output(GradeAnswer)

system = """You are a grader assessing whether an answer addresses / resolves a question. \n
Give a binary score 'yes' or 'no'. 'Yes' means that the answer resolves the question."""
answer_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "User question: \n\n {question} \n\n LLM generation: {generation}"),
    ]
)

answer_grader: RunnableSequence = answer_prompt | structured_llm_grader
```
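Invoking this chain returns a `GradeAnswer` instance rather than raw text. A real call needs Azure OpenAI credentials, so below is a hypothetical stub with the same `invoke` interface; it crudely grades 'yes' whenever the question and the generation share a word, purely to illustrate the calling convention:

```python
from dataclasses import dataclass


@dataclass
class GradeAnswer:
    # mirrors the pydantic schema used by the structured-output grader
    binary_score: bool


class StubAnswerGrader:
    """Hypothetical stand-in for the LLM-backed answer_grader chain."""

    def invoke(self, inputs: dict) -> GradeAnswer:
        question_words = set(inputs["question"].lower().split())
        generation_words = set(inputs["generation"].lower().split())
        # crude overlap check standing in for the model's judgment
        return GradeAnswer(binary_score=bool(question_words & generation_words))


answer_grader = StubAnswerGrader()
result = answer_grader.invoke(
    {"question": "What is LangGraph?", "generation": "LangGraph is a library."}
)
print(result.binary_score)  # True
```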
Lines changed: 16 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,16 @@
1+
from os import getenv
2+
3+
from langchain import hub
4+
from langchain_core.output_parsers import StrOutputParser
5+
from langchain_openai import AzureChatOpenAI
6+
7+
llm = AzureChatOpenAI(
8+
temperature=0,
9+
api_key=getenv("AZURE_OPENAI_API_KEY"),
10+
api_version=getenv("AZURE_OPENAI_API_VERSION"),
11+
azure_endpoint=getenv("AZURE_OPENAI_ENDPOINT"),
12+
model=getenv("AZURE_OPENAI_GPT_MODEL"),
13+
)
14+
prompt = hub.pull("rlm/rag-prompt")
15+
16+
generation_chain = prompt | llm | StrOutputParser()
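The `prompt | llm | StrOutputParser()` line composes three stages because LangChain runnables overload the `|` operator: each stage's output becomes the next stage's input. A minimal pure-Python analogy of that piping (an illustration, not LangChain's implementation; the stand-in functions are invented):

```python
class Step:
    """Toy runnable: wraps a function and composes with | like LCEL stages."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # chain two steps into one: run self first, feed its result to other
        return Step(lambda x: other.invoke(self.invoke(x)))


prompt = Step(lambda d: f"Question: {d['question']}")  # dict -> prompt string
llm = Step(lambda text: {"content": text.upper()})     # stand-in for a model call
parser = Step(lambda msg: msg["content"])              # stand-in for StrOutputParser

chain = prompt | llm | parser
print(chain.invoke({"question": "what is rag?"}))  # QUESTION: WHAT IS RAG?
```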
Lines changed: 34 additions & 0 deletions (new file)

```python
from os import getenv

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableSequence
from langchain_openai import AzureChatOpenAI
from pydantic import BaseModel, Field

llm = AzureChatOpenAI(
    temperature=0,
    api_key=getenv("AZURE_OPENAI_API_KEY"),
    api_version=getenv("AZURE_OPENAI_API_VERSION"),
    azure_endpoint=getenv("AZURE_OPENAI_ENDPOINT"),
    model=getenv("AZURE_OPENAI_GPT_MODEL"),
)


class GradeHallucinations(BaseModel):
    """Binary score for hallucination present in generation answer."""

    binary_score: bool = Field(description="Answer is grounded in the facts, 'yes' or 'no'")


structured_llm_grader = llm.with_structured_output(GradeHallucinations)

system = """You are a grader assessing whether an LLM generation is grounded in / supported by a set of retrieved facts. \n
Give a binary score 'yes' or 'no'. 'Yes' means that the answer is grounded in / supported by the set of facts."""  # noqa
hallucination_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "Set of facts: \n\n {documents} \n\n LLM generation: {generation}"),
    ]
)

hallucination_grader: RunnableSequence = hallucination_prompt | structured_llm_grader
```
Lines changed: 34 additions & 0 deletions (new file)

```python
from os import getenv

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import AzureChatOpenAI
from pydantic import BaseModel, Field

llm = AzureChatOpenAI(
    temperature=0,
    api_key=getenv("AZURE_OPENAI_API_KEY"),
    api_version=getenv("AZURE_OPENAI_API_VERSION"),
    azure_endpoint=getenv("AZURE_OPENAI_ENDPOINT"),
    model=getenv("AZURE_OPENAI_GPT_MODEL"),
)


class GradeDocuments(BaseModel):
    """Binary score for relevance check on retrieved documents."""

    binary_score: str = Field(description="Documents are relevant to the question, 'yes' or 'no'")


structured_llm_grader = llm.with_structured_output(GradeDocuments)

system = """You are a grader assessing relevance of a retrieved document to a user question. \n
If the document contains keyword(s) or semantic meaning related to the question, grade it as relevant. \n
Give a binary score 'yes' or 'no' to indicate whether the document is relevant to the question."""
grade_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        ("human", "Retrieved document: \n\n {document} \n\n User question: {question}"),
    ]
)

retrieval_grader = grade_prompt | structured_llm_grader
```
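In the advanced RAG flow, binary scores like this grader's 'yes'/'no' drive the routing: irrelevant documents are filtered out, and if nothing survives grading the flow falls back to another step instead of generating. A rough pure-Python sketch of that routing, with a crude keyword check standing in for the LLM-backed grader (illustrative only, not the app's exact logic; `decide_next_step` and the step names are invented):

```python
def grade_document(question: str, document: str) -> str:
    # stand-in for retrieval_grader: 'yes' if any question keyword appears
    keywords = {w.strip("?").lower() for w in question.split()}
    return "yes" if any(k in document.lower() for k in keywords) else "no"


def decide_next_step(question: str, documents: list[str]) -> tuple[str, list[str]]:
    relevant = [d for d in documents if grade_document(question, d) == "yes"]
    # if no relevant documents survived grading, fall back to query rewriting
    return ("generate" if relevant else "rewrite", relevant)


step, docs = decide_next_step(
    "What is LangGraph?",
    ["LangGraph builds agent workflows.", "Recipe for sourdough bread."],
)
print(step, len(docs))  # generate 1
```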
