
Commit 35d7f02

Merge branch 'main' into patch-1
2 parents: 26c0b47 + d9c44fe

15 files changed, +148 −69 lines

README.md

Lines changed: 17 additions & 5 deletions
````diff
@@ -2,7 +2,7 @@
 
 This project demonstrates a fullstack application using a React frontend and a LangGraph-powered backend agent. The agent is designed to perform comprehensive research on a user's query by dynamically generating search terms, querying the web using Google Search, reflecting on the results to identify knowledge gaps, and iteratively refining its search until it can provide a well-supported answer with citations. This application serves as an example of building research-augmented conversational AI using LangGraph and Google's Gemini models.
 
-![Gemini Fullstack LangGraph](./app.png)
+<img src="./app.png" title="Gemini Fullstack LangGraph" alt="Gemini Fullstack LangGraph" width="90%">
 
 ## Features
 
@@ -12,7 +12,7 @@ This project demonstrates a fullstack application using a React frontend and a L
 - 🌐 Integrated web research via Google Search API.
 - 🤔 Reflective reasoning to identify knowledge gaps and refine searches.
 - 📄 Generates answers with citations from gathered sources.
-- 🔄 Hot-reloading for both frontend and backend development.
+- 🔄 Hot-reloading for both frontend and backend during development.
 
 ## Project Structure
 
@@ -28,7 +28,7 @@ Follow these steps to get the application running locally for development and te
 **1. Prerequisites:**
 
 - Node.js and npm (or yarn/pnpm)
-- Python 3.8+
+- Python 3.11+
 - **`GEMINI_API_KEY`**: The backend agent requires a Google Gemini API key.
   1. Navigate to the `backend/` directory.
   2. Create a file named `.env` by copying the `backend/.env.example` file.
@@ -65,21 +65,33 @@ _Alternatively, you can run the backend and frontend development servers separat
 
 The core of the backend is a LangGraph agent defined in `backend/src/agent/graph.py`. It follows these steps:
 
-![Agent Flow](./agent.png)
+<img src="./agent.png" title="Agent Flow" alt="Agent Flow" width="50%">
 
 1. **Generate Initial Queries:** Based on your input, it generates a set of initial search queries using a Gemini model.
 2. **Web Research:** For each query, it uses the Gemini model with the Google Search API to find relevant web pages.
 3. **Reflection & Knowledge Gap Analysis:** The agent analyzes the search results to determine if the information is sufficient or if there are knowledge gaps. It uses a Gemini model for this reflection process.
 4. **Iterative Refinement:** If gaps are found or the information is insufficient, it generates follow-up queries and repeats the web research and reflection steps (up to a configured maximum number of loops).
 5. **Finalize Answer:** Once the research is deemed sufficient, the agent synthesizes the gathered information into a coherent answer, including citations from the web sources, using a Gemini model.
 
+## CLI Example
+
+For quick one-off questions you can execute the agent from the command line. The
+script `backend/examples/cli_research.py` runs the LangGraph agent and prints the
+final answer:
+
+```bash
+cd backend
+python examples/cli_research.py "What are the latest trends in renewable energy?"
+```
+
+
 ## Deployment
 
 In production, the backend server serves the optimized static frontend build. LangGraph requires a Redis instance and a Postgres database. Redis is used as a pub-sub broker to enable streaming real time output from background runs. Postgres is used to store assistants, threads, runs, persist thread state and long term memory, and to manage the state of the background task queue with 'exactly once' semantics. For more details on how to deploy the backend server, take a look at the [LangGraph Documentation](https://langchain-ai.github.io/langgraph/concepts/deployment_options/). Below is an example of how to build a Docker image that includes the optimized frontend build and the backend server and run it via `docker-compose`.
 
 _Note: For the docker-compose.yml example you need a LangSmith API key, you can get one from [LangSmith](https://smith.langchain.com/settings)._
 
-_Note: If you are not running the docker-compose.yml example or exposing the backend server to the public internet, you update the `apiUrl` in the `frontend/src/App.tsx` file your host. Currently the `apiUrl` is set to `http://localhost:8123` for docker-compose or `http://localhost:2024` for development._
+_Note: If you are not running the docker-compose.yml example or exposing the backend server to the public internet, you should update the `apiUrl` in the `frontend/src/App.tsx` file to your host. Currently the `apiUrl` is set to `http://localhost:8123` for docker-compose or `http://localhost:2024` for development._
 
 **1. Build the Docker Image:**
 
````
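
The five numbered steps in the README map onto a LangGraph state machine with a conditional loop between research and reflection. As a rough illustration of that shape only — node bodies are stubbed and the loop limit is hard-coded, so this is not the project's actual `graph.py` — a minimal sketch using LangGraph's `StateGraph` API:

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    question: str
    research_loop_count: int
    is_sufficient: bool


def web_research(state: State) -> dict:
    return {}  # gather sources for the current queries (stubbed)


def reflection(state: State) -> dict:
    # Decide whether the gathered information answers the question (stubbed).
    return {
        "research_loop_count": state["research_loop_count"] + 1,
        "is_sufficient": state["research_loop_count"] >= 1,
    }


def finalize_answer(state: State) -> dict:
    return {}  # synthesize the cited answer (stubbed)


def route(state: State) -> str:
    # Loop back to research until sufficient or the loop cap is hit.
    if state["is_sufficient"] or state["research_loop_count"] >= 2:
        return "finalize_answer"
    return "web_research"


builder = StateGraph(State)
builder.add_node("web_research", web_research)
builder.add_node("reflection", reflection)
builder.add_node("finalize_answer", finalize_answer)
builder.add_edge(START, "web_research")
builder.add_edge("web_research", "reflection")
builder.add_conditional_edges("reflection", route)
builder.add_edge("finalize_answer", END)
graph = builder.compile()
```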

backend/examples/cli_research.py

Lines changed: 43 additions & 0 deletions
```diff
@@ -0,0 +1,43 @@
+import argparse
+from langchain_core.messages import HumanMessage
+from agent.graph import graph
+
+
+def main() -> None:
+    """Run the research agent from the command line."""
+    parser = argparse.ArgumentParser(description="Run the LangGraph research agent")
+    parser.add_argument("question", help="Research question")
+    parser.add_argument(
+        "--initial-queries",
+        type=int,
+        default=3,
+        help="Number of initial search queries",
+    )
+    parser.add_argument(
+        "--max-loops",
+        type=int,
+        default=2,
+        help="Maximum number of research loops",
+    )
+    parser.add_argument(
+        "--reasoning-model",
+        default="gemini-2.5-pro-preview-05-06",
+        help="Model for the final answer",
+    )
+    args = parser.parse_args()
+
+    state = {
+        "messages": [HumanMessage(content=args.question)],
+        "initial_search_query_count": args.initial_queries,
+        "max_research_loops": args.max_loops,
+        "reasoning_model": args.reasoning_model,
+    }
+
+    result = graph.invoke(state)
+    messages = result.get("messages", [])
+    if messages:
+        print(messages[-1].content)
+
+
+if __name__ == "__main__":
+    main()
```
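
Assuming the dependencies in `backend/` are installed, the new script can also be driven with its optional flags. The question and flag values below are illustrative, not from the commit:

```bash
cd backend
python examples/cli_research.py "How do offshore wind farms store energy?" \
  --initial-queries 5 \
  --max-loops 3 \
  --reasoning-model gemini-2.5-pro-preview-05-06
```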

backend/src/agent/app.py

Lines changed: 2 additions & 18 deletions
```diff
@@ -1,8 +1,7 @@
 # mypy: disable - error - code = "no-untyped-def,misc"
 import pathlib
-from fastapi import FastAPI, Request, Response
+from fastapi import FastAPI, Response
 from fastapi.staticfiles import StaticFiles
-import fastapi.exceptions
 
 # Define the FastAPI app
 app = FastAPI()
@@ -18,7 +17,6 @@ def create_frontend_router(build_dir="../frontend/dist"):
         A Starlette application serving the frontend.
     """
     build_path = pathlib.Path(__file__).parent.parent.parent / build_dir
-    static_files_path = build_path / "assets"  # Vite uses 'assets' subdir
 
     if not build_path.is_dir() or not (build_path / "index.html").is_file():
         print(
@@ -36,21 +34,7 @@ async def dummy_frontend(request):
 
         return Route("/{path:path}", endpoint=dummy_frontend)
 
-    build_dir = pathlib.Path(build_dir)
-
-    react = FastAPI(openapi_url="")
-    react.mount(
-        "/assets", StaticFiles(directory=static_files_path), name="static_assets"
-    )
-
-    @react.get("/{path:path}")
-    async def handle_catch_all(request: Request, path: str):
-        fp = build_path / path
-        if not fp.exists() or not fp.is_file():
-            fp = build_path / "index.html"
-        return fastapi.responses.FileResponse(fp)
-
-    return react
+    return StaticFiles(directory=build_path, html=True)
 
 
 # Mount the frontend under /app to not conflict with the LangGraph API routes
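```

The replacement collapses the hand-rolled catch-all route into Starlette's `StaticFiles` in html mode, which serves `index.html` for requests that resolve to the directory root. A minimal standalone sketch of the same pattern — the `frontend_dist` path and mount point are hypothetical, not the project's real layout:

```python
import pathlib

from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

app = FastAPI()

# Hypothetical build output directory for illustration only.
build_path = pathlib.Path(__file__).parent / "frontend_dist"

# html=True tells StaticFiles to serve build_path/index.html when the
# request path is a directory (e.g. "/app/"), replacing the custom
# catch-all route that the commit deletes.
app.mount("/app", StaticFiles(directory=build_path, html=True), name="frontend")
```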

backend/src/agent/configuration.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -16,14 +16,14 @@ class Configuration(BaseModel):
     )
 
     reflection_model: str = Field(
-        default="gemini-2.5-flash-preview-04-17",
+        default="gemini-2.5-flash",
         metadata={
             "description": "The name of the language model to use for the agent's reflection."
         },
     )
 
     answer_model: str = Field(
-        default="gemini-2.5-pro-preview-05-06",
+        default="gemini-2.5-pro",
         metadata={
             "description": "The name of the language model to use for the agent's answer."
         },
```

backend/src/agent/graph.py

Lines changed: 7 additions & 7 deletions
```diff
@@ -42,17 +42,17 @@
 
 # Nodes
 def generate_query(state: OverallState, config: RunnableConfig) -> QueryGenerationState:
-    """LangGraph node that generates a search queries based on the User's question.
+    """LangGraph node that generates search queries based on the User's question.
 
-    Uses Gemini 2.0 Flash to create an optimized search query for web research based on
+    Uses Gemini 2.0 Flash to create an optimized search queries for web research based on
     the User's question.
 
     Args:
         state: Current graph state containing the User's question
         config: Configuration for the runnable, including LLM provider settings
 
     Returns:
-        Dictionary with state update, including search_query key containing the generated query
+        Dictionary with state update, including search_query key containing the generated queries
     """
     configurable = Configuration.from_runnable_config(config)
 
@@ -78,7 +78,7 @@ def generate_query(state: OverallState, config: RunnableConfig) -> QueryGenerati
     )
     # Generate the search queries
     result = structured_llm.invoke(formatted_prompt)
-    return {"query_list": result.query}
+    return {"search_query": result.query}
 
 
 def continue_to_web_research(state: QueryGenerationState):
@@ -88,7 +88,7 @@ def continue_to_web_research(state: QueryGenerationState):
     """
     return [
         Send("web_research", {"search_query": search_query, "id": int(idx)})
-        for idx, search_query in enumerate(state["query_list"])
+        for idx, search_query in enumerate(state["search_query"])
     ]
 
 
@@ -153,7 +153,7 @@ def reflection(state: OverallState, config: RunnableConfig) -> ReflectionState:
     configurable = Configuration.from_runnable_config(config)
     # Increment the research loop count and get the reasoning model
     state["research_loop_count"] = state.get("research_loop_count", 0) + 1
-    reasoning_model = state.get("reasoning_model") or configurable.reasoning_model
+    reasoning_model = state.get("reasoning_model", configurable.reflection_model)
 
     # Format the prompt
     current_date = get_current_date()
@@ -231,7 +231,7 @@ def finalize_answer(state: OverallState, config: RunnableConfig):
         Dictionary with state update, including running_summary key containing the formatted final summary with sources
     """
     configurable = Configuration.from_runnable_config(config)
-    reasoning_model = state.get("reasoning_model") or configurable.reasoning_model
+    reasoning_model = state.get("reasoning_model") or configurable.answer_model
 
     # Format the prompt
     current_date = get_current_date()
```
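
Note the asymmetry this diff introduces: `reflection` now uses `state.get(key, default)` while `finalize_answer` keeps `state.get(key) or default`. The two idioms differ when the key is present but holds a falsy value such as `None` — a plain-Python illustration, not project code:

```python
state = {"reasoning_model": None}

# `or` falls back whenever the stored value is falsy (None, "", 0, ...):
state.get("reasoning_model") or "fallback-model"   # -> "fallback-model"

# A .get() default applies only when the key is missing entirely:
state.get("reasoning_model", "fallback-model")     # -> None
```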

backend/src/agent/prompts.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -17,7 +17,7 @@ def get_current_date():
 - Query should ensure that the most current information is gathered. The current date is {current_date}.
 
 Format:
-- Format your response as a JSON object with ALL three of these exact keys:
+- Format your response as a JSON object with ALL two of these exact keys:
    - "rationale": Brief explanation of why these queries are relevant
    - "query": A list of search queries
 
@@ -87,7 +87,7 @@ def get_current_date():
 - You have access to all the information gathered from the previous steps.
 - You have access to the user's question.
 - Generate a high-quality answer to the user's question based on the provided summaries and the user's question.
-- you MUST include all the citations from the summaries in the answer correctly.
+- You MUST include all the citations from the summaries in the answer correctly.
 
 User Context:
 - {research_topic}
```

backend/src/agent/state.py

Lines changed: 1 addition & 3 deletions
```diff
@@ -8,8 +8,6 @@
 
 
 import operator
-from dataclasses import dataclass, field
-from typing_extensions import Annotated
 
 
 class OverallState(TypedDict):
@@ -37,7 +35,7 @@ class Query(TypedDict):
 
 
 class QueryGenerationState(TypedDict):
-    query_list: list[Query]
+    search_query: list[Query]
 
 
 class WebSearchState(TypedDict):
```

docker-compose.yml

Lines changed: 3 additions & 0 deletions
```diff
@@ -4,13 +4,15 @@ volumes:
 services:
   langgraph-redis:
     image: docker.io/redis:6
+    container_name: langgraph-redis
     healthcheck:
       test: redis-cli ping
       interval: 5s
       timeout: 1s
       retries: 5
   langgraph-postgres:
     image: docker.io/postgres:16
+    container_name: langgraph-postgres
     ports:
       - "5433:5432"
     environment:
@@ -27,6 +29,7 @@ services:
       interval: 5s
   langgraph-api:
     image: gemini-fullstack-langgraph
+    container_name: langgraph-api
     ports:
       - "8123:8000"
     depends_on:
```
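
With fixed `container_name` values, container references no longer depend on the Compose project prefix. Illustrative commands only — the Postgres user is an assumption, since the `environment` block is not shown in this hunk:

```bash
docker compose up -d
docker logs -f langgraph-api
docker exec -it langgraph-postgres psql -U postgres  # assumes the default user
```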

frontend/src/App.tsx

Lines changed: 22 additions & 17 deletions
```diff
@@ -4,6 +4,7 @@ import { useState, useEffect, useRef, useCallback } from "react";
 import { ProcessedEvent } from "@/components/ActivityTimeline";
 import { WelcomeScreen } from "@/components/WelcomeScreen";
 import { ChatMessagesView } from "@/components/ChatMessagesView";
+import { Button } from "@/components/ui/button";
 
 export default function App() {
   const [processedEventsTimeline, setProcessedEventsTimeline] = useState<
@@ -14,7 +15,7 @@ export default function App() {
   >({});
   const scrollAreaRef = useRef<HTMLDivElement>(null);
   const hasFinalizeEventOccurredRef = useRef(false);
-
+  const [error, setError] = useState<string | null>(null);
   const thread = useStream<{
     messages: Message[];
     initial_search_query_count: number;
@@ -26,15 +27,12 @@ export default function App() {
       : "http://localhost:8123",
     assistantId: "agent",
     messagesKey: "messages",
-    onFinish: (event: any) => {
-      console.log(event);
-    },
     onUpdateEvent: (event: any) => {
       let processedEvent: ProcessedEvent | null = null;
       if (event.generate_query) {
         processedEvent = {
           title: "Generating Search Queries",
-          data: event.generate_query.query_list.join(", "),
+          data: event.generate_query?.search_query?.join(", ") || "",
         };
       } else if (event.web_research) {
         const sources = event.web_research.sources_gathered || [];
@@ -52,11 +50,7 @@ export default function App() {
       } else if (event.reflection) {
         processedEvent = {
           title: "Reflection",
-          data: event.reflection.is_sufficient
-            ? "Search successful, generating final answer."
-            : `Need more information, searching for ${event.reflection.follow_up_queries.join(
-                ", "
-              )}`,
+          data: "Analysing Web Research Results",
         };
       } else if (event.finalize_answer) {
         processedEvent = {
@@ -72,6 +66,9 @@ export default function App() {
         ]);
       }
     },
+    onError: (error: any) => {
+      setError(error.message);
+    },
   });
 
   useEffect(() => {
@@ -154,18 +151,27 @@ export default function App() {
 
   return (
     <div className="flex h-screen bg-neutral-800 text-neutral-100 font-sans antialiased">
-      <main className="flex-1 flex flex-col overflow-hidden max-w-4xl mx-auto w-full">
-        <div
-          className={`flex-1 overflow-y-auto ${
-            thread.messages.length === 0 ? "flex" : ""
-          }`}
-        >
+      <main className="h-full w-full max-w-4xl mx-auto">
         {thread.messages.length === 0 ? (
           <WelcomeScreen
             handleSubmit={handleSubmit}
             isLoading={thread.isLoading}
             onCancel={handleCancel}
           />
+        ) : error ? (
+          <div className="flex flex-col items-center justify-center h-full">
+            <div className="flex flex-col items-center justify-center gap-4">
+              <h1 className="text-2xl text-red-400 font-bold">Error</h1>
+              <p className="text-red-400">{JSON.stringify(error)}</p>
+
+              <Button
+                variant="destructive"
+                onClick={() => window.location.reload()}
+              >
+                Retry
+              </Button>
+            </div>
+          </div>
         ) : (
           <ChatMessagesView
             messages={thread.messages}
@@ -177,7 +183,6 @@ export default function App() {
             historicalActivities={historicalActivities}
           />
         )}
-        </div>
       </main>
     </div>
   );
```
