16 changes: 16 additions & 0 deletions samples/travel planner agent/.gitignore
@@ -0,0 +1,16 @@
# Python
.venv/
__pycache__/
*.pyc

# Node
node_modules/

# Env
.env

# OS
.DS_Store

# Booking API
backend/booking_api/data/bookings.json
88 changes: 88 additions & 0 deletions samples/travel planner agent/README.md
@@ -0,0 +1,88 @@
# Hotel Booking Agent

Minimal Python + React stack for the travel planner agent.

- **AI Agent**: `backend/agent/`
- **Booking API**: `backend/booking_api/`
- **Frontend**: `frontend/`
- **Policy ingest**: `resources/ingest/`
- **Sample policy PDFs**: `resources/policy_pdfs/`

## Quick Start

### Agent Manager deployment
Deploy the agent in your Agent Manager environment (details to be added). The settings below cover the Agent Manager configuration and the supporting services it requires:

**Agent Manager**
- Repo URL: `https://github.com/wso2/agent-manager/tree/amp/v0/samples/travel_planner_agent`
- Language/runtime: Python 3.11
- Run command: `uvicorn app:app --host 0.0.0.0 --port 9090`
- Agent type: Chat API Agent
- Schema path: `openapi.yaml`
- Port: `9090`

**Agent environment variables**
Required:
- `OPENAI_API_KEY`
- `ASGARDEO_BASE_URL`
- `ASGARDEO_CLIENT_ID`
- `PINECONE_API_KEY`
- `PINECONE_SERVICE_URL`

Optional (defaults are applied if unset):
- `OPENAI_MODEL` (default: `gpt-4o-mini`)
- `OPENAI_EMBEDDING_MODEL` (default: `text-embedding-3-small`)
- `WEATHER_API_KEY`
- `WEATHER_API_BASE_URL` (default: `http://api.weatherapi.com/v1`)
- `BOOKING_API_BASE_URL` (default: `http://localhost:9091`)

**Expose the agent endpoint after deployment**
Run this inside the WSO2-AMP dev container to expose the agent on `localhost:9090` (the namespace and service name below may differ in your environment):

```bash
kubectl -n dp-default-default-default-ccb66d74 port-forward svc/travel-planner-agent-is 9090:80
```

**Booking API**
- Runs locally on `http://localhost:9091` when started via `uvicorn`.
- You can also deploy it to a cloud host; just point the agent's `BOOKING_API_BASE_URL` at the deployed base URL.

**Pinecone policies (required)**
- Create a Pinecone index using your preferred embedding model.
- Set the Pinecone and embedding configuration in `resources/ingest/.env`.
- Run the ingest script to populate the index (see "Seed Pinecone policies" below).

### Local services (Booking API + Frontend)
#### 1) Start the booking API (local)
```bash
cd backend/booking_api
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
uvicorn booking_api:app --host 0.0.0.0 --port 9091
```
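
Optionally, confirm the service is up. This assumes the booking API is a FastAPI app with the default interactive docs enabled (an assumption; skip or adjust the path if your setup differs):

```bash
# Expect an HTTP 200 and the Swagger UI page if the booking API is running
curl -i http://localhost:9091/docs
```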

#### 2) Start the frontend (local)
The frontend runs on `http://localhost:3000`. Start it with:

```bash
cd frontend
npm install
npm start
```

## Seed Pinecone policies (required)
Populate Pinecone from the sample policies in `resources/policy_pdfs`.
Make sure you have created a Pinecone index with your preferred embedding model and set these values in `resources/ingest/.env`:
`PINECONE_SERVICE_URL`, `PINECONE_API_KEY`, `PINECONE_INDEX_NAME`, `OPENAI_API_KEY`, `OPENAI_EMBEDDING_MODEL`, and optional chunk settings.
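
A minimal `resources/ingest/.env` sketch with placeholder values (the variable names come from the list above; the embedding model shown matches the agent's default, and the optional chunk settings are omitted):

```bash
PINECONE_SERVICE_URL=<your-pinecone-service-url>
PINECONE_API_KEY=<your-pinecone-api-key>
PINECONE_INDEX_NAME=<your-index-name>
OPENAI_API_KEY=<your-openai-api-key>
OPENAI_EMBEDDING_MODEL=text-embedding-3-small
```

Then run the ingest: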

```bash
cd resources/ingest
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python ingest.py
```

## Notes
- The agent serves chat at `http://localhost:9090/chat`.
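
Example chat request once the agent is reachable on `localhost:9090` (a sketch: the bearer token is assumed to be a valid Asgardeo-issued access token, and `sessionId` is any string you use to scope conversation memory):

```bash
curl -X POST http://localhost:9090/chat \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '{"message": "Plan a 3-day trip to Kandy", "sessionId": "demo-1"}'
# Response shape: {"message": "..."}
```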
14 changes: 14 additions & 0 deletions samples/travel planner agent/backend/agent/.env.example
@@ -0,0 +1,14 @@
OPENAI_API_KEY=
OPENAI_MODEL=

BOOKING_API_BASE_URL=

ASGARDEO_BASE_URL=
ASGARDEO_CLIENT_ID=

PINECONE_API_KEY=
PINECONE_SERVICE_URL=
PINECONE_INDEX_NAME=

WEATHER_API_KEY=
WEATHER_API_BASE_URL=
126 changes: 126 additions & 0 deletions samples/travel planner agent/backend/agent/app.py
@@ -0,0 +1,126 @@
from __future__ import annotations

from datetime import datetime, timezone
import logging

from fastapi import FastAPI, HTTPException, Request, status
from fastapi.middleware.cors import CORSMiddleware
from langchain_core.messages import HumanMessage
import jwt
from pydantic import BaseModel
from typing import Any

from config import Settings
from graph import build_graph

logging.basicConfig(
level=logging.INFO,
format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

configs = Settings.from_env()
agent_graph = build_graph(configs)

class ChatRequest(BaseModel):
message: str
sessionId: str | None = None


class ChatResponse(BaseModel):
message: str

app = FastAPI(title="Hotel Booking Agent")
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=False,
allow_methods=["GET", "POST", "PUT", "DELETE", "OPTIONS"],
allow_headers=["Content-Type", "Authorization", "Accept", "x-user-id"],
max_age=84900,
)


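# Prepend the caller's identity and the current UTC time to each query so the agent can
# personalize responses and resolve relative dates against a known clock.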
def _wrap_user_message(user_message: str, user_id: str, user_name: str | None) -> str:
now = datetime.now(timezone.utc).isoformat()
resolved_user_id = user_id
resolved_user_name = user_name or "Traveler"
return (
f"User Name: {resolved_user_name}\n"
f"User Context (non-hotel identifiers): {resolved_user_name} ({resolved_user_id})\n"
f"UTC Time now:\n{now}\n\n"
f"User Query:\n{user_message}"
)


def _get_bearer_token(request: Request) -> str:
auth_header = request.headers.get("authorization", "")
if not auth_header.lower().startswith("bearer "):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Missing or invalid Authorization header.",
)
token = auth_header.split(" ", 1)[1].strip()
if not token:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Missing bearer token.",
)
return token


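# Claims are read without verifying signature, audience, issuer, or expiry; token
# validation is assumed to happen upstream (for example, at the gateway).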
def _decode_access_token(token: str) -> dict[str, Any]:
return jwt.decode(
token,
options={
"verify_signature": False,
"verify_aud": False,
"verify_iss": False,
"verify_exp": False,
},
)


def _extract_user_from_token(request: Request) -> tuple[str, str | None]:
token = _get_bearer_token(request)
try:
claims = _decode_access_token(token)
except jwt.PyJWTError:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Invalid access token.",
)
user_id = claims.get("sub")
if not user_id:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Access token missing subject.",
)
user_name = (
claims.get("preferred_username")
or claims.get("given_name")
or claims.get("name")
or claims.get("email")
)
return user_id, user_name


@app.post("/chat", response_model=ChatResponse)
def chat(request: ChatRequest, http_request: Request) -> ChatResponse:
session_id = request.sessionId
user_id, user_name = _extract_user_from_token(http_request)
wrapped_message = _wrap_user_message(
request.message,
user_id,
user_name,
)
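    # Scope conversation memory per user and session; the checkpointer keys saved state by this thread id.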
thread_id = f"{user_id}:{session_id}"
result = agent_graph.invoke(
{"messages": [HumanMessage(content=wrapped_message)]},
config={
"recursion_limit": 50,
"configurable": {"thread_id": thread_id},
},
)

last_message = result["messages"][-1]
return ChatResponse(message=last_message.content)
56 changes: 56 additions & 0 deletions samples/travel planner agent/backend/agent/config.py
@@ -0,0 +1,56 @@
import os
from dataclasses import dataclass
from dotenv import load_dotenv

load_dotenv()

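# Parse a comma-separated env var into a list of trimmed entries, falling back to the default when unset.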
def _split_csv(value: str | None, default: list[str]) -> list[str]:
if value is None:
return default
stripped = [item.strip() for item in value.split(",")]
return [item for item in stripped if item]


@dataclass
class Settings:
openai_api_key: str
openai_model: str
openai_embedding_model: str
asgardeo_base_url: str
asgardeo_client_id: str
pinecone_api_key: str
pinecone_service_url: str
pinecone_index_name: str
weather_api_key: str | None
weather_api_base_url: str
booking_api_base_url: str
cors_allow_origins: list[str]
cors_allow_credentials: bool

@classmethod
def from_env(cls) -> "Settings":
def required(name: str) -> str:
value = os.getenv(name)
if not value:
raise ValueError(f"Missing required env var: {name}")
return value
asgardeo_base_url = required("ASGARDEO_BASE_URL")
asgardeo_client_id = required("ASGARDEO_CLIENT_ID")
return cls(
openai_api_key=required("OPENAI_API_KEY"),
openai_model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
openai_embedding_model=os.getenv("OPENAI_EMBEDDING_MODEL", "text-embedding-3-small"),
asgardeo_base_url=asgardeo_base_url,
asgardeo_client_id=asgardeo_client_id,
pinecone_api_key=required("PINECONE_API_KEY"),
pinecone_service_url=required("PINECONE_SERVICE_URL"),
pinecone_index_name=os.getenv("PINECONE_INDEX_NAME", "hotel-policies"),
weather_api_key=os.getenv("WEATHER_API_KEY"),
weather_api_base_url=os.getenv("WEATHER_API_BASE_URL", "http://api.weatherapi.com/v1"),
booking_api_base_url=os.getenv("BOOKING_API_BASE_URL", "http://localhost:9091"),
cors_allow_origins=_split_csv(
os.getenv("CORS_ALLOW_ORIGINS"),
["http://localhost:3001"],
),
cors_allow_credentials=os.getenv("CORS_ALLOW_CREDENTIALS", "true").lower() == "true",
)
80 changes: 80 additions & 0 deletions samples/travel planner agent/backend/agent/graph.py
@@ -0,0 +1,80 @@
from __future__ import annotations

import logging
from typing import Annotated, TypedDict

from langchain_core.messages import BaseMessage, SystemMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import InMemorySaver


from config import Settings
from tools import build_tools

logger = logging.getLogger(__name__)

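# System prompt prepended to every model call in agent_node to steer the itinerary-planning behavior.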
SYSTEM_PROMPT = """You are an assistant for planning trip itineraries of a hotel listing company.
Help users plan their perfect trip, considering preferences and available hotels.

Instructions:
- Match hotels near attractions with user interests when prioritizing hotels.
- You may plan itineraries with multiple hotels based on user interests and attractions.
- Include the hotel and things to do for each day in the itinerary.
- Use markdown formatting in non-hotel-search answers. Include hotel photos if available.
- Always call get_user_profile_tool first to retrieve personalization data.
- If the user explicitly asks to book, call create_booking_tool using available hotel/room data.
- When calling create_booking_tool, include pricePerNight for each room from availability results.
- If the user asks to edit or modify a booking, call edit_booking_tool with the bookingId.
- If the user asks to cancel a booking, call cancel_booking_tool with the bookingId.
- If the user asks to list or view bookings, call list_bookings_tool with the userId from context. Filter by status when asked (available/my bookings => CONFIRMED, cancelled => CANCELLED, all => ALL).
- If booking details are missing (hotelId, roomId, dates, guests, or primary guest contact info), ask a concise follow-up question instead of making up data. Use bullet points for the missing fields and list available room options as bullets when asking for a room selection.
- Do not claim a booking failed unless the booking tool returns an error.
- If a booking attempt fails, ask a concise follow-up to retry with corrected details or an alternative hotel.
- After a successful booking tool response, provide the final user response and do not call more tools.
- When listing past bookings, use hotelName when available; otherwise fall back to hotelId.
- For hotel policy questions, always call query_hotel_policy_tool with the hotel name or id.
- Do not answer policy questions from hotel search/details responses or dataset fields.
- Use resolve_relative_dates_tool to resolve phrases like tomorrow, this weekend, next Friday into ISO dates. If ambiguity remains, ask a clarifying question and do not guess.
- For availability responses, format each room with: Room Type, Price per night, Max Occupancy.
- Prefer this discovery flow for hotels: call search_hotels_tool even if dates are missing, rank/summarize, ask for dates if missing.
- When the user asks about a specific hotel, resolve hotelId then call get_hotel_info_tool.
- For hotel search results or single-hotel details, return only HOTEL_RESULTS_JSON followed by valid JSON.
- Do not output raw tool traces, internal reasoning, markdown headings, or code fences."""


class AgentState(TypedDict):
messages: Annotated[list[BaseMessage], add_messages]


def build_graph(configs: Settings):
tools = build_tools(configs)
llm = ChatOpenAI(
model=configs.openai_model,
api_key=configs.openai_api_key,
).bind_tools(tools)

def agent_node(state: AgentState) -> AgentState:
messages = [SystemMessage(content=SYSTEM_PROMPT)] + state["messages"]
response = llm.invoke(messages)
tool_calls = getattr(response, "tool_calls", None) or []
if tool_calls:
tool_names = [call.get("name") for call in tool_calls if isinstance(call, dict)]
logger.debug("agent_node decided to call tools: %s", tool_names)
else:
logger.debug("agent_node returned a final response (no tool calls).")
return {"messages": [response]}

    graph = StateGraph(AgentState)
graph.add_node("agent", agent_node)
graph.add_node("tools", ToolNode(tools))

    # tools_condition routes to the "tools" node when the last message contains tool calls, otherwise to END.
graph.add_conditional_edges("agent", tools_condition)
graph.add_edge("tools", "agent")
graph.set_entry_point("agent")

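    # InMemorySaver checkpoints conversation state in process memory, keyed by the thread_id supplied at invoke time (see app.py); state is lost on restart.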
checkpointer = InMemorySaver()
return graph.compile(checkpointer=checkpointer)