Changes from 6 commits
16 changes: 16 additions & 0 deletions samples/hotel-booking-agent/.gitignore
@@ -0,0 +1,16 @@
# Python
.venv/
__pycache__/
*.pyc

# Node
node_modules/

# Env
.env

# OS
.DS_Store

# Booking API
backend/booking_api/data/bookings.json
82 changes: 82 additions & 0 deletions samples/hotel-booking-agent/README.md
We need instructions to set up and test this agent on the agent management platform, similar to https://github.com/wso2/ai-agent-management-platform/tree/main/samples/customer-support-agent
@@ -0,0 +1,82 @@
# Hotel Booking Agent

Minimal Python stack for the hotel booking agent, plus an optional frontend.

- **AI Agent**: `samples/hotel-booking-agent/agent/`
- **Hotel API**: `samples/hotel-booking-agent/services/hotel_api/`
- **Frontend (optional)**: `samples/hotel-booking-agent/frontend/`
- **Policy ingest**: `samples/hotel-booking-agent/services/hotel_api/resources/ingest/`
- **Sample policy PDFs**: `samples/hotel-booking-agent/services/hotel_api/resources/policy_pdfs/`

## Quick Start

### Agent Manager deployment
Deploy the agent in your Agent Manager environment (details to be added). The flow below covers the required supporting services:

**Agent Manager**
- Repo URL: `https://github.com/wso2/agent-manager/tree/amp/v0/samples/travel_planner_agent`
⚠️ Potential issue | 🟠 Major

**Fix the broken repository URL in the deployment documentation.**

The URL references a non-existent branch `amp/v0` and uses underscores (`travel_planner_agent`) instead of spaces. The repository has no `amp/v0` branch or tag; the default branch is `main`. Update the URL to `https://github.com/wso2/agent-manager/tree/main/samples/travel%20planner%20agent` (or use the appropriate branch/tag if this documentation targets a specific release).

- Language/runtime: Python 3.11
- Run command: `uvicorn app:app --host 0.0.0.0 --port 9090`
- Agent type: Chat API Agent
- Schema path: `openapi.yaml`
- Port: `9090`

**Agent environment variables**
Required:
- `OPENAI_API_KEY`
- `ASGARDEO_BASE_URL`
- `ASGARDEO_CLIENT_ID`
- `PINECONE_API_KEY`
- `PINECONE_SERVICE_URL`

Optional (defaults are applied if unset):
- `OPENAI_MODEL` (default: `gpt-4o-mini`)
- `OPENAI_EMBEDDING_MODEL` (default: `text-embedding-3-small`)
- `WEATHER_API_KEY`
- `WEATHER_API_BASE_URL` (default: `http://api.weatherapi.com/v1`)
- `BOOKING_API_BASE_URL` (default: `http://localhost:9091`)

**Expose the agent endpoint after deploy**
Run this inside the WSO2-AMP dev container to expose the agent on `localhost:9090`:

```bash
kubectl -n dp-default-default-default-ccb66d74 port-forward svc/travel-planner-agent-is 9090:80
```

⚠️ Potential issue | 🟡 Minor

**Update the service name to match the hotel booking agent.**

The `kubectl` command references `travel-planner-agent-is`, but this README is for the Hotel Booking Agent. Update the service name to match the actual deployed service:

```diff
-kubectl -n dp-default-default-default-ccb66d74 port-forward svc/travel-planner-agent-is 9090:80
+kubectl -n dp-default-default-default-ccb66d74 port-forward svc/hotel-booking-agent-is 9090:80
```

**Hotel API**
- Runs locally on `http://localhost:9091` when started via `uvicorn`.
- You can also deploy it to a cloud host; just point the agent configuration at the deployed base URL.

**Pinecone policies**
- Create a Pinecone index using your preferred embedding model.
- Set the Pinecone and embedding configuration when deploying or running the Hotel API locally.
- Run the ingest to populate the index.

### Local services (Agent + Hotel API)
#### 1) Start the agent (local)
```bash
cd samples/hotel-booking-agent/agent
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
uvicorn app:app --host 0.0.0.0 --port 9090
```

#### 2) Start the Hotel API (local)
```bash
cd samples/hotel-booking-agent/services/hotel_api
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
uvicorn service:app --host 0.0.0.0 --port 9091
```

### Sample chat request
```bash
curl -s http://localhost:9090/chat \
-H "Content-Type: application/json" \
-d '{"message":"Plan a 3-day trip to Tokyo","sessionId":"session_abc123","userId":"user_123","userName":"Traveler"}'
```

## Notes
- The agent serves chat at `http://localhost:9090/chat`.
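For scripted testing, the curl request above can be reproduced with only the Python standard library. This is a minimal sketch that builds the request without sending it (the endpoint and payload shape are taken from the curl example; actually dispatching it is left to the caller):

```python
import json
import urllib.request


def build_chat_request(base_url: str, message: str, session_id: str, user_id: str) -> urllib.request.Request:
    """Build (but do not send) a POST request matching the /chat payload above."""
    payload = {"message": message, "sessionId": session_id, "userId": user_id}
    return urllib.request.Request(
        f"{base_url}/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_chat_request(
    "http://localhost:9090", "Plan a 3-day trip to Tokyo", "session_abc123", "user_123"
)
# urllib.request.urlopen(req) would send it once the agent is running
```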
14 changes: 14 additions & 0 deletions samples/hotel-booking-agent/agent/.env.example
@@ -0,0 +1,14 @@
OPENAI_API_KEY=
OPENAI_MODEL=

BOOKING_API_BASE_URL=

ASGARDEO_BASE_URL=
ASGARDEO_CLIENT_ID=

PINECONE_API_KEY=
PINECONE_SERVICE_URL=
PINECONE_INDEX_NAME=

WEATHER_API_KEY=
WEATHER_API_BASE_URL=
Comment on lines +1 to +11
⚠️ Potential issue | 🟡 Minor

**Missing `OPENAI_EMBEDDING_MODEL` variable.**

The template is missing `OPENAI_EMBEDDING_MODEL`, which is used in `config.py` (line 38). While it has a default value, including it in the template helps users discover all configurable options.

📝 Proposed fix

```diff
 OPENAI_API_KEY=
 OPENAI_MODEL=
+OPENAI_EMBEDDING_MODEL=
```

🧰 Tools: 🪛 dotenv-linter (4.0.0)

- [warning] 8-8: [UnorderedKey] The PINECONE_INDEX_NAME key should go before the PINECONE_SERVICE_URL key
- [warning] 11-11: [UnorderedKey] The WEATHER_API_BASE_URL key should go before the WEATHER_API_KEY key

84 changes: 84 additions & 0 deletions samples/hotel-booking-agent/agent/app.py
@@ -0,0 +1,84 @@
from __future__ import annotations

from datetime import datetime, timezone
import logging

from fastapi import FastAPI, HTTPException, Request, status
from fastapi.middleware.cors import CORSMiddleware
from langchain_core.messages import HumanMessage
from pydantic import BaseModel

from config import Settings
from graph import build_graph

logging.basicConfig(
level=logging.INFO,
format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

configs = Settings.from_env()
agent_graph = build_graph(configs)

class ChatRequest(BaseModel):
message: str
sessionId: str
userId: str
userName: str | None = None


class ChatResponse(BaseModel):
message: str

app = FastAPI(title="Hotel Booking Agent")
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=False,
allow_methods=["GET", "POST", "PUT", "DELETE", "OPTIONS"],
allow_headers=["Content-Type", "Authorization", "Accept", "x-user-id"],
max_age=84900,
)
⚠️ Potential issue | 🟠 Major

**Use configured CORS origins instead of a hardcoded wildcard.**

`Settings.from_env()` populates `cors_allow_origins` from the environment (defaulting to `["http://localhost:3001"]`), but this is ignored in favor of the hardcoded `allow_origins=["*"]`. This bypasses the intended security configuration.

🔐 Suggested fix

```diff
 app = FastAPI(title="Hotel Booking Agent")
 app.add_middleware(
     CORSMiddleware,
-    allow_origins=["*"],
-    allow_credentials=False,
+    allow_origins=configs.cors_allow_origins,
+    allow_credentials=configs.cors_allow_credentials,
     allow_methods=["GET", "POST", "PUT", "DELETE", "OPTIONS"],
     allow_headers=["Content-Type", "Authorization", "Accept", "x-user-id"],
     max_age=84900,
 )
```



def _wrap_user_message(user_message: str, user_id: str, user_name: str | None) -> str:
now = datetime.now(timezone.utc).isoformat()
resolved_user_id = user_id
resolved_user_name = user_name or "Traveler"
return (
f"User Name: {resolved_user_name}\n"
f"User Context (non-hotel identifiers): {resolved_user_name} ({resolved_user_id})\n"
f"UTC Time now:\n{now}\n\n"
f"User Query:\n{user_message}"
)
Comment on lines 31 to 41
⚠️ Potential issue | 🟠 Major

**Mask email-derived names before sending to the LLM.**

`user_name` can fall back to the email claim; avoid embedding emails in prompts to reduce PII exposure.

🛡️ Suggested masking

```diff
-    resolved_user_name = user_name or "Traveler"
+    resolved_user_name = user_name or "Traveler"
+    if "@" in resolved_user_name:
+        resolved_user_name = "Traveler"
```



def _extract_user_from_payload(request: ChatRequest) -> tuple[str, str | None]:
user_id = request.userId
if not user_id:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
detail="Missing userId in request payload.",
)
return user_id, request.userName


@app.post("/chat", response_model=ChatResponse)
def chat(request: ChatRequest, http_request: Request) -> ChatResponse:
session_id = request.sessionId
user_id, user_name = _extract_user_from_payload(request)
Comment on lines +54 to +57
⚠️ Potential issue | 🟠 Major

**No authentication: user identity is client-controlled.**

The `/chat` endpoint accepts `user_id` directly from the request payload with no authentication or authorization. Any client can impersonate any user by providing an arbitrary `user_id`, potentially accessing or modifying another user's bookings and session data.

For a booking system handling reservations, implement authentication (e.g., OAuth2/OIDC with JWT tokens from an identity provider) to verify the caller's identity before processing requests. Standard precautions apply: pin the allowed signing algorithms, validate `exp`/`iss`/`aud` claims, keep access tokens short-lived, and derive `user_id` from verified token claims rather than from the request body.
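To make the shape of those checks concrete, here is a stdlib-only HS256 sketch. This is purely illustrative: an identity provider such as Asgardeo typically issues RS256 tokens verified via JWKS, and in practice a maintained library (e.g., PyJWT) should do this work. The point is the two checks a verifier must never skip: pinning the algorithm and comparing the signature in constant time.

```python
import base64
import hashlib
import hmac
import json


def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def _b64url_decode(segment: str) -> bytes:
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))


def sign_hs256(claims: dict, secret: bytes) -> str:
    """Mint a token locally so the verifier below can be exercised."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    sig = _b64url(hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def verify_hs256(token: str, secret: bytes) -> dict:
    """Verify the signature, pin the algorithm, and return the claims."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(_b64url_decode(header_b64))
    if header.get("alg") != "HS256":  # reject 'none' and alg-confusion attempts
        raise ValueError("unexpected alg")
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(_b64url_decode(payload_b64))


token = sign_hs256({"sub": "user_123"}, b"dev-secret")
claims = verify_hs256(token, b"dev-secret")
```

In the FastAPI handler, the verified `claims["sub"]` would replace the payload-supplied `userId`.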

wrapped_message = _wrap_user_message(
request.message,
user_id,
user_name,
)
thread_id = f"{user_id}:{session_id}"
result = agent_graph.invoke(
{"messages": [HumanMessage(content=wrapped_message)]},
config={
"recursion_limit": 50,
"configurable": {"thread_id": thread_id},
},
)
Comment on lines 54 to 70
⚠️ Potential issue | 🟡 Minor

**Validate `sessionId` is non-empty to avoid malformed thread IDs.**

While `sessionId` is required by Pydantic and won't be `None`, an empty string `""` passes validation, resulting in a thread ID like `"user_id:"`. Consider validating non-empty or using a fallback.

💡 Suggested validation

```diff
 @app.post("/chat", response_model=ChatResponse)
 def chat(request: ChatRequest, http_request: Request) -> ChatResponse:
     session_id = request.sessionId
+    if not session_id:
+        raise HTTPException(
+            status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
+            detail="Missing sessionId in request payload.",
+        )
     user_id, user_name = _extract_user_from_payload(request)
```

Alternatively, add Pydantic field validation:

```python
from pydantic import field_validator

class ChatRequest(BaseModel):
    message: str
    sessionId: str
    userId: str
    userName: str | None = None

    @field_validator("sessionId", "userId")
    @classmethod
    def must_be_non_empty(cls, v: str) -> str:
        if not v.strip():
            raise ValueError("must be non-empty")
        return v
```


last_message = result["messages"][-1]
return ChatResponse(message=last_message.content)
Comment on lines +72 to +73
⚠️ Potential issue | 🟡 Minor

**Guard against empty messages list.**

If `agent_graph.invoke` returns an empty messages list, accessing `result["messages"][-1]` raises an `IndexError`. Add a defensive check.

🛡️ Suggested fix

```diff
-    last_message = result["messages"][-1]
-    return ChatResponse(message=last_message.content)
+    messages = result.get("messages") or []
+    if not messages:
+        raise HTTPException(
+            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
+            detail="Agent returned no response.",
+        )
+    return ChatResponse(message=messages[-1].content)
```

56 changes: 56 additions & 0 deletions samples/hotel-booking-agent/agent/config.py
@@ -0,0 +1,56 @@
import os
from dataclasses import dataclass
from dotenv import load_dotenv

load_dotenv()

def _split_csv(value: str | None, default: list[str]) -> list[str]:
if value is None:
return default
stripped = [item.strip() for item in value.split(",")]
return [item for item in stripped if item]


@dataclass
class Settings:
Use pydantic-settings instead of loading this manually: https://docs.pydantic.dev/latest/concepts/pydantic_settings/

openai_api_key: str
openai_model: str
openai_embedding_model: str
asgardeo_base_url: str
asgardeo_client_id: str
pinecone_api_key: str
pinecone_service_url: str
pinecone_index_name: str
weather_api_key: str | None
weather_api_base_url: str
booking_api_base_url: str
cors_allow_origins: list[str]
cors_allow_credentials: bool

@classmethod
def from_env(cls) -> "Settings":
def required(name: str) -> str:
value = os.getenv(name)
if not value:
raise ValueError(f"Missing required env var: {name}")
return value
asgardeo_base_url = required("ASGARDEO_BASE_URL")
asgardeo_client_id = required("ASGARDEO_CLIENT_ID")
return cls(
openai_api_key=required("OPENAI_API_KEY"),
openai_model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
openai_embedding_model=os.getenv("OPENAI_EMBEDDING_MODEL", "text-embedding-3-small"),
asgardeo_base_url=asgardeo_base_url,
asgardeo_client_id=asgardeo_client_id,
pinecone_api_key=required("PINECONE_API_KEY"),
pinecone_service_url=required("PINECONE_SERVICE_URL"),
pinecone_index_name=os.getenv("PINECONE_INDEX_NAME", "hotel-policies"),
weather_api_key=os.getenv("WEATHER_API_KEY"),
weather_api_base_url=os.getenv("WEATHER_API_BASE_URL", "http://api.weatherapi.com/v1"),
⚠️ Potential issue | 🟠 Major

**Use HTTPS for the Weather API default base URL.**

The default uses HTTP, which exposes API keys in plaintext. WeatherAPI's official endpoint is `https://api.weatherapi.com/v1` and should be used as the default.

🔧 Proposed fix

```diff
-            weather_api_base_url=os.getenv("WEATHER_API_BASE_URL", "http://api.weatherapi.com/v1"),
+            weather_api_base_url=os.getenv("WEATHER_API_BASE_URL", "https://api.weatherapi.com/v1"),
```

booking_api_base_url=os.getenv("BOOKING_API_BASE_URL", "http://localhost:9091"),
cors_allow_origins=_split_csv(
os.getenv("CORS_ALLOW_ORIGINS"),
["http://localhost:3001"],
),
cors_allow_credentials=os.getenv("CORS_ALLOW_CREDENTIALS", "true").lower() == "true",
)
80 changes: 80 additions & 0 deletions samples/hotel-booking-agent/agent/graph.py
@@ -0,0 +1,80 @@
from __future__ import annotations

import logging
from typing import Annotated, TypedDict

from langchain_core.messages import BaseMessage, SystemMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import InMemorySaver


from config import Settings
from tools import build_tools

logger = logging.getLogger(__name__)

SYSTEM_PROMPT = """You are an assistant for planning trip itineraries of a hotel listing company.
Help users plan their perfect trip, considering preferences and available hotels.

Instructions:
Can you organize this? The Instructions section is too vague; separate the tool guidance from the formatting instructions, with a section per tool.

Also, you don't have to provide detailed tool-level descriptions here, since they will be part of each tool's description. See my other comment in tools.

- Match hotels near attractions with user interests when prioritizing hotels.
- You may plan itineraries with multiple hotels based on user interests and attractions.
- Include the hotel and things to do for each day in the itinerary.
- Use markdown formatting in non-hotel-search answers. Include hotel photos if available.
- Always call get_user_profile_tool first to retrieve personalization data.
- If the user explicitly asks to book, call create_booking_tool using available hotel/room data.
- When calling create_booking_tool, include pricePerNight for each room from availability results.
- If the user asks to edit or modify a booking, call edit_booking_tool with the bookingId.
- If the user asks to cancel a booking, call cancel_booking_tool with the bookingId.
- If the user asks to list or view bookings, call list_bookings_tool with the userId from context. Filter by status when asked (available/my bookings => CONFIRMED, cancelled => CANCELLED, all => ALL).
- If booking details are missing (hotelId, roomId, dates, guests, or primary guest contact info), ask a concise follow-up question instead of making up data. Use bullet points for the missing fields and list available room options as bullets when asking for a room selection.
- Do not claim a booking failed unless the booking tool returns an error.
- If a booking attempt fails, ask a concise follow-up to retry with corrected details or an alternative hotel.
- After a successful booking tool response, provide the final user response and do not call more tools.
- When listing past bookings, use hotelName when available; otherwise fall back to hotelId.
- For hotel policy questions, always call query_hotel_policy_tool with the hotel name or id.
- Do not answer policy questions from hotel search/details responses or dataset fields.
- Use resolve_relative_dates_tool to resolve phrases like tomorrow, this weekend, next Friday into ISO dates. If ambiguity remains, ask a clarifying question and do not guess.
- For availability responses, format each room with: Room Type, Price per night, Max Occupancy.
- Prefer this discovery flow for hotels: call search_hotels_tool even if dates are missing, rank/summarize, ask for dates if missing.
- When the user asks about a specific hotel, resolve hotelId then call get_hotel_info_tool.
- For hotel search results or single-hotel details, return only HOTEL_RESULTS_JSON followed by valid JSON.
- Do not output raw tool traces, internal reasoning, markdown headings, or code fences."""


class AgentState(TypedDict):
messages: Annotated[list[BaseMessage], add_messages]


def build_graph(configs: Settings):
tools = build_tools(configs)
llm = ChatOpenAI(
model=configs.openai_model,
api_key=configs.openai_api_key,
).bind_tools(tools)

def agent_node(state: AgentState) -> AgentState:
messages = [SystemMessage(content=SYSTEM_PROMPT)] + state["messages"]
response = llm.invoke(messages)
tool_calls = getattr(response, "tool_calls", None) or []
Comment on lines +36 to +46
⚠️ Potential issue | 🟠 Major

**Add explicit timeout and retry configuration to the ChatOpenAI client.**

The LLM call lacks explicit timeout and retry settings, which can cause indefinite hangs during network issues. Both `timeout` and `max_retries` are supported parameters in `langchain-openai`.

Suggested change

```diff
     llm = ChatOpenAI(
         model=configs.openai_model,
         api_key=configs.openai_api_key,
+        timeout=30,
+        max_retries=2,
     ).bind_tools(tools)
```

if tool_calls:
tool_names = [call.get("name") for call in tool_calls if isinstance(call, dict)]
logger.debug("agent_node decided to call tools: %s", tool_names)
else:
logger.debug("agent_node returned a final response (no tool calls).")
return {"messages": [response]}

graph = StateGraph(AgentState)  # checkpointed with InMemorySaver below
graph.add_node("agent", agent_node)
graph.add_node("tools", ToolNode(tools))

# Remove the mapping - tools_condition returns "tools" or END automatically
graph.add_conditional_edges("agent", tools_condition)
graph.add_edge("tools", "agent")
graph.set_entry_point("agent")

checkpointer = InMemorySaver()
return graph.compile(checkpointer=checkpointer)