What happened
I’m running examples/tau-bench/generate_with_tau.py to validate the episode logging pipeline on a local machine (no GPU / no triton).
The run previously aborted due to:
- user simulation calling external LLM (litellm/gemini) → 429 / API key issues
- tool parser API mismatch (`parse_tools()` signature differences across versions)
- a small bug in response dict access (`output.get[...]` vs `output.get(...)`)
Expected
- Be able to run 1 task end-to-end without external API keys (at least for validating logging + control flow).
- Robust tool parsing fallback if tool parser is unavailable or signature differs.
Proposed changes
- Offline user sim (see the first sketch after this list)
  - Allow `user_model_provider="stub"` (or `provider="stub"`) so the env step doesn't call an external LLM
  - Returns a canned user message like `"(stub user) OK."`
  - Cost = 0
- Tool parsing compatibility (see the second sketch after this list)
  - Add a wrapper in `openai_tool_adapter.py` that:
    - detects the `parse_tools` signature (2-arg vs 3-arg)
    - supports different return schemas (e.g. missing `normal_text`)
    - falls back to "no tool calls" when the parser is unavailable
- Bugfix
  - Use `raw_response = output.get("text", "")` (currently it is `output.get["text"]`)
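A minimal sketch of the offline user sim, assuming a hypothetical `StubUser` class and `build_user` factory hook (the names and return shape are illustrative, not the actual tau-bench interface):

```python
# Illustrative stub user simulator: never calls an external LLM, so no
# API keys, no network traffic, and no 429s. All names here are hypothetical.
class StubUser:
    def __init__(self, canned_reply: str = "(stub user) OK."):
        self.canned_reply = canned_reply

    def generate(self, messages):
        # Ignore the conversation and return a fixed reply with zero cost,
        # which is enough to validate logging and control flow.
        return {"content": self.canned_reply, "cost": 0.0}


def build_user(provider: str, **kwargs):
    # Hypothetical factory hook: "stub" short-circuits before any
    # litellm/gemini user provider is constructed.
    if provider == "stub":
        return StubUser()
    raise ValueError(f"unhandled provider in this sketch: {provider!r}")
```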
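And a sketch of the compatibility wrapper for `openai_tool_adapter.py`; `safe_parse_tools` is a hypothetical helper name, and the exact `parse_tools` signatures and return schemas are assumed from the symptoms above:

```python
def safe_parse_tools(parser, text, tools=None):
    """Best-effort tool parsing that degrades to 'no tool calls'."""
    if parser is None:
        return text, []  # parser unavailable -> treat the whole output as text
    try:
        # Try the wider signature first, then fall back to the narrower one.
        try:
            result = parser.parse_tools(text, tools)
        except TypeError:
            result = parser.parse_tools(text)
    except Exception:
        return text, []  # any parser failure -> no tool calls
    # Tolerate dict-style results with or without "normal_text".
    if isinstance(result, dict):
        return result.get("normal_text", text), result.get("tool_calls", [])
    normal_text, tool_calls = result
    return (normal_text if normal_text is not None else text), (tool_calls or [])
```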
Repro
- Start a local sglang HTTP endpoint on 127.0.0.1:30000 (or a stub server; see the sketch after this list)
- Environment: macOS, CPU-only, no triton installed
- Sanity check:
  ```bash
  curl -s -X POST http://127.0.0.1:30000/generate \
    -H "Content-Type: application/json" \
    -d '{"text":"Say ONLY: OK123","sampling_params":{"temperature":0,"max_new_tokens":8}}'
  ```
- Run the tau-bench example with `user_model_provider="stub"`
Result after patch
- `episode.jsonl` is generated and `env_step` proceeds with the observation `(stub user) OK.`
- No external API calls are needed (no Gemini/OpenAI keys), so no 429.