
[Bug] Can't use builtin browser tool for GPTOss #9390

@Hannibal046

Description


Checklist

  1. I have searched related issues but cannot get the expected help.
  2. The bug has not been fixed in the latest version.
  3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  5. Please use English, otherwise it will be closed.

Describe the bug

A request with tools=[{"type": "web_search_preview"}] consistently fails with the following error:

---------------------------------------------------------------------------
BadRequestError                           Traceback (most recent call last)
Cell In[32], line 1
----> 1 response = client.responses.create(
      2     model="/hf3fs-hg/prod/deepseek/public_models/openai/gpt-oss-120b/main/",
      3     input="Who is the president of South Korea as of now?",
      4     tools=[{
      5         "type": "web_search_preview"
      6     }],
      7     reasoning={'effort':'high'}
      8 )

File /hf_shared/hfai_envs/useragi/server25_0/lib/python3.12/site-packages/openai/_utils/_utils.py:279, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    277             msg = f"Missing required argument: {quote(missing[0])}"
    278     raise TypeError(msg)
--> 279 return func(*args, **kwargs)

File /hf_shared/hfai_envs/useragi/server25_0/lib/python3.12/site-packages/openai/resources/responses/responses.py:603, in Responses.create(self, input, model, include, instructions, max_output_tokens, metadata, parallel_tool_calls, previous_response_id, reasoning, store, stream, temperature, text, tool_choice, tools, top_p, truncation, user, extra_headers, extra_query, extra_body, timeout)
    574 @required_args(["input", "model"], ["input", "model", "stream"])
    575 def create(
    576     self,
   (...)    601     timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    602 ) -> Response | Stream[ResponseStreamEvent]:
--> 603     return self._post(
    604         "/responses",
    605         body=maybe_transform(
    606             {
    607                 "input": input,
    608                 "model": model,
    609                 "include": include,
    610                 "instructions": instructions,
    611                 "max_output_tokens": max_output_tokens,
    612                 "metadata": metadata,
    613                 "parallel_tool_calls": parallel_tool_calls,
    614                 "previous_response_id": previous_response_id,
    615                 "reasoning": reasoning,
    616                 "store": store,
    617                 "stream": stream,
    618                 "temperature": temperature,
    619                 "text": text,
    620                 "tool_choice": tool_choice,
    621                 "tools": tools,
    622                 "top_p": top_p,
    623                 "truncation": truncation,
    624                 "user": user,
    625             },
    626             response_create_params.ResponseCreateParams,
    627         ),
    628         options=make_request_options(
    629             extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    630         ),
    631         cast_to=Response,
    632         stream=stream or False,
    633         stream_cls=Stream[ResponseStreamEvent],
    634     )

File /hf_shared/hfai_envs/useragi/server25_0/lib/python3.12/site-packages/openai/_base_client.py:1242, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
   1228 def post(
   1229     self,
   1230     path: str,
   (...)   1237     stream_cls: type[_StreamT] | None = None,
   1238 ) -> ResponseT | _StreamT:
   1239     opts = FinalRequestOptions.construct(
   1240         method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1241     )
-> 1242     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File /hf_shared/hfai_envs/useragi/server25_0/lib/python3.12/site-packages/openai/_base_client.py:919, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    916 else:
    917     retries_taken = 0
--> 919 return self._request(
    920     cast_to=cast_to,
    921     options=options,
    922     stream=stream,
    923     stream_cls=stream_cls,
    924     retries_taken=retries_taken,
    925 )

File /hf_shared/hfai_envs/useragi/server25_0/lib/python3.12/site-packages/openai/_base_client.py:1023, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
   1020         err.response.read()
   1022     log.debug("Re-raising status error")
-> 1023     raise self._make_status_error_from_response(err.response) from None
   1025 return self._process_response(
   1026     cast_to=cast_to,
   1027     options=options,
   (...)   1031     retries_taken=retries_taken,
   1032 )

BadRequestError: Error code: 400 - {'object': 'error', 'message': 'Unexpected token 200005 while expecting start token 200006', 'type': 'BadRequestError', 'param': None, 'code': 400}
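For context, the two ids in the error message appear to be special tokens of the o200k_harmony encoding that gpt-oss uses. The mapping below is transcribed from the published harmony format documentation (not verified against this server), which would mean the parser received a <|channel|> token where it expected <|start|>:

```python
# Special tokens of the o200k_harmony encoding, transcribed from the
# gpt-oss harmony format documentation (assumed, not computed here).
HARMONY_SPECIAL_TOKENS = {
    200002: "<|return|>",
    200003: "<|constrain|>",
    200005: "<|channel|>",
    200006: "<|start|>",
    200007: "<|end|>",
    200008: "<|message|>",
    200012: "<|call|>",
}

def explain(token_id: int) -> str:
    """Map a token id from the error message to its harmony special token."""
    return HARMONY_SPECIAL_TOKENS.get(token_id, f"<unknown token {token_id}>")

# "Unexpected token 200005 while expecting start token 200006" then reads as:
# got <|channel|> where <|start|> was expected.
print(explain(200005), explain(200006))
```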

Server-side log:

[2025-08-20 16:01:44 TP0] Prefill batch. #new-seq: 1, #new-token: 1, #cached-token: 481, token usage: 0.00, #running-req: 0, #queue-req: 0, 
[2025-08-20 16:01:44 TP0] Decode batch. #running-req: 1, #token: 489, token usage: 0.00, cuda graph: True, gen throughput (token/s): 2.38, #queue-req: 0, 
[2025-08-20 16:01:44 TP0] Decode batch. #running-req: 1, #token: 529, token usage: 0.00, cuda graph: True, gen throughput (token/s): 262.62, #queue-req: 0, 
[2025-08-20 16:01:44 TP0] Decode batch. #running-req: 1, #token: 569, token usage: 0.00, cuda graph: True, gen throughput (token/s): 262.99, #queue-req: 0, 
[2025-08-20 16:01:45 TP0] Decode batch. #running-req: 1, #token: 609, token usage: 0.00, cuda graph: True, gen throughput (token/s): 262.71, #queue-req: 0, 
[2025-08-20 16:01:45 TP0] Decode batch. #running-req: 1, #token: 649, token usage: 0.00, cuda graph: True, gen throughput (token/s): 262.62, #queue-req: 0, 
[2025-08-20 16:01:45 TP0] Decode batch. #running-req: 1, #token: 689, token usage: 0.00, cuda graph: True, gen throughput (token/s): 262.51, #queue-req: 0, 
[2025-08-20 16:01:45 TP0] Decode batch. #running-req: 1, #token: 729, token usage: 0.00, cuda graph: True, gen throughput (token/s): 261.23, #queue-req: 0, 
[2025-08-20 16:01:49 TP0] Prefill batch. #new-seq: 1, #new-token: 1116, #cached-token: 703, token usage: 0.00, #running-req: 0, #queue-req: 0, 
[2025-08-20 16:01:49 TP0] Decode batch. #running-req: 1, #token: 1844, token usage: 0.00, cuda graph: True, gen throughput (token/s): 9.56, #queue-req: 0, 
[2025-08-20 16:01:49 TP0] Decode batch. #running-req: 1, #token: 1884, token usage: 0.00, cuda graph: True, gen throughput (token/s): 262.40, #queue-req: 0, 
[2025-08-20 16:01:49 TP0] Decode batch. #running-req: 1, #token: 1924, token usage: 0.00, cuda graph: True, gen throughput (token/s): 260.42, #queue-req: 0, 
[2025-08-20 16:01:50] INFO:     10.212.80.233:55110 - "POST /v1/responses HTTP/1.1" 400 Bad Request

Reproduction

pip install gpt-oss

EXA_API_KEY="" python3 -m sglang.launch_server --model openai/gpt-oss-120b --host 0.0.0.0 --port 8000 --tool-server demo --tp 8

import openai

# base_url points at the server launched above (port 8000)
client = openai.Client(
    api_key="empty",
    base_url="http://localhost:8000/v1"
)

response = client.responses.create(
    model="openai/gpt-oss-120b",
    input="Who is the president of South Korea as of now?",
    tools=[{
        "type": "web_search_preview"
    }],
    reasoning={'effort':'high'}
)
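
For reproducing without the openai SDK, this is a sketch of the equivalent raw JSON body (assuming the server launched above on port 8000); POSTing it to /v1/responses should trigger the same 400:

```python
import json

# JSON body equivalent to the client.responses.create(...) call above;
# POST it to http://localhost:8000/v1/responses to reproduce the error.
payload = {
    "model": "openai/gpt-oss-120b",
    "input": "Who is the president of South Korea as of now?",
    "tools": [{"type": "web_search_preview"}],
    "reasoning": {"effort": "high"},
}

body = json.dumps(payload)
print(body)
```

The printed body can be sent with any HTTP client, e.g. curl with a Content-Type: application/json header.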

Environment

Python: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb  6 2025, 18:56:27) [GCC 11.2.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H800 80GB HBM3
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.9, V12.9.35
CUDA Driver Version: 535.129.03
PyTorch: 2.8.0+cu128
sglang: 0.5.0rc2
sgl_kernel: 0.3.6.post1
flashinfer_python: 0.2.11.post3
triton: 3.4.0
transformers: 4.55.2
torchao: 0.9.0
numpy: 2.2.4
aiohttp: 3.12.15
fastapi: 0.116.1
hf_transfer: 0.1.9
huggingface_hub: 0.34.4
interegular: 0.3.3
modelscope: 1.29.0
orjson: 3.10.16
outlines: 0.1.11
packaging: 24.2
psutil: 7.0.0
pydantic: 2.11.7
python-multipart: 0.0.20
pyzmq: 25.1.2
uvicorn: 0.35.0
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.23
openai: 1.99.1
tiktoken: 0.11.0
anthropic: 0.64.0
litellm: Module Not Found
decord: 0.6.0
