Commit 42685d2

Fix snippets in gpt-oss guide (#1861)
* fix-snippets
* nit
1 parent: a25406d

1 file changed: 12 additions, 7 deletions

docs/inference-providers/guides/gpt-oss.md

@@ -43,7 +43,7 @@ Getting started with GPT OSS models on Inference Providers is simple and straigh
 
 Here's a basic example using [gpt-oss-120b](https://hf.co/openai/gpt-oss-120b) through the fast Cerebras provider:
 
-<hfoptions>
+<hfoptions id="simple">
 <hfoption id="python">
 
 ```python
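The snippet this hunk anchors amounts to one OpenAI-compatible chat-completion request. A minimal stdlib-only sketch follows; the router URL and the `model:provider` routing suffix are assumptions here, not shown in the diff:

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible router endpoint -- not part of the diff above.
API_URL = "https://router.huggingface.co/v1/chat/completions"

payload = {
    "model": "openai/gpt-oss-120b:cerebras",  # assumed provider-routing suffix
    "messages": [{"role": "user", "content": "Hello, gpt-oss!"}],
}

token = os.environ.get("HF_TOKEN")
if token:  # only touch the network when a token is configured
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
        print(body["choices"][0]["message"]["content"])
```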
@@ -64,6 +64,7 @@ print(response.choices[0].message.content)
 ```
 
 </hfoption>
+
 <hfoption id="javascript">
 
 ```ts
@@ -86,7 +87,7 @@ console.log(response.choices[0].message.content);
 
 You can also give the model access to tools. Below, we define a `get_current_weather` function and let the model decide whether to call it:
 
-<hfoptions>
+<hfoptions id="tool-call">
 <hfoption id="python">
 
 ```python
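The tool-calling snippet referenced in this hunk presumably declares `get_current_weather` in the OpenAI-style function schema; a hedged sketch, where the description strings and stub body are illustrative assumptions:

```python
# Assumed OpenAI-style tool schema; only the function name comes from the diff.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City and country, e.g. 'Paris, France'",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]

def get_current_weather(location: str, unit: str = "celsius") -> dict:
    # Stub implementation; a real app would call a weather API here.
    return {"location": location, "temperature": 20, "unit": unit}
```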
@@ -131,6 +132,7 @@ print(response.choices[0].message)
 ```
 
 </hfoption>
+
 <hfoption id="javascript">
 
 ```ts
@@ -178,7 +180,7 @@ console.log(response.choices[0].message);
 
 For structured tasks like data extraction, you can force the model to return a valid JSON object using the `response_format` parameter. We use the Fireworks AI provider.
 
-<hfoptions>
+<hfoptions id="structured">
 <hfoption id="python">
 
 ```python
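The `response_format` parameter named in this hunk's context typically takes an OpenAI-style `json_schema` payload; a sketch under that assumption, with a hypothetical `paper_metadata` schema and sample output:

```python
import json

# Assumed OpenAI-style "json_schema" shape; the diff only names the parameter.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "paper_metadata",  # hypothetical schema name
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "year": {"type": "integer"},
                "authors": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["title", "year", "authors"],
        },
    },
}

# A response constrained by this schema parses as plain JSON:
example_output = '{"title": "Attention Is All You Need", "year": 2017, "authors": ["Vaswani"]}'
parsed = json.loads(example_output)
```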
@@ -301,7 +303,7 @@ The implementation is based on the open-source [huggingface/responses.js](https:
 
 Unlike traditional text streaming, the Responses API uses a system of semantic events for streaming. This means the stream is not just raw text, but a series of structured event objects. Each event has a type, so you can listen for the specific events you care about, such as content being added (`output_text.delta`) or the message being completed (`completed`). The example below shows how to iterate through these events and print the content as it arrives.
 
-<hfoptions>
+<hfoptions id="stream">
 <hfoption id="python">
 
 ```python
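The event loop described in this hunk's context (filter events by type, print `output_text.delta` content) can be sketched against stand-in event objects; only the two type names come from the guide text, the `delta` field name and `response.` prefix are assumptions:

```python
# Stand-in events mimicking the semantic-event stream described above.
stream = [
    {"type": "response.output_text.delta", "delta": "Hel"},
    {"type": "response.output_text.delta", "delta": "lo!"},
    {"type": "response.completed"},
]

chunks = []
for event in stream:
    if event["type"].endswith("output_text.delta"):
        chunks.append(event["delta"])  # content as it arrives
    elif event["type"].endswith("completed"):
        break  # message finished

print("".join(chunks))  # -> Hello!
```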
@@ -327,6 +329,7 @@ for event in stream:
 ```
 
 </hfoption>
+
 <hfoption id="javascript">
 
 ```ts
@@ -357,7 +360,7 @@ for await (const event of stream) {
 
 You can extend the model with tools to access external data. The example below defines a `get_current_weather` function that the model can choose to call.
 
-<hfoptions>
+<hfoptions id="tool-call-resp">
 <hfoption id="python">
 
 ```python
@@ -399,6 +402,7 @@ print(response)
 ```
 
 </hfoption>
+
 <hfoption id="javascript">
 
 ```ts
@@ -445,7 +449,7 @@ console.log(response);
 
 The API's most advanced feature is Remote MCP calls, which allow the model to delegate tasks to external services. Calling a remote MCP server with the Responses API is straightforward. For example, here's how you can use the DeepWiki MCP server to ask questions about nearly any public GitHub repository.
 
-<hfoptions>
+<hfoptions id="mcp">
 <hfoption id="python">
 
 ```python
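A remote MCP call like the DeepWiki one mentioned in this hunk's context is usually just an extra tool entry on the request; a sketch where the field names, the server URL, and the sample question are assumptions following the common Responses-API MCP shape:

```python
# Assumed MCP tool-entry shape and DeepWiki server URL (not shown in the diff).
tools = [
    {
        "type": "mcp",
        "server_label": "deepwiki",
        "server_url": "https://mcp.deepwiki.com/mcp",
        "require_approval": "never",  # let the model call the server directly
    }
]

request = {
    "model": "openai/gpt-oss-120b",
    "tools": tools,
    "input": "What transports does the modelcontextprotocol/typescript-sdk repo support?",
}
```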
@@ -474,6 +478,7 @@ print(response)
 ```
 
 </hfoption>
+
 <hfoption id="javascript">
 
 ```ts
@@ -508,7 +513,7 @@ console.log(response);
 
 You can also control the model's "thinking" time with the `reasoning` parameter. The following example nudges the model to spend a medium amount of effort on the answer.
 
-<hfoptions>
+<hfoptions id="reasoning">
 <hfoption id="python">
 
 ```python
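The `reasoning` parameter in this last hunk's context maps to a small request field; a sketch assuming the Responses-API `{"effort": ...}` nesting (the diff names only the parameter and the "medium" level):

```python
# Assumed nesting for the reasoning control; "medium" comes from the guide text,
# the {"effort": ...} shape is an assumption.
request = {
    "model": "openai/gpt-oss-120b",
    "input": "Explain the Monty Hall problem briefly.",
    "reasoning": {"effort": "medium"},  # trade response speed for thinking depth
}

valid_efforts = {"low", "medium", "high"}
assert request["reasoning"]["effort"] in valid_efforts
```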
