
Commit 9bc14a8

clmnt and hanouticelina authored

nitpicks (#1866)

* nitpicks: update name (following the way they write it) + add a mention to the playground
* update title

Co-authored-by: Celina Hanouti <[email protected]>

1 parent cf39f04 commit 9bc14a8

File tree

2 files changed: +6 -6 lines changed


docs/inference-providers/_toctree.yml

Lines changed: 1 addition & 1 deletion

@@ -53,7 +53,7 @@
 - local: guides/function-calling
   title: Function Calling
 - local: guides/gpt-oss
-  title: How to use OpenAI's GPT OSS
+  title: How to use OpenAI gpt-oss


 - title: API Reference

docs/inference-providers/guides/gpt-oss.md

Lines changed: 5 additions & 5 deletions

@@ -1,10 +1,10 @@
-# How to use OpenAI's GPT OSS
+# How to use OpenAI gpt-oss

 <div class="flex justify-center">
   <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers-guides/gpt-oss-thumbnail-light.png"/>
 </div>

-This guide walks you through using OpenAI's latest GPT OSS models with Hugging Face Inference Providers. GPT OSS is an open-weights family built for strong reasoning, agentic workflows and versatile developer use cases, and it comes in two sizes: a one with 120B parameters ([gpt-oss-120b](https://hf.co/openai/gpt-oss-120b)), and a smaller one with 20B parameters ([gpt-oss-20b](https://hf.co/openai/gpt-oss-120b)).
+This guide walks you through using OpenAI's latest gpt-oss models with Hugging Face Inference Providers, which powers the official OpenAI playground ([gpt-oss.com](https://gpt-oss.com)). gpt-oss is an open-weights family built for strong reasoning, agentic workflows, and versatile developer use cases, and it comes in two sizes: one with 120B parameters ([gpt-oss-120b](https://hf.co/openai/gpt-oss-120b)) and a smaller one with 20B parameters ([gpt-oss-20b](https://hf.co/openai/gpt-oss-20b)).

 Both models are supported on Inference Providers and can be accessed through either the OpenAI-compatible [Chat Completions API](https://platform.openai.com/docs/api-reference/chat/completions), or the more advanced [Responses API](https://platform.openai.com/docs/api-reference/responses).

@@ -39,7 +39,7 @@ npm install openai
 </hfoptions>

 ## Chat Completion
-Getting started with GPT OSS models on Inference Providers is simple and straightforward. The OpenAI-compatible Chat Completions API supports features like tool calling, structured outputs, streaming, and reasoning effort controls.
+Getting started with gpt-oss models on Inference Providers is simple and straightforward. The OpenAI-compatible Chat Completions API supports features like tool calling, structured outputs, streaming, and reasoning effort controls.

 Here's a basic example using [gpt-oss-120b](https://hf.co/openai/gpt-oss-120b) through the fast Cerebras provider:
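The basic example referenced in that hunk sits outside this diff's context. For orientation only, here is a rough sketch of what such a call can look like, assuming the OpenAI-compatible router base URL `https://router.huggingface.co/v1`, an `HF_TOKEN` environment variable, and the `:cerebras` suffix to pin the provider; the rendered guide contains the exact snippet.

```python
import os
from openai import OpenAI

# Assumption: the Hugging Face router exposes an OpenAI-compatible endpoint
# and accepts a Hugging Face access token as the API key.
client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

# ":cerebras" pins the provider; dropping the suffix lets the router pick one.
completion = client.chat.completions.create(
    model="openai/gpt-oss-120b:cerebras",
    messages=[
        {"role": "user", "content": "Explain the difference between gpt-oss-20b and gpt-oss-120b."}
    ],
)

print(completion.choices[0].message.content)
```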
@@ -282,7 +282,7 @@ console.log(parsedOutput);
 </hfoption>
 </hfoptions>

-With just a few lines of code, you can start using GPT OSS models with Hugging Face Inference Providers, fully OpenAI API-compatible, easy to integrate, and ready out of the box!
+With just a few lines of code, you can start using gpt-oss models with Hugging Face Inference Providers, fully OpenAI API-compatible, easy to integrate, and ready out of the box!

 ## Responses API

@@ -566,5 +566,5 @@ response.output.forEach((item, index) => {
 </hfoption>
 </hfoptions>

-That's it! With the Responses API on Inference Providers, you get fine-grained control over powerful open-weight models like GPT OSS, including streaming, tool calling, and remote MCP, making it ideal for building reliable, agent-driven applications.
+That's it! With the Responses API on Inference Providers, you get fine-grained control over powerful open-weight models like gpt-oss, including streaming, tool calling, and remote MCP, making it ideal for building reliable, agent-driven applications.
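The Responses API examples themselves are likewise outside this diff. As a minimal sketch, assuming the same OpenAI-compatible router base URL also serves the Responses endpoint (the rendered guide shows the actual setup), a streamed call could look like this:

```python
import os
from openai import OpenAI

# Assumption: the Responses API is reachable through the same
# OpenAI-compatible router endpoint used for chat completions.
client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

# Stream the response and print text deltas as they arrive.
stream = client.responses.create(
    model="openai/gpt-oss-120b",
    input="Summarize what remote MCP support adds for agent workflows.",
    stream=True,
)

for event in stream:
    if event.type == "response.output_text.delta":
        print(event.delta, end="", flush=True)
```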
