
Commit 9528dc9

Merge branch 'main' into patch-2
2 parents fb6f818 + 513c8c6 commit 9528dc9

26 files changed: +170 −41 lines

.github/workflows/trufflehog.yml

Lines changed: 2 additions & 0 deletions
@@ -16,3 +16,5 @@ jobs:
           fetch-depth: 0
       - name: Secret Scanning
         uses: trufflesecurity/trufflehog@main
+        with:
+          extra_args: --results=verified,unknown

docs/hub/_toctree.yml

Lines changed: 2 additions & 0 deletions
@@ -396,6 +396,8 @@
       title: How to configure OIDC with Azure in the Hub
     - local: security-sso-entra-id-scim
       title: How to configure SCIM with Microsoft Entra ID (Azure AD)
+    - local: security-sso-okta-scim
+      title: How to configure SCIM with Okta
     - local: security-resource-groups
       title: Advanced Access Control (Resource Groups)
     - local: security-malware

docs/hub/enterprise-hub-scim.md

Lines changed: 2 additions & 1 deletion
@@ -30,4 +30,5 @@ Once SCIM is enabled in your IdP, users and groups provisioned will appear in th
 ## Supported Identity Providers
 
 We support SCIM with any IdP that implements the SCIM 2.0 protocol. We have specific guides for some of the most popular providers:
-- [How to configure SCIM with Microsoft Entra ID](./security-sso-entra-id-scim)
+- [How to configure SCIM with Microsoft Entra ID](./security-sso-entra-id-scim)
+- [How to configure SCIM with Okta](./security-sso-okta-scim)

docs/hub/security-sso-entra-id-scim.md

Lines changed: 14 additions & 1 deletion
@@ -52,7 +52,20 @@ This feature is part of the <a href="https://huggingface.co/contact/sales?from=e
 | `name.formatted` | `Join(" ", [givenName], [surname])` | |
 | `externalId` | `objectId` | `1` |
 
-3. After configuring the user mappings, go back to the Provisioning screen and click on **Provision Microsoft Entra ID Groups** to review group mappings. The default settings for groups are usually sufficient.
+3. The username must comply with the following rules:
+
+<Tip warning={true}>
+<ul>
+  <li>Only regular characters and `-` are accepted in the username.</li>
+  <li>`--` (double dash) is forbidden.</li>
+  <li>`-` cannot start or end the name.</li>
+  <li>Digit-only names are not accepted.</li>
+  <li>The minimum length is 2 and the maximum length is 42.</li>
+  <li>The username has to be unique within your org.</li>
+</ul>
+</Tip>
+
+4. After configuring the user mappings, go back to the Provisioning screen and click on **Provision Microsoft Entra ID Groups** to review group mappings. The default settings for groups are usually sufficient.
 
 ### Step 5: Start Provisioning
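These username constraints (which also appear in the new Okta guide below) are easy to pre-check before provisioning. The following is a minimal, hypothetical Python sketch of a local validator mirroring the listed rules; the helper name and the reading of "regular characters" as ASCII letters and digits are assumptions rather than a documented Hub API, and uniqueness can only be verified server-side.

```python
import re

# Hypothetical helper mirroring the username rules above: letters/digits and "-"
# only, no "--", no leading/trailing "-", not digits-only, length 2-42.
USERNAME_RE = re.compile(r"^(?!\d+$)(?!-)(?!.*--)[A-Za-z0-9-]{2,42}(?<!-)$")

def is_valid_hf_username(name: str) -> bool:
    return bool(USERNAME_RE.fullmatch(name))

assert is_valid_hf_username("jane-doe")
assert not is_valid_hf_username("-jane")      # cannot start with "-"
assert not is_valid_hf_username("jane--doe")  # double dash forbidden
assert not is_valid_hf_username("12345")      # digit-only names not accepted
assert not is_valid_hf_username("j")          # minimum length is 2
```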

docs/hub/security-sso-okta-scim.md

Lines changed: 70 additions & 0 deletions
@@ -0,0 +1,70 @@
+# How to configure SCIM with Okta
+
+This guide explains how to set up SCIM user and group provisioning between Okta and your Hugging Face organization.
+
+<Tip warning={true}>
+This feature is part of the <a href="https://huggingface.co/contact/sales?from=enterprise" target="_blank">Enterprise Plus</a> plan.
+</Tip>
+
+### Step 1: Get SCIM configuration from Hugging Face
+
+1. Navigate to your organization's settings page on Hugging Face.
+2. Go to the **SSO** tab, then click on the **SCIM** sub-tab.
+3. Copy the **SCIM Tenant URL**. You will need this for the Okta configuration.
+4. Click **Generate an access token**. A new SCIM token will be generated. Copy this token immediately and store it securely, as you will not be able to see it again.
+
+<div class="flex justify-center">
+<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/scim-settings.png"/>
+<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/scim-settings-dark.png"/>
+</div>
+
+### Step 2: Enter Admin Credentials
+
+1. In Okta, go to **Applications** and select your Hugging Face app.
+2. Go to the **General** tab and click **Edit** on App Settings.
+3. For the Provisioning option, select **SCIM**, then click **Save**.
+4. Go to the **Provisioning** tab and click **Edit**.
+5. Enter the **SCIM Tenant URL** as the SCIM connector base URL.
+6. Enter **userName** as the unique identifier field for users.
+7. Select all necessary actions under Supported provisioning actions.
+8. Select **HTTP Header** for Authentication Mode.
+9. Enter the **Access Token** you generated as the Authorization Bearer Token.
+10. Click **Test Connector Configuration** to verify the connection.
+11. Save your changes.
+
+### Step 3: Configure Provisioning
+
+1. In the **Provisioning** tab, click **To App** in the side nav.
+2. Click **Edit** and enable all the features you need, e.g. Create, Update, and Delete Users.
+3. Click **Save** at the bottom.
+
+### Step 4: Configure Attribute Mappings
+1. While still in the **Provisioning** tab, scroll down to the Attribute Mappings section.
+2. The default attribute mappings often require adjustments for robust provisioning. We recommend using the following configuration. You can delete attributes that are not listed here:
+
+<div class="flex justify-center">
+<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/scim-okta-mappings.png" alt="Okta SCIM mappings"/>
+<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/sso/scim-okta-mappings-dark.png" alt="Okta SCIM mappings"/>
+</div>
+
+### Step 5: Assign Users or Groups
+
+1. Visit the **Assignments** tab and click **Assign**.
+2. Click **Assign to People** or **Assign to Groups**.
+3. After finding the user or group that needs to be assigned, click **Assign** next to their name.
+4. In the mapping modal, the username needs to be edited to comply with the following rules:
+
+<Tip warning={true}>
+<ul>
+  <li>Only regular characters and `-` are accepted in the username.</li>
+  <li>`--` (double dash) is forbidden.</li>
+  <li>`-` cannot start or end the name.</li>
+  <li>Digit-only names are not accepted.</li>
+  <li>The minimum length is 2 and the maximum length is 42.</li>
+  <li>The username has to be unique within your org.</li>
+</ul>
+</Tip>
+
+5. Scroll down and click **Save and Go Back**.
+6. Click **Done**.
+7. Confirm that users or groups are created, updated, or deactivated in your Hugging Face organization as expected.
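Before wiring Okta up, you can sanity-check the tenant URL and token from Step 1 against the standard SCIM 2.0 `/Users` endpoint. The sketch below is an assumption-level Python example using the third-party `requests` package and hypothetical environment variables; it is not an official Hugging Face client, and the response shape is whatever the Hub's SCIM implementation returns beyond the standard fields.

```python
import os

import requests

# Hypothetical env vars holding the values copied from Step 1.
SCIM_TENANT_URL = os.environ["HF_SCIM_TENANT_URL"]
SCIM_TOKEN = os.environ["HF_SCIM_TOKEN"]

# SCIM 2.0 list endpoint; Okta's "HTTP Header" mode sends the same Bearer header.
response = requests.get(
    f"{SCIM_TENANT_URL.rstrip('/')}/Users",
    headers={
        "Authorization": f"Bearer {SCIM_TOKEN}",
        "Accept": "application/scim+json",
    },
    params={"count": 1},  # standard SCIM pagination parameter
    timeout=30,
)
response.raise_for_status()
print("Provisioned users visible via SCIM:", response.json().get("totalResults"))
```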

docs/hub/spaces-custom-domain.md

Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
+# Spaces Custom Domain
+
+
+<Tip warning={true}>
+The Spaces Custom Domain feature is part of the PRO, Team, and Enterprise subscriptions.
+</Tip>
+
+## Getting started with a Custom Domain
+
+Spaces Custom Domain is a feature that allows you to host your Space on a custom domain of your choosing, e.g. `yourdomain.example.com` 🚀 The custom domain must be a valid DNS name.
+
+<div class="flex justify-center">
+<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/custom-domain-feature_light.png"/>
+<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/custom-domain-feature_dark.png"/>
+</div>
+
+## Using a Custom Domain
+
+You can submit a custom domain to host your Space in the settings of your Space, under "Custom Domain". You'll need to add a CNAME record for it:
+
+<div class="flex justify-center">
+<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/custom-domain-dns_light.png"/>
+<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/custom-domain-dns_dark.png"/>
+</div>
+
+The request will move to 'pending' status after submission, as seen below.
+
+<div class="flex justify-center">
+<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/custom-domain-pending_light.png"/>
+<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/custom-domain-pending_dark.png"/>
+</div>
+
+Please make sure the CNAME record points the domain to `hf.space`. Once set up, you'll see a 'ready' status indicating the custom domain is active for your Space 🔥
+
+## Removing a Custom Domain
+
+Simply remove a custom domain by using the delete button to the right of "Custom Domain" in the settings of your Space. You can delete it while the custom domain is in the pending or ready state.
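If the request stays in 'pending', it is usually the DNS record that is missing or wrong. A rough way to check the CNAME yourself, assuming the third-party `dnspython` package (an assumption, not something the doc requires):

```python
# pip install dnspython  (assumed helper dependency, not part of the Hub docs)
import dns.resolver


def cname_points_to_hf_space(domain: str) -> bool:
    """Return True if `domain` has a CNAME record whose target is hf.space."""
    try:
        answers = dns.resolver.resolve(domain, "CNAME")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    return any(str(rdata.target).rstrip(".") == "hf.space" for rdata in answers)


# Hypothetical domain following the doc's `yourdomain.example.com` pattern.
print(cname_points_to_hf_space("yourdomain.example.com"))
```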

docs/hub/storage-limits.md

Lines changed: 4 additions & 0 deletions
@@ -2,6 +2,10 @@
 
 At Hugging Face our intent is to provide the AI community with **free storage space for public repositories**. We do bill for storage space for **private repositories**, above a free tier (see table below).
 
+<Tip>
+Storage limits and policies apply to both model and dataset repositories on the Hub.
+</Tip>
+
 We [optimize our infrastructure](https://huggingface.co/blog/xethub-joins-hf) continuously to [scale our storage](https://x.com/julien_c/status/1821540661973160339) for the coming years of growth in Machine learning.
 
 We do have mitigations in place to prevent abuse of free public storage, and in general we ask users and organizations to make sure any uploaded large model or dataset is **as useful to the community as possible** (as represented by numbers of likes or downloads, for instance).

docs/inference-providers/_toctree.yml

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@
     - local: guides/function-calling
       title: Function Calling
     - local: guides/gpt-oss
-      title: How to use OpenAI's GPT OSS
+      title: How to use OpenAI gpt-oss
 
 
 - title: API Reference

docs/inference-providers/guides/gpt-oss.md

Lines changed: 6 additions & 6 deletions
@@ -1,10 +1,10 @@
-# How to use OpenAI's GPT OSS
+# How to use OpenAI gpt-oss
 
 <div class="flex justify-center">
 	<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers-guides/gpt-oss-thumbnail-light.png"/>
 </div>
 
-This guide walks you through using OpenAI's latest GPT OSS models with Hugging Face Inference Providers. GPT OSS is an open-weights family built for strong reasoning, agentic workflows and versatile developer use cases, and it comes in two sizes: a one with 120B parameters ([gpt-oss-120b](https://hf.co/openai/gpt-oss-120b)), and a smaller one with 20B parameters ([gpt-oss-20b](https://hf.co/openai/gpt-oss-20b)).
+This guide walks you through using OpenAI's latest gpt-oss models with Hugging Face Inference Providers, which is the same infra that powers the official OpenAI playground ([gpt-oss.com](https://gpt-oss.com)). OpenAI gpt-oss is an open-weights family built for strong reasoning, agentic workflows and versatile developer use cases, and it comes in two sizes: a version with 120B parameters ([gpt-oss-120b](https://hf.co/openai/gpt-oss-120b)), and a smaller one with 20B parameters ([gpt-oss-20b](https://hf.co/openai/gpt-oss-20b)).
 
 Both models are supported on Inference Providers and can be accessed through either the OpenAI-compatible [Chat Completions API](https://platform.openai.com/docs/api-reference/chat/completions), or the more advanced [Responses API](https://platform.openai.com/docs/api-reference/responses).

@@ -39,7 +39,7 @@ npm install openai
 </hfoptions>
 
 ## Chat Completion
-Getting started with GPT OSS models on Inference Providers is simple and straightforward. The OpenAI-compatible Chat Completions API supports features like tool calling, structured outputs, streaming, and reasoning effort controls.
+Getting started with gpt-oss models on Inference Providers is simple and straightforward. The OpenAI-compatible Chat Completions API supports features like tool calling, structured outputs, streaming, and reasoning effort controls.
 
 Here's a basic example using [gpt-oss-120b](https://hf.co/openai/gpt-oss-120b) through the fast Cerebras provider:
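The Cerebras example itself is unchanged by this commit, so it is not shown in the diff. For orientation, here is a rough Python sketch of such a call; the router base URL and the `model` string with its `:cerebras` provider suffix are assumptions to double-check against the published guide.

```python
import os

from openai import OpenAI

# Assumed OpenAI-compatible router endpoint for Inference Providers.
client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],  # a Hugging Face access token
)

completion = client.chat.completions.create(
    model="openai/gpt-oss-120b:cerebras",  # Hub model id + provider suffix (assumed)
    messages=[
        {"role": "user", "content": "Explain what an open-weights model is in one sentence."}
    ],
)
print(completion.choices[0].message.content)
```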

@@ -282,7 +282,7 @@ console.log(parsedOutput);
 </hfoption>
 </hfoptions>
 
-With just a few lines of code, you can start using GPT OSS models with Hugging Face Inference Providers, fully OpenAI API-compatible, easy to integrate, and ready out of the box!
+With just a few lines of code, you can start using gpt-oss models with Hugging Face Inference Providers, fully OpenAI API-compatible, easy to integrate, and ready out of the box!
 
 ## Responses API

@@ -301,7 +301,7 @@ The implementation is based on the open-source [huggingface/responses.js](https:
 
 ### Stream responses
 
-Unlike traditional text streaming, the Responses API uses a system of semantic events for streaming. This means the stream is not just raw text, but a series of structured event objects. Each event has a type, so you can listen for the specific events you care about, such as content being added (`output_text.delta`) or the message being completed (`completed). The example below shows how to iterate through these events and print the content as it arrives.
+Unlike traditional text streaming, the Responses API uses a system of semantic events for streaming. This means the stream is not just raw text, but a series of structured event objects. Each event has a type, so you can listen for the specific events you care about, such as content being added (`output_text.delta`) or the message being completed (`completed`). The example below shows how to iterate through these events and print the content as it arrives.
 
 <hfoptions id="stream">
 <hfoption id="python">
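The Python streaming example referenced here sits outside the changed lines. As a rough sketch of the semantic-event loop described above, assuming the same OpenAI client and router base URL as in the Chat Completions sketch (both assumptions to verify against the guide), the handler filters for text-delta events and reacts to the completion event:

```python
import os

from openai import OpenAI

# Same assumed router endpoint as before; verify against the published guide.
client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

stream = client.responses.create(
    model="openai/gpt-oss-120b:cerebras",  # assumed model/provider string
    input="Write a haiku about open-weights models.",
    stream=True,
)

for event in stream:
    # Each streamed event is typed; print text deltas as they arrive.
    if event.type == "response.output_text.delta":
        print(event.delta, end="", flush=True)
    elif event.type == "response.completed":
        print()  # response finished; emit a final newline
```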
@@ -566,5 +566,5 @@ response.output.forEach((item, index) => {
 </hfoption>
 </hfoptions>
 
-That's it! With the Responses API on Inference Providers, you get fine-grained control over powerful open-weight models like GPT OSS, including streaming, tool calling, and remote MCP, making it ideal for building reliable, agent-driven applications.
+That's it! With the Responses API on Inference Providers, you get fine-grained control over powerful open-weight models like gpt-oss, including streaming, tool calling, and remote MCP, making it ideal for building reliable, agent-driven applications.

docs/inference-providers/providers/cerebras.md

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@ Find out more about Chat Completion (LLM) [here](../tasks/chat-completion).
 
 <InferenceSnippet
     pipeline=text-generation
-    providersMapping={ {"cerebras":{"modelId":"Qwen/Qwen3-Coder-480B-A35B-Instruct","providerModelId":"qwen-3-coder-480b"} } }
+    providersMapping={ {"cerebras":{"modelId":"openai/gpt-oss-120b","providerModelId":"gpt-oss-120b"} } }
     conversational />
 
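The mapping above pairs the Hub model id `openai/gpt-oss-120b` with Cerebras' provider-side id `gpt-oss-120b`; client code only ever passes the Hub id. A minimal sketch using `huggingface_hub`'s `InferenceClient` (a recent library version is assumed, and the snippet actually rendered by `<InferenceSnippet>` may differ):

```python
import os

from huggingface_hub import InferenceClient

# Route the request to Cerebras explicitly; the Hub translates the model id
# to the provider-side id ("gpt-oss-120b") shown in the mapping above.
client = InferenceClient(provider="cerebras", api_key=os.environ["HF_TOKEN"])

completion = client.chat_completion(
    model="openai/gpt-oss-120b",
    messages=[{"role": "user", "content": "Say hello in exactly five words."}],
)
print(completion.choices[0].message.content)
```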
