45 changes: 45 additions & 0 deletions .github/workflows/docs-quality.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,45 @@
name: Docs Quality

on:
  pull_request:
    branches: ["**"]
    paths:
      - "**/*.md"
      - ".markdownlint.jsonc"
      - ".lychee.toml"
      - ".github/workflows/docs-quality.yml"
  workflow_dispatch: {}

jobs:
  markdownlint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: markdownlint
        uses: DavidAnson/markdownlint-cli2-action@v16
        with:
          globs: |
            **/*.md
          config: .markdownlint.jsonc
          ignore: |
            **/node_modules/**
            **/.git/**

  link-check:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Link Check
        uses: lycheeverse/lychee-action@v1
        with:
          args: >-
            --config .lychee.toml
            --no-progress
            --max-concurrency 8
            --retry-wait-time 2
            --verbose
            **/*.md
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
19 changes: 19 additions & 0 deletions .lychee.toml
@@ -0,0 +1,19 @@
# Lychee link checker configuration
verbose = true
max_concurrency = 8
retry_wait_time = 2

# Treat 429/5xx as transient
exclude_status = [429, 500, 502, 503, 504]

# Ignore anchors generated dynamically or HF playground links that may redirect
exclude = [
  "https://huggingface.co/playground",
  "https://hf.co/playground",
  "https://hf.co/subscribe/pro",
  "https://huggingface.co/enterprise",
  "https://router.huggingface.co/*"
]

# Allow insecure TLS for some provider docs that may have intermittent cert chains
insecure = true
11 changes: 11 additions & 0 deletions .markdownlint.jsonc
@@ -0,0 +1,11 @@
{
  // Extendable configuration for markdownlint-cli2
  "config": {
    "MD013": { "line_length": 120, "code_blocks": false, "ignore_code_languages": ["bash", "sh", "powershell"] },
    "MD033": false,
    "MD041": false,
    "MD024": { "siblings_only": true },
    "MD025": { "level": 1 },
    "MD007": { "indent": 2 }
  }
}
73 changes: 73 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,73 @@
# Contributing

Thanks for your interest in contributing to `hub-docs`! We welcome improvements to documentation, examples, scripts, and small fixes — perfect for Hacktoberfest.

## Getting Started

- **Fork** this repository and create a new branch for your changes.
- **Make focused edits** (typos, clarifications, examples, links, structure) and keep PRs small and scoped.
- Follow the existing style and structure of the docs. Keep headings concise and code samples runnable.

## Local Preview (Hub docs)

You can preview the Hub docs locally with `hf-doc-builder`.

```bash
pip install hf-doc-builder
pip install black watchdog

# Preview the Hub docs subtree
doc-builder preview hub PATH/TO/hub-docs/docs/hub/ --not_python_module
```

Note: This repository contains several doc sections (Hub, Inference Providers, SageMaker, Xet). Most contributions only require editing Markdown, opening a PR, and letting CI build a preview for review.

## Inference Providers docs generator (optional)

If you are updating the generated Inference Providers docs under `docs/inference-providers/`, you can use the generator in `scripts/inference-providers/`.

```bash
cd scripts/inference-providers
pnpm install
pnpm run generate
```

This will regenerate provider/task pages from the Handlebars templates in `scripts/inference-providers/templates/`.
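As a toy illustration of the template-to-page flow (the real generator compiles Handlebars templates via `scripts/inference-providers/scripts/generate.ts`; the plain-string substitution, template, and data below are purely illustrative):

```javascript
// Toy sketch of the "fill a template with provider/task data" idea.
// This is NOT the repo's actual generator; it only mirrors the shape
// of the Handlebars pipeline with a minimal placeholder substitution.
function renderPage(template, data) {
  // Replace each {{key}} placeholder with the matching value from `data`.
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => data[key] ?? "");
}

const template = "# {{provider}}\n\nThis page documents the {{task}} task.";
const page = renderPage(template, {
  provider: "example-provider",
  task: "chat-completion",
});
// `page` now begins with "# example-provider".
```

In the real pipeline the data comes from provider/task definitions and the output is written back under `docs/inference-providers/`, which is why generated pages should be edited via the templates rather than directly.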

## Docs quality checks

We run Markdown linting and link checking in CI for all PRs.

- Markdown rules are configured in `.markdownlint.jsonc` and executed by a GitHub Action.
- External links are validated by Lychee with `.lychee.toml`.

You can run similar checks locally:

```bash
# Markdown lint (via npx)
npx markdownlint-cli2 "**/*.md" --config .markdownlint.jsonc --ignore "**/node_modules/**" --ignore "**/.git/**"

# Link check (install lychee locally)
cargo install lychee
lychee --config .lychee.toml **/*.md
```

## Commit and PR guidelines

- Use **descriptive commit messages**.
- Link to relevant issues if applicable.
- For documentation changes, include before/after context when helpful.
- Ensure external links are valid and examples run (when feasible).

## Hacktoberfest

Small improvements are highly valued:

- Fix typos/grammar and clarify ambiguous wording
- Add missing examples or fix broken snippets
- Improve instructions for local preview and tooling
- Keep each PR limited to a single topic

Thank you for helping improve the docs!


16 changes: 16 additions & 0 deletions README.md
@@ -28,3 +28,19 @@ pip install black watchdog
# run `doc-builder preview` cmd
doc-builder preview hub {YOUR_PATH}/hub-docs/docs/hub/ --not_python_module
```

### Inference Providers docs generator (optional)

This repo includes a generator to update pages under `docs/inference-providers/` from templates. To use it:

```bash
cd scripts/inference-providers
pnpm install
pnpm run generate
```

Templates live under `scripts/inference-providers/templates/` and are compiled by `scripts/inference-providers/scripts/generate.ts`.

### Contributing

See `CONTRIBUTING.md` for guidelines (Hacktoberfest-friendly!), local preview instructions, and PR tips.
12 changes: 6 additions & 6 deletions docs/inference-providers/index.md
@@ -43,7 +43,7 @@ When you build AI applications, it's tough to manage multiple provider APIs, com

Here's what you can build:

- **Text Generation**: Use Large language models with tool-calling capabilities for chatbots, content generation, and code assistance
- **Text Generation**: Use large language models with tool-calling capabilities for chatbots, content generation, and code assistance
- **Image and Video Generation**: Create custom images and videos, including support for LoRAs and style customization
- **Search & Retrieval**: State-of-the-art embeddings for semantic search, RAG systems, and recommendation engines
- **Traditional ML Tasks**: Ready-to-use models for classification, NER, summarization, and speech recognition
@@ -73,7 +73,7 @@ Before diving into integration, explore models interactively with our [Inference

### Authentication

You'll need a Hugging Face token to authenticate your requests. Create one by visiting your [token settings](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained) and generating a `fine-grained` token with `Make calls to Inference Providers` permissions.
You'll need a Hugging Face token to authenticate your requests. Create one by visiting your [token settings](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained) and generating a fine-grained token with "Make calls to Inference Providers" permissions.

For complete token management details, see our [security tokens guide](https://huggingface.co/docs/hub/en/security-tokens).

@@ -100,7 +100,7 @@ pip install huggingface_hub
hf auth login # get a read token from hf.co/settings/tokens
```

You can now use the the client with a Python interpreter:
You can now use the client with a Python interpreter:

```python
import os
@@ -197,7 +197,7 @@ Install with NPM:
npm install @huggingface/inference
```

Then use the client with Javascript:
Then use the client with JavaScript:

```js
import { InferenceClient } from "@huggingface/inference";
@@ -248,7 +248,7 @@ console.log(completion.choices[0].message.content);

<hfoption id="fetch">

For lightweight applications or custom implementations, use our REST API directly with standard fetch.
For lightweight applications or custom implementations, use our REST API directly with `fetch`.

Our routing system automatically selects the most popular available provider for your chosen model. You can also select the provider of your choice by appending it to the model id (e.g. `"deepseek-ai/DeepSeek-V3-0324:fireworks-ai"`).
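As a sketch of that request shape, assuming the OpenAI-compatible router endpoint used elsewhere in these docs (the `buildChatRequest` helper itself is hypothetical, not part of any library):

```javascript
// Hypothetical helper: pin a provider by appending ":provider" to the
// model id, and build the JSON body for a chat completions request.
// Omitting the provider leaves routing to the automatic selection.
function buildChatRequest(model, provider, prompt) {
  const modelId = provider ? `${model}:${provider}` : model;
  return {
    url: "https://router.huggingface.co/v1/chat/completions",
    body: JSON.stringify({
      model: modelId,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

const req = buildChatRequest("deepseek-ai/DeepSeek-V3-0324", "fireworks-ai", "Hello!");
// JSON.parse(req.body).model === "deepseek-ai/DeepSeek-V3-0324:fireworks-ai"
```

Passing `req.body` to `fetch` with an `Authorization: Bearer <HF token>` header then targets the pinned provider directly.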

@@ -283,7 +283,7 @@ console.log(await response.json());

#### HTTP / cURL

For testing, debugging, or integrating with any HTTP client, here's the raw REST API format.
For testing, debugging, or integrating with any HTTP client, here's the raw REST API format:
Our routing system automatically selects the most popular available provider for your chosen model. You can also select the provider of your choice by appending it to the model id (e.g. `"deepseek-ai/DeepSeek-V3-0324:fireworks-ai"`).

```bash