
Commit bf2f7b1

fix(docs): fixed a broken link to quantization guide (#1014)
The quantization guide link pointed to /guides/dtypes, which resolves from the root of the site rather than from the docs. Clicking the link from https://huggingface.co/docs/transformers.js/index took you to https://huggingface.co/guides/dtypes. It now takes you to the correct page, https://huggingface.co/docs/transformers.js/guides/dtypes.
1 parent: bd839b9 · commit: bf2f7b1

File tree

1 file changed: +1 / -1 lines


docs/snippets/1_quick-tour.snippet

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ In resource-constrained environments, such as web browsers, it is advisable to u
 the model to lower bandwidth and optimize performance. This can be achieved by adjusting the `dtype` option,
 which allows you to select the appropriate data type for your model. While the available options may vary
 depending on the specific model, typical choices include `"fp32"` (default for WebGPU), `"fp16"`, `"q8"`
-(default for WASM), and `"q4"`. For more information, check out the [quantization guide](/guides/dtypes).
+(default for WASM), and `"q4"`. For more information, check out the [quantization guide](../guides/dtypes).
 ```javascript
 // Run the model at 4-bit quantization
 const pipe = await pipeline('sentiment-analysis', 'Xenova/distilbert-base-uncased-finetuned-sst-2-english', {
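The diff hunk ends mid-statement, so here is a minimal sketch of how the `dtype` option described in the changed paragraph is passed to `pipeline()`. It is not part of this commit: the import path (`@huggingface/transformers`), the `{ dtype: 'q4' }` options object, and the example input/output are assumptions based on the surrounding prose.

```javascript
// Sketch only, not part of this diff: shows the quick-tour snippet's intent.
import { pipeline } from '@huggingface/transformers'; // assumed package name

// Run the model at 4-bit quantization; other typical dtype values per the
// paragraph above are 'fp32', 'fp16', and 'q8'.
const pipe = await pipeline(
  'sentiment-analysis',
  'Xenova/distilbert-base-uncased-finetuned-sst-2-english',
  { dtype: 'q4' },
);

const result = await pipe('I love Transformers.js!');
console.log(result); // e.g. [{ label: 'POSITIVE', score: 0.99 }]
```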
