Commit 1f36bd4

docs: fix link text
1 parent 5d6dfe6 commit 1f36bd4

1 file changed: +2 −2 lines


docs/source/integrations.mdx

Lines changed: 2 additions & 2 deletions
```diff
@@ -2,7 +2,7 @@
 
 With Transformers it's very easy to load any model in 4 or 8-bit, quantizing them on the fly with bitsandbytes primitives.
 
-Please review the [bitsandbytes section in the Accelerate docs](https://huggingface.co/docs/transformers/v4.37.2/en/quantization#bitsandbytes).
+Please review the [bitsandbytes section in the Transformers docs](https://huggingface.co/docs/transformers/v4.37.2/en/quantization#bitsandbytes).
 
 Details about the BitsAndBytesConfig can be found [here](https://huggingface.co/docs/transformers/v4.37.2/en/main_classes/quantization#transformers.BitsAndBytesConfig).
 
```
```diff
@@ -21,7 +21,7 @@ quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dty
 # PEFT
 With `PEFT`, you can use QLoRA out of the box with `LoraConfig` and a 4-bit base model.
 
-Please review the [bitsandbytes section in the Accelerate docs](https://huggingface.co/docs/peft/developer_guides/quantization#quantize-a-model).
+Please review the [bitsandbytes section in the PEFT docs](https://huggingface.co/docs/peft/developer_guides/quantization#quantize-a-model).
 
 # Accelerate
 
```