Commit a03df43

Lit-GPT integration docs (#1089)
* lit-gpt integration * mention PT lightning
1 parent 1f36bd4 commit a03df43

1 file changed: +19 -0 lines changed

docs/source/integrations.mdx

@@ -29,6 +29,25 @@ Bitsandbytes is also easily usable from within Accelerate.
Please review the [bitsandbytes section in the Accelerate docs](https://huggingface.co/docs/accelerate/en/usage_guides/quantization).
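As a rough illustration of the Accelerate integration, the following sketch quantizes a model to 8-bit using utilities described in the Accelerate quantization guide. It assumes `accelerate` and `bitsandbytes` are installed; the model and weights location are placeholders, so the loading call is left commented out.

```python
# Sketch based on the Accelerate quantization guide (assumes accelerate
# and bitsandbytes are installed; model/weights paths are placeholders).
from accelerate.utils import BnbQuantizationConfig, load_and_quantize_model

# Configure 8-bit quantization via bitsandbytes.
bnb_config = BnbQuantizationConfig(load_in_8bit=True)

# With an empty-weights model and a weights checkpoint on disk:
# quantized_model = load_and_quantize_model(
#     empty_model,
#     bnb_quantization_config=bnb_config,
#     weights_location="path/to/weights",
# )
```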
# PyTorch Lightning and Lightning Fabric
Bitsandbytes is available from within both:

- [PyTorch Lightning](https://lightning.ai/docs/pytorch/stable/), a deep learning framework for professional AI researchers and machine learning engineers who need maximal flexibility without sacrificing performance at scale;
- [Lightning Fabric](https://lightning.ai/docs/fabric/stable/), a fast and lightweight way to scale PyTorch models without boilerplate.
Please review the [bitsandbytes section in the PyTorch Lightning docs](https://lightning.ai/docs/pytorch/stable/common/precision_intermediate.html#quantization-via-bitsandbytes).
# Lit-GPT
Bitsandbytes is integrated into [Lit-GPT](https://github.com/Lightning-AI/lit-gpt), a hackable implementation of state-of-the-art open-source large language models, based on Lightning Fabric, where it can be used for quantization during training, finetuning, and inference.
Please review the [bitsandbytes section in the Lit-GPT quantization docs](https://github.com/Lightning-AI/lit-gpt/blob/main/tutorials/quantize.md).
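In Lit-GPT, quantization is selected through a CLI flag; the sketch below follows the pattern shown in the Lit-GPT quantization tutorial. The script path and checkpoint directory are placeholders and may differ across Lit-GPT versions.

```shell
# Sketch following the Lit-GPT quantization tutorial
# (script path and checkpoint directory are placeholders).
python generate/base.py \
  --quantize bnb.nf4 \
  --checkpoint_dir checkpoints/your-model
```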
# Trainer for the optimizers

You can use any of the 8-bit and/or paged optimizers by simply passing them to the `transformers.Trainer` class on initialization. All bnb optimizers are supported by passing the correct string in `TrainingArguments`'s `optim` attribute, e.g. `paged_adamw_32bit`.
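A short configuration sketch of the above, assuming `transformers` and `bitsandbytes` are installed; the model and dataset are elided, so the `Trainer` construction is left commented out.

```python
# Sketch: selecting a bitsandbytes paged optimizer through TrainingArguments
# (assumes transformers and bitsandbytes are installed; model/data elided).
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    optim="paged_adamw_32bit",  # any supported bnb optimizer string
)
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# trainer.train()
```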
