Commit a4b3bca

GPTQ blogpost - Fix ToC (#1416)
1 parent ffc6386 commit a4b3bca

File changed: gptq-integration.md (6 additions, 6 deletions)
@@ -27,12 +27,12 @@ This integration is available both for Nvidia GPUs, and RoCm-powered AMD GPUs.
 ## Table of contents
 
 - [Resources](#resources)
-- [**A gentle summary of the GPTQ paper**](#--a-gentle-summary-of-the-gptq-paper--)
-- [AutoGPTQ library – the one-stop library for efficiently leveraging GPTQ for LLMs](#autogptq-library---the-one-stop-library-for-efficiently-leveraging-gptq-for-llms)
-- [Native support of GPTQ models in 🤗 Transformers](#native-support-of-gptq-models-in----transformers)
-- [Quantizing models **with the Optimum library**](#quantizing-models---with-the-optimum-library--)
-- [Running GPTQ models through ***Text-Generation-Inference***](#running-gptq-models-through----text-generation-inference---)
-- [**Fine-tune quantized models with PEFT**](#--fine-tune-quantized-models-with-peft--)
+- [**A gentle summary of the GPTQ paper**](#a-gentle-summary-of-the-gptq-paper)
+- [AutoGPTQ library – the one-stop library for efficiently leveraging GPTQ for LLMs](#autogptq-library--the-one-stop-library-for-efficiently-leveraging-gptq-for-llms)
+- [Native support of GPTQ models in 🤗 Transformers](#native-support-of-gptq-models-in--transformers)
+- [Quantizing models **with the Optimum library**](#quantizing-models-with-the-optimum-library)
+- [Running GPTQ models through ***Text-Generation-Inference***](#running-gptq-models-through-text-generation-inference)
+- [**Fine-tune quantized models with PEFT**](#fine-tune-quantized-models-with-peft)
 - [Room for improvement](#room-for-improvement)
   * [Supported models](#supported-models)
 - [Conclusion and final words](#conclusion-and-final-words)
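The corrected anchors reflect how GitHub-style renderers slugify headings: markdown emphasis markers such as `**` are stripped during rendering (so they contribute no hyphens), spaces become hyphens, and remaining non-word characters (en dashes, emoji) are dropped, which can leave doubled hyphens. A minimal sketch of this behavior, as an approximation of GitHub's actual slugger rather than its exact implementation:

```python
import re

def github_slug(heading: str) -> str:
    """Approximate GitHub-style heading anchor generation.

    Assumes markdown emphasis (** / ***) has already been stripped by
    rendering; lowercases, turns spaces into hyphens, then drops any
    character that is not a word character or hyphen.
    """
    text = heading.lower().replace(" ", "-")
    return re.sub(r"[^\w\-]", "", text)

# Emphasis markers stripped by rendering, so no extra hyphens:
print(github_slug("A gentle summary of the GPTQ paper"))
# The en dash and emoji are dropped, leaving doubled hyphens:
print(github_slug("AutoGPTQ library – the one-stop library for efficiently leveraging GPTQ for LLMs"))
print(github_slug("Native support of GPTQ models in 🤗 Transformers"))
```

Running this reproduces the fixed anchors in the diff above, e.g. `#a-gentle-summary-of-the-gptq-paper` and `#native-support-of-gptq-models-in--transformers`; the original ToC instead slugified the raw markdown, turning the `**` markers into the stray leading and trailing hyphens the commit removes.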
