1 parent 3bcaaaf commit cadb213
README.md
@@ -160,4 +160,4 @@ Thanks to:
 * Lightning AI for supporting pytorch and work in flash attention, int8 quantization, and LoRA fine-tuning.
 * GGML for driving forward fast, on device inference of LLMs
 * Karpathy for spearheading simple, interpretable and fast LLM implementations
-* MLC-LLM for pushing 4-bit quantization performance on heterogenous hardware
+* MLC-LLM for pushing 4-bit quantization performance on heterogeneous hardware