Commit 7ac1ba7

mergennachin authored and facebook-github-bot committed
Add animated gif for Llama3.2 1B bf16
Differential Revision: D63420847
1 parent 984986e commit 7ac1ba7

File tree

2 files changed (+12 -4 lines)

Android3_2_1B_bf16.gif (binary file, 3.81 MB)

examples/models/llama2/README.md

Lines changed: 12 additions & 4 deletions
```diff
@@ -24,13 +24,21 @@ Please note that the models are subject to the [Llama 2 Acceptable Use Policy](h
 
 Since the Llama 2 7B or Llama 3 8B model needs at least 4-bit quantization to fit even within some of the high-end phones, results presented here correspond to a 4-bit groupwise post-training quantized model.
 
-<p align="center">
-<img src="./llama_via_xnnpack.gif" width=300>
+<table>
+  <tr>
+    <td>
+      <img src="./llama_via_xnnpack.gif" width="300">
 <br>
 <em>
-Running Llama3.1 8B on Android phone
+Llama3.1 8B, 4-bit quantized on Android phone
 </em>
-</p>
+    </td>
+    <td><img src="./Android3_2_1B_bf16.gif" width="300">
+      <br>
+      <em> Llama3.2 1B, unquantized, bf16 on Android phone. </em>
+    </td>
+  </tr>
+</table>
 
 ## Quantization:
 We employed 4-bit groupwise per-token dynamic quantization of all the linear layers of the model. Dynamic quantization refers to quantizing activations dynamically, such that quantization parameters for activations are calculated, from the min/max range, at runtime. Here we quantized activations with 8 bits (signed integer). Furthermore, weights are statically quantized; in our case weights were per-channel groupwise quantized with a 4-bit signed integer. For more information refer to this [page](https://github.com/pytorch/ao).
```
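The scheme described in the quantization paragraph above can be sketched in plain NumPy: symmetric groupwise 4-bit quantization for weights, plus per-token dynamic asymmetric 8-bit quantization for activations. This is an illustrative sketch only, not the torchao/ExecuTorch implementation; the function names and the group size of 32 are assumptions chosen for the example.

```python
import numpy as np

def quantize_weights_groupwise_4bit(w, group_size=32):
    """Per-channel groupwise symmetric 4-bit quantization of a weight matrix.

    Each row (output channel) is split into groups of `group_size` values;
    each group gets its own scale so int4 values in [-8, 7] cover its range.
    """
    out_ch, in_ch = w.shape
    assert in_ch % group_size == 0, "in_ch must be divisible by group_size"
    w_groups = w.reshape(out_ch, in_ch // group_size, group_size)
    # Symmetric scale per group: the group's max |w| maps to the int4 limit 7.
    scales = np.abs(w_groups).max(axis=-1, keepdims=True) / 7.0
    scales = np.where(scales == 0, 1.0, scales)
    q = np.clip(np.round(w_groups / scales), -8, 7).astype(np.int8)
    return q, scales

def quantize_activations_dynamic_8bit(x):
    """Per-token dynamic asymmetric 8-bit quantization.

    Scale and zero point are computed from each token's min/max at runtime,
    which is what "dynamic" quantization refers to.
    """
    x_min = x.min(axis=-1, keepdims=True)
    x_max = x.max(axis=-1, keepdims=True)
    scale = (x_max - x_min) / 255.0
    scale = np.where(scale == 0, 1.0, scale)
    zero_point = np.round(-x_min / scale) - 128
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 64)).astype(np.float32)   # weights: 8 output channels
x = rng.normal(size=(4, 64)).astype(np.float32)   # activations: 4 tokens

qw, w_scales = quantize_weights_groupwise_4bit(w, group_size=32)
qx, x_scale, x_zp = quantize_activations_dynamic_8bit(x)

# Dequantize the weights and inspect the quantization error.
w_hat = (qw * w_scales).reshape(w.shape)
print("max weight error:", np.abs(w - w_hat).max())
```

The rounding error of a symmetric group is bounded by half the group scale, which is why small groups (here 32 values) keep 4-bit weights usable at all; the actual export path in this repo lowers such quantized linears through the ao/XNNPACK stack rather than running them in NumPy.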
