Binary file added examples/models/llama2/Android3_2_1B_bf16.gif
16 changes: 12 additions & 4 deletions examples/models/llama2/README.md
@@ -24,13 +24,21 @@ Please note that the models are subject to the [Llama 2 Acceptable Use Policy](h
 
 Since the Llama 2 7B and Llama 3 8B models need at least 4-bit quantization to fit even on some high-end phones, the results presented here correspond to a 4-bit groupwise post-training quantized model.
 
-<p align="center">
-<img src="./llama_via_xnnpack.gif" width=300>
+<table>
+<tr>
+<td>
+<img src="./llama_via_xnnpack.gif" width="300">
 <br>
 <em>
-Running Llama3.1 8B on Android phone
+Llama3.1 8B, 4bit quantized on Android phone
 </em>
-</p>
+</td>
+<td><img src="./Android3_2_1B_bf16.gif" width="300">
+<br>
+<em> Llama3.2 1B, unquantized, bf16 on Android phone. </em>
+</td>
+</tr>
+</table>
 
 ## Quantization:
 We employed 4-bit groupwise, per-token dynamic quantization for all the linear layers of the model. Dynamic quantization means the activations are quantized on the fly: their quantization parameters are computed from the min/max range at runtime. Here activations are quantized to 8-bit signed integers. Weights, by contrast, are statically quantized; in our case they are per-channel groupwise quantized with 4-bit signed integers. For more information refer to this [page](https://github.com/pytorch/ao).
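The scheme described above can be sketched in plain PyTorch. This is an illustrative approximation only (symmetric quantization, hypothetical helper names), not the torchao implementation the README links to:

```python
import torch

def quantize_weights_groupwise_4bit(w: torch.Tensor, group_size: int = 32):
    """Static groupwise 4-bit weight quantization (symmetric sketch).

    Each row of `w` is split into groups of `group_size` values, and every
    group gets its own scale. Signed int4 codes lie in [-8, 7].
    """
    out_features, in_features = w.shape
    assert in_features % group_size == 0
    g = w.reshape(out_features, in_features // group_size, group_size)
    # Per-group scale from the max magnitude; 7 is the largest positive int4.
    scale = g.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0
    q = torch.round(g / scale).clamp(-8, 7)
    return q, scale

def quantize_activations_int8(x: torch.Tensor):
    """Dynamic per-token 8-bit activation quantization.

    Scales are computed from each token's runtime range (last dim),
    which is what "dynamic" refers to in the text above.
    """
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
    return torch.round(x / scale).clamp(-128, 127), scale

# Weights are quantized once, ahead of time.
w = torch.randn(64, 128)
q, s = quantize_weights_groupwise_4bit(w, group_size=32)
w_hat = (q * s).reshape(w.shape)  # dequantized approximation of w

# Activations are quantized at runtime, per token.
x = torch.randn(4, 128)
qx, sx = quantize_activations_int8(x)
```

Rounding error per element is bounded by half the group's (or token's) scale, which is why smaller group sizes trade extra scale storage for better accuracy.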