
Commit 8580e12

Remove Dual RoPE caches info and add litert-torch to README

1 parent: 2b0584f

File tree

  • litert_torch/generative/examples/embedding_gemma

1 file changed: +2 −2 lines changed

litert_torch/generative/examples/embedding_gemma/README.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -14,7 +14,7 @@ EmbeddingGemma-300M is an encoder-only model with 24 layers. It uses a combinati
 To run this example and verify the results, you need the following packages:
 
 ```bash
-pip install transformers sentence-transformers safetensors
+pip install litert-torch transformers sentence-transformers safetensors
 ```
 
 ## Convert to TFLite
````
````diff
@@ -49,4 +49,4 @@ python verify.py
 --prompts="This is an example sentence."
 ```
 
-The verification script compares the final embeddings produced by the original `sentence-transformers` implementation and the reauthored `litert_torch` implementation to ensure parity.
+The verification script compares the final embeddings produced by the original `sentence-transformers` implementation and the reauthored `litert_torch` implementation to ensure parity.
````
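The diff's last line describes a parity check between the embeddings from the original and reauthored implementations. A minimal sketch of what such a comparison can look like, using NumPy with stand-in arrays (the function name `embeddings_match` and the tolerance are assumptions for illustration, not the actual `verify.py` logic):

```python
import numpy as np

def embeddings_match(ref: np.ndarray, test: np.ndarray,
                     atol: float = 1e-4) -> bool:
    """Return True when two embedding matrices agree within tolerance.

    Hypothetical helper: a real script would run both models on the
    same prompts and compare the resulting embedding matrices.
    """
    if ref.shape != test.shape:
        return False
    return bool(np.allclose(ref, test, atol=atol))

# Stand-in arrays in place of real model outputs.
ref = np.array([[0.12, -0.34, 0.56]])
test = ref + 1e-6  # tiny numerical drift, still within tolerance
print(embeddings_match(ref, test))  # True
```

Element-wise closeness with an absolute tolerance is a common choice here because the two implementations use different kernels, so bit-exact equality is not expected.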

0 commit comments
