44 changes: 17 additions & 27 deletions docs/source/inference_tutorials/sentence_transformers.mdx
@@ -24,59 +24,49 @@ This guide explains how to compile, load, and use [Sentence Transformers (SBERT)

### Convert Sentence Transformers model to AWS Inferentia2

-First, you need to convert your Sentence Transformers model to a format compatible with AWS Inferentia2. You can compile Sentence Transformers models with Optimum Neuron using the `optimum-cli` or `NeuronModelForSentenceTransformers` class. Below you will find an example for both approaches. We have to make sure `sentence-transformers` is installed. That's only needed for exporting the model.
+First, you need to convert your Sentence Transformers model to a format compatible with AWS Inferentia2. You can compile Sentence Transformers models with Optimum Neuron using the `optimum-cli` or the `NeuronSentenceTransformers` class. Below you will find an example for both approaches. Make sure `sentence-transformers` is installed; it is only needed for exporting the model.

```bash
pip install sentence-transformers
```

-Here we will use the `NeuronModelForSentenceTransformers`, which can be used to convert any Sentence Transformers model to a format compatible with AWS Inferentia2 or load already converted models. When exporting models with the `NeuronModelForSentenceTransformers` you need to set `export=True` and define the input shape and batch size. The input shape is defined by the `sequence_length` and the batch size by `batch_size`.
+Here we will use the `NeuronSentenceTransformers` class, which can convert any Sentence Transformers model to a format compatible with AWS Inferentia2 or load already converted models. When exporting a model with `NeuronSentenceTransformers` you need to set `export=True` and define the input shapes: the maximum input length via `sequence_length` and the batch size via `batch_size`.

```python
-from optimum.neuron import NeuronModelForSentenceTransformers
+from optimum.neuron import NeuronSentenceTransformers

# Sentence Transformers model from HuggingFace
model_id = "BAAI/bge-small-en-v1.5"
input_shapes = {"batch_size": 1, "sequence_length": 384} # mandatory shapes

# Load Transformers model and export it to AWS Inferentia2
-model = NeuronModelForSentenceTransformers.from_pretrained(model_id, export=True, **input_shapes)
+model = NeuronSentenceTransformers.from_pretrained(model_id, export=True, **input_shapes)

# Save model to disk
model.save_pretrained("bge_emb_inf2/")
```

-Here we will use the `optimum-cli` to convert the model. Similar to the `NeuronModelForSentenceTransformers` we need to define our input shape and batch size. The input shape is defined by the `sequence_length` and the batch size by `batch_size`. The `optimum-cli` will automatically convert the model to a format compatible with AWS Inferentia2 and save it to the specified output directory.
+Here we will use the `optimum-cli` to convert the model. As with `NeuronSentenceTransformers`, we need to define the input shapes via `sequence_length` and `batch_size`. The `optimum-cli` will automatically convert the model to a format compatible with AWS Inferentia2 and save it to the specified output directory.

```bash
optimum-cli export neuron -m BAAI/bge-small-en-v1.5 --sequence_length 384 --batch_size 1 --task feature-extraction bge_emb_inf2/
```

### Load compiled Sentence Transformers model and run inference

-Once we have a compiled Sentence Transformers model, which we either exported ourselves or is available on the Hugging Face Hub, we can load it and run inference. For loading the model we can use the `NeuronModelForSentenceTransformers` class, which is an abstraction layer for the `SentenceTransformer` class. The `NeuronModelForSentenceTransformers` class will automatically pad the input to the specified `sequence_length` and run inference on AWS Inferentia2.
+Once we have a compiled Sentence Transformers model, either exported ourselves or available on the Hugging Face Hub, we can load it and run inference. For loading the model we can use the `NeuronSentenceTransformers` class, which is an abstraction layer over the `SentenceTransformer` class. It will automatically pad the input to the specified `sequence_length` and run inference on AWS Inferentia2.

```python
-from optimum.neuron import NeuronModelForSentenceTransformers
-from transformers import AutoTokenizer
+from optimum.neuron import NeuronSentenceTransformers

model_id_or_path = "bge_emb_inf2/"
tokenizer_id = "BAAI/bge-small-en-v1.5"

# Load model and tokenizer
-model = NeuronModelForSentenceTransformers.from_pretrained(model_id_or_path)
-tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
+model = NeuronSentenceTransformers.from_pretrained(model_id_or_path)

# Run inference
prompt = "I like to eat apples"
-encoded_input = tokenizer(prompt, return_tensors='pt')
-outputs = model(**encoded_input)
-
-token_embeddings = outputs.token_embeddings
-sentence_embedding = outputs.sentence_embedding
-
-print(f"token embeddings: {token_embeddings.shape}") # torch.Size([1, 7, 384])
-print(f"sentence_embedding: {sentence_embedding.shape}") # torch.Size([1, 384])
+token_embeddings = model.encode(prompt, output_value="token_embeddings")
+sentence_embedding = model.encode(prompt, output_value="sentence_embedding")
```
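
Since `NeuronSentenceTransformers` follows the `SentenceTransformer` API, you can also encode batches directly. A minimal sketch, assuming the standard `SentenceTransformer.encode` signature and the shapes compiled above (`batch_size=1`, `sequence_length=384`):

```python
# Minimal sketch (assumptions: standard SentenceTransformer.encode signature,
# model compiled with batch_size=1 and sequence_length=384 as above).
sentences = ["I like to eat apples", "Inferentia2 accelerates embedding workloads"]

# Encode one sentence per forward pass to match the compiled batch size;
# inputs shorter than sequence_length are padded automatically.
embeddings = model.encode(sentences, batch_size=1, convert_to_numpy=True)
print(embeddings.shape)  # expected: (2, 384) -- 384 is the embedding dim of bge-small-en-v1.5
```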

### Production Usage
@@ -89,18 +79,18 @@ For deploying these models in a production environment, refer to the [Amazon Sag

### Compile CLIP for AWS Inferentia2

-You can compile CLIP models with Optimum Neuron either by using the `optimum-cli` or `NeuronModelForSentenceTransformers` class. Adopt one approach that you prefer:
+You can compile CLIP models with Optimum Neuron using either the `optimum-cli` or the `NeuronSentenceTransformers` class. Adopt whichever approach you prefer:

* With the Optimum CLI

```bash
optimum-cli export neuron -m sentence-transformers/clip-ViT-B-32 --sequence_length 64 --text_batch_size 3 --image_batch_size 1 --num_channels 3 --height 224 --width 224 --task feature-extraction --subfolder 0_CLIPModel clip_emb/
```

-* With the `NeuronModelForSentenceTransformers` class
+* With the `NeuronSentenceTransformers` class

```python
-from optimum.neuron import NeuronModelForSentenceTransformers
+from optimum.neuron import NeuronSentenceTransformers

model_id = "sentence-transformers/clip-ViT-B-32"

@@ -114,7 +104,7 @@ input_shapes = {
"sequence_length": 64,
}

-emb_model = NeuronModelForSentenceTransformers.from_pretrained(
+emb_model = NeuronSentenceTransformers.from_pretrained(
model_id, subfolder="0_CLIPModel", export=True, library_name="sentence_transformers", dynamic_batch_size=False, **input_shapes
)

@@ -130,10 +120,10 @@ from PIL import Image
from sentence_transformers import util
from transformers import CLIPProcessor

-from optimum.neuron import NeuronModelForSentenceTransformers
+from optimum.neuron import NeuronSentenceTransformers

save_directory = "clip_emb"
-emb_model = NeuronModelForSentenceTransformers.from_pretrained(save_directory)
+emb_model = NeuronSentenceTransformers.from_pretrained(save_directory)

processor = CLIPProcessor.from_pretrained(save_directory)
inputs = processor(
@@ -154,7 +144,7 @@ print(cos_scores)

**Caveat**

-Since compiled models with dynamic batching enabled only accept input tensors with the same batch size, we cannot set `dynamic_batch_size=True` if the input texts and images have different batch sizes. And as `NeuronModelForSentenceTransformers` class pads the inputs to the batch sizes (`text_batch_size` and `image_batch_size`) used during the compilation, you could use relatively larger batch sizes during the compilation for flexibility with the trade-off of compute.
+Since compiled models with dynamic batching enabled only accept input tensors with the same batch size, we cannot set `dynamic_batch_size=True` if the input texts and images have different batch sizes. And since the `NeuronSentenceTransformers` class pads the inputs to the batch sizes (`text_batch_size` and `image_batch_size`) used during the compilation, you could use relatively large batch sizes during compilation for flexibility, at the cost of extra compute (see the sketch below).

e.g., if you want to encode 3, 4, or 5 texts and 1 image, you could set `text_batch_size = 5 = max(3, 4, 5)` and `image_batch_size = 1` during the compilation.
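
As a rough illustration of this trade-off, here is a sketch with hypothetical shapes, relying only on the padding behavior described above:

```python
# Sketch: compile once with the largest text batch you expect, then reuse.
# All shapes below are hypothetical examples, not required values.
input_shapes = {
    "text_batch_size": 5,   # max(3, 4, 5) texts per request
    "image_batch_size": 1,  # always one image per request
    "sequence_length": 64,
    "num_channels": 3,
    "height": 224,
    "width": 224,
}
# At inference, a request with only 3 texts is padded up to 5 before it
# reaches the NeuronCore, so the 2 extra rows are wasted compute --
# the price paid for accepting variable request sizes.
```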

4 changes: 2 additions & 2 deletions docs/source/model_doc/modeling_auto.mdx
@@ -33,9 +33,9 @@ The following Neuron model classes are available for natural language processing

[[autodoc]] modeling.NeuronModelForFeatureExtraction

-### NeuronModelForSentenceTransformers
+### NeuronSentenceTransformers

-[[autodoc]] modeling.NeuronModelForSentenceTransformers
+[[autodoc]] modeling_sentence_transformers.NeuronSentenceTransformers

### NeuronModelForMaskedLM

19 changes: 10 additions & 9 deletions docs/source/model_doc/sentence_transformers/overview.mdx
@@ -39,16 +39,16 @@ optimum-cli export neuron -m sentence-transformers/clip-ViT-B-32 --sequence_leng
* Example - Text embeddings

```python
-from optimum.neuron import NeuronModelForSentenceTransformers
+from optimum.neuron import NeuronSentenceTransformers

# configs for compiling model
input_shapes = {
"batch_size": 1,
"sequence_length": 384,
"sequence_length": 512,
}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}

-neuron_model = NeuronModelForSentenceTransformers.from_pretrained(
+neuron_model = NeuronSentenceTransformers.from_pretrained(
"BAAI/bge-large-en-v1.5",
export=True,
**input_shapes,
@@ -63,12 +63,17 @@ neuron_model.push_to_hub(
"bge_emb_neuron/", repository_id="optimum/bge-base-en-v1.5-neuronx" # Replace with your HF Hub repo id
)

+sentences_1 = ["Life is pain au chocolat", "Life is galette des rois"]
+sentences_2 = ["Life is eclaire au cafe", "Life is mille feuille"]
+embeddings_1 = neuron_model.encode(sentences_1, normalize_embeddings=True)
+embeddings_2 = neuron_model.encode(sentences_2, normalize_embeddings=True)
+similarity = neuron_model.similarity(embeddings_1, embeddings_2)
```

* Example - Image Search

```python
-from optimum.neuron import NeuronModelForSentenceTransformers
+from optimum.neuron import NeuronSentenceTransformers

# configs for compiling model
input_shapes = {
@@ -81,7 +86,7 @@ input_shapes = {
}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}

-neuron_model = NeuronModelForSentenceTransformers.from_pretrained(
+neuron_model = NeuronSentenceTransformers.from_pretrained(
"sentence-transformers/clip-ViT-B-32",
subfolder="0_CLIPModel",
export=True,
@@ -98,7 +103,3 @@ neuron_model.push_to_hub(
"clip_emb_neuron/", repository_id="optimum/clip_vit_emb_neuronx" # Replace with your HF Hub repo id
)
```

-## NeuronModelForSentenceTransformers
-
-[[autodoc]] modeling.NeuronModelForSentenceTransformers
2 changes: 1 addition & 1 deletion optimum/commands/neuron/cache.py
@@ -147,7 +147,7 @@ def _list_entries(self):
str(entry["batch_size"]),
str(entry["sequence_length"]),
str(entry.get("tp_degree", entry.get("tensor_parallel_size"))),
str(entry["torch_dtype"]),
str(entry.get("torch_dtype", entry.get("dtype"))),
str(entry["target"]),
)
)
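
The `get` fallback keeps the listing working for cache entries that record the dtype under either `torch_dtype` or `dtype` (which entries use which key is an assumption here). A minimal sketch of the behavior on plain dicts:

```python
# Hypothetical cache entries: some use "torch_dtype", others "dtype".
legacy_entry = {"torch_dtype": "bfloat16", "batch_size": 1}
newer_entry = {"dtype": "bfloat16", "batch_size": 1}

for entry in (legacy_entry, newer_entry):
    # Falls back to "dtype" only when "torch_dtype" is absent.
    print(entry.get("torch_dtype", entry.get("dtype")))  # "bfloat16" both times
```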
4 changes: 2 additions & 2 deletions optimum/neuron/__init__.py
@@ -43,7 +43,6 @@
"modeling_traced": ["NeuronTracedModel"],
"modeling": [
"NeuronModelForFeatureExtraction",
"NeuronModelForSentenceTransformers",
"NeuronModelForMaskedLM",
"NeuronModelForQuestionAnswering",
"NeuronModelForSequenceClassification",
@@ -78,6 +77,7 @@
"modeling_seq2seq": [
"NeuronModelForSeq2SeqLM",
],
"modeling_sentence_transformers": ["NeuronSentenceTransformers"],
"models": [],
"accelerate": [
"NeuronAccelerator",
@@ -115,7 +115,6 @@
NeuronModelForObjectDetection,
NeuronModelForQuestionAnswering,
NeuronModelForSemanticSegmentation,
-NeuronModelForSentenceTransformers,
NeuronModelForSequenceClassification,
NeuronModelForTokenClassification,
NeuronModelForXVector,
@@ -138,6 +137,7 @@
NeuronStableDiffusionXLInpaintPipeline,
NeuronStableDiffusionXLPipeline,
)
+from .modeling_sentence_transformers import NeuronSentenceTransformers
from .modeling_seq2seq import NeuronModelForSeq2SeqLM
from .modeling_traced import NeuronTracedModel

2 changes: 1 addition & 1 deletion optimum/neuron/cache/hub_cache.py
@@ -427,7 +427,7 @@ def select_hub_cached_entries(
continue
if torch_dtype is not None:
target_value = DTYPE_MAPPER.pt(torch_dtype) if isinstance(torch_dtype, str) else torch_dtype
-entry_value = DTYPE_MAPPER.pt(entry.get("torch_dtype"))
+entry_value = DTYPE_MAPPER.pt(entry.get("torch_dtype", entry.get("dtype")))
if target_value != entry_value:
continue
selected.append(entry)
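
Both sides of the comparison are normalized to `torch.dtype` objects before matching, so a caller's string alias can match an entry regardless of which key recorded it. A rough sketch of the idea, using a hypothetical stand-in for `DTYPE_MAPPER.pt` (the real mapping may differ):

```python
import torch

# Hypothetical stand-in for DTYPE_MAPPER.pt -- an assumption, not the real API.
def to_pt(value):
    aliases = {"bf16": torch.bfloat16, "bfloat16": torch.bfloat16,
               "fp16": torch.float16, "float16": torch.float16}
    return aliases.get(value, value)

entry = {"dtype": "bfloat16"}        # hypothetical hub cache entry
target_value = to_pt("bf16")         # caller passed a string alias
entry_value = to_pt(entry.get("torch_dtype", entry.get("dtype")))
assert target_value == entry_value   # both normalize to torch.bfloat16
```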
73 changes: 0 additions & 73 deletions optimum/neuron/modeling.py
@@ -15,7 +15,6 @@
"""NeuronModelForXXX classes for inference on neuron devices using the same API as Transformers."""

import logging
-from typing import TYPE_CHECKING

import torch
from transformers import (
@@ -66,8 +65,6 @@
NEURON_OBJECT_DETECTION_EXAMPLE,
NEURON_QUESTION_ANSWERING_EXAMPLE,
NEURON_SEMANTIC_SEGMENTATION_EXAMPLE,
-NEURON_SENTENCE_TRANSFORMERS_IMAGE_EXAMPLE,
-NEURON_SENTENCE_TRANSFORMERS_TEXT_EXAMPLE,
NEURON_SEQUENCE_CLASSIFICATION_EXAMPLE,
NEURON_TEXT_INPUTS_DOCSTRING,
NEURON_TOKEN_CLASSIFICATION_EXAMPLE,
@@ -76,10 +73,6 @@
)


-if TYPE_CHECKING:
-    pass


logger = logging.getLogger(__name__)


@@ -135,72 +128,6 @@ def forward(
return BaseModelOutputWithPooling(last_hidden_state=last_hidden_state, pooler_output=pooler_output)


-@add_start_docstrings(
-    """
-    Neuron Model for Sentence Transformers.
-    """,
-    NEURON_MODEL_START_DOCSTRING,
-)
-class NeuronModelForSentenceTransformers(NeuronTracedModel):
-    """
-    Sentence Transformers model on Neuron devices.
-    """
-
-    auto_model_class = AutoModel
-    library_name = "sentence_transformers"
-
-    @add_start_docstrings_to_model_forward(
-        NEURON_TEXT_INPUTS_DOCSTRING.format("batch_size, sequence_length")
-        + NEURON_SENTENCE_TRANSFORMERS_TEXT_EXAMPLE.format(
-            processor_class=_TOKENIZER_FOR_DOC,
-            model_class="NeuronModelForSentenceTransformers",
-            checkpoint="optimum/bge-base-en-v1.5-neuronx",
-        )
-        + NEURON_SENTENCE_TRANSFORMERS_IMAGE_EXAMPLE.format(
-            processor_class=_GENERIC_PROCESSOR,
-            model_class="NeuronModelForSentenceTransformers",
-            checkpoint="optimum/clip_vit_emb_neuronx",
-        )
-    )
-    def forward(
-        self,
-        input_ids: torch.Tensor,
-        attention_mask: torch.Tensor,
-        pixel_values: torch.Tensor | None = None,
-        token_type_ids: torch.Tensor | None = None,
-        **kwargs,
-    ):
-        model_type = self.config.neuron["model_type"]
-        neuron_inputs = {"input_ids": input_ids}
-        if pixel_values is not None:
-            neuron_inputs["pixel_values"] = pixel_values
-        neuron_inputs["attention_mask"] = (
-            attention_mask  # The input order for clip is: input_ids, pixel_values, attention_mask.
-        )
-
-        with self.neuron_padding_manager(neuron_inputs) as inputs:
-            outputs = self.model(*inputs)
-            if "clip" in model_type:
-                # Remove padding on batch_size (dim 0)
-                text_embeds = self.remove_padding([outputs[0]], dims=[0], indices=[input_ids.shape[0]])[0]
-                image_embeds = self.remove_padding([outputs[1]], dims=[0], indices=[pixel_values.shape[0]])[0]
-                return ModelOutput(text_embeds=text_embeds, image_embeds=image_embeds)
-            else:
-                # token_embeddings -> (batch_size, sequence_len, hidden_size);
-                # remove padding on batch_size (dim 0) and sequence_length (dim 1)
-                token_embeddings = self.remove_padding(
-                    [outputs[0]], dims=[0, 1], indices=[input_ids.shape[0], input_ids.shape[1]]
-                )[0]
-                # sentence_embedding -> (batch_size, hidden_size); remove padding on batch_size (dim 0)
-                sentence_embedding = self.remove_padding([outputs[1]], dims=[0], indices=[input_ids.shape[0]])[0]
-
-                return ModelOutput(token_embeddings=token_embeddings, sentence_embedding=sentence_embedding)
-

@add_start_docstrings(
"""
Neuron Model with a MaskedLMOutput for masked language modeling tasks.