_posts/2025-05-15-optimized-inference-processors.md (1 addition, 1 deletion)
@@ -15,7 +15,7 @@ meta_description: Learn how to optimize inference processors in OpenSearch to re
Inference processors, such as `text_embedding`, `text_image_embedding`, and `sparse_encoding`, enable the generation of vector embeddings during document ingestion or updates. Today, these processors invoke model inference every time a document is ingested or updated, even if the embedding source fields remain unchanged. This can lead to unnecessary compute usage and increased costs.
-This blog post introduces a new inference processor optimization that reduces redundant inference calls, reducing costs and improving overall performance.
+This blog post introduces a new inference processor optimization that reduces redundant inference calls, lowering costs and improving overall performance.
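The optimization described in the context line above — skipping model inference when the embedding source fields are unchanged on an update — can be sketched as follows. This is a minimal Python sketch with illustrative names (`maybe_embed`, `embed_fn`), not the actual OpenSearch processor implementation:

```python
def maybe_embed(stored_doc, incoming_doc, source_field, embed_fn,
                embedding_field="embedding"):
    """Return an embedding for incoming_doc, reusing the stored one
    when the source field has not changed (illustrative sketch)."""
    new_text = incoming_doc.get(source_field)
    if (
        stored_doc is not None
        and stored_doc.get(source_field) == new_text
        and embedding_field in stored_doc
    ):
        # Source text unchanged: reuse the existing embedding,
        # no inference call needed.
        return stored_doc[embedding_field]
    # Source text is new or changed: invoke model inference.
    return embed_fn(new_text)


calls = 0

def fake_embed(text):
    # Stand-in for a real embedding model; counts inference calls.
    global calls
    calls += 1
    return [float(len(text))]

# Initial ingestion: inference runs once.
doc_v1 = {"title": "hello world"}
doc_v1["embedding"] = maybe_embed(None, doc_v1, "title", fake_embed)

# Update that leaves the source field untouched: inference is skipped
# and the stored embedding is reused.
doc_v2 = {"title": "hello world", "views": 3}
emb2 = maybe_embed(doc_v1, doc_v2, "title", fake_embed)

assert calls == 1
assert emb2 == doc_v1["embedding"]
```

On a real cluster the comparison would happen inside the ingest pipeline against the previously indexed document, but the principle is the same: compare the embedding source fields first, and only call the model when they differ.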