Commit 00bdb79

Update _posts/2025-05-15-optimized-inference-processors.md

Co-authored-by: Nathan Bower <[email protected]>
Signed-off-by: Will Hwang <[email protected]>

1 parent abbf2fa

File tree

1 file changed (+1, −1)


_posts/2025-05-15-optimized-inference-processors.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ meta_description: Learn how to optimize inference processors in OpenSearch to re
 
 Inference processors, such as `text_embedding`, `text_image_embedding`, and `sparse_encoding`, enable the generation of vector embeddings during document ingestion or updates. Today, these processors invoke model inference every time a document is ingested or updated, even if the embedding source fields remain unchanged. This can lead to unnecessary compute usage and increased costs.
 
-This blog post introduces a new inference processor optimization that reduces redundant inference calls, reducing costs and improving overall performance.
+This blog post introduces a new inference processor optimization that reduces redundant inference calls, lowering costs and improving overall performance.
 
 ## How the optimization works
 
