Commit b65b4df

Character change
Signed-off-by: Fanit Kolchina <[email protected]>
1 parent 0b03178 commit b65b4df

File tree

1 file changed (+1, -1)

_posts/2025-05-15-optimized-inference-processors.md

Lines changed: 1 addition & 1 deletion
@@ -228,7 +228,7 @@ The following table presents the benchmarking test results for the `sparse_encod
 
 As demonstrated by the cost and performance results, the `skip_existing` optimization significantly reduces redundant inference operations, which translates to lower costs and improved system performance. By reusing existing embeddings when input fields remain unchanged, ingest pipelines can process updates faster and more efficiently. This strategy improves system performance, enhances scalability, and delivers more cost-effective embedding retrieval at scale.
 
-## Whats next
+## What's next
 
 If you use the Bulk API with ingest pipelines, it's important to understand how different operations behave.
 
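For context, the `skip_existing` behavior summarized in the diff above is configured on the inference processor inside an ingest pipeline. The following is a minimal sketch of what enabling it for a `sparse_encoding` processor might look like; the pipeline name, model ID, field names, and credentials are placeholders rather than values from the post, and the exact parameter shape should be checked against the OpenSearch documentation.

```python
import requests

OPENSEARCH_URL = "https://localhost:9200"  # placeholder local cluster

# Ingest pipeline with a sparse_encoding processor that reuses existing
# embeddings when the source field has not changed between updates.
pipeline = {
    "description": "Sparse encoding with skip_existing enabled",
    "processors": [
        {
            "sparse_encoding": {
                "model_id": "my-sparse-model-id",                # placeholder model ID
                "field_map": {"text": "text_sparse_embedding"},  # source field -> embedding field
                "skip_existing": True                            # skip inference if "text" is unchanged
            }
        }
    ]
}

response = requests.put(
    f"{OPENSEARCH_URL}/_ingest/pipeline/sparse-skip-existing",
    json=pipeline,
    auth=("admin", "admin"),  # placeholder credentials
    verify=False,             # self-signed certs on a local dev cluster
    timeout=30,
)
response.raise_for_status()
print(response.json())
```

With `skip_existing` turned on, re-indexing a document whose `text` field is unchanged should not trigger another model inference call, which is the cost and latency saving the post's benchmarks measure.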