Commit b69b60b

shubhaat and kilfoyle authored
Explain VCU consumption better (#2602)
Explain Ingest, Search, and ML VCU consumption better. I see confusion around these questions: Are search VCUs directly related to the number of searches? Are ingest VCUs directly related to the number of ingest operations? I have added information to break down what drives search, ingest, and ML VCU consumption.

Co-authored-by: David Kilfoyle <[email protected]>
1 parent: fed5507 · commit: b69b60b

File tree

1 file changed: +3 -3 lines changed

deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md

Lines changed: 3 additions & 3 deletions
@@ -27,9 +27,9 @@ For detailed {{es-serverless}} project rates, see the [{{es-serverless}} pricing
 
 {{es}} uses three VCU types:
 
-* **Indexing:** The VCUs used to index incoming documents.
-* **Search:** The VCUs used to return search results, with the latency and queries per second (QPS) you require.
-* **Machine learning:** The VCUs used to perform inference, NLP tasks, and other ML activities.
+* **Indexing:** The VCUs used to index incoming documents. Indexing VCUs account for the compute resources consumed during ingestion, which depend on the ingestion rate and the amount of data ingested at any given time. Transforms and ingest pipelines also contribute to ingest VCU consumption.
+* **Search:** The VCUs used to return search results, with the latency and queries per second (QPS) you require. Search VCUs are not charged per search request. Instead, they are a function of the compute resources needed to run search queries, which scale up and down with the amount of searchable data, the search load (QPS), and the required performance (latency and availability).
+* **Machine learning:** The VCUs used to perform inference, NLP tasks, and other ML activities. ML VCUs are a function of the models deployed and the number of ML operations, such as inference during search and ingest. ML VCUs are typically consumed when generating embeddings during ingestion, and during semantic search or reranking.
 * **Tokens:** The Elastic Managed LLM is charged per 1Mn Input and Output tokens. The LLM powers all AI Search features such as Playground and AI Assistant for Search, and is enabled by default.
 
 
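To make the distinction in the new text concrete, here is a minimal back-of-the-envelope sketch of how these billing dimensions might combine into a monthly charge. Every rate and usage figure below is a hypothetical placeholder, not an Elastic price (real rates are on the {{es-serverless}} pricing page); the point is only that VCU charges track compute capacity held over time, not per-request counts.

```python
# Hypothetical illustration of how the four billing dimensions compose.
# All rates are PLACEHOLDERS, not Elastic's actual prices.

HYPOTHETICAL_RATES = {
    "indexing_vcu_hour": 0.14,      # $ per indexing VCU-hour (placeholder)
    "search_vcu_hour": 0.14,        # $ per search VCU-hour (placeholder)
    "ml_vcu_hour": 0.14,            # $ per ML VCU-hour (placeholder)
    "llm_per_million_tokens": 5.0,  # $ per 1M input+output tokens (placeholder)
}

def estimate_monthly_cost(indexing_vcu_hours: float, search_vcu_hours: float,
                          ml_vcu_hours: float, llm_tokens: int) -> float:
    """Sum the three VCU dimensions plus LLM token usage."""
    r = HYPOTHETICAL_RATES
    return (
        indexing_vcu_hours * r["indexing_vcu_hour"]
        + search_vcu_hours * r["search_vcu_hour"]
        + ml_vcu_hours * r["ml_vcu_hour"]
        + (llm_tokens / 1_000_000) * r["llm_per_million_tokens"]
    )

# A workload that held ~4 search VCUs for 720 hours consumes 2880 search
# VCU-hours for the month, regardless of how many individual queries ran.
print(f"${estimate_monthly_cost(1200, 2880, 300, 2_500_000):,.2f}")
```

Note how the example query count never appears in the formula: doubling QPS matters only insofar as it forces more search VCUs to be provisioned, which is exactly the confusion the commit message describes.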
