diff --git a/deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md b/deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md
index 0d806dd7d9..baeacf5ea2 100644
--- a/deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md
+++ b/deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md
@@ -27,9 +27,9 @@ For detailed {{es-serverless}} project rates, see the [{{es-serverless}} pricing
 {{es}} uses three VCU types:
 
-* **Indexing:** The VCUs used to index incoming documents.
-* **Search:** The VCUs used to return search results, with the latency and queries per second (QPS) you require.
-* **Machine learning:** The VCUs used to perform inference, NLP tasks, and other ML activities.
+* **Indexing:** The VCUs used to index incoming documents. Indexing VCUs account for the compute resources consumed during ingestion, based on the ingestion rate and the amount of data ingested at any given time. Transforms and ingest pipelines also contribute to indexing VCU consumption.
+* **Search:** The VCUs used to return search results, with the latency and queries per second (QPS) you require. Search VCUs are not charged per search request; instead, they reflect the compute resources that scale up and down with the amount of searchable data, the search load (QPS), and the performance you require (latency and availability).
+* **Machine learning:** The VCUs used to perform inference, NLP tasks, and other ML activities. ML VCUs scale with the models deployed and the number of ML operations, such as inference for search and ingest. ML VCUs are typically consumed when generating embeddings during ingestion, and during semantic search or reranking.
 * **Tokens:** The Elastic Managed LLM is charged per 1Mn Input and Output tokens. The LLM powers all AI Search features such as Playground and AI Assistant for Search, and is enabled by default.