Explain VCU consumption better #2602
Explain Ingest, Search, and ML VCU consumption better. I see confusion around these questions: Are search VCUs directly related to the number of searches? Are ingest VCUs directly related to the number of ingest operations? I have added information breaking down what drives search, ingest, and ML VCU consumption.
🔍 Preview links for changed docs:
deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md
…-dimensions.md Co-authored-by: David Kilfoyle <[email protected]>
LGTM! 🚀
Thanks @shubhaat
* **Indexing:** The VCUs used to index incoming documents. Indexing VCUs account for the compute resources consumed during ingestion, which depend on the ingestion rate and the amount of data ingested at any given time. Transforms and ingest pipelines also contribute to ingest VCU consumption.
* **Search:** The VCUs used to return search results with the latency and queries per second (QPS) you require. Search VCUs are not charged per search request; instead, they are a factor of the compute resources that scale up and down based on the amount of searchable data, the search load (QPS), and performance requirements (latency and availability).
* **Machine learning:** The VCUs used to perform inference, NLP tasks, and other ML activities. ML VCUs are a factor of the models deployed and the number of ML operations, such as inference for search and ingest. ML VCUs are typically consumed when generating embeddings during ingestion, and during semantic search or reranking.
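To illustrate the point the search bullet is making, here is a minimal sketch in Python. This is a made-up toy model, not Elastic's actual billing formula: the `vcu_hours` function and all numbers are hypothetical. It only shows that capacity-based accounting bills the compute that is provisioned over time, so two workloads with very different request counts cost the same if capacity never scales.

```python
# Hypothetical illustration only: VCU billing tracks compute capacity
# over time, not the number of individual search requests.
# The function name, sample values, and rates are all made up.

def vcu_hours(capacity_samples: list[float], interval_hours: float) -> float:
    """Sum VCU-hours from periodic measurements of provisioned capacity."""
    return sum(capacity_samples) * interval_hours

# Two workloads with very different request counts, but the same
# provisioned capacity (8 VCUs sampled hourly over 4 hours)...
low_qps = vcu_hours([8, 8, 8, 8], interval_hours=1)    # few searches
high_qps = vcu_hours([8, 8, 8, 8], interval_hours=1)   # many searches

# ...consume the same VCU-hours, because capacity never scaled.
assert low_qps == high_qps == 32
```

Per-request pricing would instead multiply a rate by the request count; the distinction the doc change draws is that search VCUs follow the first model, not the second.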
@shubhaat Catching up on GH notifications :D I wonder if we want to mention that ML VCUs only apply if they use models deployed on ML nodes; i.e., if they use Elastic Managed LLMs, there are no ML VCU costs, only token charges.