contribute-docs/api-docs/kibana-api-docs-quickstart.md (18 additions & 4 deletions)
@@ -202,7 +202,7 @@ responses:
:sync: code-generated

:::{note}
-**This step is optional.** CI will automatically capture the snapshot when you push your `.ts` changes. Running this locally is useful for validating changes before pushing or debugging issues. See [`capture_oas_snapshot.sh`](https://github.com/elastic/kibana/blob/main/.buildkite/scripts/steps/checks/capture_oas_snapshot.sh) for the full list of paths captured in CI.
+**This step is optional.** CI will automatically capture the snapshot when you push your `.ts` changes. Running this locally is useful for validating changes before pushing or debugging issues.
:::

This step captures the OpenAPI specification that {{kib}} generates at runtime from your route definitions. It spins up a local {{es}} and {{kib}} cluster with your code changes. This generates the following output files in the `oas_docs` directory:
@@ -214,13 +214,27 @@ This step captures the OpenAPI specification that {{kib}} generates at runtime f
- [Docker](https://docs.docker.com/get-docker/) must be running
- If you're an Elastician, ensure you're logged into Docker with your Elastic account

-**Capture all API paths** (recommended):
+To capture all the documented API paths, copy the command from [`capture_oas_snapshot.sh`](https://github.com/elastic/kibana/blob/main/.buildkite/scripts/steps/checks/capture_oas_snapshot.sh). For example:

```bash
-node scripts/capture_oas_snapshot --update
+node scripts/capture_oas_snapshot \
+  --include-path /api/status \
+  --include-path /api/alerting/rule/ \
+  --include-path /api/alerting/rules \
+  --include-path /api/actions \
+  --include-path /api/security/role \
+  --include-path /api/spaces \
+  --include-path /api/streams \
+  --include-path /api/fleet \
+  --include-path /api/saved_objects/_import \
+  --include-path /api/saved_objects/_export \
+  --include-path /api/maintenance_window \
+  --include-path /api/agent_builder \
+  --update
```

-**For faster iteration**, capture the specific paths you're working on:
+For faster iteration, you can capture the specific paths you're working on, though this minimized output should not be included in your pull request.
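As a sketch (not part of the diff) of the faster-iteration workflow added above, the same command can be limited to just the paths you are touching; the path below is only an example taken from the full list:

```bash
node scripts/capture_oas_snapshot \
  --include-path /api/fleet \
  --update
```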
deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md (1 addition & 1 deletion)
@@ -30,7 +30,7 @@ Deployment capacity typically constitutes the majority of your bill, and is the

### How can I control the deployment capacity cost? [ec_how_can_i_control_the_deployment_capacity_cost]

-Deployment capacity is purely a function of your current deployment configuration and time. To reduce this cost, you must [configure your deployment](../../deploy/elastic-cloud/configure.md) to use fewer resources. To determine how much a particular deployment configuration will cost, try our [pricing calculator](https://cloud.elastic.co/pricing).
+Deployment capacity is purely a function of your current deployment configuration and time. To reduce this cost, you must [configure your deployment](../../deploy/elastic-cloud/configure.md) to use fewer resources. To determine how much a particular deployment configuration will cost, try our {{ech}} [pricing calculator](https://cloud.elastic.co/pricing).
deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md (1 addition & 1 deletion)
@@ -33,7 +33,7 @@ For detailed {{es-serverless}} project rates, refer to the [{{es-serverless}} pr
* **Indexing:** The VCUs used to index incoming documents. Indexing VCUs account for compute resources consumed for ingestion. This is based on ingestion rate and amount of data ingested at any given time. Transforms and ingest pipelines also contribute to ingest VCU consumption.
* **Search:** The VCUs used to return search results with the latency and queries per second (QPS) you require. Search VCUs are calculated as a factor of the compute resources needed to run search queries, search throughput, and latency. Search VCUs are not charged per search request. Instead, they are a factor of the compute resources that scale up and down based on amount of searchable data, search load (QPS), and performance (latency and availability).
* **Machine learning:** The VCUs used to perform inference, NLP tasks, and other ML activities. ML VCUs are a factor of the models deployed and number of ML operations such as inference for search and ingest. ML VCUs are typically consumed for generating embeddings during ingestion and during semantic search or reranking.
-* **Tokens:** The Elastic Managed LLM is charged per 1 million input and output tokens. The LLM powers all AI Search features such as Playground and AI Assistant for Search and is enabled by default.
+* **Tokens:** [The Elastic Inference Service](https://www.elastic.co/docs/explore-analyze/elastic-inference/eis) is charged based on tokens used with machine learning models. For embeddings and rerankers, usage is billed per million input tokens sent to the models. For LLMs, this is either per 1 million input or per 1 million output tokens. Elastic Managed LLMs can power all AI Search features (such as Playground and AI Assistant for Search), as well as features in the Security and Observability products, and are enabled by default.

## Data storage and billing [elasticsearch-billing-information-about-the-search-ai-lake-dimension-gb]
deploy-manage/cloud-organization/billing/serverless-project-billing-dimensions.md (1 addition & 1 deletion)
@@ -15,7 +15,7 @@ products:
* [Offerings](#offerings)
* [Add-ons](#add-ons)

-Specific prices can be found in the [Cloud Pricing Table](https://cloud.elastic.co/cloud-pricing-table?productType=serverless).
+Specific prices can be found in the [Cloud Pricing Table](https://cloud.elastic.co/cloud-pricing-table?productType=serverless) or you can create an [Elastic Cloud Serverless Estimate](https://cloud.elastic.co/pricing/serverless).
deploy-manage/deploy/cloud-enterprise/resize-deployment.md (1 addition & 1 deletion)
@@ -34,7 +34,7 @@ To resize a deployment:
::::

RAM per instance
-: Node and instance capacity should be sufficient to sustain your search workload, even if you lose an availability zone. Currently, half of the memory is assigned to the JVM heap. For example, on an {{es}} cluster node with 32 GB RAM, 16 GB would be allotted to heap. Up to 64 GB RAM and 1 TB storage per node are supported.
+: Node and instance capacity should be sufficient to sustain your search workload, even if you lose an availability zone. For instances up to 64 GB of RAM, half the memory is assigned to the JVM heap. For instances larger than 64 GB, the heap size is capped at 32 GB. For example, on an {{es}} cluster node with 32 GB RAM, 16 GB would be allotted to heap, while on a 128 GB node, 32 GB would be allotted to heap. Up to 256 GB RAM and 1 TB storage per node are supported.

Before finalizing your changes, you can review the **Architecture** summary, which shows the total number of instances per zone, with each circle color representing a different type of instance.
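The heap-sizing rule in the added line above (half of node RAM, capped at 32 GB once the instance exceeds 64 GB) can be sketched as follows; this is only an illustration, and `estimate_heap_gb` is a hypothetical helper, not an Elastic tool:

```bash
# Sketch of the heap-sizing rule described above: half of the node's RAM,
# capped at 32 GB for instances larger than 64 GB.
estimate_heap_gb() {
  local ram_gb=$1
  local heap_gb=$(( ram_gb / 2 ))
  (( heap_gb > 32 )) && heap_gb=32
  echo "${heap_gb}"
}

estimate_heap_gb 32    # prints 16 (half of 32 GB)
estimate_heap_gb 128   # prints 32 (capped)
```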