
Commit 30be44d

Addresses feedback.
1 parent 6858ebb commit 30be44d

File tree

2 files changed: +13 −8 lines changed


explore-analyze/elastic-inference.md

Lines changed: 11 additions & 6 deletions
@@ -10,10 +10,15 @@ navigation_title: Elastic Inference
 ## Overview
 
 {{infer-cap}} is a process of using a {{ml}} trained model to make predictions or operations - such as text embedding, or reranking - on your data.
-You can use {{infer}} during ingest time (for example, to create embeddings from textual data you ingest) or search time (to perform [semantic search](/solutions/search/semantic-search.md)).
-There are several ways to perform {{infer}} in the {{stack}}:
+You can use {{infer}} during ingest time (for example, to create embeddings from textual data you ingest) or search time (to perform [semantic search](/solutions/search/semantic-search.md) based on the embeddings created previously).
+There are several ways to perform {{infer}} in the {{stack}}, depending on the underlying {{infer}} infrastructure and the interface you use:
 
-* [Using the Elastic {{infer-cap}} Service](elastic-inference/eis.md)
-* [Using `semantic_text` if you want to perform semantic search](/solutions/search/semantic-search/semantic-search-semantic-text.md)
-* [Using the {{infer}} API](elastic-inference/inference-api.md)
-* [Trained models deployed in your cluster](machine-learning/nlp/ml-nlp-overview.md)
+- **{{infer-cap}} infrastructure:**
+
+  - [Elastic {{infer-cap}} Service](elastic-inference/eis.md): a managed service that runs {{infer}} outside your cluster resources.
+  - [Trained models deployed in your cluster](machine-learning/nlp/ml-nlp-overview.md): models that run on your own {{ml}} nodes.
+
+- **Access methods:**
+
+  - [The `semantic_text` workflow](/solutions/search/semantic-search/semantic-search-semantic-text.md): a simplified method that uses the {{infer}} API behind the scenes to enable semantic search.
+  - [The {{infer}} API](elastic-inference/inference-api.md): a general-purpose API that enables you to run {{infer}} using EIS, your own models, or third-party services.
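The two access methods in the list above can be illustrated with request sketches: first creating an {{infer}} endpoint through the {{infer}} API, then referencing it from a `semantic_text` field mapping. This is a minimal sketch, not part of the committed docs; the endpoint name `my-elser-endpoint` and index name `my-index` are hypothetical placeholders.

```console
PUT _inference/sparse_embedding/my-elser-endpoint
{
  "service": "elasticsearch",
  "service_settings": {
    "model_id": ".elser_model_2",
    "num_allocations": 1,
    "num_threads": 1
  }
}

PUT my-index
{
  "mappings": {
    "properties": {
      "content": {
        "type": "semantic_text",
        "inference_id": "my-elser-endpoint"
      }
    }
  }
}
```

With this mapping in place, documents indexed into `content` are chunked and embedded automatically at ingest time, and a `semantic` query against the field performs the search-time {{infer}} described above.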

explore-analyze/elastic-inference/eis.md

Lines changed: 2 additions & 2 deletions
@@ -19,10 +19,10 @@ Instead, you can use {{ml}} models for ingest, search, and chat independently of
 
 ## Region and hosting [eis-regions]
 
-Requests through the Elastic Managed LLM are currently proxying to AWS Bedrock in AWS US regions, beginning with `us-east-1`.
+Requests through the `Elastic Managed LLM` are currently proxied to AWS Bedrock in AWS US regions, beginning with `us-east-1`.
 The request routing does not restrict the location of your deployments.
 
-ELSER requests are managed by Elastic own EIS infrastructure.
+ELSER requests are managed by Elastic's own EIS infrastructure.
 
 ## ELSER via Elastic {{infer-cap}} Service (ELSER on EIS)
