Commit 8f019fb

Update semantic search overview page (#2261)
This PR updates the semantic search overview page to reflect:

- simplified workflows (`semantic_text`, Inference API)
- `semantic_text` workflow: inference endpoint creation is optional
- Inference API workflow: setting up an ingest pipeline is optional
- lowering complexity from `Medium` to `Moderate`

Based on: elastic/developer-docs-team#315
1 parent 2390419 commit 8f019fb

File tree

2 files changed (+260, -49 lines)

Lines changed: 254 additions & 47 deletions

solutions/search/semantic-search.md

Lines changed: 6 additions & 2 deletions
@@ -18,6 +18,8 @@ This page focuses on the semantic search workflows available in {{es}}. For deta
 
 {{es}} provides various semantic search capabilities using [natural language processing (NLP)](/explore-analyze/machine-learning/nlp.md) and [vector search](vector.md).
 
+To understand the infrastructure that powers semantic search and other NLP tasks, including managed services and inference endpoints, see the [Elastic Inference overview](../../explore-analyze/elastic-inference.md) page.
+
 Learn more about use cases for AI-powered search in the [overview](ai-search/ai-search.md) page.
 
 ## Overview of semantic search workflows [semantic-search-workflows-overview]
@@ -38,13 +40,15 @@ This diagram summarizes the relative complexity of each workflow:
 
 ### Option 1: `semantic_text` [_semantic_text_workflow]
 
-The simplest way to use NLP models in the {{stack}} is through the [`semantic_text` workflow](semantic-search/semantic-search-semantic-text.md). We recommend using this approach because it abstracts away a lot of manual work. All you need to do is create an {{infer}} endpoint and an index mapping to start ingesting, embedding, and querying data. There is no need to define model-related settings and parameters, or to create {{infer}} ingest pipelines. For more information about the supported services, refer to [](/explore-analyze/elastic-inference/inference-api.md) and the [{{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-inference) documentation .
+The simplest way to use NLP models in the {{stack}} is through the [`semantic_text` workflow](semantic-search/semantic-search-semantic-text.md). We recommend using this approach because it abstracts away a lot of manual work. All you need to do is create an index mapping to start ingesting, embedding, and querying data. There is no need to define model-related settings and parameters, or to create {{infer}} ingest pipelines.
+
+To learn more about supported services, refer to [](/explore-analyze/elastic-inference/inference-api.md) and the [{{infer}} API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-inference) documentation.
 
 For an end-to-end tutorial, refer to [Semantic search with `semantic_text`](semantic-search/semantic-search-semantic-text.md).
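As context for the change above: the `semantic_text` workflow in the new text can be sketched roughly as follows. The index name, field name, and query strings are illustrative, and this assumes the default preconfigured ELSER-backed {{infer}} endpoint, which is why no endpoint-creation step appears.

```console
PUT my-semantic-index
{
  "mappings": {
    "properties": {
      "content": { "type": "semantic_text" }
    }
  }
}

POST my-semantic-index/_doc
{
  "content": "Semantic search matches meaning rather than exact keywords."
}

GET my-semantic-index/_search
{
  "query": {
    "semantic": {
      "field": "content",
      "query": "find results by intent, not keywords"
    }
  }
}
```

Embeddings are generated automatically at ingest and query time, which is what lets the updated page drop the mandatory endpoint-creation step.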
 
 ### Option 2: Inference API [_infer_api_workflow]
 
-The {{infer}} API workflow is more complex but offers greater control over the {{infer}} endpoint configuration. You need to create an {{infer}} endpoint, provide various model-related settings and parameters, define an index mapping, and set up an {{infer}} ingest pipeline with the appropriate settings.
+The {{infer}} API workflow is more complex but offers greater control over the {{infer}} endpoint configuration. You need to create an {{infer}} endpoint, provide various model-related settings and parameters, and define an index mapping. Optionally, you can set up an {{infer}} ingest pipeline for automatic embedding during data ingestion; alternatively, you can call the {{infer}} API manually.
 
 For an end-to-end tutorial, refer to [Semantic search with the {{infer}} API](semantic-search/semantic-search-inference.md).
 
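For contrast, a minimal sketch of the {{infer}} API workflow that the second paragraph describes: endpoint, mapping, and the now-optional ingest pipeline. All names are illustrative, and an ELSER-style sparse-embedding model is assumed.

```console
PUT _inference/sparse_embedding/my-elser-endpoint
{
  "service": "elasticsearch",
  "service_settings": {
    "model_id": ".elser_model_2",
    "num_allocations": 1,
    "num_threads": 1
  }
}

PUT my-index
{
  "mappings": {
    "properties": {
      "content": { "type": "text" },
      "content_embedding": { "type": "sparse_vector" }
    }
  }
}

PUT _ingest/pipeline/my-embedding-pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "my-elser-endpoint",
        "input_output": {
          "input_field": "content",
          "output_field": "content_embedding"
        }
      }
    }
  ]
}
```

If you skip the pipeline, you can instead call the {{infer}} endpoint directly with your text and index the returned embedding yourself, which is the manual alternative the updated paragraph mentions.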
