Merged
Changes from 1 commit
23 changes: 9 additions & 14 deletions docs/en/stack/ml/nlp/ml-nlp-overview.asciidoc
@@ -11,22 +11,17 @@ natural language in spoken word or written text.
Elastic offers a wide range of possibilities to leverage natural language
processing.

-You can **integrate NLP models from different providers** such as Cohere,
-HuggingFace, or OpenAI and use them as a service through the
-{ref}/inference-apis.html[{infer} API]. You can also use <<ml-nlp-elser,ELSER>>
-(the retrieval model trained by Elastic) and <<ml-nlp-e5,E5>> in the same way.
-This {ref}/semantic-search-inference.html[tutorial] walks you through the
-process of using the various services with the {infer} API.
+You can **integrate NLP models from different providers** such as Cohere, HuggingFace, or OpenAI and use them as a service through the {ref}/semantic-search-semantic-text.html[semantic_text] workflow.
+You can also use <<ml-nlp-elser,ELSER>> (the retrieval model trained by Elastic) and <<ml-nlp-e5,E5>> in the same way.
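
For context, the `semantic_text` workflow can be as simple as mapping a field with that type. A minimal sketch, where the index and field names are illustrative and the inference endpoint is left to the version-dependent default (an ELSER endpoint, unless an `inference_id` is specified):

```console
PUT my-index
{
  "mappings": {
    "properties": {
      "content": {
        "type": "semantic_text"
      }
    }
  }
}
```

Text indexed into the `content` field is then chunked and embedded automatically at ingest time.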

-You can **upload and manage NLP models** using the Eland client and the
-<<ml-nlp-deploy-models,{stack}>>. Find the
-<<ml-nlp-model-ref,list of recommended and compatible models here>>. Refer to
-<<ml-nlp-examples>> to learn more about how to use {ml} models deployed in your
-cluster.
+The {ref}/inference-apis.html[{infer} API] enables you to use the same services with a more complex workflow that, in turn, offers greater control over the configuration settings.
+This {ref}/semantic-search-inference.html[tutorial] walks you through the process of using the various services with the {infer} API.
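
As a sketch of that finer-grained control, an {infer} endpoint for an external provider might be created like this. The endpoint name is illustrative, and the `service_settings` fields vary by provider:

```console
PUT _inference/text_embedding/my-cohere-endpoint
{
  "service": "cohere",
  "service_settings": {
    "api_key": "<your-api-key>",
    "model_id": "embed-english-v3.0"
  }
}
```

The resulting endpoint can then be referenced by its inference ID, for example from a `semantic_text` field's `inference_id` parameter.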

-You can **store embeddings in your {es} vector database** if you generate
-{ref}/dense-vector.html[dense vector] or {ref}/sparse-vector.html[sparse vector]
-model embeddings outside of {es}.
+You can **upload and manage NLP models** using the Eland client and the <<ml-nlp-deploy-models,{stack}>>.
+Find the <<ml-nlp-model-ref,list of recommended and compatible models here>>.
+Refer to <<ml-nlp-examples>> to learn more about how to use {ml} models deployed in your cluster.
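
For illustration, importing a Hugging Face model with the Eland CLI might look like the following command sketch. The URL, credentials, and model chosen here are placeholders, not a prescribed setup:

```
eland_import_hub_model \
  --url https://elastic:<password>@localhost:9200 \
  --hub-model-id elastic/distilbert-base-uncased-finetuned-conll03-english \
  --task-type ner \
  --start
```

The `--start` flag deploys the model after the upload completes, so it is immediately available for {infer}.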

+You can **store embeddings in your {es} vector database** if you generate {ref}/dense-vector.html[dense vector] or {ref}/sparse-vector.html[sparse vector] model embeddings outside of {es}.
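
A minimal mapping for such externally generated embeddings might look like this. The index name, field name, and `dims` value are illustrative; `dims` must match the output dimension of the model that produced the vectors:

```console
PUT my-embeddings
{
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "dense_vector",
        "dims": 384
      }
    }
  }
}
```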


[discrete]