Commit a8e2489

[DOCS] Adds examples to inference processor docs.
1 parent 6b5f6fb commit a8e2489

File tree

1 file changed

+61
-0
lines changed


docs/reference/ingest/processors/inference.asciidoc

Lines changed: 61 additions & 0 deletions
@@ -735,3 +735,64 @@ You can also specify the target field as follows:
In this case, {feat-imp} is exposed in the
`my_field.foo.feature_importance` field.

[discrete]
[[inference-processor-examples]]
==== {infer-cap} processor examples

The following example uses an <<inference-apis,{infer} endpoint>> in an {infer} processor, in a pipeline named `query_helper_pipeline`, to perform a chat completion task.

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/query_helper_pipeline
{
  "processors": [
    {
      "script": {
        "source": "ctx.prompt = 'Please generate an elasticsearch search query on index `articles_index` for the following natural language query. Dates are in the field `@timestamp`, document types are in the field `type` (options are `news`, `publication`), categories in the field `category` and can be multiple (options are `medicine`, `pharmaceuticals`, `technology`), and document names are in the field `title` which should use a fuzzy match. Ignore fields which cannot be determined from the natural language query context: ' + ctx.content" <1>
      }
    },
    {
      "inference": {
        "model_id": "openai_chat_completions", <2>
        "input_output": {
          "input_field": "prompt",
          "output_field": "query"
        }
      }
    },
    {
      "remove": {
        "field": "prompt"
      }
    }
  ]
}
--------------------------------------------------
// NOTCONSOLE
<1> The `prompt` field contains the prompt for the chat completion task, built by the script processor from the incoming `content` field.
<2> The ID of the {infer} endpoint that performs the chat completion task.

You can test the pipeline with the simulate pipeline API:

[source,console]
--------------------------------------------------
POST _ingest/pipeline/query_helper_pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "content": "artificial intelligence in medicine articles published in the last 12 months"
      }
    }
  ]
}
--------------------------------------------------
// NOTCONSOLE
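
As a usage sketch, the pipeline could then be applied when indexing a document by passing the `pipeline` query parameter to the index API; the `my-index` index name and the document content here are hypothetical, and the generated search query would be stored in the `query` field of the indexed document:

[source,console]
--------------------------------------------------
POST my-index/_doc?pipeline=query_helper_pipeline
{
  "content": "latest technology publications about pharmaceuticals"
}
--------------------------------------------------
// NOTCONSOLE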

[discrete]
[[infer-proc-readings]]
==== Further reading

* https://www.elastic.co/search-labs/blog/openwebcrawler-llms-semantic-text-resume-job-search[Which job is the best for you? Using LLMs and semantic_text to match resumes to jobs]
