Commit 10bf326

Merge branch 'main' into add-query-rules-ui-link-to-query-rules-api-reference

2 parents: cb97299 + 65e7af7

File tree: 7 files changed (+23, -18 lines)

output/openapi/elasticsearch-openapi.json

Lines changed: 4 additions & 4 deletions. Generated file; diff not rendered by default.

output/openapi/elasticsearch-serverless-openapi.json

Lines changed: 4 additions & 4 deletions. Generated file; diff not rendered by default.

output/schema/schema.json

Lines changed: 4 additions & 4 deletions. Generated file; diff not rendered by default.

specification/inference/put/examples/request/InferencePutExample1.yaml

Lines changed: 2 additions & 1 deletion
@@ -6,9 +6,10 @@ value: |-
     "service_settings": {
       "model_id": "rerank-english-v3.0",
       "api_key": "{{COHERE_API_KEY}}"
-    }
+    },
     "chunking_settings": {
       "strategy": "recursive",
       "max_chunk_size": 200,
       "separator_group": "markdown"
+    }
   }
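
Read as a whole, the corrected example encodes a create-inference-endpoint request along these lines (Console format, sketch only). Only the service_settings and chunking_settings lines are confirmed by the hunk; the path uses the generic `PUT _inference/{task_type}/{inference_id}` template, and the `"service": "cohere"` line is an assumption inferred from the `{{COHERE_API_KEY}}` placeholder.

# Sketch only. The path below is the generic API template and the
# "service": "cohere" line is assumed; neither appears in the hunk above.
PUT _inference/{task_type}/{inference_id}
{
  "service": "cohere",
  "service_settings": {
    "model_id": "rerank-english-v3.0",
    "api_key": "{{COHERE_API_KEY}}"
  },
  "chunking_settings": {
    "strategy": "recursive",
    "max_chunk_size": 200,
    "separator_group": "markdown"
  }
}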

specification/inference/put_llama/examples/request/PutLlamaRequestExample1.yaml

Lines changed: 4 additions & 2 deletions
@@ -1,12 +1,14 @@
 # summary:
-description: Run `PUT _inference/text_embedding/llama-text-embedding` to create a Llama inference endpoint that performs a `text_embedding` task.
+description:
+  Run `PUT _inference/text_embedding/llama-text-embedding` to create a Llama inference endpoint that performs a
+  `text_embedding` task.
 method_request: 'PUT _inference/text_embedding/llama-text-embedding'
 # type: "request"
 value: |-
   {
     "service": "llama",
     "service_settings": {
-      "url": "http://localhost:8321/v1/inference/embeddings"
+      "url": "http://localhost:8321/v1/inference/embeddings",
       "dimensions": 384,
       "model_id": "all-MiniLM-L6-v2"
     }
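
For reference, the corrected example corresponds to a request along these lines (Console format). Everything except the final closing brace is taken from the hunk; any lines of the example file beyond the hunk are not shown here.

# Sketch assembled from the hunk above; the outer closing brace is inferred.
PUT _inference/text_embedding/llama-text-embedding
{
  "service": "llama",
  "service_settings": {
    "url": "http://localhost:8321/v1/inference/embeddings",
    "dimensions": 384,
    "model_id": "all-MiniLM-L6-v2"
  }
}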

specification/inference/put_llama/examples/request/PutLlamaRequestExample2.yaml

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ value: |-
   {
     "service": "llama",
     "service_settings": {
-      "url": "http://localhost:8321/v1/openai/v1/chat/completions"
+      "url": "http://localhost:8321/v1/openai/v1/chat/completions",
       "model_id": "llama3.2:3b"
     }
   }
Lines changed: 4 additions & 2 deletions
@@ -1,12 +1,14 @@
 # summary:
-description: Run `PUT _inference/chat-completion/llama-chat-completion` to create a Llama inference endpoint that performs a `chat_completion` task.
+description:
+  Run `PUT _inference/chat-completion/llama-chat-completion` to create a Llama inference endpoint that performs a
+  `chat_completion` task.
 method_request: 'PUT _inference/chat-completion/llama-chat-completion'
 # type: "request"
 value: |-
   {
     "service": "llama",
     "service_settings": {
-      "url": "http://localhost:8321/v1/openai/v1/chat/completions"
+      "url": "http://localhost:8321/v1/openai/v1/chat/completions",
       "model_id": "llama3.2:3b"
     }
   }
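
Assembled from the hunk above, the corrected example encodes this request (Console format); all lines come directly from the diff.

# Assembled directly from the hunk above; no lines added.
PUT _inference/chat-completion/llama-chat-completion
{
  "service": "llama",
  "service_settings": {
    "url": "http://localhost:8321/v1/openai/v1/chat/completions",
    "model_id": "llama3.2:3b"
  }
}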
