Commit 93a9a94
[API] Updates source code docs
1 parent 36415a1 commit 93a9a94
16 files changed: +14 −70 lines

elasticsearch-api/lib/elasticsearch/api/actions/cat/nodes.rb

Lines changed: 5 additions & 4 deletions
@@ -29,12 +29,13 @@ module Actions
 # @option arguments [String] :bytes The unit used to display byte values.
 # @option arguments [Boolean, String] :full_id If +true+, return the full node ID. If +false+, return the shortened node ID. Server default: false.
 # @option arguments [Boolean] :include_unloaded_segments If true, the response includes information from segments that are not loaded into memory.
-# @option arguments [String, Array<String>] :h List of columns to appear in the response. Supports simple wildcards.
-# @option arguments [String, Array<String>] :s List of columns that determine how the table should be sorted.
+# @option arguments [String, Array<String>] :h A comma-separated list of column names to display.
+#   It supports simple wildcards. Server default: ip,hp,rp,r,m,n,cpu,l.
+# @option arguments [String, Array<String>] :s A comma-separated list of column names or aliases that determines the sort order.
 #   Sorting defaults to ascending and can be changed by setting +:asc+
 #   or +:desc+ as a suffix to the column name.
-# @option arguments [Time] :master_timeout Period to wait for a connection to the master node. Server default: 30s.
-# @option arguments [String] :time Unit used to display time values.
+# @option arguments [Time] :master_timeout The period to wait for a connection to the master node. Server default: 30s.
+# @option arguments [String] :time The unit used to display time values.
 # @option arguments [String] :format Specifies the format to return the columnar data in, can be set to
 #   +text+, +json+, +cbor+, +yaml+, or +smile+. Server default: text.
 # @option arguments [Boolean] :help When set to +true+ will output available columns. This option
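The updated docs say +:h+ and +:s+ take either a comma-separated String or an Array<String>. A minimal sketch of both forms; +to_param+ is an illustrative helper for this example, not part of the client API.

```ruby
# Illustrative helper: normalize either accepted form to the
# comma-separated string the cat API ultimately receives.
def to_param(value)
  value.is_a?(Array) ? value.join(',') : value
end

h = to_param(%w[name ip cpu heap.percent])  # columns to display
s = to_param(['cpu:desc', 'name:asc'])      # sort: CPU descending, then name ascending
puts h  # => name,ip,cpu,heap.percent
puts s  # => cpu:desc,name:asc
```

With a configured client, the equivalent call would be roughly `client.cat.nodes(h: h, s: s, format: 'json')` (client setup omitted).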

elasticsearch-api/lib/elasticsearch/api/actions/inference/chat_completion_unified.rb

Lines changed: 8 additions & 0 deletions
@@ -23,6 +23,14 @@ module API
 module Inference
 module Actions
 # Perform chat completion inference
+# The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation.
+# It only works with the +chat_completion+ task type for +openai+ and +elastic+ inference services.
+# IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face.
+# For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
+# NOTE: The +chat_completion+ task type is only available within the _stream API and only supports streaming.
+# The Chat completion inference API and the Stream inference API differ in their response structure and capabilities.
+# The Chat completion inference API provides more comprehensive customization options through more fields and function calling support.
+# If you use the +openai+ service or the +elastic+ service, use the Chat completion inference API.
 #
 # @option arguments [String] :inference_id The inference Id (*Required*)
 # @option arguments [Time] :timeout Specifies the amount of time to wait for the inference request to complete. Server default: 30s.
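A hedged sketch of a request body for the chat completion inference API (+chat_completion+ task type, +openai+ or +elastic+ service). The OpenAI-style message shape and the +openai-chat+ endpoint id are assumptions for illustration, not verified against the request schema.

```ruby
# Assumed OpenAI-style chat message layout for the request body.
body = {
  messages: [
    { role: 'user', content: 'What is Elastic?' }
  ]
}

# With a configured client, the streamed call would be roughly:
#   client.inference.chat_completion_unified(inference_id: 'openai-chat', body: body)
puts body[:messages].first[:content]  # => What is Elastic?
```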

elasticsearch-api/lib/elasticsearch/api/actions/inference/put.rb

Lines changed: 0 additions & 5 deletions
@@ -23,11 +23,6 @@ module API
 module Inference
 module Actions
 # Create an inference endpoint.
-# When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
-# After creating the endpoint, wait for the model deployment to complete before using it.
-# To verify the deployment status, use the get trained model statistics API.
-# Look for +"state": "fully_allocated"+ in the response and ensure that the +"allocation_count"+ matches the +"target_allocation_count"+.
-# Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
 # IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face.
 # For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models.
 # However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
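The doc lines removed above described checking deployment status via the get trained model statistics API: +"state": "fully_allocated"+ with +allocation_count+ equal to +target_allocation_count+. That check can be sketched as below; the flat hash shape is assumed from the doc text, not from the full statistics response.

```ruby
# Returns true when an allocation-status hash (assumed shape) indicates
# the model deployment is complete.
def fully_deployed?(status)
  status['state'] == 'fully_allocated' &&
    status['allocation_count'] == status['target_allocation_count']
end

puts fully_deployed?('state' => 'fully_allocated',
                     'allocation_count' => 2,
                     'target_allocation_count' => 2)  # => true
```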

elasticsearch-api/lib/elasticsearch/api/actions/inference/put_alibabacloud.rb

Lines changed: 0 additions & 5 deletions
@@ -24,11 +24,6 @@ module Inference
 module Actions
 # Create an AlibabaCloud AI Search inference endpoint.
 # Create an inference endpoint to perform an inference task with the +alibabacloud-ai-search+ service.
-# When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
-# After creating the endpoint, wait for the model deployment to complete before using it.
-# To verify the deployment status, use the get trained model statistics API.
-# Look for +"state": "fully_allocated"+ in the response and ensure that the +"allocation_count"+ matches the +"target_allocation_count"+.
-# Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
 #
 # @option arguments [String] :task_type The type of the inference task that the model will perform. (*Required*)
 # @option arguments [String] :alibabacloud_inference_id The unique identifier of the inference endpoint. (*Required*)

elasticsearch-api/lib/elasticsearch/api/actions/inference/put_anthropic.rb

Lines changed: 0 additions & 5 deletions
@@ -24,11 +24,6 @@ module Inference
 module Actions
 # Create an Anthropic inference endpoint.
 # Create an inference endpoint to perform an inference task with the +anthropic+ service.
-# When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
-# After creating the endpoint, wait for the model deployment to complete before using it.
-# To verify the deployment status, use the get trained model statistics API.
-# Look for +"state": "fully_allocated"+ in the response and ensure that the +"allocation_count"+ matches the +"target_allocation_count"+.
-# Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
 #
 # @option arguments [String] :task_type The task type.
 #   The only valid task type for the model to perform is +completion+. (*Required*)

elasticsearch-api/lib/elasticsearch/api/actions/inference/put_azureaistudio.rb

Lines changed: 0 additions & 5 deletions
@@ -24,11 +24,6 @@ module Inference
 module Actions
 # Create an Azure AI studio inference endpoint.
 # Create an inference endpoint to perform an inference task with the +azureaistudio+ service.
-# When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
-# After creating the endpoint, wait for the model deployment to complete before using it.
-# To verify the deployment status, use the get trained model statistics API.
-# Look for +"state": "fully_allocated"+ in the response and ensure that the +"allocation_count"+ matches the +"target_allocation_count"+.
-# Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
 #
 # @option arguments [String] :task_type The type of the inference task that the model will perform. (*Required*)
 # @option arguments [String] :azureaistudio_inference_id The unique identifier of the inference endpoint. (*Required*)

elasticsearch-api/lib/elasticsearch/api/actions/inference/put_azureopenai.rb

Lines changed: 0 additions & 5 deletions
@@ -28,11 +28,6 @@ module Actions
 # * {https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#gpt-4-and-gpt-4-turbo-models GPT-4 and GPT-4 Turbo models}
 # * {https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#gpt-35 GPT-3.5}
 # The list of embeddings models that you can choose from in your deployment can be found in the {https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#embeddings Azure models documentation}.
-# When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
-# After creating the endpoint, wait for the model deployment to complete before using it.
-# To verify the deployment status, use the get trained model statistics API.
-# Look for +"state": "fully_allocated"+ in the response and ensure that the +"allocation_count"+ matches the +"target_allocation_count"+.
-# Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
 #
 # @option arguments [String] :task_type The type of the inference task that the model will perform.
 #   NOTE: The +chat_completion+ task type only supports streaming and only through the _stream API. (*Required*)

elasticsearch-api/lib/elasticsearch/api/actions/inference/put_cohere.rb

Lines changed: 0 additions & 5 deletions
@@ -24,11 +24,6 @@ module Inference
 module Actions
 # Create a Cohere inference endpoint.
 # Create an inference endpoint to perform an inference task with the +cohere+ service.
-# When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
-# After creating the endpoint, wait for the model deployment to complete before using it.
-# To verify the deployment status, use the get trained model statistics API.
-# Look for +"state": "fully_allocated"+ in the response and ensure that the +"allocation_count"+ matches the +"target_allocation_count"+.
-# Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
 #
 # @option arguments [String] :task_type The type of the inference task that the model will perform. (*Required*)
 # @option arguments [String] :cohere_inference_id The unique identifier of the inference endpoint. (*Required*)

elasticsearch-api/lib/elasticsearch/api/actions/inference/put_googleaistudio.rb

Lines changed: 0 additions & 5 deletions
@@ -24,11 +24,6 @@ module Inference
 module Actions
 # Create a Google AI Studio inference endpoint.
 # Create an inference endpoint to perform an inference task with the +googleaistudio+ service.
-# When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
-# After creating the endpoint, wait for the model deployment to complete before using it.
-# To verify the deployment status, use the get trained model statistics API.
-# Look for +"state": "fully_allocated"+ in the response and ensure that the +"allocation_count"+ matches the +"target_allocation_count"+.
-# Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
 #
 # @option arguments [String] :task_type The type of the inference task that the model will perform. (*Required*)
 # @option arguments [String] :googleaistudio_inference_id The unique identifier of the inference endpoint. (*Required*)

elasticsearch-api/lib/elasticsearch/api/actions/inference/put_googlevertexai.rb

Lines changed: 0 additions & 5 deletions
@@ -24,11 +24,6 @@ module Inference
 module Actions
 # Create a Google Vertex AI inference endpoint.
 # Create an inference endpoint to perform an inference task with the +googlevertexai+ service.
-# When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
-# After creating the endpoint, wait for the model deployment to complete before using it.
-# To verify the deployment status, use the get trained model statistics API.
-# Look for +"state": "fully_allocated"+ in the response and ensure that the +"allocation_count"+ matches the +"target_allocation_count"+.
-# Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
 #
 # @option arguments [String] :task_type The type of the inference task that the model will perform. (*Required*)
 # @option arguments [String] :googlevertexai_inference_id The unique identifier of the inference endpoint. (*Required*)
