Commit 1fdf5bc
Merge branch 'main' into dont-wipe-xpack
2 parents e3e3719 + a5a8b52 commit 1fdf5bc

File tree

6 files changed: +10 −527 lines changed


docs/reference/esql-pandas.md

Lines changed: 1 addition & 1 deletion
@@ -355,7 +355,7 @@ You can now analyze the data with Pandas or you can also continue transforming t
 
 ## Analyze the data with Pandas [analyze-data]
 
-In the next example, the [STATS …​ BY](elasticsearch://reference/query-languages/esql/esql-commands.md#esql-stats-by) command is utilized to count how many employees are speaking a given language. The results are sorted with the `languages` column using [SORT](elasticsearch://reference/query-languages/esql/esql-commands.md#esql-sort):
+In the next example, the [STATS …​ BY](elasticsearch://reference/query-languages/esql/commands/processing-commands.md#esql-stats-by) command is utilized to count how many employees are speaking a given language. The results are sorted with the `languages` column using [SORT](elasticsearch://reference/query-languages/esql/commands/processing-commands.md#esql-sort):
 
 ```python
 response = client.esql.query(
docs/reference/opentelemetry.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ mapped_pages:
 
 # Using OpenTelemetry [opentelemetry]
 
-You can use [OpenTelemetry](https://opentelemetry.io/) to monitor the performance and behavior of your {{es}} requests through the Elasticsearch Python client. The Python client comes with built-in OpenTelemetry instrumentation that emits [distributed tracing spans](docs-content://solutions/observability/apps/traces-2.md) by default. With that, applications using [manual OpenTelemetry instrumentation](https://www.elastic.co/blog/manual-instrumentation-of-python-applications-opentelemetry) or [automatic OpenTelemetry instrumentation](https://www.elastic.co/blog/auto-instrumentation-of-python-applications-opentelemetry) are enriched with additional spans that contain insightful information about the execution of the {{es}} requests.
+You can use [OpenTelemetry](https://opentelemetry.io/) to monitor the performance and behavior of your {{es}} requests through the Elasticsearch Python client. The Python client comes with built-in OpenTelemetry instrumentation that emits [distributed tracing spans](docs-content://solutions/observability/apm/traces-ui.md) by default. With that, applications using [manual OpenTelemetry instrumentation](https://www.elastic.co/blog/manual-instrumentation-of-python-applications-opentelemetry) or [automatic OpenTelemetry instrumentation](https://www.elastic.co/blog/auto-instrumentation-of-python-applications-opentelemetry) are enriched with additional spans that contain insightful information about the execution of the {{es}} requests.
 
 The native instrumentation in the Python client follows the [OpenTelemetry Semantic Conventions for {{es}}](https://opentelemetry.io/docs/specs/semconv/database/elasticsearch/). In particular, the instrumentation in the client covers the logical layer of {{es}} requests. A single span per request is created that is processed by the service through the Python client. The following image shows a trace that records the handling of two different {{es}} requests: an `info` request and a `search` request.
 
elasticsearch/_async/client/inference.py

Lines changed: 1 addition & 256 deletions
@@ -20,13 +20,7 @@
 from elastic_transport import ObjectApiResponse
 
 from ._base import NamespacedClient
-from .utils import (
-    SKIP_IN_PATH,
-    Stability,
-    _quote,
-    _rewrite_parameters,
-    _stability_warning,
-)
+from .utils import SKIP_IN_PATH, _quote, _rewrite_parameters
 
 
 class InferenceClient(NamespacedClient):
@@ -240,178 +234,6 @@ async def get(
             path_parts=__path_parts,
         )
 
-    @_rewrite_parameters(
-        body_fields=("input", "query", "task_settings"),
-    )
-    @_stability_warning(
-        Stability.DEPRECATED,
-        version="8.18.0",
-        message="inference.inference() is deprecated in favor of provider-specific APIs such as inference.put_elasticsearch() or inference.put_hugging_face()",
-    )
-    async def inference(
-        self,
-        *,
-        inference_id: str,
-        input: t.Optional[t.Union[str, t.Sequence[str]]] = None,
-        task_type: t.Optional[
-            t.Union[
-                str,
-                t.Literal[
-                    "chat_completion",
-                    "completion",
-                    "rerank",
-                    "sparse_embedding",
-                    "text_embedding",
-                ],
-            ]
-        ] = None,
-        error_trace: t.Optional[bool] = None,
-        filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
-        human: t.Optional[bool] = None,
-        pretty: t.Optional[bool] = None,
-        query: t.Optional[str] = None,
-        task_settings: t.Optional[t.Any] = None,
-        timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
-        body: t.Optional[t.Dict[str, t.Any]] = None,
-    ) -> ObjectApiResponse[t.Any]:
-        """
-        .. raw:: html
-
-          <p>Perform inference on the service.</p>
-          <p>This API enables you to use machine learning models to perform specific tasks on data that you provide as an input.
-          It returns a response with the results of the tasks.
-          The inference endpoint you use can perform one specific task that has been defined when the endpoint was created with the create inference API.</p>
-          <blockquote>
-          <p>info
-          The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.</p>
-          </blockquote>
-
-
-        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-inference>`_
-
-        :param inference_id: The unique identifier for the inference endpoint.
-        :param input: The text on which you want to perform the inference task. It can
-            be a single string or an array. > info > Inference endpoints for the `completion`
-            task type currently only support a single string as input.
-        :param task_type: The type of inference task that the model performs.
-        :param query: The query input, which is required only for the `rerank` task.
-            It is not required for other tasks.
-        :param task_settings: Task settings for the individual inference request. These
-            settings are specific to the task type you specified and override the task
-            settings specified when initializing the service.
-        :param timeout: The amount of time to wait for the inference request to complete.
-        """
-        if inference_id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for parameter 'inference_id'")
-        if input is None and body is None:
-            raise ValueError("Empty value passed for parameter 'input'")
-        __path_parts: t.Dict[str, str]
-        if task_type not in SKIP_IN_PATH and inference_id not in SKIP_IN_PATH:
-            __path_parts = {
-                "task_type": _quote(task_type),
-                "inference_id": _quote(inference_id),
-            }
-            __path = f'/_inference/{__path_parts["task_type"]}/{__path_parts["inference_id"]}'
-        elif inference_id not in SKIP_IN_PATH:
-            __path_parts = {"inference_id": _quote(inference_id)}
-            __path = f'/_inference/{__path_parts["inference_id"]}'
-        else:
-            raise ValueError("Couldn't find a path for the given parameters")
-        __query: t.Dict[str, t.Any] = {}
-        __body: t.Dict[str, t.Any] = body if body is not None else {}
-        if error_trace is not None:
-            __query["error_trace"] = error_trace
-        if filter_path is not None:
-            __query["filter_path"] = filter_path
-        if human is not None:
-            __query["human"] = human
-        if pretty is not None:
-            __query["pretty"] = pretty
-        if timeout is not None:
-            __query["timeout"] = timeout
-        if not __body:
-            if input is not None:
-                __body["input"] = input
-            if query is not None:
-                __body["query"] = query
-            if task_settings is not None:
-                __body["task_settings"] = task_settings
-        if not __body:
-            __body = None  # type: ignore[assignment]
-        __headers = {"accept": "application/json"}
-        if __body is not None:
-            __headers["content-type"] = "application/json"
-        return await self.perform_request(  # type: ignore[return-value]
-            "POST",
-            __path,
-            params=__query,
-            headers=__headers,
-            body=__body,
-            endpoint_id="inference.inference",
-            path_parts=__path_parts,
-        )
-
-    @_rewrite_parameters(
-        body_name="chat_completion_request",
-    )
-    async def post_eis_chat_completion(
-        self,
-        *,
-        eis_inference_id: str,
-        chat_completion_request: t.Optional[t.Mapping[str, t.Any]] = None,
-        body: t.Optional[t.Mapping[str, t.Any]] = None,
-        error_trace: t.Optional[bool] = None,
-        filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
-        human: t.Optional[bool] = None,
-        pretty: t.Optional[bool] = None,
-    ) -> ObjectApiResponse[t.Any]:
-        """
-        .. raw:: html
-
-          <p>Perform a chat completion task through the Elastic Inference Service (EIS).</p>
-          <p>Perform a chat completion inference task with the <code>elastic</code> service.</p>
-
-
-        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-post-eis-chat-completion>`_
-
-        :param eis_inference_id: The unique identifier of the inference endpoint.
-        :param chat_completion_request:
-        """
-        if eis_inference_id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for parameter 'eis_inference_id'")
-        if chat_completion_request is None and body is None:
-            raise ValueError(
-                "Empty value passed for parameters 'chat_completion_request' and 'body', one of them should be set."
-            )
-        elif chat_completion_request is not None and body is not None:
-            raise ValueError("Cannot set both 'chat_completion_request' and 'body'")
-        __path_parts: t.Dict[str, str] = {"eis_inference_id": _quote(eis_inference_id)}
-        __path = (
-            f'/_inference/chat_completion/{__path_parts["eis_inference_id"]}/_stream'
-        )
-        __query: t.Dict[str, t.Any] = {}
-        if error_trace is not None:
-            __query["error_trace"] = error_trace
-        if filter_path is not None:
-            __query["filter_path"] = filter_path
-        if human is not None:
-            __query["human"] = human
-        if pretty is not None:
-            __query["pretty"] = pretty
-        __body = (
-            chat_completion_request if chat_completion_request is not None else body
-        )
-        __headers = {"accept": "application/json", "content-type": "application/json"}
-        return await self.perform_request(  # type: ignore[return-value]
-            "POST",
-            __path,
-            params=__query,
-            headers=__headers,
-            body=__body,
-            endpoint_id="inference.post_eis_chat_completion",
-            path_parts=__path_parts,
-        )
-
     @_rewrite_parameters(
         body_name="inference_config",
     )
@@ -1088,83 +910,6 @@ async def put_cohere(
             path_parts=__path_parts,
         )
 
-    @_rewrite_parameters(
-        body_fields=("service", "service_settings"),
-    )
-    async def put_eis(
-        self,
-        *,
-        task_type: t.Union[str, t.Literal["chat_completion"]],
-        eis_inference_id: str,
-        service: t.Optional[t.Union[str, t.Literal["elastic"]]] = None,
-        service_settings: t.Optional[t.Mapping[str, t.Any]] = None,
-        error_trace: t.Optional[bool] = None,
-        filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
-        human: t.Optional[bool] = None,
-        pretty: t.Optional[bool] = None,
-        body: t.Optional[t.Dict[str, t.Any]] = None,
-    ) -> ObjectApiResponse[t.Any]:
-        """
-        .. raw:: html
-
-          <p>Create an Elastic Inference Service (EIS) inference endpoint.</p>
-          <p>Create an inference endpoint to perform an inference task through the Elastic Inference Service (EIS).</p>
-
-
-        `<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-eis>`_
-
-        :param task_type: The type of the inference task that the model will perform.
-            NOTE: The `chat_completion` task type only supports streaming and only through
-            the _stream API.
-        :param eis_inference_id: The unique identifier of the inference endpoint.
-        :param service: The type of service supported for the specified task type. In
-            this case, `elastic`.
-        :param service_settings: Settings used to install the inference model. These
-            settings are specific to the `elastic` service.
-        """
-        if task_type in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for parameter 'task_type'")
-        if eis_inference_id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for parameter 'eis_inference_id'")
-        if service is None and body is None:
-            raise ValueError("Empty value passed for parameter 'service'")
-        if service_settings is None and body is None:
-            raise ValueError("Empty value passed for parameter 'service_settings'")
-        __path_parts: t.Dict[str, str] = {
-            "task_type": _quote(task_type),
-            "eis_inference_id": _quote(eis_inference_id),
-        }
-        __path = f'/_inference/{__path_parts["task_type"]}/{__path_parts["eis_inference_id"]}'
-        __query: t.Dict[str, t.Any] = {}
-        __body: t.Dict[str, t.Any] = body if body is not None else {}
-        if error_trace is not None:
-            __query["error_trace"] = error_trace
-        if filter_path is not None:
-            __query["filter_path"] = filter_path
-        if human is not None:
-            __query["human"] = human
-        if pretty is not None:
-            __query["pretty"] = pretty
-        if not __body:
-            if service is not None:
-                __body["service"] = service
-            if service_settings is not None:
-                __body["service_settings"] = service_settings
-        if not __body:
-            __body = None  # type: ignore[assignment]
-        __headers = {"accept": "application/json"}
-        if __body is not None:
-            __headers["content-type"] = "application/json"
-        return await self.perform_request(  # type: ignore[return-value]
-            "PUT",
-            __path,
-            params=__query,
-            headers=__headers,
-            body=__body,
-            endpoint_id="inference.put_eis",
-            path_parts=__path_parts,
-        )
-
     @_rewrite_parameters(
         body_fields=(
             "service",

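The removed method was tagged with the client's `_stability_warning(Stability.DEPRECATED, ...)` decorator before being deleted. A minimal stand-in using only the standard library — the real helper lives in `elasticsearch`'s `utils` module and differs in detail — could look like:

```python
import functools
import warnings


def stability_warning(message: str):
    """Emit a DeprecationWarning each time the wrapped function is called."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(message, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator


# Hypothetical legacy function standing in for the removed API method.
@stability_warning("inference.inference() is deprecated; use provider-specific APIs")
def legacy_inference():
    return "ok"


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = legacy_inference()

print(result)  # the call still works, but a DeprecationWarning was recorded
```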
0 commit comments