https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-inference-put
An earlier version of this documentation included an important note about the inference APIs that is now absent: “When creating an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. You can verify the deployment status by using the Get trained model statistics API. In the response, look for "state": "fully_allocated" and ensure the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.”
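The check the note describes can be sketched as a small helper that inspects a response from the Get trained model statistics API (`GET _ml/trained_models/<model_id>/_stats`). The field paths below follow Elasticsearch's documented response shape (`trained_model_stats[].deployment_stats.allocation_status`); the sample payload is illustrative, not taken from a live cluster.

```python
def is_fully_allocated(stats: dict) -> bool:
    """Return True if any deployment in a Get trained model statistics
    response reports full allocation, per the removed note: the state
    must be "fully_allocated" and allocation_count must equal
    target_allocation_count."""
    for model in stats.get("trained_model_stats", []):
        status = model.get("deployment_stats", {}).get("allocation_status", {})
        if (status.get("state") == "fully_allocated"
                and status.get("allocation_count")
                == status.get("target_allocation_count")):
            return True
    return False


# Illustrative response fragment (values are examples only):
sample = {
    "trained_model_stats": [
        {
            "model_id": ".elser_model_2",
            "deployment_stats": {
                "allocation_status": {
                    "state": "fully_allocated",
                    "allocation_count": 1,
                    "target_allocation_count": 1,
                }
            },
        }
    ]
}

print(is_fully_allocated(sample))  # True: state and counts line up
```

In practice you would poll this endpoint (with a delay between attempts) after `PUT _inference/...` until the helper returns True, before sending inference requests to the new endpoint.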