
Conversation

@jonathan-buttner (Contributor)

This PR makes the original request available to the error-handling logic so that we can access the inference ID. This will be used in the custom service PR by the generic error handler in a failure log message. The request isn't used in these changes yet; I'm just changing the required interfaces.
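The interface change described above can be sketched as follows. This is a minimal, hypothetical illustration, not the actual Elasticsearch classes: `Request`, `HttpResult`, and `ErrorResponse` here are simplified stand-ins, and the parser goes from a `Function` over the HTTP result alone to a `BiFunction` that also receives the original request, so a generic handler can read the inference ID for its log message.

```java
import java.util.function.BiFunction;
import java.util.function.Function;

// Simplified stand-ins for the real Elasticsearch types; names are illustrative only.
class Request {
    private final String inferenceId;
    Request(String inferenceId) { this.inferenceId = inferenceId; }
    String getInferenceId() { return inferenceId; }
}

class HttpResult {
    final int status;
    final String body;
    HttpResult(int status, String body) { this.status = status; this.body = body; }
}

class ErrorResponse {
    final String message;
    ErrorResponse(String message) { this.message = message; }
}

public class ErrorHandlerSketch {
    // Before: the error parser only sees the HTTP result.
    static final Function<HttpResult, ErrorResponse> OLD_PARSER =
        result -> new ErrorResponse("status " + result.status);

    // After: the parser also receives the original request, so a generic
    // error handler can include the inference ID in a failure log message.
    static final BiFunction<Request, HttpResult, ErrorResponse> NEW_PARSER =
        (request, result) -> new ErrorResponse(
            "Request [" + request.getInferenceId() + "] failed with status " + result.status);

    public static void main(String[] args) {
        ErrorResponse error = NEW_PARSER.apply(new Request("my-endpoint"), new HttpResult(500, "boom"));
        System.out.println(error.message);
    }
}
```

Widening the functional interface (rather than threading the ID through each call site) keeps every existing error-parser implementation on one shared signature.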

@jonathan-buttner added the >non-issue, :ml (Machine learning), Team:ML (Meta label for the ML team), auto-backport (Automatically create backport pull requests when merged), v8.19.0, and v9.1.0 labels on May 7, 2025
@jonathan-buttner marked this pull request as ready for review on May 8, 2025 at 15:31
@elasticsearchmachine (Collaborator)

Pinging @elastic/ml-core (Team:ML)

return fromResponse(response, "");
}

public static ErrorResponse fromResponse(Request request, HttpResult response) {
Member:

I'm a bit unclear on why we drop the request here. Is this where we would make a follow-up change to utilize the request?

super(errorMessage);
}

public static ErrorResponse fromResponse(Request request, HttpResult response) {
Member:

Could we add this to the ErrorResponse class to avoid duplication?

protected final String requestType;
private final ResponseParser parseFunction;
private final Function<HttpResult, ErrorResponse> errorParseFunction;              // before
private final BiFunction<Request, HttpResult, ErrorResponse> errorParseFunction;   // after
Member:

Just to understand the context better, why do we need the inference ID as part of the error in the custom service PR? Is there a reason we need the full Request here instead of passing just the inference ID?

@jonathan-buttner (Contributor, Author):

Yeah, I can just pass the inference ID to limit the scope 👍
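The narrower alternative agreed on above, passing only the inference ID instead of the whole `Request`, could look roughly like this. Again a hypothetical sketch with simplified stand-in types, not the actual plugin code:

```java
import java.util.function.BiFunction;

// Simplified stand-ins; the real classes live in the Elasticsearch inference plugin.
class HttpResult {
    final int status;
    final String body;
    HttpResult(int status, String body) { this.status = status; this.body = body; }
}

class ErrorResponse {
    final String message;
    ErrorResponse(String message) { this.message = message; }

    // Only the inference ID crosses the interface, keeping the error
    // handler decoupled from the full Request type.
    static ErrorResponse fromResponse(String inferenceId, HttpResult response) {
        return new ErrorResponse(
            "Inference entity [" + inferenceId + "] received status " + response.status);
    }
}

public class NarrowScopeSketch {
    static final BiFunction<String, HttpResult, ErrorResponse> PARSER =
        ErrorResponse::fromResponse;

    public static void main(String[] args) {
        System.out.println(PARSER.apply("my-endpoint", new HttpResult(429, "rate limited")).message);
    }
}
```

Taking a `String` rather than the `Request` keeps the touched surface small, which addresses the reviewer's scoping concern.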

@jonathan-buttner (Contributor, Author):

Dan and I chatted, and it turns out we don't need these changes 🙌 so I'm closing this.


Labels

auto-backport (Automatically create backport pull requests when merged), :ml (Machine learning), >non-issue, Team:ML (Meta label for the ML team), v8.19.0, v9.1.0


3 participants