Commit c5eaead
[8.17] [Attack discovery] Fix error handling in LM studio (elastic#213855) (elastic#214040)
# Backport
This will backport the following commits from `main` to `8.17`:
- [[Attack discovery] Fix error handling in LM studio (elastic#213855)](https://github.com/elastic/kibana/pull/213855)
<!--- Backport version: 9.6.6 -->
### Questions?
Please refer to the [Backport tool
documentation](https://github.com/sorenlouv/backport)
The original pull request (elastic#213855) description follows.

## Summary

Errors were not properly propagated to the user; instead of a meaningful message we were displaying just `API Error`.

<img width="1813" alt="Screenshot 2025-03-11 at 03 47 59" src="https://github.com/user-attachments/assets/8d059159-f020-4944-a463-b10799e7fa46" />
Steps to reproduce (thank you @andrew-goldstein 🙇):

**Desk testing**

To reproduce:

1. In LM Studio, download the `MLX` variant (optimized for Mac) of `Llama-3.2-3B-Instruct-4bit`:

```
mlx-community/Llama-3.2-3B-Instruct-4bit
```
2. Configure the model to have a context length of `131072` tokens, as illustrated by the screenshot below:
3. Serve ONLY the model above in LM Studio (ensure no other models are running in LM Studio), as illustrated by the screenshot below:
4. Configure a connector via the details in <https://www.elastic.co/guide/en/security/current/connect-to-byo-llm.html>, but change:

```
local-model
```

to the name of the model when configuring the connector:

```
llama-3.2-3b-instruct
```

as illustrated by the screenshot below:
5. Generate Attack discoveries.

**Expected results**

- Generation does NOT fail with the error described in the later steps below.
- Progress on generating discoveries is visible in LangSmith, as illustrated by the screenshot below:
Note: `Llama-3.2-3B-Instruct-4bit` may not reliably generate Attack discoveries, so generation may still fail after `10` generation / refinement steps.
6. In LM Studio, serve a _second_ model, as illustrated by the screenshot below:
7. Once again, generate Attack discoveries.

**Expected results**

- Generation does NOT fail with the errors below.
- Progress on generating discoveries is visible in LangSmith, though as noted above, generation may still fail after `10` attempts if the model does not produce output that conforms to the expected schema (a rough sketch of this retry loop follows the error examples below).

**Actual results**
- Generation fails with an error similar to:

```
generate node is unable to parse (openai) response from attempt 0; (this may be an incomplete response from the model): Status code: 400. Message: API Error:
Bad Request: ActionsClientLlm: action result status is error: an error occurred while running the action - Status code: 400. Message: API Error: Bad Request,
```

or

```
generate node is unable to parse (openai) response from attempt 0; (this may be an incomplete response from the model): Status code: 404. Message: API Error: Not Found - Model "llama-3.2-3b-instruct" not found. Please specify a valid model.
```

as illustrated by the following screenshot:
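The `404 ... Model "llama-3.2-3b-instruct" not found` case usually means the connector's default model id does not exactly match a model currently served by LM Studio (step 4 above). One way to double-check the served ids is LM Studio's OpenAI-compatible models endpoint; a small sketch follows, assuming the default local server address `http://localhost:1234/v1` (an assumption — adjust to your LM Studio server settings):

```ts
// Query LM Studio's OpenAI-compatible /v1/models endpoint and print the ids
// currently being served, so the connector's default model can be set to an
// exact match (e.g. "llama-3.2-3b-instruct").
const LM_STUDIO_BASE_URL = 'http://localhost:1234/v1'; // assumed default; adjust as needed

async function listServedModels(): Promise<string[]> {
  const res = await fetch(`${LM_STUDIO_BASE_URL}/models`);
  if (!res.ok) {
    throw new Error(`Status code: ${res.status}. Message: ${await res.text()}`);
  }
  const body = (await res.json()) as { data?: Array<{ id: string }> };
  return (body.data ?? []).map((model) => model.id);
}

listServedModels()
  .then((ids) => console.log('Models served by LM Studio:', ids))
  .catch((err) => console.error(err));
```

If the ids reported here do not include the connector's default model, the 404 above is expected.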
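On the `10` generation / refinement attempts mentioned in the expected results: conceptually, output that does not validate against the expected schema triggers another attempt until the budget is exhausted. A rough, hypothetical sketch of such a loop — the schema, `callModel`, and constants are placeholders, not the real Attack discovery graph:

```ts
import { z } from 'zod';

// Placeholder schema standing in for the real Attack discovery output schema.
const DiscoverySchema = z.object({
  title: z.string(),
  summaryMarkdown: z.string(),
});
type Discovery = z.infer<typeof DiscoverySchema>;

// Placeholder for the call through the connector to the LLM; returns raw text.
async function callModel(prompt: string): Promise<string> {
  throw new Error('stub: wire this to your connector / LLM client');
}

const MAX_ATTEMPTS = 10;

export async function generateWithRetries(prompt: string): Promise<Discovery> {
  let lastError = 'no attempts made';
  for (let attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
    const raw = await callModel(prompt);
    try {
      const parsed = DiscoverySchema.safeParse(JSON.parse(raw));
      if (parsed.success) {
        return parsed.data; // schema-conforming output: done
      }
      lastError = parsed.error.message;
    } catch (err) {
      // JSON.parse failed, e.g. an incomplete response from the model
      lastError = err instanceof Error ? err.message : String(err);
    }
    // Refinement of the prompt between attempts (the "generation / refinement
    // steps" mentioned above) is omitted in this sketch.
  }
  throw new Error(`generation failed after ${MAX_ATTEMPTS} attempts: ${lastError}`);
}
```

This only makes the "10 attempts" behaviour concrete; the actual logic lives under the `invoke_attack_discovery_graph` helpers listed in the file tree below.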
Co-authored-by: Patryk Kopyciński <[email protected]>

1 parent 13cb2ff · commit c5eaead
File tree

3 files changed: +22 -6 lines changed

- x-pack/plugins/elastic_assistant/server/routes/attack_discovery/post/helpers/invoke_attack_discovery_graph
- x-pack/plugins/stack_connectors/server/connector_types/openai

Per-file diff stats (diff contents not shown; only line positions):

- 1 addition & 0 deletions (one line added around line 77)
- 17 additions & 0 deletions (a block added around lines 646–662)
- 4 additions & 6 deletions (changes around lines 144–157)

0 commit comments