solutions/security/ai/large-language-model-performance-matrix.md
15 additions & 13 deletions
@@ -12,8 +12,8 @@ applies_to:
This page describes the performance of various large language models (LLMs) for different use cases in {{elastic-sec}}, based on our internal testing. To learn more about these use cases, refer to [Attack discovery](/solutions/security/ai/attack-discovery.md) or [AI Assistant](/solutions/security/ai/ai-assistant.md).
-::::{note}
-`Excellent` is the best rating, followed by `Great`, then by `Good`, and finally by `Poor`.
+::::{important}
+`Excellent` is the best rating, followed by `Great`, then by `Good`, and finally by `Poor`. Models rated `Excellent` or `Great` should produce quality results. Models rated `Good` or `Poor` are not recommended for that use case.
::::
@@ -22,17 +22,19 @@ This page describes the performance of various large language models (LLMs) for