
Commit a0d0a2b

initial commit
1 parent df67cc9 commit a0d0a2b

File tree: 1 file changed (+34, -34 lines)

content/en/llm_observability/evaluations/managed_evaluations.md

Lines changed: 34 additions & 34 deletions
@@ -134,8 +134,8 @@ Each of these metrics has `ml_app`, `model_server`, `model_provider`, `model_nam
#### Topic relevancy

This check identifies and flags user inputs that deviate from the configured acceptable input topics. This ensures that interactions stay pertinent to the LLM's designated purpose and scope.

| Evaluation Stage | Evaluation Method | Evaluation Definition |
|---|---|---|
| Evaluated on Input | Evaluated using LLM | Topic relevancy assesses whether each prompt-response pair remains aligned with the intended subject matter of the Large Language Model (LLM) application. For instance, an e-commerce chatbot receiving a question about a pizza recipe would be flagged as irrelevant. |
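
For context, the sketch below shows the kind of traced call this check operates on, reusing the e-commerce example above. The instrumentation shown is only one minimal way to produce such a span with `ddtrace` (the `shopping-assistant` app name, the model name, and the `answer` function are made up); the managed evaluation runs on traced inputs automatically and requires no additional code.

{{< code-block lang="python" >}}
from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import llm

# Hypothetical application name; enable LLM Observability for the app.
LLMObs.enable(ml_app="shopping-assistant")

@llm(model_name="gpt-4o-mini", model_provider="openai")
def answer(user_input: str) -> str:
    # Placeholder for the real model call made by the chatbot.
    output = "Sure! Combine flour, water, yeast, and salt, then knead and proof the dough."
    LLMObs.annotate(input_data=user_input, output_data=output)
    return output

# An input like this deviates from the acceptable topics configured for an
# e-commerce assistant, so the Topic relevancy check flags it on the traced span.
answer("Can you give me a recipe for pizza dough?")
{{< /code-block >}}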

@@ -156,7 +156,7 @@ This check identifies instances where the LLM makes a claim that disagrees with

{{< img src="llm_observability/evaluations/hallucination_1.png" alt="A Hallucination evaluation detected by an LLM in LLM Observability" style="width:100%;" >}}

| Evaluation Stage | Evaluation Method | Evaluation Definition |
|---|---|---|
| Evaluated on Output | Evaluated using LLM | Hallucination flags any output that disagrees with the context provided to the LLM. |
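
To make the definition concrete, here is a minimal, made-up prompt-response pair of the kind this check flags; the strings are illustrative only and do not depend on any particular instrumentation.

{{< code-block lang="python" >}}
# Made-up example of the context/output disagreement the Hallucination check flags.
context = "Refunds are available within 30 days of purchase."
question = "How long do I have to request a refund?"
model_output = "You can request a refund within 90 days of purchase."

# The output contradicts the context provided to the LLM (90 days vs. 30 days),
# so this output would be flagged as a hallucination.
print(model_output)
{{< /code-block >}}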

@@ -217,13 +217,13 @@ This check identifies instances where the LLM fails to deliver an appropriate re

{{< img src="llm_observability/evaluations/failure_to_answer_1.png" alt="A Failure to Answer evaluation detected by an LLM in LLM Observability" style="width:100%;" >}}

| Evaluation Stage | Evaluation Method | Evaluation Definition |
|---|---|---|
| Evaluated on Output | Evaluated using LLM | Failure To Answer flags whether each prompt-response pair demonstrates that the LLM application has provided a relevant and satisfactory answer to the user's question. |

##### Failure to answer configuration
<div class="alert alert-info">Configuring failure to answer evaluation categories is supported if OpenAI or Azure OpenAI is selected as your LLM provider.</div>
You can configure the Failure to Answer evaluation to use specific categories of failure to answer, listed in the following table.

| Configuration Option | Description | Example(s) |
|---|---|---|
@@ -245,7 +245,7 @@ Afrikaans, Albanian, Arabic, Armenian, Azerbaijani, Belarusian, Bengali, Norwegi

{{< img src="llm_observability/evaluations/language_mismatch_1.png" alt="A Language Mismatch evaluation detected by an open source model in LLM Observability" style="width:100%;" >}}

| Evaluation Stage | Evaluation Method | Evaluation Definition |
|---|---|---|
| Evaluated on Input and Output | Evaluated using Open Source Model | Language Mismatch flags whether each prompt-response pair demonstrates that the LLM application answered the user's question in the same language that the user used. |
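
For intuition, the sketch below compares the detected language of the prompt and of the response. It uses the third-party `langdetect` package purely as an illustration; it is not the open source model the managed evaluation uses.

{{< code-block lang="python" >}}
# Illustration only; not the model used by the managed evaluation.
from langdetect import detect

prompt = "¿Cuál es el horario de atención al cliente?"   # Spanish input
response = "Our support team is available 24/7."          # English output

# A language-mismatch check flags pairs whose detected languages differ.
if detect(prompt) != detect(response):
    print("Language mismatch: this prompt-response pair would be flagged.")
{{< /code-block >}}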

@@ -255,7 +255,7 @@ This check helps understand the overall mood of the conversation, gauge user sat

{{< img src="llm_observability/evaluations/sentiment_1.png" alt="A Sentiment evaluation detected by an LLM in LLM Observability" style="width:100%;" >}}

| Evaluation Stage | Evaluation Method | Evaluation Definition |
|---|---|---|
| Evaluated on Input and Output | Evaluated using LLM | Sentiment flags the emotional tone or attitude expressed in the text, categorizing it as positive, negative, or neutral. |

@@ -265,7 +265,7 @@ This check evaluates whether your LLM chatbot can successfully carry out a full

{{< img src="llm_observability/evaluations/goal_completeness.png" alt="A Goal Completeness evaluation detected by an LLM in LLM Observability" style="width:100%;" >}}

| Evaluation Stage | Evaluation Method | Evaluation Definition |
|---|---|---|
| Evaluated on Session | Evaluated using LLM | Goal Completeness assesses whether all user intentions within a multi-turn interaction were successfully resolved. The evaluation identifies resolved and unresolved intentions, providing a completeness score based on the ratio of unresolved to total intentions. |
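
For a rough sense of the scoring, the snippet below derives a completeness score from a made-up set of intentions. The exact formula the managed evaluation uses is not spelled out here, so computing the score as one minus the unresolved-to-total ratio is an assumption.

{{< code-block lang="python" >}}
# Hypothetical scoring sketch; the managed evaluation's exact formula may differ.
intentions = {
    "track my order": "resolved",
    "update the shipping address": "resolved",
    "cancel a duplicate charge": "unresolved",
}

unresolved = sum(1 for status in intentions.values() if status == "unresolved")
completeness = 1 - unresolved / len(intentions)
print(f"Completeness score: {completeness:.2f}")  # 0.67 with 1 of 3 intentions unresolved
{{< /code-block >}}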

@@ -314,7 +314,7 @@ This check evaluates whether the agent has successfully selected the appropriate

{{< img src="llm_observability/evaluations/tool_selection_failure.png" alt="A tool selection failure detected by the evaluation in LLM Observability" style="width:100%;" >}}

| Evaluation Stage | Evaluation Method | Evaluation Definition |
|---|---|---|
| Evaluated on LLM spans | Evaluated using LLM | Tool Selection verifies that the tools chosen by the LLM align with the user's request and the available tools. The evaluation identifies cases where irrelevant or incorrect tool calls were made. |

@@ -339,9 +339,9 @@ def subtract_numbers(a: int, b: int) -> int:
    Subtracts two numbers.
    """
    return a - b


# List of tools available to the agent
math_tutor_agent = Agent(
    name="Math Tutor",
    handoff_description="Specialist agent for math questions",
@@ -360,21 +360,21 @@ history_tutor_agent = Agent(
)

# The triage agent decides which specialized agent to hand off the task to, another type of tool selection covered by this evaluation.
triage_agent = Agent(
    'openai:gpt-4o',
    model_settings=ModelSettings(temperature=0),
    instructions='What is the sum of 1 to 10?',
    handoffs=[math_tutor_agent, history_tutor_agent],
)
{{< /code-block >}}

#### Tool argument correctness

This check looks at the arguments provided to a selected tool and evaluates whether these arguments match the expected type and make sense given the tool's context.

{{< img src="llm_observability/evaluations/tool_argument_correctness_error.png" alt="A tool argument correctness error detected by the evaluation in LLM Observability" style="width:100%;" >}}

| Evaluation Stage | Evaluation Method | Evaluation Definition |
|---|---|---|
| Evaluated on LLM spans | Evaluated using LLM | Tool Argument Correctness verifies that the arguments provided to a tool by the LLM are correct and contextually relevant. This evaluation identifies cases where the arguments provided to the tool are incorrect according to the tool schema (for example, the argument is expected to be an integer rather than a string) or are not relevant (for example, the argument should be a country, but the model provides the name of a city). |

@@ -403,7 +403,7 @@ def subtract_numbers(a: int, b: int) -> int:
    """
    return a - b


def multiply_numbers(a: int, b: int) -> int:
    """
    Multiplies two numbers.
@@ -441,7 +441,7 @@ history_tutor_agent = Agent(
)

# Create the triage agent
# Note: pydantic_ai handles handoffs differently - you'd typically use result_type
# or custom logic to route between agents
triage_agent = Agent(
    'openai:gpt-5-nano',
@@ -470,27 +470,27 @@ result = triage_agent.run_sync(
This check evaluates each user input prompt and the LLM application's response for toxic content, identifying and flagging it to ensure that interactions remain respectful and safe.

{{< img src="llm_observability/evaluations/toxicity_1.png" alt="A Toxicity evaluation detected by an LLM in LLM Observability" style="width:100%;" >}}

| Evaluation Stage | Evaluation Method | Evaluation Definition |
|---|---|---|
| Evaluated on Input and Output | Evaluated using LLM | Toxicity flags any language or behavior that is harmful, offensive, or inappropriate, including but not limited to hate speech, harassment, threats, and other forms of harmful communication. |

##### Toxicity configuration

<div class="alert alert-info">Configuring toxicity evaluation categories is supported if OpenAI or Azure OpenAI is selected as your LLM provider.</div>
You can configure toxicity evaluations to use specific categories of toxicity, listed in the following table.

| Category | Description |
|---|---|
| Discriminatory Content | Content that discriminates against a particular group, including on the basis of race, gender, sexual orientation, or culture. |
| Harassment | Content that expresses, incites, or promotes negative or intrusive behavior toward an individual or group. |
| Hate | Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. |
| Illicit | Content that asks for, gives advice on, or provides instructions for committing illicit acts. |
| Self Harm | Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders. |
| Sexual | Content that describes or alludes to sexual activity. |
| Violence | Content that discusses death, violence, or physical injury. |
| Profanity | Content containing profanity. |
| User Dissatisfaction | Content containing criticism towards the model. *This category is only available for evaluating input toxicity.* |

The toxicity categories in this table are informed by: [Banko et al. (2020)][14], [Inan et al. (2023)][15], [Ghosh et al. (2024)][16], [Zheng et al. (2024)][17].

@@ -500,13 +500,13 @@ This check identifies attempts by unauthorized or malicious authors to manipulat

{{< img src="llm_observability/evaluations/prompt_injection_1.png" alt="A Prompt Injection evaluation detected by an LLM in LLM Observability" style="width:100%;" >}}

| Evaluation Stage | Evaluation Method | Evaluation Definition |
|---|---|---|
| Evaluated on Input | Evaluated using LLM | [Prompt Injection][13] flags any unauthorized or malicious insertion of prompts or cues into the conversation by an external party or user. |

##### Prompt injection configuration
<div class="alert alert-info">Configuring prompt injection evaluation categories is supported if OpenAI or Azure OpenAI is selected as your LLM provider.</div>
You can configure the prompt injection evaluation to use specific categories of prompt injection, listed in the following table.

| Configuration Option | Description | Example(s) |
|---|---|---|
@@ -520,8 +520,8 @@ You can configure the prompt injection evaluation to use specific categories of
This check ensures that sensitive information is handled appropriately and securely, reducing the risk of data breaches or unauthorized access.

{{< img src="llm_observability/evaluations/sensitive_data_scanning_1.png" alt="A Security and Safety evaluation detected by the Sensitive Data Scanner in LLM Observability" style="width:100%;" >}}

| Evaluation Stage | Evaluation Method | Evaluation Definition |
|---|---|---|
| Evaluated on Input and Output | Sensitive Data Scanner | Powered by the [Sensitive Data Scanner][4], LLM Observability scans, identifies, and redacts sensitive information within every LLM application's prompt-response pairs. This includes personal information, financial data, health records, or any other data that requires protection due to privacy or security concerns. |
