Commit 5fd0359

Document metadata and enriched eval model for node js + add SDS link (#34596)
* Document metadata and enriched eval model for node js
* add SDS link
* Update content/en/llm_observability/instrumentation/sdk.md
* Update content/en/llm_observability/evaluations/_index.md

Co-authored-by: DeForest Richards <56796055+drichards-87@users.noreply.github.com>
1 parent de9a8ec commit 5fd0359

2 files changed, +24 -5 lines changed

content/en/llm_observability/evaluations/_index.md

Lines changed: 2 additions & 1 deletion
```diff
@@ -42,7 +42,7 @@ Datadog also supports integrations with some 3rd party evaluation frameworks, su
 
 ### Sensitive Data Scanner integration
 
-In addition to evaluating the input and output of LLM requests, agents, workflows, or the application, LLM Observability integrates with [Sensitive Data Scanner][6], which helps prevent data leakage by identifying and redacting any sensitive information.
+In addition to evaluating the input and output of LLM requests, agents, workflows, or the application, LLM Observability integrates with [Sensitive Data Scanner][6], which helps prevent data leakage by identifying and redacting any sensitive information. For a list of the out-of-the-box rules included with Sensitive Data Scanner, see [Library Rules][12].
 
 ### Security
 
@@ -73,4 +73,5 @@ LLM Observability offers an [Export API][9] that you can use to retrieve spans f
 [9]: /llm_observability/evaluations/export_api
 [10]: /llm_observability/guide/evaluation_developer_guide
 [11]: /llm_observability/evaluations/annotation_queues
+[12]: /security/sensitive_data_scanner/scanning_rules/library_rules/
 
```

content/en/llm_observability/instrumentation/sdk.md

Lines changed: 22 additions & 4 deletions
```diff
@@ -2093,6 +2093,10 @@ The `LLMObs.submit_evaluation()` method accepts the following arguments:
 `reasoning`
 : optional - _string_
 <br />A text explanation of the evaluation result.
+
+`metadata`
+: optional - _dictionary_
+<br />A dictionary containing arbitrary structured metadata associated with the evaluation result.
 {{% /collapse-content %}}
 
 #### Example
@@ -2122,7 +2126,8 @@ def llm_call():
         value=10,
         tags={"evaluation_provider": "ragas"},
         assessment="fail",
-        reasoning="Malicious intent was detected in the user instructions."
+        reasoning="Malicious intent was detected in the user instructions.",
+        metadata={"details": ["jailbreak", "SQL injection"]}
     )
 
     # joining an evaluation to a span via span ID and trace ID
@@ -2135,7 +2140,8 @@ def llm_call():
         value=10,
         tags={"evaluation_provider": "ragas"},
         assessment="fail",
-        reasoning="Malicious intent was detected in the user instructions."
+        reasoning="Malicious intent was detected in the user instructions.",
+        metadata={"details": ["jailbreak", "SQL injection"]}
     )
     return completion
 {{< /code-block >}}
@@ -2168,15 +2174,27 @@ The `evaluationOptions` object can contain the following:
 
 `metricType`
 : required - _string_
-<br />The type of the evaluation. Must be one of "categorical" or "score".
+<br />The type of the evaluation. Must be one of "categorical", "score", "boolean" or "json".
 
 `value`
 : required - _string or numeric type_
-<br />The value of the evaluation. Must be a string (for categorical `metric_type`) or number (for score `metric_type`).
+<br />The value of the evaluation. Must be a string (for categorical `metric_type`), number (for score `metric_type`), boolean (for boolean `metric_type`), or a JSON object (for json `metric_type`).
 
 `tags`
 : optional - _dictionary_
 <br />A dictionary of string key-value pairs that users can add as tags regarding the evaluation. For more information about tags, see [Getting Started with Tags](/getting_started/tagging/).
+
+`assessment`
+: optional - _string_
+<br />An assessment of this evaluation. Accepted values are `pass` and `fail`.
+
+`reasoning`
+: optional - _string_
+<br />A text explanation of the evaluation result.
+
+`metadata`
+: optional - _dictionary_
+<br />A JSON object containing arbitrary structured metadata associated with the evaluation result.
 {{% /collapse-content %}}
 
 #### Example
```
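For context, here is a minimal Node.js sketch of how the newly documented `evaluationOptions` fields (`assessment`, `reasoning`, `metadata`, and the expanded `metricType` values) might be passed. It assumes the `tracer.llmobs.exportSpan()` and `tracer.llmobs.submitEvaluation()` entry points of the dd-trace-js SDK; the label, values, and ML app name are illustrative, not taken from the changed docs.

```js
'use strict';

// Sketch only: assumes the dd-trace-js LLM Observability API surface
// (tracer.llmobs.exportSpan / tracer.llmobs.submitEvaluation); the label,
// values, and ML app name below are hypothetical.
const tracer = require('dd-trace').init({
  llmobs: { mlApp: 'my-ml-app' },
});
const llmobs = tracer.llmobs;

function reportPromptInjection(span) {
  // Export the span context so the evaluation can be joined to the span.
  const spanContext = llmobs.exportSpan(span);

  llmobs.submitEvaluation(spanContext, {
    label: 'prompt_injection',   // hypothetical evaluation label
    metricType: 'boolean',       // "categorical", "score", "boolean", or "json"
    value: true,
    tags: { evaluation_provider: 'ragas' },
    assessment: 'fail',
    reasoning: 'Malicious intent was detected in the user instructions.',
    metadata: { details: ['jailbreak', 'SQL injection'] },
  });
}

module.exports = { reportPromptInjection };
```

The `metadata` object mirrors the Python `metadata={"details": [...]}` example above: an arbitrary structured payload attached alongside the evaluation result.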
