Commit b19a529 ("Acrolinx fix")
1 parent 339c681

File tree: learn-pr/wwl-data-ai/evaluate-language-models-azure-databricks/includes

1 file changed: +1 -1 lines changed
learn-pr/wwl-data-ai/evaluate-language-models-azure-databricks/includes/1-introduction.md

Lines changed: 1 addition & 1 deletion
@@ -2,4 +2,4 @@ Large Language Models (LLMs) have transformed how we build applications, powerin
 
 Evaluation is essential for successfully deploying LLMs to production. You need to understand how well your model performs, whether it produces reliable outputs, and how it behaves across different scenarios.
 
-In this module, you'll learn to evaluate LLMs by comparing evaluation approaches, understanding how individual model evaluation fits into broader AI system assessment, applying standard metrics like accuracy and perplexity, and implementing LLM-as-a-judge techniques for scalable evaluation.
+In this module, you'll learn to evaluate LLMs by comparing evaluation approaches, and understanding how individual model evaluation fits into broader AI system assessment. You'll also learn about standard metrics like accuracy and perplexity, and implementing LLM-as-a-judge techniques for scalable evaluation.
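As an aside on the metrics named in the changed paragraph: perplexity is the exponential of the average negative log-likelihood a model assigns to a token sequence. A minimal sketch, illustrative only and not part of the module's content (the `perplexity` helper name is an assumption):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp(average negative log-likelihood) over a
    sequence, given per-token natural-log probabilities.
    (Illustrative helper; not from the module itself.)"""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# A model that assigns every token probability 0.25 is, on average,
# "as uncertain as" a uniform choice over 4 tokens.
ppl = perplexity([math.log(0.25)] * 10)
print(round(ppl, 6))
```

Lower perplexity means the model assigns higher probability to the observed text; a perfectly confident model (probability 1 for every token) reaches the minimum of 1.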

0 commit comments
