diff --git a/docs/concepts/experimentation.md b/docs/concepts/experimentation.md
index 1d1b5450b..62632cc1b 100644
--- a/docs/concepts/experimentation.md
+++ b/docs/concepts/experimentation.md
@@ -36,7 +36,7 @@ graph LR
## Creating Experiments with Ragas
-Ragas provides an `@experiment` decorator to streamline the experiment creation process. If you prefer a hands-on intro first, see [Run your first experiment](../getstarted/experiments_quickstart.md).
+Ragas provides an `@experiment` decorator to streamline the experiment creation process. If you prefer a hands-on intro first, see the [Quick Start guide](../getstarted/quickstart.md).
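+
+For a quick preview, here is a minimal sketch of the pattern (assuming the `from ragas import experiment` import path and the async per-row API with an `arun` method used in recent Ragas releases; `my_app` and the column names are placeholders for your own pipeline). The section below walks through the structure in detail.
+
+```python
+from ragas import experiment
+
+@experiment()
+async def my_experiment(row):
+    # Run your application on a single dataset row.
+    response = my_app(row["query"])  # my_app is a placeholder for your pipeline
+    return {**row, "response": response}
+
+# Run the experiment over an evaluation dataset; one result is collected per row.
+results = await my_experiment.arun(dataset)
+```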
### Basic Experiment Structure
diff --git a/docs/concepts/index.md b/docs/concepts/index.md
index 92405e5e0..285240ca7 100644
--- a/docs/concepts/index.md
+++ b/docs/concepts/index.md
@@ -3,41 +3,36 @@
-- :material-widgets:{ .lg .middle } [__Components Guides__](components/index.md)
+- :material-flask-outline:{ .lg .middle } [__Experimentation__](experimentation.md)
---
- Discover the various components used within Ragas.
-
- Components like [Prompt Object](components/prompt.md), [Evaluation Dataset](components/eval_dataset.md) and [more..](components/index.md)
+ Learn how to systematically evaluate your AI applications using experiments.
+ Track changes, measure improvements, and compare results across different versions of your application.
-- ::material-ruler-square:{ .lg .middle } [__Ragas Metrics__](metrics/index.md)
+- :material-database-export:{ .lg .middle } [__Datasets__](datasets.md)
---
- Explore available metrics and understand how they work.
+ Understand how to create, manage, and use evaluation datasets.
- Metrics for evaluating [RAG](metrics/available_metrics/index.md#retrieval-augmented-generation), [Agentic workflows](metrics/available_metrics/index.md#agents-or-tool-use-cases) and [more..](metrics/available_metrics/index.md#list-of-available-metrics).
+ Learn about dataset structure, storage backends, and best practices for maintaining your test data.
-- :material-database-plus:{ .lg .middle } [__Test Data Generation__](test_data_generation/index.md)
+- :material-ruler-square:{ .lg .middle } [__Ragas Metrics__](metrics/index.md)
---
- Generate high-quality datasets for comprehensive testing.
-
- Algorithms for synthesizing data to test [RAG](test_data_generation/rag.md), [Agentic workflows](test_data_generation/agents.md)
+ Use our library of [available metrics](metrics/available_metrics/index.md) or create [custom metrics](metrics/overview/index.md) tailored to your use case.
+ Metrics for evaluating [RAG](metrics/available_metrics/index.md#retrieval-augmented-generation), [Agentic workflows](metrics/available_metrics/index.md#agents-or-tool-use-cases) and [more...](metrics/available_metrics/index.md#list-of-available-metrics).
-- :material-chart-box-outline:{ .lg .middle } [__Feedback Intelligence__](feedback/index.md)
+- :material-database-plus:{ .lg .middle } [__Test Data Generation__](test_data_generation/index.md)
---
- Leverage signals from production data to gain actionable insights.
-
- Learn about to leveraging implicit and explicit signals from production data.
-
-
+ Generate high-quality datasets for comprehensive testing.
+ Algorithms for synthesizing data to test [RAG](test_data_generation/rag.md) and [Agentic workflows](test_data_generation/agents.md).
diff --git a/docs/getstarted/index.md b/docs/getstarted/index.md
index 0931d1ea7..2cff9430c 100644
--- a/docs/getstarted/index.md
+++ b/docs/getstarted/index.md
@@ -1,18 +1,25 @@
# 🚀 Get Started
-Welcome to Ragas! If you're new to Ragas, the Get Started guides will walk you through the fundamentals of working with Ragas. These tutorials assume basic knowledge of Python and building LLM application pipelines.
+Welcome to Ragas! The Get Started guides will walk you through the fundamentals of working with Ragas. These tutorials assume basic knowledge of Python and building LLM application pipelines.
Before you proceed further, ensure that you have [Ragas installed](./install.md)!
!!! note
- The tutorials only provide an overview of what you can accomplish with Ragas and the basic skills needed to utilize it effectively. For an in-depth explanation of the core concepts behind Ragas, check out the [Core Concepts](../concepts/index.md) page. You can also explore the [How-to Guides](../howtos/index.md) for specific applications of Ragas.
+ The tutorials provide an overview of what you can accomplish with Ragas and the basic skills needed to utilize it effectively. For an in-depth explanation of the core concepts behind Ragas, check out the [Core Concepts](../concepts/index.md) page. You can also explore the [How-to Guides](../howtos/index.md) for specific applications of Ragas.
-If you have any questions about Ragas, feel free to join and ask in the `#questions` channel in our Discord community.
+If you have any questions about Ragas, feel free to join our [Discord community](../community/index.md) and ask in the `#questions` channel.
-Let's get started!
+## Quickstart
-- [Quick Start: Get Running in 5 Minutes](./quickstart.md)
-- [Evaluate your first AI app](./evals.md)
-- [Run ragas metrics for evaluating RAG](rag_eval.md)
-- [Generate test data for evaluating RAG](rag_testset_generation.md)
-- [Run your first experiment](experiments_quickstart.md)
+Start here to get up and running with Ragas in minutes:
+
+- [Quick Start: Get Running in 5 Minutes](./quickstart.md)
+
+## Tutorials
+
+Learn how to evaluate different types of AI applications:
+
+- [Evaluate a prompt](../tutorials/prompt.md) - Test and compare different prompts
+- [Evaluate a simple RAG system](../tutorials/rag.md) - Evaluate a RAG application
+- [Evaluate an AI Workflow](../tutorials/workflow.md) - Evaluate multi-step workflows
+- [Evaluate an AI Agent](../tutorials/agent.md) - Evaluate agentic applications
diff --git a/docs/index.md b/docs/index.md
index 511bb9977..c2c073ad3 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,121 +1,52 @@
# ✨ Introduction
-Ragas is a library that provides tools to supercharge the evaluation of Large Language Model (LLM) applications. It is designed to help you evaluate your LLM applications with ease and confidence.
+Ragas is a library that helps you move from "vibe checks" to systematic evaluation loops for your AI applications. It provides tools that supercharge the evaluation of Large Language Model (LLM) applications so you can iterate with ease and confidence.
+## Why Ragas?
+Traditional evaluation metrics don't capture what matters for LLM applications. Manual evaluation doesn't scale. Ragas solves this by combining **LLM-driven metrics** with **systematic experimentation** to create a continuous improvement loop.
+
+### Key Features
+
+- **Experiments-first approach**: Evaluate changes consistently with `experiments`. Make changes, run evaluations, observe results, and iterate to improve your LLM application.
+
+- **Ragas Metrics**: Create custom metrics tailored to your specific use case with simple decorators (see the sketch after this list) or use our library of [available metrics](./concepts/metrics/available_metrics/index.md). Learn more about [metrics in Ragas](./concepts/metrics/overview/index.md).
+
+- **Easy to integrate**: Built-in dataset management, result tracking, and integration with popular frameworks like LangChain, LlamaIndex, and more.
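+
+As a taste of those decorators, here is a minimal sketch of a custom metric (assuming the `discrete_metric` decorator and `allowed_values` parameter from recent Ragas releases; the metric name and matching logic are illustrative, and the exact signature may differ across versions):
+
+```python
+from ragas.metrics import discrete_metric
+
+@discrete_metric(name="exact_match", allowed_values=["pass", "fail"])
+def exact_match(prediction: str, expected: str) -> str:
+    # Return one of the allowed values for each evaluated sample.
+    return "pass" if prediction.strip() == expected.strip() else "fail"
+```
+
+The decorated metric can then be scored per row inside an experiment, alongside any metrics from the built-in library.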
- 🚀 **Get Started**
- Install with `pip` and get started with Ragas with these tutorials.
+ Start evaluating in 5 minutes with our quickstart guide.
- [:octicons-arrow-right-24: Get Started](getstarted/evals.md)
+ [:octicons-arrow-right-24: Get Started](getstarted/quickstart.md)
- 📚 **Core Concepts**
- In depth explanation and discussion of the concepts and working of different features available in Ragas.
+ Understand experiments, metrics, and datasets: the building blocks of effective evaluation.
[:octicons-arrow-right-24: Core Concepts](./concepts/index.md)
- 🛠️ **How-to Guides**
- Practical guides to help you achieve a specific goals. Take a look at these
- guides to learn how to use Ragas to solve real-world problems.
+ Integrate Ragas into your workflow with practical guides for specific use cases.
[:octicons-arrow-right-24: How-to Guides](./howtos/index.md)
- 📖 **References**
- Technical descriptions of how Ragas classes and methods work.
+ API documentation and technical details for diving deeper.
[:octicons-arrow-right-24: References](./references/index.md)
+## Want help improving your AI application using evals?
+Over the past two years, we have seen and helped improve many AI applications using evals.
+We are compressing this knowledge into a product that replaces vibe checks with eval loops so that you can focus on building great AI applications.
-## Frequently Asked Questions
-
-❓ What is the best open-source model to use?
-
- There isn't a single correct answer to this question. With the rapid pace of AI model development, new open-source models are released every week, often claiming to outperform previous versions. The best model for your needs depends largely on your GPU capacity and the type of data you're working with.
-
- It's a good idea to explore newer, widely accepted models with strong general capabilities. You can refer to this list for available open-source models, their release dates, and fine-tuned variants.
-
-
-❓ Why do NaN values appear in evaluation results?
-
- NaN stands for "Not a Number." In ragas evaluation results, NaN can appear for two main reasons:
-
- - JSON Parsing Issue: The model's output is not JSON-parsable. ragas requires models to output JSON-compatible responses because all prompts are structured using Pydantic. This ensures efficient parsing of LLM outputs.
- - Non-Ideal Cases for Scoring: Certain cases in the sample may not be ideal for scoring. For example, scoring the faithfulness of a response like "I don't know" might not be appropriate.
-
-
-
-❓ How can I make evaluation results more explainable?
-
- The best way is to trace and log your evaluation, then inspect the results using LLM traces. You can follow a detailed example of this process here.
-
-
-
+If you want help with improving and scaling up your AI application using evals, 🙌 Book a [slot](https://bit.ly/3EBYq4J) or drop us a line: [founders@explodinggradients.com](mailto:founders@explodinggradients.com).
diff --git a/mkdocs.yml b/mkdocs.yml
index 673f45b0c..4224f5d6b 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -13,24 +13,15 @@ nav:
- getstarted/index.md
- Quick Start: getstarted/quickstart.md
- Installation: getstarted/install.md
- - Evaluate your first LLM App: getstarted/evals.md
- - Evaluate a simple RAG: getstarted/rag_eval.md
- - Generate Synthetic Testset for RAG: getstarted/rag_testset_generation.md
- - Experiments:
- - Run your first experiment: getstarted/experiments_quickstart.md
+ - Tutorials:
- Evaluate a prompt: tutorials/prompt.md
- Evaluate a simple RAG system: tutorials/rag.md
- Evaluate an AI Workflow: tutorials/workflow.md
- Evaluate an AI Agent: tutorials/agent.md
- 📚 Core Concepts:
- concepts/index.md
- - Components:
- - concepts/components/index.md
- - General:
- - Prompt: concepts/components/prompt.md
- - Evaluation:
- - Evaluation Sample: concepts/components/eval_sample.md
- - Evaluation Dataset: concepts/components/eval_dataset.md
+ - Experimentation: concepts/experimentation.md
+ - Datasets: concepts/datasets.md
- Metrics:
- concepts/metrics/index.md
- Overview: concepts/metrics/overview/index.md
@@ -84,10 +75,13 @@ nav:
- Scenario Generation: concepts/test_data_generation/rag/#scenario-generation
- Agents or tool use:
- concepts/test_data_generation/agents.md
- - Feedback Intelligence:
- - concepts/feedback/index.md
- - Datasets: concepts/datasets.md
- - Experimentation: concepts/experimentation.md
+ - Components:
+ - concepts/components/index.md
+ - General:
+ - Prompt: concepts/components/prompt.md
+ - Evaluation:
+ - Evaluation Sample: concepts/components/eval_sample.md
+ - Evaluation Dataset: concepts/components/eval_dataset.md
- 🛠️ How-to Guides:
- howtos/index.md