Commit f166767

feat: move experiment from experimental to ragas main (#2175)

Note: this is a cascading PR, i.e. review and merge #2174 before looking at this one.

Changes
1. Moved and renamed the experiment files and updated the relevant imports in the examples and docs
2. Expanded the current ragas utils.py implementation to support the behaviour from the experimental utils
3. Added some tests deemed useful

1 parent 7d97288, commit f166767
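Because this commit moves `experiment` from the experimental package into the main `ragas` namespace, code written against older releases may fail on import. A hedged sketch of a compatibility shim, assuming the old import path `ragas_experimental` shown in the diffs below (this shim is illustrative, not part of ragas):

```python
# Compatibility shim: prefer the new import location, fall back to the old
# one, and record which source was used. Illustrative sketch only.
try:
    from ragas import experiment  # new location (this commit onward)
    source = "ragas"
except ImportError:
    try:
        from ragas_experimental import experiment  # pre-move releases
        source = "ragas_experimental"
    except ImportError:
        experiment = None  # neither package is installed
        source = None
```

Whether the fallback branch is reachable depends on which packages are installed in the environment.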

File tree

17 files changed (+693, -21 lines)

.github/workflows/ci.yaml (1 addition, 1 deletion)

```diff
@@ -118,7 +118,7 @@ jobs:
 # Use pytest-xdist to improve test run-time on Linux/macOS
 OPTS=(--dist loadfile -n auto)
 fi
-
+
 # Run different test suites based on test type
 if [ "${{ matrix.test-type }}" = "full" ]; then
 # Full test suite with notebook tests
```

CLAUDE.md (2 additions, 1 deletion)

````diff
@@ -161,7 +161,8 @@ The experimental features are now integrated into the main ragas package:

 To use experimental features:
 ```python
-from ragas.experimental import Dataset, experiment
+from ragas.experimental import Dataset
+from ragas import experiment
 from ragas.backends import get_registry
 ```
````

docs/experimental/core_concepts/experimentation.md (3 additions, 2 deletions)

````diff
@@ -39,7 +39,7 @@ Running an experiment involves:
 The `@experiment` decorator in Ragas simplifies the orchestration, scaling, and storage of experiments. Here's an example:

 ```python
-from ragas_experimental import experiment
+from ragas import experiment

 # Define your metric and dataset
 my_metric = ...
@@ -115,7 +115,8 @@ Here's a complete example showing how to pass different LLM models to your exper

 ```python
 from pydantic import BaseModel
-from ragas.experimental import experiment, Dataset
+from ragas.experimental import Dataset
+from ragas import experiment

 class ExperimentResult(BaseModel):
     query: str
````
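The diffs in this section only swap the import path, but it may help to see what an `@experiment`-style decorator does conceptually. A minimal, self-contained sketch (an illustrative stand-in, not the real ragas implementation, which also handles storage and backends):

```python
import asyncio

# Illustrative stand-in for an @experiment-style decorator: it wraps an
# async per-row function and exposes an arun() method that fans the
# function out over every row of a dataset and gathers the results.
def experiment():
    def wrapper(fn):
        class Runner:
            async def arun(self, dataset):
                return await asyncio.gather(*(fn(row) for row in dataset))
        return Runner()
    return wrapper

@experiment()
async def run_experiment(row):
    # A toy "evaluation": score each query by its length.
    return {"query": row["query"], "score": len(row["query"])}

results = asyncio.run(run_experiment.arun([{"query": "hi"}, {"query": "hello"}]))
print(results)
```

The gathered results come back in dataset order, which is why a decorator like this can scale the per-row function without reordering the output.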

docs/experimental/index.md (2 additions, 1 deletion)

````diff
@@ -39,7 +39,8 @@ pip install ragas-experimental && pip install "ragas-experimental[local]"

 ```python
 import numpy as np
-from ragas_experimental import experiment, Dataset
+from ragas_experimental import Dataset
+from ragas import experiment
 from ragas_experimental.metrics import MetricResult, discrete_metric

 # Define a custom metric for accuracy
````
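The snippet above imports `discrete_metric` and `MetricResult`; conceptually, a discrete metric maps a prediction/reference pair to a categorical label rather than a number. A hedged sketch as a plain function (the name `accuracy_metric` and the normalization rule are illustrative assumptions, not the ragas API):

```python
# Illustrative discrete metric: returns a categorical label, not a score.
def accuracy_metric(prediction: str, actual: str) -> str:
    # Normalizing whitespace and case before comparing is an assumption
    # made for this sketch, not documented ragas behaviour.
    if prediction.strip().lower() == actual.strip().lower():
        return "correct"
    return "incorrect"

print(accuracy_metric("Paris", " paris "))  # prints "correct"
```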

docs/experimental/tutorials/agent.md (1 addition, 1 deletion)

````diff
@@ -56,7 +56,7 @@ def correctness_metric(prediction: float, actual: float):
 Next, we will write the experiment loop that will run our agent on the test dataset and evaluate it using the metric, and store the results in a CSV file.

 ```python
-from ragas_experimental import experiment
+from ragas import experiment

 @experiment()
 async def run_experiment(row):
````
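The tutorial text above describes storing the experiment results in a CSV file; a minimal sketch of that step using only the standard library (the field names and result rows here are hypothetical, and the real ragas experiment handles storage itself):

```python
import csv

# Hypothetical experiment results; in the tutorial these would come from
# running the agent over the test dataset and applying the metric.
results = [
    {"query": "2 + 2", "prediction": 4.0, "actual": 4.0, "correct": True},
    {"query": "3 * 3", "prediction": 8.0, "actual": 9.0, "correct": False},
]

# Write one CSV row per result, with a header derived from the dict keys.
with open("experiment_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(results[0].keys()))
    writer.writeheader()
    writer.writerows(results)
```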

docs/experimental/tutorials/prompt.md (1 addition, 1 deletion)

````diff
@@ -55,7 +55,7 @@ def my_metric(prediction: str, actual: str):
 Next, we will write the experiment loop that will run our prompt on the test dataset and evaluate it using the metric, and store the results in a csv file.

 ```python
-from ragas_experimental import experiment
+from ragas import experiment

 @experiment()
 async def run_experiment(row):
````

docs/experimental/tutorials/workflow.md (1 addition, 1 deletion)

````diff
@@ -49,7 +49,7 @@ my_metric = DiscreteMetric(
 Next, we will write the evaluation experiment loop that will run our workflow on the test dataset and evaluate it using the metric, and store the results in a CSV file.

 ```python
-from ragas_experimental import experiment
+from ragas import experiment

 @experiment()
 async def run_experiment(row):
````

examples/agent_evals/evals.py (2 additions, 1 deletion)

```diff
@@ -1,4 +1,5 @@
-from ragas.experimental import Dataset, experiment
+from ragas.experimental import Dataset
+from ragas import experiment
 from ragas.experimental.metrics.numeric import numeric_metric
 from ragas.experimental.metrics.result import MetricResult
 from agent import get_default_agent
```

examples/prompt_evals/evals.py (2 additions, 1 deletion)

```diff
@@ -1,4 +1,5 @@
-from ragas.experimental import Dataset, experiment
+from ragas.experimental import Dataset
+from ragas import experiment
 from ragas.experimental.metrics.result import MetricResult
 from ragas.experimental.metrics.discrete import discrete_metric

```

examples/rag_eval/evals.py (2 additions, 1 deletion)

```diff
@@ -1,4 +1,5 @@
-from ragas.experimental import Dataset, experiment
+from ragas.experimental import Dataset
+from ragas import experiment
 from ragas.experimental.metrics import DiscreteMetric
 from openai import OpenAI
 from ragas.experimental.llms import llm_factory
```

0 commit comments
