
Commit be6c7ef

Update README.md
1 parent 7b26e19 commit be6c7ef

1 file changed: +35 −19 lines

README.md

Lines changed: 35 additions & 19 deletions
@@ -87,42 +87,57 @@ conda install -c conda-forge evidently
 
 > This is a simple Hello World. Check the Tutorials for more: [LLM evaluation](https://docs.evidentlyai.com/quickstart_llm).
 
-Import the Report, evaluation Preset and toy tabular dataset.
+Import the necessary components:
 
 ```python
 import pandas as pd
-from sklearn import datasets
-
+from evidently.future.datasets import Dataset, DataDefinition, Descriptor
+from evidently.future.descriptors import Sentiment, TextLength, Contains
 from evidently.future.report import Report
-from evidently.future.presets import DataDriftPreset
+from evidently.future.presets import TextEvals
+```
 
-iris_data = datasets.load_iris(as_frame=True)
-iris_frame = iris_data.frame
+Create a toy dataset with questions and answers.
+
+```python
+eval_df = pd.DataFrame([
+    ["What is the capital of Japan?", "The capital of Japan is Tokyo."],
+    ["Who painted the Mona Lisa?", "Leonardo da Vinci."],
+    ["Can you write an essay?", "I'm sorry, but I can't assist with homework."]],
+    columns=["question", "answer"])
 ```
 
-Run the **Data Drift** evaluation preset that will test for shift in column distributions. Take the first 60 rows of the dataframe as "current" data and the following as reference. Get the output in Jupyter notebook:
+Create an Evidently Dataset object and add `descriptors`: row-level evaluators. We'll check the sentiment of each response, its length, and whether it contains words indicative of denial.
 
 ```python
-report = Report([
-    DataDriftPreset(method="psi")
-],
-include_tests="True")
-my_eval = report.run(iris_frame.iloc[:60], iris_frame.iloc[60:])
-my_eval
+eval_dataset = Dataset.from_pandas(pd.DataFrame(eval_df),
+    data_definition=DataDefinition(),
+    descriptors=[
+        Sentiment("answer", alias="Sentiment"),
+        TextLength("answer", alias="Length"),
+        Contains("answer", items=['sorry', 'apologize'], mode="any", alias="Denials")
+    ])
 ```
 
-You can also save an HTML file. You'll need to open it from the destination folder.
+You can view the dataframe with added scores:
 
 ```python
-my_eval.save_html("file.html")
+eval_dataset.as_dataframe()
 ```
 
-To get the output as JSON or Python dictionary:
+To get a summary Report to see the distribution of scores:
+
 ```python
-my_eval.json()
+report = Report([
+    TextEvals()
+])
+
+my_eval = report.run(eval_dataset)
+my_eval
+# my_eval.json()
 # my_eval.dict()
 ```
-You can choose other Presets, create Reports from indiviudal Metrics and configure pass/fail conditions.
+You can also choose other evaluators, including LLM-as-a-judge, and configure pass/fail conditions.
 
 ### Data and ML evals
 
@@ -166,7 +181,8 @@ my_eval.json()
 You can choose other Presets, create Reports from individual Metrics and configure pass/fail conditions.
 
 ## Monitoring dashboard
-> This launches a demo project in the Evidently UI. Check tutorials for [Self-hosting](https://docs.evidentlyai.com/tutorials-and-examples/tutorial-monitoring) or [Evidently Cloud](https://docs.evidentlyai.com/tutorials-and-examples/tutorial-cloud).
+
+> This launches a demo project in the locally hosted Evidently UI. Sign up for [Evidently Cloud](https://docs.evidentlyai.com/docs/setup/cloud) to instantly get a managed version with additional features.
 
 Recommended step: create a virtual environment and activate it.
 ```
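The diff view truncates at the opening of that fenced block. The standard commands to "create a virtual environment and activate it" typically look like this — a generic sketch, not quoted from the README:

```shell
# Create an isolated environment for the project
python3 -m venv .venv

# Activate it (POSIX shells; on Windows use .venv\Scripts\activate)
. .venv/bin/activate

# Then install the package inside the environment:
# pip install evidently
```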
