
Commit 41cc83b

update: add llm options as tabs to quickstart (#2421)

1 parent 7303df1 commit 41cc83b
2 files changed: +91 -33 lines changed

docs/getstarted/quickstart.md

Lines changed: 66 additions & 30 deletions
````diff
@@ -39,14 +39,72 @@ pip install -e .
 
 ## Step 3: Set Your API Key
 
-Let's use OpenAI as LLM provider and set the environment variable:
+By default, the quickstart example uses OpenAI. Set your API key and you're ready to go. You can also use another provider with a minor change:
 
-```sh
-# OpenAI (default)
-export OPENAI_API_KEY="your-openai-key"
-```
+=== "OpenAI (Default)"
+    ```sh
+    export OPENAI_API_KEY="your-openai-key"
+    ```
+
+    The quickstart project is already configured to use OpenAI. You're all set!
+
+=== "Anthropic Claude"
+    Set your Anthropic API key:
+
+    ```sh
+    export ANTHROPIC_API_KEY="your-anthropic-key"
+    ```
+
+    Then update the `_init_clients()` function in `evals.py`:
+
+    ```python
+    from ragas.llms import llm_factory
+
+    llm = llm_factory("claude-3-5-sonnet-20241022", provider="anthropic")
+    ```
+
+=== "Google Gemini"
+    Set up your Google credentials:
+
+    ```sh
+    export GOOGLE_API_KEY="your-google-api-key"
+    ```
+
+    Then update the `_init_clients()` function in `evals.py`:
+
+    ```python
+    from ragas.llms import llm_factory
+
+    llm = llm_factory("gemini-1.5-pro", provider="google")
+    ```
+
+=== "Local Models (Ollama)"
+    Install and run Ollama locally, then update the `_init_clients()` function in `evals.py`:
+
+    ```python
+    from ragas.llms import llm_factory
+
+    llm = llm_factory(
+        "mistral",
+        provider="ollama",
+        base_url="http://localhost:11434"  # Default Ollama URL
+    )
+    ```
+
+=== "Custom / Other Providers"
+    For any LLM with an OpenAI-compatible API:
 
-If you want to use any other LLM provider, check below on how to configure that.
+    ```python
+    from ragas.llms import llm_factory
+
+    llm = llm_factory(
+        "model-name",
+        api_key="your-api-key",
+        base_url="https://your-api-endpoint"
+    )
+    ```
+
+For more details, learn about [LLM integrations](../concepts/metrics/index.md).
 
 ## Project Structure
 
@@ -88,6 +146,8 @@ The evaluation will:
 
 ![](../_static/imgs/results/rag_eval_result.png)
 
+Congratulations! You have a complete evaluation setup running. 🎉
+
 ---
 
 ## Customize Your Evaluation
@@ -121,30 +181,6 @@ def load_dataset():
     return dataset
 ```
 
-### Change the LLM Provider
-
-In the `_init_clients()` function in `evals.py`, update the LLM factory call:
-
-```python
-from ragas.llms import llm_factory
-
-def _init_clients():
-    """Initialize OpenAI client and RAG system."""
-    openai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
-    rag_client = default_rag_client(llm_client=openai_client)
-
-    # Use Anthropic Claude instead
-    llm = llm_factory("claude-3-5-sonnet-20241022", provider="anthropic")
-
-    # Or use Google Gemini
-    # llm = llm_factory("gemini-1.5-pro", provider="google")
-
-    # Or use local Ollama
-    # llm = llm_factory("mistral", provider="ollama", base_url="http://localhost:11434")
-
-    return openai_client, rag_client, llm
-```
-
 ### Customize Dataset and RAG System
 
 The template includes:
````
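Taken together, the provider tabs added above and the `_init_clients()` example removed in the last hunk suggest what a full provider switch looks like in `evals.py`. The sketch below is a minimal illustration under those assumptions rather than code from this commit: the `quickstart_rag` import path for `default_rag_client` is hypothetical, and the function is assumed to keep the structure shown in the removed block (an OpenAI client for the sample RAG system plus an evaluator LLM from `llm_factory`).

```python
import os

from openai import OpenAI
from ragas.llms import llm_factory

# Hypothetical import path: the quickstart template provides default_rag_client,
# but its module name is not shown in this diff.
from quickstart_rag import default_rag_client


def _init_clients():
    """Initialize the sample RAG system and the evaluator LLM."""
    # The quickstart's RAG system keeps using OpenAI for generation.
    openai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
    rag_client = default_rag_client(llm_client=openai_client)

    # Swap only the evaluator LLM; any provider supported by llm_factory works here,
    # e.g. "anthropic", "google", or "ollama" as shown in the tabs above.
    llm = llm_factory("claude-3-5-sonnet-20241022", provider="anthropic")

    return openai_client, rag_client, llm
```

Whichever provider you pick, export the matching API key (here `ANTHROPIC_API_KEY`) before running the evaluation.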

docs/howtos/integrations/_opik.md

Lines changed: 25 additions & 3 deletions
````diff
@@ -188,11 +188,33 @@ rag_pipeline("What is the capital of France?")
 
 
 
-#### Evaluating datasets
+from datasets import load_dataset
 
-If you looking at evaluating a dataset, you can use the Ragas `evaluate` function. When using this function, the Ragas library will compute the metrics on all the rows of the dataset and return a summary of the results.
+from ragas import evaluate
+from ragas.metrics import answer_relevancy, context_precision, faithfulness
 
-You can use the OpikTracer callback to log the results of the evaluation to the Opik platform. For this we will configure the OpikTracer
+fiqa_eval = load_dataset("explodinggradients/fiqa", "ragas_eval")
+
+# Reformat the dataset to match the schema expected by the Ragas evaluate function
+dataset = fiqa_eval["baseline"].select(range(3))
+
+dataset = dataset.map(
+    lambda x: {
+        "user_input": x["question"],
+        "reference": x["ground_truth"],
+        "retrieved_contexts": x["contexts"],
+    }
+)
+
+opik_tracer_eval = OpikTracer(tags=["ragas_eval"], metadata={"evaluation_run": True})
+
+result = evaluate(
+    dataset,
+    metrics=[context_precision, faithfulness, answer_relevancy],
+    callbacks=[opik_tracer_eval],
+)
+
+print(result)
 
 
 ```python
````
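The added snippet prints only the aggregate summary, while the `OpikTracer` callback sends the full run to Opik. If you also want to look at per-sample scores locally, here is a short follow-up sketch, assuming the returned result exposes Ragas' usual `to_pandas()` helper and pandas is installed:

```python
# Per-sample view of the same run: one row per evaluated sample,
# one column per metric, alongside the inputs that were scored.
scores_df = result.to_pandas()
print(scores_df.head())
```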
