Commit 30e389b

docs: make llms and embeddings explicit (#1553)

1 parent: 6e225ca

4 files changed: 33 additions, 8 deletions

docs/extra/components/choose_evaluvator_llm.md renamed to docs/extra/components/choose_evaluator_llm.md

Lines changed: 11 additions & 0 deletions
@@ -16,8 +16,11 @@
 ```python
 from ragas.llms import LangchainLLMWrapper
+from ragas.embeddings import LangchainEmbeddingsWrapper
 from langchain_openai import ChatOpenAI
+from langchain_openai import OpenAIEmbeddings
 evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o"))
+evaluator_embeddings = LangchainEmbeddingsWrapper(OpenAIEmbeddings())
 ```

@@ -44,14 +47,22 @@
 ```python
 from langchain_aws import ChatBedrockConverse
+from langchain_aws import BedrockEmbeddings
 from ragas.llms import LangchainLLMWrapper
+from ragas.embeddings import LangchainEmbeddingsWrapper
+
 evaluator_llm = LangchainLLMWrapper(ChatBedrockConverse(
     credentials_profile_name=config["credentials_profile_name"],
     region_name=config["region_name"],
     base_url=f"https://bedrock-runtime.{config['region_name']}.amazonaws.com",
     model=config["llm"],
     temperature=config["temperature"],
 ))
+evaluator_embeddings = LangchainEmbeddingsWrapper(BedrockEmbeddings(
+    credentials_profile_name=config["credentials_profile_name"],
+    region_name=config["region_name"],
+    model_id=config["embeddings"],
+))
 ```

 If you want more information on how to use other AWS services, please refer to the [langchain-aws](https://python.langchain.com/docs/integrations/providers/aws/) documentation.
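The commit's point is dependency injection: each evaluator object explicitly wraps a concrete model instead of relying on an implicit default. A minimal runnable sketch of that pattern, using hypothetical stand-in classes (the real `LangchainLLMWrapper` and `LangchainEmbeddingsWrapper` live in `ragas.llms` and `ragas.embeddings`; `FakeChatModel` and `FakeEmbeddings` are placeholders, not ragas or LangChain APIs):

```python
# Stand-ins illustrating the explicit-wrapper pattern from the diff above.
class LangchainLLMWrapper:
    def __init__(self, langchain_llm):
        self.langchain_llm = langchain_llm  # underlying chat model

class LangchainEmbeddingsWrapper:
    def __init__(self, embeddings):
        self.embeddings = embeddings  # underlying embeddings model

class FakeChatModel:        # stands in for ChatOpenAI(model="gpt-4o")
    model = "gpt-4o"

class FakeEmbeddings:       # stands in for OpenAIEmbeddings()
    model = "text-embedding-3-small"

# Both the LLM and the embeddings are wrapped explicitly, mirroring the docs.
evaluator_llm = LangchainLLMWrapper(FakeChatModel())
evaluator_embeddings = LangchainEmbeddingsWrapper(FakeEmbeddings())
print(evaluator_llm.langchain_llm.model)  # -> gpt-4o
```

Because the wrapper holds the model it was given, swapping providers (OpenAI vs. Bedrock) only changes the constructor argument, not the downstream ragas code.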

docs/extra/components/choose_generator_llm.md

Lines changed: 10 additions & 0 deletions
@@ -17,7 +17,9 @@
 ```python
 from ragas.llms import LangchainLLMWrapper
 from langchain_openai import ChatOpenAI
+from langchain_openai import OpenAIEmbeddings
 generator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o"))
+generator_embeddings = LangchainEmbeddingsWrapper(OpenAIEmbeddings())
 ```

@@ -44,14 +46,22 @@
 ```python
 from langchain_aws import ChatBedrockConverse
+from langchain_aws import BedrockEmbeddings
 from ragas.llms import LangchainLLMWrapper
+from ragas.embeddings import LangchainEmbeddingsWrapper
+
 generator_llm = LangchainLLMWrapper(ChatBedrockConverse(
     credentials_profile_name=config["credentials_profile_name"],
     region_name=config["region_name"],
     base_url=f"https://bedrock-runtime.{config['region_name']}.amazonaws.com",
     model=config["llm"],
     temperature=config["temperature"],
 ))
+generator_embeddings = LangchainEmbeddingsWrapper(BedrockEmbeddings(
+    credentials_profile_name=config["credentials_profile_name"],
+    region_name=config["region_name"],
+    model_id=config["embeddings"],
+))
 ```

 If you want more information on how to use other AWS services, please refer to the [langchain-aws](https://python.langchain.com/docs/integrations/providers/aws/) documentation.

docs/getstarted/rag_evaluation.md

Lines changed: 8 additions & 3 deletions
@@ -33,15 +33,20 @@ Since all of the metrics we have chosen are LLM-based metrics, we need to choose
 ### Choosing evaluator LLM

 --8<--
-choose_evaluvator_llm.md
+choose_evaluator_llm.md
 --8<--

 ### Running Evaluation

 ```python
-metrics = [LLMContextRecall(), FactualCorrectness(), Faithfulness()]
-results = evaluate(dataset=eval_dataset, metrics=metrics, llm=evaluator_llm,)
+metrics = [
+    LLMContextRecall(llm=evaluator_llm),
+    FactualCorrectness(llm=evaluator_llm),
+    Faithfulness(llm=evaluator_llm),
+    SemanticSimilarity(embeddings=evaluator_embeddings)
+]
+results = evaluate(dataset=eval_dataset, metrics=metrics)
 ```

 ### Exporting and analyzing results
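The change to `rag_evaluation.md` moves the model from a single top-level `evaluate(..., llm=...)` argument to per-metric construction, so each metric carries the dependency it actually needs. A runnable sketch of that per-metric injection, with hypothetical stand-in classes (the real metrics come from `ragas.metrics`; `Metric` here is a placeholder, not the ragas base class):

```python
# Stand-in metric classes showing per-metric dependency injection.
class Metric:
    def __init__(self, llm=None, embeddings=None):
        self.llm = llm              # LLM used to judge, if any
        self.embeddings = embeddings  # embeddings used to score, if any

class LLMContextRecall(Metric): pass
class FactualCorrectness(Metric): pass
class Faithfulness(Metric): pass
class SemanticSimilarity(Metric): pass

evaluator_llm = object()        # placeholder for a wrapped LLM
evaluator_embeddings = object() # placeholder for wrapped embeddings

metrics = [
    LLMContextRecall(llm=evaluator_llm),
    FactualCorrectness(llm=evaluator_llm),
    Faithfulness(llm=evaluator_llm),
    SemanticSimilarity(embeddings=evaluator_embeddings),
]
# evaluate() no longer needs a top-level llm= argument: each metric
# already knows which model to call.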

docs/getstarted/rag_testset_generation.md

Lines changed: 4 additions & 5 deletions
@@ -101,12 +101,11 @@ But you can mix and match transforms or build your own as needed.
101101
```python
102102
from ragas.testset.transforms import default_transforms
103103

104-
# choose your LLM and Embedding Model
105-
from ragas.llms import llm_factory
106-
from ragas.embeddings import embedding_factory
107104

108-
transformer_llm = llm_factory("gpt-4o")
109-
embedding_model = embedding_factory("text-embedding-3-large")
105+
# define your LLM and Embedding Model
106+
# here we are using the same LLM and Embedding Model that we used to generate the testset
107+
transformer_llm = generator_llm
108+
embedding_model = generator_embeddings
110109

111110
trans = default_transforms(llm=transformer_llm, embedding_model=embedding_model)
112111
apply_transforms(kg, trans)
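The `rag_testset_generation.md` change replaces fresh `llm_factory`/`embedding_factory` calls with the wrapped models already created for testset generation, so the transform pipeline reuses existing clients rather than constructing new ones. A small sketch of that reuse under stated assumptions (`WrappedModel` and this `default_transforms` are hypothetical stand-ins; the real function lives in `ragas.testset.transforms`):

```python
# Stand-in wrapper; the real objects would be LangchainLLMWrapper instances.
class WrappedModel:
    def __init__(self, name):
        self.name = name

generator_llm = WrappedModel("gpt-4o")                     # used for generation
generator_embeddings = WrappedModel("text-embedding-3-large")

transformer_llm = generator_llm       # reuse the same object, no new client
embedding_model = generator_embeddings

def default_transforms(llm, embedding_model):
    # Stand-in: records which models the transform pipeline would use.
    return {"llm": llm.name, "embeddings": embedding_model.name}

trans = default_transforms(llm=transformer_llm, embedding_model=embedding_model)
print(trans["llm"])  # -> gpt-4o
```

Reusing the same wrapper objects (identity, not just equal configuration) avoids duplicate API clients and keeps generation and transformation consistent.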
