
Commit fa626a9

docs: fix Amazon bedrock info (#1551)
1 parent c1662a7 commit fa626a9

File tree: 3 files changed, +82 −8 lines
Lines changed: 24 additions & 7 deletions
@@ -1,21 +1,34 @@
 
 === "OpenAI"
-    This guide utilizes OpenAI for running some metrics, so ensure you have your OpenAI key ready and available in your environment.
-
+    Install the langchain-openai package.
+
+    ```bash
+    pip install langchain-openai
+    ```
+
+    Ensure you have your OpenAI key ready and available in your environment.
+
     ```python
     import os
     os.environ["OPENAI_API_KEY"] = "your-openai-key"
     ```
-    Wrapp the LLMs in `LangchainLLMWrapper`
+    Wrap the LLMs in `LangchainLLMWrapper` so that they can be used with ragas.
+
     ```python
     from ragas.llms import LangchainLLMWrapper
     from langchain_openai import ChatOpenAI
     evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o"))
     ```
 
 
-=== "AWS Bedrock"
-    First you have to set your AWS credentials and configurations
+=== "Amazon Bedrock"
+    Install the langchain-aws package.
+
+    ```bash
+    pip install langchain-aws
+    ```
+
+    Then set your AWS credentials and configuration.
 
     ```python
     config = {
@@ -26,7 +39,9 @@
         "temperature": 0.4,
     }
     ```
-    define you LLMs
+
+    Define your LLMs and wrap them in `LangchainLLMWrapper` so that they can be used with ragas.
+
     ```python
     from langchain_aws import ChatBedrockConverse
     from ragas.llms import LangchainLLMWrapper
@@ -37,4 +52,6 @@
         model=config["llm"],
         temperature=config["temperature"],
     ))
-    ```
+    ```
+
+    If you want more information on how to use other AWS services, please refer to the [langchain-aws](https://python.langchain.com/docs/integrations/providers/aws/) documentation.
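
The Bedrock `config` above also declares an embedding model id that the changed snippet never uses. A minimal companion sketch, assuming langchain-aws' `BedrockEmbeddings` and ragas' `LangchainEmbeddingsWrapper` (check both names against the installed versions), could wrap it the same way:

```python
# Sketch under the assumptions above, not part of this commit:
# wrap the Bedrock embedding model from the same `config` dict so that
# embedding-based metrics can consume it alongside evaluator_llm.
from langchain_aws import BedrockEmbeddings
from ragas.embeddings import LangchainEmbeddingsWrapper

evaluator_embeddings = LangchainEmbeddingsWrapper(BedrockEmbeddings(
    credentials_profile_name=config["credentials_profile_name"],
    region_name=config["region_name"],
    model_id=config["embeddings"],
))

# Both wrappers would then be handed to the evaluation call, e.g.
# ragas.evaluate(dataset, metrics=[...], llm=evaluator_llm, embeddings=evaluator_embeddings)
```
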
Lines changed: 57 additions & 0 deletions
@@ -0,0 +1,57 @@ (new file — all 57 lines are additions)

=== "OpenAI"
    Install the langchain-openai package.

    ```bash
    pip install langchain-openai
    ```

    Then ensure you have your OpenAI key ready and available in your environment.

    ```python
    import os
    os.environ["OPENAI_API_KEY"] = "your-openai-key"
    ```

    Wrap the LLMs in `LangchainLLMWrapper` so that they can be used with ragas.

    ```python
    from ragas.llms import LangchainLLMWrapper
    from langchain_openai import ChatOpenAI
    generator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o"))
    ```


=== "Amazon Bedrock"
    Install the langchain-aws package.

    ```bash
    pip install langchain-aws
    ```

    Then set your AWS credentials and configuration.

    ```python
    config = {
        "credentials_profile_name": "your-profile-name",  # e.g. "default"
        "region_name": "your-region-name",  # e.g. "us-east-1"
        "llm": "your-llm-model-id",  # e.g. "anthropic.claude-3-5-sonnet-20240620-v1:0"
        "embeddings": "your-embedding-model-id",  # e.g. "amazon.titan-embed-text-v2:0"
        "temperature": 0.4,
    }
    ```

    Define your LLMs and wrap them in `LangchainLLMWrapper` so that they can be used with ragas.

    ```python
    from langchain_aws import ChatBedrockConverse
    from ragas.llms import LangchainLLMWrapper
    generator_llm = LangchainLLMWrapper(ChatBedrockConverse(
        credentials_profile_name=config["credentials_profile_name"],
        region_name=config["region_name"],
        base_url=f"https://bedrock-runtime.{config['region_name']}.amazonaws.com",
        model=config["llm"],
        temperature=config["temperature"],
    ))
    ```

    If you want more information on how to use other AWS services, please refer to the [langchain-aws](https://python.langchain.com/docs/integrations/providers/aws/) documentation.
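
This new generator snippet is included from the test-set generation guide (see the next file), where `generator_llm` feeds the test-set generator. A minimal usage sketch, assuming the ragas `TestsetGenerator` API and the `docs` list loaded earlier in that guide (names and signatures are assumptions to verify against the installed ragas version):

```python
# Sketch under the assumptions above, not part of this commit.
from langchain_openai import OpenAIEmbeddings
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.testset import TestsetGenerator

# Assumed embedding model for the generator; swap in a Bedrock embedding wrapper when using Amazon Bedrock.
generator_embeddings = LangchainEmbeddingsWrapper(OpenAIEmbeddings(model="text-embedding-3-small"))

generator = TestsetGenerator(llm=generator_llm, embedding_model=generator_embeddings)
dataset = generator.generate_with_langchain_docs(docs, testset_size=10)
```
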

docs/getstarted/rag_testset_generation.md

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ docs = loader.load()
 
 You may choose to use any [LLM of your choice](../howtos/customizations/customize_models.md)
 --8<--
-choose_evaluvator_llm.md
+choose_generator_llm.md
 --8<--
 
 ### Generate Testset