Commit 05addb2

feat(docs): add support for reusable snippets (#1414)

- Replaced LLM selection blocks with the `choose_evaluvator_llm.md` snippet.
- Simplifies updates and reuse across documentation.

1 parent: 986ded7

File tree

4 files changed: +51 −83 lines changed
New file — Lines changed: 39 additions & 0 deletions

=== "OpenAI"
    This guide utilizes OpenAI for running some metrics, so ensure you have your OpenAI key ready and available in your environment.

    ```python
    import os
    os.environ["OPENAI_API_KEY"] = "your-openai-key"
    ```

    Wrap the LLMs in `LangchainLLMWrapper`:

    ```python
    from ragas.llms import LangchainLLMWrapper
    from langchain_openai import ChatOpenAI
    evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o"))
    ```

=== "AWS Bedrock"
    First, set your AWS credentials and configuration:

    ```python
    config = {
        "credentials_profile_name": "your-profile-name",  # e.g. "default"
        "region_name": "your-region-name",  # e.g. "us-east-1"
        "model_id": "your-model-id",  # e.g. "anthropic.claude-v2"
        "model_kwargs": {"temperature": 0.4},
    }
    ```

    Then define your LLM:

    ```python
    from langchain_aws.chat_models import BedrockChat
    from ragas.llms import LangchainLLMWrapper
    evaluator_llm = LangchainLLMWrapper(BedrockChat(
        credentials_profile_name=config["credentials_profile_name"],
        region_name=config["region_name"],
        endpoint_url=f"https://bedrock-runtime.{config['region_name']}.amazonaws.com",
        model_id=config["model_id"],
        model_kwargs=config["model_kwargs"],
    ))
    ```

docs/getstarted/rag_evaluation.md

Lines changed: 4 additions & 36 deletions

@@ -41,42 +41,10 @@ Since all of the metrics we have chosen are LLM-based metrics, we need to choose

### Choosing evaluator LLM

The inline `=== "OpenAI"` / `=== "AWS Bedrock"` tab block (identical to the new snippet file shown in full above) is removed and replaced with a snippet include:

--8<--
choose_evaluvator_llm.md
--8<--

### Running Evaluation

docs/getstarted/rag_testset_generation.md

Lines changed: 6 additions & 46 deletions

@@ -10,8 +10,8 @@ For the sake of this tutorial we will use sample documents from this [repository

git clone https://huggingface.co/datasets/explodinggradients/Sample_Docs_Markdown

A stray blank line before `### Load documents` is moved below the heading.

@@ -22,52 +22,12 @@ loader = DirectoryLoader(path, glob="**/*.md")

docs = loader.load()

### Choose your LLM

You may choose to use any [LLM of your choice]()

The inline `=== "OpenAI"` / `=== "AWS Bedrock"` tab block (the same content as the new snippet file, except that it assigned the wrapper to `generator_llm` rather than `evaluator_llm`) is removed and replaced with a snippet include:

--8<--
choose_evaluvator_llm.md
--8<--

### Generate Testset

@@ -83,7 +43,7 @@ dataset = generator.generate_with_langchain_docs(docs, test_size=10)

### Export

You may now export and inspect the generated testset.

```python
dataset.to_pandas()
```

mkdocs.yml

Lines changed: 2 additions & 1 deletion

@@ -102,7 +102,6 @@ markdown_extensions:
     pygments_lang_class: true
   - admonition
   - pymdownx.inlinehilite
- - pymdownx.snippets
   - pymdownx.details
   - pymdownx.tabbed:
       alternate_style: true

@@ -118,6 +117,8 @@ markdown_extensions:
   - name: mermaid
     class: mermaid
     format: !!python/name:pymdownx.superfences.fence_code_format
+ - pymdownx.snippets:
+     base_path: ['./docs/extra/components/']

 # Extra CSS
 extra_css:
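With the `base_path` addition, a bare filename such as `choose_evaluvator_llm.md` between `--8<--` markers is resolved against `./docs/extra/components/` at build time and spliced into the page. As a rough illustration of that block-include behavior, here is a minimal stdlib sketch — the function name `expand_snippets` and the simplified parsing are my own, not the real `pymdownx.snippets` implementation:

```python
from pathlib import Path

def expand_snippets(text: str, base_paths: list[str]) -> str:
    """Replace a `--8<--` ... `--8<--` block with the contents of the
    files named between the markers, searched under base_paths."""
    lines, out, i = text.splitlines(), [], 0
    while i < len(lines):
        if lines[i].strip() == "--8<--":
            # Collect filenames until the closing marker.
            j = i + 1
            names = []
            while j < len(lines) and lines[j].strip() != "--8<--":
                names.append(lines[j].strip())
                j += 1
            # Splice in the first match found under any base path.
            for name in names:
                for base in base_paths:
                    p = Path(base) / name
                    if p.exists():
                        out.append(p.read_text().rstrip("\n"))
                        break
            i = j + 1  # skip past the closing marker
        else:
            out.append(lines[i])
            i += 1
    return "\n".join(out)
```

This mirrors why the commit moves `pymdownx.snippets` to a configured entry: without `base_path`, the include name would have to be resolved relative to the default search location instead of the shared `docs/extra/components/` directory.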
