Commit c3a1831

docs: added azure openai to default docs (#1610)
1 parent b1d71d5 commit c3a1831

File tree

2 files changed, +145 −2 lines changed

docs/extra/components/choose_evaluator_llm.md

Lines changed: 73 additions & 1 deletion
@@ -65,4 +65,76 @@
))
```

If you want more information on how to use other AWS services, please refer to the [langchain-aws](https://python.langchain.com/docs/integrations/providers/aws/) documentation.

=== "Azure OpenAI"
Install the langchain-openai package.

```bash
pip install langchain-openai
```

Ensure you have your Azure OpenAI key ready and available in your environment.

```python
import os

os.environ["AZURE_OPENAI_API_KEY"] = "your-azure-openai-key"

# other configuration
azure_configs = {
    "base_url": "",  # your endpoint
    "model_deployment": "",  # your model deployment name
    "model_name": "",  # your model name
    "embedding_deployment": "",  # your embedding deployment name
    "embedding_name": "",  # your embedding name
}
```
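The values in `azure_configs` are placeholders; forgetting to fill one in usually surfaces later as an opaque authentication or 404 error. As a minimal sketch (the helper and the sample values below are hypothetical, not part of the ragas docs), you can fail fast before constructing any clients:

```python
# Hypothetical helper, not part of the ragas docs: fail fast if any Azure
# configuration value was left empty before creating clients.
azure_configs = {
    "base_url": "https://example-resource.openai.azure.com/",  # assumed endpoint
    "model_deployment": "gpt-4o-mini",                # assumed deployment name
    "model_name": "gpt-4o-mini",                      # assumed model name
    "embedding_deployment": "text-embedding-3-small", # assumed deployment name
    "embedding_name": "text-embedding-3-small",       # assumed model name
}

def missing_fields(configs: dict) -> list:
    """Return the names of keys whose values are empty or whitespace-only."""
    return [key for key, value in configs.items() if not str(value).strip()]

missing = missing_fields(azure_configs)
if missing:
    raise ValueError(f"Azure config incomplete, fill in: {', '.join(missing)}")
```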

Define your LLM and embeddings and wrap them in `LangchainLLMWrapper` and `LangchainEmbeddingsWrapper` so they can be used with ragas.

```python
from langchain_openai import AzureChatOpenAI
from langchain_openai import AzureOpenAIEmbeddings
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper

evaluator_llm = LangchainLLMWrapper(AzureChatOpenAI(
    openai_api_version="2023-05-15",
    azure_endpoint=azure_configs["base_url"],
    azure_deployment=azure_configs["model_deployment"],
    model=azure_configs["model_name"],
    validate_base_url=False,
))

# init the embeddings for answer_relevancy, answer_correctness and answer_similarity
evaluator_embeddings = LangchainEmbeddingsWrapper(AzureOpenAIEmbeddings(
    openai_api_version="2023-05-15",
    azure_endpoint=azure_configs["base_url"],
    azure_deployment=azure_configs["embedding_deployment"],
    model=azure_configs["embedding_name"],
))
```

If you want more information on how to use other Azure services, please refer to the [langchain-azure](https://python.langchain.com/docs/integrations/chat/azure_chat_openai/) documentation.

=== "Others"
If you are using a different LLM provider and using Langchain to interact with it, you can wrap your LLM in `LangchainLLMWrapper` so that it can be used with ragas.

```python
from ragas.llms import LangchainLLMWrapper

evaluator_llm = LangchainLLMWrapper(your_llm_instance)
```

For a more detailed guide, check out [the guide on customizing models](../../howtos/customizations/customize_models/).

If you are using LlamaIndex, you can use the `LlamaIndexLLMWrapper` to wrap your LLM so that it can be used with ragas.

```python
from ragas.llms import LlamaIndexLLMWrapper

evaluator_llm = LlamaIndexLLMWrapper(your_llm_instance)
```

For more information on how to use LlamaIndex, please refer to the [LlamaIndex Integration guide](../../howtos/integrations/_llamaindex/).

If you're still not able to use Ragas with your favorite LLM provider, please let us know by commenting on this [issue](https://github.com/explodinggradients/ragas/issues/1617) and we'll add support for it 🙂.

docs/extra/components/choose_generator_llm.md

Lines changed: 72 additions & 1 deletion
@@ -64,4 +64,75 @@
))
```

If you want more information on how to use other AWS services, please refer to the [langchain-aws](https://python.langchain.com/docs/integrations/providers/aws/) documentation.

=== "Azure OpenAI"
Install the langchain-openai package.

```bash
pip install langchain-openai
```

Ensure you have your Azure OpenAI key ready and available in your environment.

```python
import os

os.environ["AZURE_OPENAI_API_KEY"] = "your-azure-openai-key"

# other configuration
azure_configs = {
    "base_url": "",  # your endpoint
    "model_deployment": "",  # your model deployment name
    "model_name": "",  # your model name
    "embedding_deployment": "",  # your embedding deployment name
    "embedding_name": "",  # your embedding name
}
```
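Hard-coding the configuration works for a quick start, but it is common to read these values from the environment instead. A minimal sketch (aside from `AZURE_OPENAI_ENDPOINT`, the variable names below are assumptions, not an official convention):

```python
import os

# Map azure_configs keys to environment variable names. Only
# AZURE_OPENAI_ENDPOINT is a conventional name; the rest are assumed here.
ENV_MAP = {
    "base_url": "AZURE_OPENAI_ENDPOINT",
    "model_deployment": "AZURE_OPENAI_MODEL_DEPLOYMENT",
    "model_name": "AZURE_OPENAI_MODEL_NAME",
    "embedding_deployment": "AZURE_OPENAI_EMBEDDING_DEPLOYMENT",
    "embedding_name": "AZURE_OPENAI_EMBEDDING_NAME",
}

def load_azure_configs(env=os.environ) -> dict:
    """Build the azure_configs dict from environment variables, defaulting to ""."""
    return {key: env.get(var, "") for key, var in ENV_MAP.items()}

azure_configs = load_azure_configs()
```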

Define your LLM and embeddings and wrap them in `LangchainLLMWrapper` and `LangchainEmbeddingsWrapper` so they can be used with ragas.

```python
from langchain_openai import AzureChatOpenAI
from langchain_openai import AzureOpenAIEmbeddings
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper

generator_llm = LangchainLLMWrapper(AzureChatOpenAI(
    openai_api_version="2023-05-15",
    azure_endpoint=azure_configs["base_url"],
    azure_deployment=azure_configs["model_deployment"],
    model=azure_configs["model_name"],
    validate_base_url=False,
))

# init the embeddings for answer_relevancy, answer_correctness and answer_similarity
generator_embeddings = LangchainEmbeddingsWrapper(AzureOpenAIEmbeddings(
    openai_api_version="2023-05-15",
    azure_endpoint=azure_configs["base_url"],
    azure_deployment=azure_configs["embedding_deployment"],
    model=azure_configs["embedding_name"],
))
```

If you want more information on how to use other Azure services, please refer to the [langchain-azure](https://python.langchain.com/docs/integrations/chat/azure_chat_openai/) documentation.

=== "Others"
If you are using a different LLM provider and using Langchain to interact with it, you can wrap your LLM in `LangchainLLMWrapper` so that it can be used with ragas.

```python
from ragas.llms import LangchainLLMWrapper

generator_llm = LangchainLLMWrapper(your_llm_instance)
```

For a more detailed guide, check out [the guide on customizing models](../../howtos/customizations/customize_models/).

If you are using LlamaIndex, you can use the `LlamaIndexLLMWrapper` to wrap your LLM so that it can be used with ragas.

```python
from ragas.llms import LlamaIndexLLMWrapper

generator_llm = LlamaIndexLLMWrapper(your_llm_instance)
```

For more information on how to use LlamaIndex, please refer to the [LlamaIndex Integration guide](../../howtos/integrations/_llamaindex/).

If you're still not able to use Ragas with your favorite LLM provider, please let us know by commenting on this [issue](https://github.com/explodinggradients/ragas/issues/1617) and we'll add support for it 🙂.
