    ))
    ```

    If you want more information on how to use other AWS services, please refer to the [langchain-aws](https://python.langchain.com/docs/integrations/providers/aws/) documentation.

=== "Azure OpenAI"
    Install the langchain-openai package.

    ```bash
    pip install langchain-openai
    ```

    Ensure you have your Azure OpenAI key ready and available in your environment.

    ```python
    import os

    os.environ["AZURE_OPENAI_API_KEY"] = "your-azure-openai-key"

    # other configuration
    azure_configs = {
        "base_url": "",  # your endpoint
        "model_deployment": "",  # your model deployment name
        "model_name": "",  # your model name
        "embedding_deployment": "",  # your embedding deployment name
        "embedding_name": "",  # your embedding name
    }
    ```
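    As a quick sanity check before wiring up the clients, you can confirm that every field in the config has been filled in. (`missing_fields` below is a hypothetical helper for illustration, not part of ragas.)

    ```python
    def missing_fields(config: dict) -> list:
        # Return the names of any config entries that are still blank.
        return [key for key, value in config.items() if not value]

    partial_config = {"base_url": "https://example.openai.azure.com/", "model_name": ""}
    print(missing_fields(partial_config))  # ['model_name']
    ```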

    Define your LLMs and wrap them in `LangchainLLMWrapper` (and your embeddings in `LangchainEmbeddingsWrapper`) so that they can be used with ragas.

    ```python
    from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings
    from ragas.embeddings import LangchainEmbeddingsWrapper
    from ragas.llms import LangchainLLMWrapper

    evaluator_llm = LangchainLLMWrapper(AzureChatOpenAI(
        openai_api_version="2023-05-15",
        azure_endpoint=azure_configs["base_url"],
        azure_deployment=azure_configs["model_deployment"],
        model=azure_configs["model_name"],
        validate_base_url=False,
    ))

    # init the embeddings for answer_relevancy, answer_correctness and answer_similarity
    evaluator_embeddings = LangchainEmbeddingsWrapper(AzureOpenAIEmbeddings(
        openai_api_version="2023-05-15",
        azure_endpoint=azure_configs["base_url"],
        azure_deployment=azure_configs["embedding_deployment"],
        model=azure_configs["embedding_name"],
    ))
    ```
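    Since the chat and embedding clients share the endpoint and API version, the repeated keyword arguments can be assembled from the config dict by a small helper. (`azure_client_kwargs` is a hypothetical convenience function, not part of ragas or langchain.)

    ```python
    def azure_client_kwargs(config: dict, kind: str, api_version: str = "2023-05-15") -> dict:
        # Select the deployment/model keys for either the chat model or the embeddings.
        deployment_key = "model_deployment" if kind == "chat" else "embedding_deployment"
        name_key = "model_name" if kind == "chat" else "embedding_name"
        return {
            "openai_api_version": api_version,
            "azure_endpoint": config["base_url"],
            "azure_deployment": config[deployment_key],
            "model": config[name_key],
        }

    sample = {
        "base_url": "https://example.openai.azure.com/",
        "model_deployment": "chat-deploy",
        "model_name": "gpt-4o",
        "embedding_deployment": "embed-deploy",
        "embedding_name": "text-embedding-3-small",
    }
    print(azure_client_kwargs(sample, "embedding")["azure_deployment"])  # embed-deploy
    ```

    For example, the chat client above could then be built as `AzureChatOpenAI(**azure_client_kwargs(azure_configs, "chat"), validate_base_url=False)`.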

    If you want more information on how to use other Azure services, please refer to the [langchain-azure](https://python.langchain.com/docs/integrations/chat/azure_chat_openai/) documentation.

=== "Others"
    If you are using a different LLM provider and using Langchain to interact with it, you can wrap your LLM in `LangchainLLMWrapper` so that it can be used with ragas.

    ```python
    from ragas.llms import LangchainLLMWrapper

    evaluator_llm = LangchainLLMWrapper(your_llm_instance)
    ```

    For a more detailed guide, check out [the guide on customizing models](../../howtos/customizations/customize_models/).

    If you are using LlamaIndex, you can use the `LlamaIndexLLMWrapper` to wrap your LLM so that it can be used with ragas.

    ```python
    from ragas.llms import LlamaIndexLLMWrapper

    evaluator_llm = LlamaIndexLLMWrapper(your_llm_instance)
    ```

    For more information on how to use LlamaIndex, please refer to the [LlamaIndex Integration guide](../../howtos/integrations/_llamaindex/).

    If you are still not able to use Ragas with your favorite LLM provider, please let us know by commenting on this [issue](https://github.com/explodinggradients/ragas/issues/1617) and we'll add support for it 🙂.