Commit e6be5bb

fix wrong type annotations and nonexistent types (#560)
- The code shown in the document causes warnings in the IDE. ![](https://github.com/explodinggradients/ragas/assets/62790279/826c10ae-a9d0-410a-b681-f07d44aeda17)
- `ragas.llms.LangchainLLM` no longer exists, but `from ragas.llms import LangchainLLM` is still present in the [document](https://github.com/explodinggradients/ragas/blob/c9ba2be93cb698ff7e691952b6f77378b853ef58/docs/howtos/customisations/gcp-vertexai.ipynb#L128).

First time here; apologies for any mistakes.
1 parent 5476079 commit e6be5bb

File tree

3 files changed: +4 additions, -13 deletions

docs/concepts/prompt_adaptation.md

Lines changed: 0 additions & 2 deletions

@@ -39,7 +39,6 @@ Create a sample prompt using `Prompt` class.
 ```{code-block} python
 
 from langchain.chat_models import ChatOpenAI
-from ragas.llms import LangchainLLMWrapper
 from ragas.llms.prompt import Prompt
 
 noun_extractor = Prompt(
@@ -55,7 +54,6 @@ examples=[{
 )
 
 openai_model = ChatOpenAI(model_name="gpt-4")
-openai_model = LangchainLLMWrapper(llm=openai_model)
 ```
 
 Prompt adaption is done using the `.adapt` method:
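For context, the `LangchainLLMWrapper` lines removed above followed the classic adapter pattern: a thin class that makes a Langchain chat model conform to the interface the caller expects. A minimal self-contained sketch of that pattern, with stub classes standing in for the real ragas/langchain APIs (none of these names are the actual library API):

```python
class StubChatModel:
    """Stand-in for a Langchain chat model such as ChatOpenAI."""
    def invoke(self, text: str) -> str:
        return f"model reply to: {text}"


class LLMWrapper:
    """Illustrative adapter: exposes `generate`, the method a metrics
    framework might call, on top of any object that has `invoke`."""
    def __init__(self, llm: StubChatModel):
        self.llm = llm

    def generate(self, prompt: str) -> str:
        return self.llm.invoke(prompt)


model = StubChatModel()
wrapped = LLMWrapper(llm=model)
print(wrapped.generate("extract nouns"))  # model reply to: extract nouns
```

After this commit the prompt-adaptation doc no longer shows the wrapping step, so only the bare `ChatOpenAI` instance remains in the snippet.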

docs/howtos/customisations/gcp-vertexai.ipynb

Lines changed: 3 additions & 9 deletions

@@ -111,9 +111,7 @@
 "]\n",
 "```\n",
 "\n",
-"By default Ragas uses `ChatOpenAI` for evaluations, lets swap that out with `ChatVertextAI`. We also need to change the embeddings used for evaluations for `OpenAIEmbeddings` to `VertextAIEmbeddings` for metrices that need it, which in our case is `answer_relevancy`.\n",
-"\n",
-"Now in order to use the new `ChatVertextAI` llm instance with Ragas metrics, you have to create a new instance of `RagasLLM` using the `ragas.llms.LangchainLLM` wrapper. Its a simple wrapper around langchain that make Langchain LLM/Chat instances compatible with how Ragas metrics will use them."
+"By default Ragas uses `ChatOpenAI` for evaluations, lets swap that out with `ChatVertextAI`. We also need to change the embeddings used for evaluations for `OpenAIEmbeddings` to `VertextAIEmbeddings` for metrices that need it, which in our case is `answer_relevancy`."
 ]
 },
 {
@@ -125,7 +123,6 @@
 "source": [
 "import google.auth\n",
 "from langchain.chat_models import ChatVertexAI\n",
-"from ragas.llms import LangchainLLM\n",
 "from langchain.embeddings import VertexAIEmbeddings\n",
 "\n",
 "\n",
@@ -136,11 +133,8 @@
 "# authenticate to GCP\n",
 "creds, _ = google.auth.default(quota_project_id=\"tmp-project-404003\")\n",
 "# create Langchain LLM and Embeddings\n",
-"chat = ChatVertexAI(credentials=creds)\n",
-"vertextai_embeddings = VertexAIEmbeddings(credentials=creds)\n",
-"\n",
-"# create a wrapper around it\n",
-"ragas_vertexai_llm = LangchainLLM(chat)"
+"ragas_vertexai_llm = ChatVertexAI(credentials=creds)\n",
+"vertextai_embeddings = VertexAIEmbeddings(credentials=creds)"
 ]
 },
 {
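The net effect of the last hunk above is that both clients are constructed directly from the shared credentials, with no intermediate `LangchainLLM` wrapper step. A self-contained sketch of that construction pattern, using stub classes in place of `ChatVertexAI` and `VertexAIEmbeddings` (the class bodies are illustrative; only the keyword-argument shape mirrors the diff):

```python
class Credentials:
    """Stand-in for the object returned by google.auth.default()."""
    def __init__(self, quota_project_id: str):
        self.quota_project_id = quota_project_id


class StubChatVertexAI:
    """Stand-in for langchain's ChatVertexAI."""
    def __init__(self, credentials: Credentials):
        self.credentials = credentials


class StubVertexAIEmbeddings:
    """Stand-in for langchain's VertexAIEmbeddings."""
    def __init__(self, credentials: Credentials):
        self.credentials = credentials


# After the commit: no wrapper; both objects are built directly from
# the same credentials, as in the '+' lines of the hunk.
creds = Credentials(quota_project_id="tmp-project-404003")
ragas_vertexai_llm = StubChatVertexAI(credentials=creds)
vertextai_embeddings = StubVertexAIEmbeddings(credentials=creds)
```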

docs/howtos/customisations/llms.ipynb

Lines changed: 1 addition & 2 deletions
@@ -188,7 +188,7 @@
 "id": "c9ddf74a-9830-4e1a-a4dd-7e5ec17a71e4",
 "metadata": {},
 "source": [
-"Now lets create an Langchain llm instance and wrap it with `LangchainLLMWrapper` class. Because vLLM can run in OpenAI compatibilitiy mode, we can use the `ChatOpenAI` class as it is with small tweaks."
+"Now lets create an Langchain llm instance. Because vLLM can run in OpenAI compatibilitiy mode, we can use the `ChatOpenAI` class as it is with small tweaks."
 ]
 },
 {
@@ -199,7 +199,6 @@
 "outputs": [],
 "source": [
 "from langchain_openai.chat_models import ChatOpenAI\n",
-"from ragas.llms.base import LangchainLLMWrapper\n",
 "\n",
 "inference_server_url = \"http://localhost:8080/v1\"\n",
 "\n",
