Commit 21d6f1f

docs: fix lets typos in multiple files (#31481)
Fix typo
1 parent f97e182 commit 21d6f1f

File tree

7 files changed: +11, -11 lines changed


cookbook/rag_with_quantized_embeddings.ipynb

Lines changed: 5 additions & 5 deletions
@@ -53,7 +53,7 @@
  "id": "f5ccda4e-7af5-4355-b9c4-25547edf33f9",
  "metadata": {},
  "source": [
- "Lets first load up this paper, and split into text chunks of size 1000."
+ "Let's first load up this paper, and split into text chunks of size 1000."
  ]
 },
 {
@@ -241,7 +241,7 @@
  "id": "360b2837-8024-47e0-a4ba-592505a9a5c8",
  "metadata": {},
  "source": [
- "With our embedder in place, lets define our retriever:"
+ "With our embedder in place, let's define our retriever:"
  ]
 },
 {
@@ -312,7 +312,7 @@
  "id": "d84ea8f4-a5de-4d76-b44d-85e56583f489",
  "metadata": {},
  "source": [
- "Lets write our documents into our new store. This will use our embedder on each document."
+ "Let's write our documents into our new store. This will use our embedder on each document."
  ]
 },
 {
@@ -339,7 +339,7 @@
  "id": "580bc212-8ecd-4d28-8656-b96fcd0d7eb6",
  "metadata": {},
  "source": [
- "Great! Our retriever is good to go. Lets load up an LLM, that will reason over the retrieved documents:"
+ "Great! Our retriever is good to go. Let's load up an LLM, that will reason over the retrieved documents:"
  ]
 },
 {
@@ -430,7 +430,7 @@
  "id": "3bc53602-86d6-420f-91b1-fc2effa7e986",
  "metadata": {},
  "source": [
- "Excellent! lets ask it a question.\n",
+ "Excellent! Let's ask it a question.\n",
  "We will also use a verbose and debug, to check which documents were used by the model to produce the answer."
  ]
 },
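The first cell touched above splits the loaded paper into text chunks of size 1000 before embedding. As a rough illustration of that step only (a hand-rolled character splitter with an arbitrary overlap of 100, not the notebook's actual LangChain splitter):

```python
def split_into_chunks(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into fixed-size character chunks with a small overlap.

    Toy stand-in for a real text splitter; the overlap keeps a sentence that
    straddles a boundary visible in both neighboring chunks.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

paper = "x" * 2500  # stand-in for the loaded paper text
chunks = split_into_chunks(paper)
print(len(chunks))  # 3 chunks: two of 1000 chars, one of 700
```

Each chunk then gets its own embedding, which is what makes per-chunk retrieval possible later in the notebook.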

docs/docs/how_to/llm_caching.ipynb

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@
  "from langchain_core.globals import set_llm_cache\n",
  "from langchain_openai import OpenAI\n",
  "\n",
- "# To make the caching really obvious, lets use a slower and older model.\n",
+ "# To make the caching really obvious, let's use a slower and older model.\n",
  "# Caching supports newer chat models as well.\n",
  "llm = OpenAI(model=\"gpt-3.5-turbo-instruct\", n=2, best_of=2)"
  ]
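The cell above wires a cache into LangChain with `set_llm_cache`; the "really obvious" effect it mentions is that a repeated prompt returns instantly instead of hitting the slow model again. A dependency-free sketch of that mechanism (a toy dict-backed cache standing in for LangChain's `InMemoryCache`, with a fake slow model):

```python
import time

class InMemoryLLMCache:
    """Toy stand-in for an LLM cache: maps prompt strings to completions."""
    def __init__(self):
        self._store = {}

    def lookup(self, prompt: str):
        return self._store.get(prompt)

    def update(self, prompt: str, completion: str):
        self._store[prompt] = completion

def slow_llm(prompt: str) -> str:
    time.sleep(0.05)  # simulate a slow, older model
    return f"answer to: {prompt}"

cache = InMemoryLLMCache()

def cached_call(prompt: str) -> str:
    hit = cache.lookup(prompt)
    if hit is not None:
        return hit            # cache hit: no model call at all
    result = slow_llm(prompt)
    cache.update(prompt, result)
    return result

first = cached_call("Tell me a joke")   # slow: goes to the model
second = cached_call("Tell me a joke")  # fast: served from the cache
```

Real LLM caches key on more than the prompt (model name and parameters too), which is why changing `model` or `n` in the cell above would miss the cache.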

docs/docs/integrations/llm_caching.ipynb

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@
  "from langchain.globals import set_llm_cache\n",
  "from langchain_openai import OpenAI\n",
  "\n",
- "# To make the caching really obvious, lets use a slower and older model.\n",
+ "# To make the caching really obvious, let's use a slower and older model.\n",
  "# Caching supports newer chat models as well.\n",
  "llm = OpenAI(model=\"gpt-3.5-turbo-instruct\", n=2, best_of=2)"
  ]

docs/docs/integrations/llms/lmformatenforcer_experimental.ipynb

Lines changed: 1 addition & 1 deletion
@@ -211,7 +211,7 @@
  "id": "b6e7b9cf-8ce5-4f87-b4bf-100321ad2dd1",
  "metadata": {},
  "source": [
- "***The result is usually closer to the JSON object of the schema definition, rather than a json object conforming to the schema. Lets try to enforce proper output.***"
+ "***The result is usually closer to the JSON object of the schema definition, rather than a json object conforming to the schema. Let's try to enforce proper output.***"
  ]
 },
 {
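The cell edited here is about making model output actually conform to a JSON schema. lm-format-enforcer does this by constraining generation token by token; a post-hoc validator like the sketch below is a much weaker substitute, shown only to make concrete what "a JSON object conforming to the schema" means (the schema and key names are illustrative, not the notebook's example):

```python
import json

# Illustrative expected shape: each key must be present with the given type.
schema = {"name": str, "num_args": int}

def conforms(raw: str, schema: dict) -> bool:
    """Check that raw text parses as JSON and matches the key/type pairs exactly."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return (isinstance(obj, dict)
            and set(obj) == set(schema)
            and all(isinstance(obj[k], t) for k, t in schema.items()))

ok = conforms('{"name": "add", "num_args": 2}', schema)   # valid instance
bad = conforms('{"name": "add"}', schema)                 # missing a required key
```

The point of the sentence being fixed is exactly this gap: an unconstrained model often echoes something that looks like the schema *definition* rather than a valid *instance* of it.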

docs/docs/integrations/providers/portkey/index.md

Lines changed: 1 addition & 1 deletion
@@ -49,7 +49,7 @@ The power of the AI gateway comes when you're able to use the above code snippet

 Let's modify the code above to make a call to Anthropic's `claude-3-opus-20240229` model.

-Portkey supports **[Virtual Keys](https://docs.portkey.ai/docs/product/ai-gateway-streamline-llm-integrations/virtual-keys)** which are an easy way to store and manage API keys in a secure vault. Lets try using a Virtual Key to make LLM calls. You can navigate to the Virtual Keys tab in Portkey and create a new key for Anthropic.
+Portkey supports **[Virtual Keys](https://docs.portkey.ai/docs/product/ai-gateway-streamline-llm-integrations/virtual-keys)** which are an easy way to store and manage API keys in a secure vault. Let's try using a Virtual Key to make LLM calls. You can navigate to the Virtual Keys tab in Portkey and create a new key for Anthropic.

 The `virtual_key` parameter sets the authentication and provider for the AI provider being used. In our case we're using the Anthropic Virtual key.
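The `virtual_key` idea in the text above is indirection: the client holds an opaque ID, and the gateway's vault resolves it to the real provider credential. A minimal conceptual sketch (all names, key values, and the header shape are hypothetical, not Portkey's actual API):

```python
# Hypothetical vault: virtual key IDs map to (provider, real API key).
# Mirrors the *idea* of Portkey Virtual Keys, not Portkey's implementation.
VAULT = {
    "anthropic-vk-1": ("anthropic", "sk-ant-placeholder"),
    "openai-vk-1": ("openai", "sk-placeholder"),
}

def resolve(virtual_key: str) -> dict:
    """Resolve a virtual key into the headers a gateway would attach upstream."""
    provider, real_key = VAULT[virtual_key]
    return {"x-provider": provider, "authorization": f"Bearer {real_key}"}

headers = resolve("anthropic-vk-1")
```

The benefit this buys, as the doc says, is that the real key lives only in the vault: rotating it or switching providers means editing the vault entry, not every client.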

docs/docs/integrations/text_embedding/optimum_intel.ipynb

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@
  "id": "34318164-7a6f-47b6-8690-3b1d71e1fcfc",
  "metadata": {},
  "source": [
- "Lets ask a question, and compare to 2 documents. The first contains the answer to the question, and the second one does not. \n",
+ "Let's ask a question, and compare to 2 documents. The first contains the answer to the question, and the second one does not. \n",
  "\n",
  "We can check better suits our query."
  ]
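The comparison described in that cell — which of two documents better suits a query — is typically done with cosine similarity between the embedding vectors. A stdlib-only sketch with toy 3-dimensional vectors (real embedders return hundreds of dimensions; the values here are invented to make the geometry obvious):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product of the vectors over the product of their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [1.0, 0.2, 0.0]
doc_with_answer = [0.9, 0.3, 0.1]     # points in roughly the same direction
doc_without_answer = [0.0, 0.1, 1.0]  # nearly orthogonal to the query

closer = cosine(query, doc_with_answer) > cosine(query, doc_without_answer)
```

A higher cosine score means the document embedding points in nearly the same direction as the query embedding, which is the retrieval signal the notebook checks.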

docs/docs/integrations/text_embedding/premai.ipynb

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@
  "source": [
  "## PremEmbeddings\n",
  "\n",
- "In this section we are going to dicuss how we can get access to different embedding model using `PremEmbeddings` with LangChain. Lets start by importing our modules and setting our API Key. "
+ "In this section we are going to dicuss how we can get access to different embedding model using `PremEmbeddings` with LangChain. Let's start by importing our modules and setting our API Key. "
  ]
 },
 {
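The "setting our API Key" step that cell introduces usually means exporting the key into the process environment so downstream clients can pick it up. A generic sketch (the variable name `PREMAI_API_KEY` and the placeholder value are assumptions, not taken from the diff above):

```python
import os

# Hypothetical variable name; real client libraries document the exact
# environment variable they read their credential from.
os.environ.setdefault("PREMAI_API_KEY", "placeholder-key-for-local-testing")

api_key = os.environ["PREMAI_API_KEY"]
```

`setdefault` keeps a key already exported in the shell intact, so the placeholder is only used when nothing is configured.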

0 commit comments

Comments
 (0)