
Commit e8fd7df — "remove more code and tip"
1 parent 98cec1b

1 file changed: +2 -9 lines

articles/ai-foundry/how-to/develop/langchain.md (2 additions, 9 deletions)
@@ -64,10 +64,7 @@ To use LLMs deployed in Azure AI Foundry portal, you need the endpoint and crede
 > [!TIP]
 > If your model was deployed with Microsoft Entra ID support, you don't need a key.
 
-In this scenario, we placed both the endpoint URL and key in the following environment variables.
-
-> [!TIP]
-> The endpoint you copied might have extra text after /models. Delete that and stop at /models as shown here.
+In this scenario, set the endpoint URL and key as environment variables. (If the endpoint you copied includes additional text after `/models`, remove it so the URL ends at `/models` as shown below.)
 
 ```bash
 export AZURE_INFERENCE_ENDPOINT="https://<resource>.services.ai.azure.com/models"
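As an illustrative aside (not part of the diffed article), the tip's rule — trim anything after `/models` from a copied endpoint URL — can be expressed as a small helper. The function name and sample URL below are hypothetical:

```python
def normalize_models_endpoint(url: str) -> str:
    """Trim anything after '/models' so the endpoint ends exactly at /models."""
    marker = "/models"
    idx = url.find(marker)
    if idx == -1:
        raise ValueError("URL does not contain '/models'")
    return url[: idx + len(marker)]

# A copied endpoint often carries trailing route segments; trim them off:
print(normalize_models_endpoint(
    "https://myresource.services.ai.azure.com/models/chat/completions?api-version=2024-05-01-preview"
))
# https://myresource.services.ai.azure.com/models
```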
@@ -218,6 +215,7 @@ Then create the client:
 [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=create_embed_model_client)]
 
+
 ```python
 from langchain_azure_ai.embeddings import AzureAIEmbeddingsModel
@@ -232,11 +230,6 @@ The following example shows a simple example using a vector store in memory:
 
 [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=create_vector_store)]
 
-```python
-from langchain_core.vectorstores import InMemoryVectorStore
-
-vector_store = InMemoryVectorStore(embed_model)
-```
 
 Let's add some documents:
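The removed inline block duplicated the notebook include above it. As a conceptual aside, what an in-memory vector store like `InMemoryVectorStore(embed_model)` sets up can be sketched in plain Python — this toy class and the character-frequency "embedding" are purely illustrative, not the LangChain API:

```python
import math

class TinyInMemoryVectorStore:
    """Toy sketch of an in-memory vector store: embed texts, keep the
    vectors in a list, rank by cosine similarity at query time."""

    def __init__(self, embed_fn):
        self.embed_fn = embed_fn   # in LangChain this role is played by the embeddings model
        self.docs = []             # (text, vector) pairs

    def add_texts(self, texts):
        for t in texts:
            self.docs.append((t, self.embed_fn(t)))

    def similarity_search(self, query, k=1):
        q = self.embed_fn(query)
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / ((na * nb) or 1.0)
        ranked = sorted(self.docs, key=lambda d: cos(q, d[1]), reverse=True)
        return [t for t, _ in ranked[:k]]

# Toy "embedding": a 26-bucket character-frequency vector, just to run the sketch.
def toy_embed(text):
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    return vec

store = TinyInMemoryVectorStore(toy_embed)
store.add_texts(["cats purr", "dogs bark"])
print(store.similarity_search("a purring cat", k=1))  # → ['cats purr']
```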

0 commit comments
