
Commit 7280b85

Commit message: removing info
Parent: da42215

File tree: 4 files changed (+8, −37 lines)


articles/ai-services/agents/concepts/model-region-support.md

Lines changed: 6 additions & 35 deletions
@@ -7,7 +7,7 @@ author: aahill
 ms.author: aahi
 ms.service: azure-ai-agent-service
 ms.topic: conceptual
-ms.date: 01/07/2025
+ms.date: 01/27/2025
 ms.custom: azure-ai-agents
 ---

@@ -28,48 +28,19 @@ Azure AI Agent Service supports the same models as the chat completions API in A
 | westus |||| - || - || - | - ||| - |


-## More models
+## Non-Microsoft models

 The Azure AI Agent Service also supports the following models from the Azure AI Foundry model catalog.

 * Llama 3.1-70B-instruct
 * Mistral-large-2407
 * Cohere command R+

-To use these models, use the Azure AI Foundry portal to make a deployment, and then reference it in your agent.
-
-1. Go to the [Azure AI Foundry portal](https://ai.azure.com/), select **Model catalog** in the left navigation menu, and scroll down to **Meta-Llama-3-70B-Instruct**. You can also find and use one of the models listed previously.
-
-1. Select **Deploy**.
-
-1. In the Deployment options screen that appears, select **Serverless API** with Azure AI Content Safety.
-
-    :::image type="content" source="../media/llama/llama-deployment.png" alt-text="An image of the llama model project selection screen.":::
-
-1. Select your project and then select **Subscribe and deploy**.
-
-    :::image type="content" source="../media/llama/llama-deployment-2.png" alt-text="An image of the llama model deployment screen.":::
-
-1. Add the serverless connection to your hub/project. The deployment name you choose is the one that you reference in your code.
-
-1. When calling the agent creation API, set the `model` parameter to your deployment name. For example:
-
-# [Python](#tab/python)
-
-```python
-agent = project_client.agents.create_agent(model="llama-3", name="my-agent", instructions="You are a helpful agent")
-```
-
-# [C#](#tab/csharp)
-
-```csharp
-Response<Agent> agentResponse = await client.CreateAgentAsync(
-    model: "llama-3",
-    name: "My agent",
-    instructions: "You are a helpful agent");
-```
----
+To use these models, use the Azure AI Foundry portal to make a deployment, and then reference the deployment name in your agent. For example:

+```python
+agent = project_client.agents.create_agent(model="llama-3", name="my-agent", instructions="You are a helpful agent")
+```

 ## Next steps

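Reviewer note: the shortened section now keeps only the single Python call above. Under the hood, that call sends a JSON body whose fields mirror the snippet's parameters; the sketch below only assembles that body to show how the deployment name (not the catalog model ID) flows through. The helper name and the exact wire format are assumptions for illustration, not part of this diff or the Azure SDK.

```python
import json


def build_create_agent_body(model: str, name: str, instructions: str) -> str:
    """Assemble a JSON body with the same fields as the create_agent call.

    Hypothetical helper: "llama-3" is the serverless deployment name chosen
    in the Foundry portal (carried over from the diff's example), not the
    model catalog ID.
    """
    return json.dumps(
        {"model": model, "name": name, "instructions": instructions}
    )


body = build_create_agent_body(
    model="llama-3",
    name="my-agent",
    instructions="You are a helpful agent",
)
print(body)
```

This makes the point of the doc change concrete: after deployment, only the chosen deployment name is referenced in code, which is why the step-by-step portal walkthrough could be cut.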
articles/ai-services/agents/how-to/tools/openapi-spec.md

Lines changed: 2 additions & 2 deletions
@@ -187,7 +187,7 @@ An example of the audience would be ```https://cognitiveservices.azure.com/```.
 # Create agent with OpenAPI tool and process assistant run
 with project_client:
     agent = project_client.agents.create_agent(
-        model="gpt-4o-mini",
+        model="gpt-4o",
         name="my-assistant",
         instructions="You are a helpful assistant",
         tools=openapi.definitions
@@ -201,7 +201,7 @@ with project_client:
 # [C#](#tab/csharp)
 ```csharp
 Response<Agent> agentResponse = await client.CreateAgentAsync(
-    model: "gpt-4",
+    model: "gpt-4o",
     name: "azure-function-agent-foo",
     instructions: "You are a helpful assistant.",
     tools: new List<ToolDefinition> { openapiTool }
Binary file not shown.
Binary file not shown.

0 commit comments
