
Commit 094cc1b

feat: ds

1 parent f428e7b

File tree

3 files changed: +1763 -0 lines changed


articles/ai-foundry/model-inference/overview.md

Lines changed: 3 additions & 0 deletions
@@ -18,6 +18,9 @@ recommendations: false
Azure AI model inference provides access to the most powerful models available in the Azure AI model catalog. The models come from key model providers in the industry, including OpenAI, Microsoft, Meta, Mistral, Cohere, G42, and AI21 Labs. These models can be integrated with software solutions to perform a wide range of tasks, including content generation, summarization, image understanding, semantic search, and code generation.

+> [!TIP]
+> DeepSeek R1 is available for deployment as a [Serverless API endpoint](../../ai-studio/how-to/deploy-models-serverless.md).
+
Azure AI model inference provides a way to **consume models as APIs without hosting them on your infrastructure**. Models are hosted in Microsoft-managed infrastructure, which enables API-based access to the model provider's model. API-based access can dramatically reduce the cost of accessing a model and simplify the provisioning experience.

Azure AI model inference is part of Azure AI Services. You can access the service through [REST APIs](../../ai-studio/reference/reference-model-inference-api.md) or [SDKs in several languages](supported-languages.md), including Python, C#, JavaScript, and Java. You can also use Azure AI model inference from [Azure AI Foundry by configuring a connection](how-to/configure-project-connection.md).
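The TIP added in this commit points to the serverless deployment how-to. As a rough, hedged sketch of that route (not part of the commit itself), the snippet below uses the azure-ai-ml Python SDK's ServerlessEndpoint entity; the subscription, resource group, project name, endpoint name, and the DeepSeek R1 registry model ID are assumptions for illustration only, so defer to the linked article for the actual flow.

```python
# Rough sketch (assumptions: placeholder subscription/resource group/project,
# endpoint name, and the registry model ID for DeepSeek R1).
# pip install azure-ai-ml azure-identity
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ServerlessEndpoint
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",         # placeholder
    resource_group_name="<resource-group>",      # placeholder
    workspace_name="<ai-foundry-project-name>",  # placeholder
)

# Assumed model asset ID; check the model card in the catalog for the real one.
endpoint = ServerlessEndpoint(
    name="deepseek-r1-endpoint",
    model_id="azureml://registries/azureml-deepseek/models/DeepSeek-R1",
)

created = ml_client.serverless_endpoints.begin_create_or_update(endpoint).result()
keys = ml_client.serverless_endpoints.get_keys(created.name)
print(created.scoring_uri)  # base URI to call with keys.primary_key
```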

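The overview paragraphs above also describe consuming models as APIs through REST or the language SDKs. A minimal sketch with the azure-ai-inference Python package follows; the endpoint URL, API key, and the DeepSeek-R1 model name are placeholders, not values taken from this commit.

```python
# Minimal sketch (assumptions: placeholder endpoint, key, and model name).
# pip install azure-ai-inference
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# One Azure AI Services endpoint fronts every deployed model; the request's
# `model` parameter selects which one answers.
client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",  # placeholder
    credential=AzureKeyCredential("<your-api-key>"),                  # placeholder
)

response = client.complete(
    model="DeepSeek-R1",  # assumes a deployment with this name exists
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize what Azure AI model inference does."),
    ],
)
print(response.choices[0].message.content)
```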