Commit 998be06

Merge pull request #3617 from santiagxf/santiagxf-patch-1
Reorder TOC deployment options
2 parents: eab53b5 + 163b0a2

File tree: 1 file changed (+24 −24 lines)


articles/ai-foundry/toc.yml

Lines changed: 24 additions & 24 deletions
@@ -52,20 +52,18 @@ items:
5252
items:
5353
- name: Deploying models in Azure AI Foundry
5454
href: concepts/deployments-overview.md
55-
- name: Serverless API
56-
items:
57-
- name: Deploy models as serverless API
58-
href: how-to/deploy-models-serverless.md
59-
- name: Consume serverless API models from a different project or hub
60-
href: how-to/deploy-models-serverless-connect.md
61-
- name: Model and region availability for Serverless API deployments
62-
href: how-to/deploy-models-serverless-availability.md
63-
- name: Content safety for models deployed with serverless APIs
64-
href: concepts/model-catalog-content-safety.md
65-
- name: Managed compute
55+
- name: Azure OpenAI Service
6656
items:
67-
- name: Deploy models via managed compute
68-
href: how-to/deploy-models-managed.md
57+
- name: Azure OpenAI in Azure AI Foundry
58+
href: azure-openai-in-ai-foundry.md
59+
- name: Use Azure OpenAI Service in Azure AI Foundry portal
60+
href: ai-services/how-to/connect-azure-openai.md
61+
- name: Deploy Azure OpenAI models
62+
href: how-to/deploy-models-openai.md
63+
- name: Azure OpenAI Service quotas and limits
64+
href: ../ai-services/openai/quotas-limits.md?context=/azure/ai-foundry/context/context
65+
- name: Troubleshoot deployments and monitoring
66+
href: how-to/troubleshoot-deploy-and-monitor.md
6967
- name: Azure AI model inference
7068
items:
7169
- name: What is Azure AI model inference?
@@ -82,18 +80,20 @@ items:
8280
href: ../ai-foundry/model-inference/how-to/inference.md?context=/azure/ai-foundry/context/context
8381
- name: Azure AI model inference quotas and limits
8482
href: ../ai-foundry/model-inference/quotas-limits.md?context=/azure/ai-foundry/context/context
85-
- name: Azure OpenAI Service
83+
- name: Serverless API
8684
items:
87-
- name: Azure OpenAI in Azure AI Foundry
88-
href: azure-openai-in-ai-foundry.md
89-
- name: Use Azure OpenAI Service in Azure AI Foundry portal
90-
href: ai-services/how-to/connect-azure-openai.md
91-
- name: Deploy Azure OpenAI models
92-
href: how-to/deploy-models-openai.md
93-
- name: Azure OpenAI Service quotas and limits
94-
href: ../ai-services/openai/quotas-limits.md?context=/azure/ai-foundry/context/context
95-
- name: Troubleshoot deployments and monitoring
96-
href: how-to/troubleshoot-deploy-and-monitor.md
85+
- name: Deploy models as serverless API
86+
href: how-to/deploy-models-serverless.md
87+
- name: Consume serverless API models from a different project or hub
88+
href: how-to/deploy-models-serverless-connect.md
89+
- name: Model and region availability for Serverless API deployments
90+
href: how-to/deploy-models-serverless-availability.md
91+
- name: Content safety for models deployed with serverless APIs
92+
href: concepts/model-catalog-content-safety.md
93+
- name: Managed compute
94+
items:
95+
- name: Deploy models via managed compute
96+
href: how-to/deploy-models-managed.md
9797
- name: Work with models from the model catalog
9898
items:
9999
- name: Featured models in the model catalog
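
Taken together, the two hunks move Azure OpenAI Service to the top of the deployment options and push Serverless API and Managed compute below Azure AI model inference. The resulting section of `articles/ai-foundry/toc.yml` should read roughly as follows (reconstructed from the diff; unchanged child entries of "Azure AI model inference" are elided, and exact indentation is approximate):

```yaml
- name: Deploying models in Azure AI Foundry
  href: concepts/deployments-overview.md
- name: Azure OpenAI Service
  items:
  - name: Azure OpenAI in Azure AI Foundry
    href: azure-openai-in-ai-foundry.md
  - name: Use Azure OpenAI Service in Azure AI Foundry portal
    href: ai-services/how-to/connect-azure-openai.md
  - name: Deploy Azure OpenAI models
    href: how-to/deploy-models-openai.md
  - name: Azure OpenAI Service quotas and limits
    href: ../ai-services/openai/quotas-limits.md?context=/azure/ai-foundry/context/context
  - name: Troubleshoot deployments and monitoring
    href: how-to/troubleshoot-deploy-and-monitor.md
- name: Azure AI model inference
  items:
  - name: What is Azure AI model inference?
  # ... unchanged entries elided ...
  - name: Azure AI model inference quotas and limits
    href: ../ai-foundry/model-inference/quotas-limits.md?context=/azure/ai-foundry/context/context
- name: Serverless API
  items:
  - name: Deploy models as serverless API
    href: how-to/deploy-models-serverless.md
  - name: Consume serverless API models from a different project or hub
    href: how-to/deploy-models-serverless-connect.md
  - name: Model and region availability for Serverless API deployments
    href: how-to/deploy-models-serverless-availability.md
  - name: Content safety for models deployed with serverless APIs
    href: concepts/model-catalog-content-safety.md
- name: Managed compute
  items:
  - name: Deploy models via managed compute
    href: how-to/deploy-models-managed.md
- name: Work with models from the model catalog
  items:
  - name: Featured models in the model catalog
```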
