Commit cb68667

restructure model docs in TOC

1 parent: d7430d0

File tree

1 file changed: +34 −27 lines


articles/ai-studio/toc.yml

Lines changed: 34 additions & 27 deletions
@@ -44,6 +44,40 @@ items:
     href: concepts/model-benchmarks.md
   - name: How to use model benchmarking
     href: how-to/benchmark-model-in-catalog.md
+- name: Model deployment in Azure AI Foundry
+  items:
+  - name: Deploying models in Azure AI Foundry
+    href: concepts/deployments-overview.md
+  - name: Serverless API
+    items:
+    - name: Deploy models as serverless API
+      href: how-to/deploy-models-serverless.md
+    - name: Consume serverless API models from a different project or hub
+      href: how-to/deploy-models-serverless-connect.md
+    - name: Model and region availability for Serverless API deployments
+      href: how-to/deploy-models-serverless-availability.md
+  - name: Managed compute
+    items:
+    - name: Deploy models via managed compute
+      href: how-to/deploy-models-managed.md
+  - name: Model inference
+    items:
+    - name: What is Azure AI model inference?
+      href: ../ai-foundry/model-inference/overview.md?context=/azure/ai-studio/context/context
+    - name: Upgrade from GitHub Models
+      href: ../ai-foundry/model-inference/how-to/quickstart-github-models.md?context=/azure/ai-studio/context/context
+    - name: Add and configure models
+      href: ../ai-foundry/model-inference/how-to/create-model-deployments.md?context=/azure/ai-studio/context/context
+    - name: Use the inference endpoint
+      href: ../ai-foundry/model-inference/concepts/endpoints.md?context=/azure/ai-studio/context/context
+  - name: Azure OpenAI Service
+    items:
+    - name: Deploy Azure OpenAI models
+      href: how-to/deploy-models-openai.md
+  - name: Quotas and limits
+    href: ../ai-foundry/model-inference/quotas-limits.md?context=/azure/ai-studio/context/context
+  - name: Troubleshoot deployments and monitoring
+    href: how-to/troubleshoot-deploy-and-monitor.md
 - name: Featured models
   items:
   - name: AI21 Jamba models
@@ -318,33 +352,6 @@ items:
     displayName: code,sdk
   - name: Develop with Semantic Kernel
     href: how-to/develop/semantic-kernel.md
-- name: Model inference
-  items:
-  - name: What is Azure AI model inference?
-    href: ../ai-foundry/model-inference/overview.md?context=/azure/ai-studio/context/context
-  - name: Upgrade from GitHub Models
-    href: ../ai-foundry/model-inference/how-to/quickstart-github-models.md?context=/azure/ai-studio/context/context
-  - name: Add and configure models
-    href: ../ai-foundry/model-inference/how-to/create-model-deployments.md?context=/azure/ai-studio/context/context
-  - name: Use the inference endpoint
-    href: ../ai-foundry/model-inference/concepts/endpoints.md?context=/azure/ai-studio/context/context
-- name: Deployments
-  items:
-  - name: Deploying models in Azure AI Foundry
-    href: concepts/deployments-overview.md
-  - name: Deploy models as serverless API
-    href: how-to/deploy-models-serverless.md
-  - name: Deploy models via managed compute
-    href: how-to/deploy-models-managed.md
-  - name: Consume serverless API models from a different project or hub
-    href: how-to/deploy-models-serverless-connect.md
-    displayName: maas, paygo, models-as-a-service
-  - name: Model and region availability for Serverless API deployments
-    href: how-to/deploy-models-serverless-availability.md
-  - name: Quotas and limits
-    href: ai-services/concepts/quotas-limits.md
-  - name: Troubleshoot deployments and monitoring
-    href: how-to/troubleshoot-deploy-and-monitor.md
 - name: Optimizations
   items:
   - name: Prompt engineering
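The restructure gathers pages that previously sat under two top-level sections ("Model inference" and "Deployments") into one nested "Model deployment in Azure AI Foundry" node. As an illustrative sketch (not part of the commit), the nesting can be modeled with the `name`/`items`/`href` shape that toc.yml entries use, here as plain Python dicts with an abbreviated subset of the entries; the walker mirrors how a TOC renderer would indent children:

```python
# Hypothetical sketch of the new TOC node, using a trimmed subset of the
# entries added in this commit. `name`/`items`/`href` follow the shape
# visible in the diff; the dict modeling is purely illustrative.
toc_node = {
    "name": "Model deployment in Azure AI Foundry",
    "items": [
        {"name": "Deploying models in Azure AI Foundry",
         "href": "concepts/deployments-overview.md"},
        {"name": "Serverless API", "items": [
            {"name": "Deploy models as serverless API",
             "href": "how-to/deploy-models-serverless.md"},
        ]},
        {"name": "Managed compute", "items": [
            {"name": "Deploy models via managed compute",
             "href": "how-to/deploy-models-managed.md"},
        ]},
    ],
}

def walk(node, depth=0):
    """Yield one indented title per TOC entry, depth-first."""
    yield "  " * depth + node["name"]
    for child in node.get("items", []):
        yield from walk(child, depth + 1)

for line in walk(toc_node):
    print(line)
```

Running the walker prints the section title followed by its children at increasing indents, which is the hierarchy the diff's `+` lines introduce.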
