Commit 4fc1f45

Remove image from NVIDIA inference microservice guide
1 parent 67f249e commit 4fc1f45

File tree

1 file changed (+0 −3 lines)


articles/ai-foundry/how-to/deploy-nvidia-inference-microservice.md

Lines changed: 0 additions & 3 deletions
@@ -59,9 +59,6 @@ Get improved TCO (total cost of ownership) and performance with NVIDIA NIMs offe
 1. Sign in to [Azure AI Foundry](https://ai.azure.com) and go to the **Home** page.
 2. Select **Model catalog** from the left sidebar.
 3. In the filters section, select **Collections** and select **NVIDIA**.
-
-:::image type="content" source="../media/how-to/deploy-nvidia-inference-microservice/nvidia-collections.png" alt-text="A screenshot showing how to filter by NVIDIA collections models in the catalog." lightbox="../media/how-to/deploy-nvidia-inference-microservice/nvidia-collections.png":::
-
 4. Select the NVIDIA NIM of your choice. In this article, we are using **Llama-3.3-70B-Instruct-NIM-microservice** as an example.
 5. Select **Deploy**.
 6. Select one of the NVIDIA GPU based VM SKUs supported for the NIM, based on your intended workload. You need to have quota in your Azure subscription.
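After the steps in the diffed guide complete, the deployment exposes an OpenAI-compatible chat completions API (the standard interface for NIM microservices). A minimal sketch of building a request to it, using a hypothetical endpoint URL and API key placeholder (your actual values come from the deployment's endpoint page):

```python
# Sketch: calling a deployed NIM endpoint via its OpenAI-compatible
# chat completions API. ENDPOINT and API_KEY are placeholders, not real values.
import json
import urllib.request

ENDPOINT = "https://<your-deployment>.<region>.inference.ml.azure.com/v1/chat/completions"
API_KEY = "<your-api-key>"

def build_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completions POST request for the NIM."""
    payload = {
        "model": "meta/llama-3.3-70b-instruct",  # hypothetical model name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_request("Summarize NVIDIA NIM in one sentence.")
# Send with urllib.request.urlopen(req) once ENDPOINT and API_KEY are real.
print(req.get_method(), req.full_url)
```

The request is only constructed here, not sent; substitute the deployment's real endpoint and key before calling `urllib.request.urlopen(req)`.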
