
Commit ce2de92

Update aml-compute-target-deploy.md
remove FPGA (empty column)
1 parent ef477d2 commit ce2de92

File tree

1 file changed (+7 -7 lines)


includes/aml-compute-target-deploy.md

Lines changed: 7 additions & 7 deletions
@@ -14,12 +14,12 @@ ms.date: 10/21/2021
 
 The compute target you use to host your model will affect the cost and availability of your deployed endpoint. Use this table to choose an appropriate compute target.
 
-| Compute target | Used for | GPU support | FPGA support | Description |
-| ----- | ----- | ----- | ----- | ----- |
-| [Local web service](../articles/machine-learning/v1/how-to-deploy-local-container-notebook-vm.md) | Testing/debugging | &nbsp; | &nbsp; | Use for limited testing and troubleshooting. Hardware acceleration depends on use of libraries in the local system.
-| [Azure Machine Learning Kubernetes](../articles/machine-learning/how-to-attach-kubernetes-anywhere.md) | Real-time inference <br/><br/> Batch inference | Yes | N/A | Run inferencing workloads on on-premises, cloud, and edge Kubernetes clusters. |
-| [Azure Container Instances](../articles/machine-learning/v1/how-to-deploy-azure-container-instance.md) | Real-time inference <br/><br/> Recommended for dev/test purposes only.| &nbsp; | &nbsp; | Use for low-scale CPU-based workloads that require less than 48 GB of RAM. Doesn't require you to manage a cluster. <br/><br/> Supported in the designer. |
-| [Azure Machine Learning compute clusters](../articles/machine-learning/tutorial-pipeline-batch-scoring-classification.md) | Batch&nbsp;inference | [Yes](../articles/machine-learning/tutorial-pipeline-batch-scoring-classification.md) (machine learning pipeline) | &nbsp; | Run batch scoring on serverless compute. Supports normal and low-priority VMs. No support for real-time inference.|
+| Compute target | Used for | GPU support | Description |
+| ----- | ----- | ----- | ----- |
+| [Local&nbsp;web&nbsp;service](../articles/machine-learning/v1/how-to-deploy-local-container-notebook-vm.md) | Testing/debugging | &nbsp; | Use for limited testing and troubleshooting. Hardware acceleration depends on use of libraries in the local system.
+| [Azure Machine Learning Kubernetes](../articles/machine-learning/how-to-attach-kubernetes-anywhere.md) | Real-time inference <br/><br/> Batch inference | Yes | Run inferencing workloads on on-premises, cloud, and edge Kubernetes clusters. |
+| [Azure Container Instances](../articles/machine-learning/v1/how-to-deploy-azure-container-instance.md) | Real-time inference <br/><br/> Recommended for dev/test purposes only.| &nbsp; | Use for low-scale CPU-based workloads that require less than 48 GB of RAM. Doesn't require you to manage a cluster. <br/><br/> Supported in the designer. |
+| [Azure Machine Learning compute clusters](../articles/machine-learning/tutorial-pipeline-batch-scoring-classification.md) | Batch&nbsp;inference | [Yes](../articles/machine-learning/tutorial-pipeline-batch-scoring-classification.md) (machine learning pipeline) | Run batch scoring on serverless compute. Supports normal and low-priority VMs. No support for real-time inference.|
 
 > [!NOTE]
 > Although compute targets like local, and Azure Machine Learning compute clusters support GPU for training and experimentation, using GPU for inference _when deployed as a web service_ is supported only on Azure Machine Learning Kubernetes.

@@ -29,4 +29,4 @@ The compute target you use to host your model will affect the cost and availabil
 > When choosing a cluster SKU, first scale up and then scale out. Start with a machine that has 150% of the RAM your model requires, profile the result and find a machine that has the performance you need. Once you've learned that, increase the number of machines to fit your need for concurrent inference.
 
 > [!NOTE]
-> * Container instances are suitable only for small models less than 1 GB in size.
+> * Container instances are suitable only for small models less than 1 GB in size.
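For readers landing on this commit: the table above lists where a deployed model can be hosted. A minimal sketch of deploying a registered model to one of those targets, Azure Container Instances, with the v1 `azureml-core` Python SDK (the SDK generation these v1 docs pages cover) follows. The model name `my-model`, entry script `score.py`, and conda file `environment.yml` are hypothetical placeholders, not part of this commit.

```python
from azureml.core import Environment, Model, Workspace
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice

# Connect to the workspace described by a local config.json.
ws = Workspace.from_config()

# Reference a model already registered in the workspace
# ("my-model" is a hypothetical name).
model = Model(ws, name="my-model")

# score.py and environment.yml are hypothetical placeholders for the
# scoring entry script and conda environment the service will run.
env = Environment.from_conda_specification("inference-env", "environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Per the table above, ACI suits low-scale CPU workloads under 48 GB of RAM.
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=4)

service = Model.deploy(ws, "my-aci-service", [model],
                       inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```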
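The "scale up, then scale out" note in the second hunk also implies a small calculation; a sketch with hypothetical numbers:

```python
import math

# Hypothetical figures illustrating the cluster-sizing note above.
model_ram_gb = 8.0

# Scale up first: start with ~150% of the RAM the model requires,
# then profile and pick the SKU that meets your performance targets.
starting_ram_gb = 1.5 * model_ram_gb           # 12 GB -> try a 16 GB SKU

# Scale out second: add machines to cover concurrent inference load.
target_rps = 40              # desired requests per second (hypothetical)
measured_rps_per_node = 15   # throughput found while profiling (hypothetical)
node_count = math.ceil(target_rps / measured_rps_per_node)  # -> 3 nodes
```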
