
Commit f95747d

Merge pull request #207248 from Blackmist/v1v2fix
moving to v1
2 parents d7862dd + 29b9c17 commit f95747d

7 files changed: +41 −49 lines changed

articles/aks/gpu-cluster.md

Lines changed: 6 additions & 6 deletions
@@ -43,7 +43,7 @@ There are two options for adding the NVIDIA device plugin:
 
 ### Update your cluster to use the AKS GPU image (preview)
 
-AKS provides is providing a fully configured AKS image that already contains the [NVIDIA device plugin for Kubernetes][nvidia-github].
+AKS provides a fully configured AKS image that already contains the [NVIDIA device plugin for Kubernetes][nvidia-github].
 
 Register the `GPUDedicatedVHDPreview` feature:
 
@@ -93,7 +93,7 @@ az aks nodepool add \
     --max-count 3
 ```
 
-The above command adds a node pool named *gpunp* to the *myAKSCluster* in the *myResourceGroup* resource group. The command also sets the VM size for the nodes in the node pool to *Standard_NC6*, enables the cluster autoscaler, configures the cluster autoscaler to maintain a minimum of one node and a maximum of three nodes in the node pool, specifies a specialized AKS GPU image nodes on your new node pool, and specifies a *sku=gpu:NoSchedule* taint for the node pool.
+The above command adds a node pool named *gpunp* to the *myAKSCluster* in the *myResourceGroup* resource group. The command also sets the VM size for the node in the node pool to *Standard_NC6*, enables the cluster autoscaler, configures the cluster autoscaler to maintain a minimum of one node and a maximum of three nodes in the node pool, specifies a specialized AKS GPU image nodes on your new node pool, and specifies a *sku=gpu:NoSchedule* taint for the node pool.
 
 > [!NOTE]
 > A taint and VM size can only be set for node pools during node pool creation, but the autoscaler settings can be updated at any time.
@@ -408,8 +408,8 @@ For more information about running machine learning (ML) workloads on Kubernetes
 
 For information on using Azure Kubernetes Service with Azure Machine Learning, see the following articles:
 
-* [Deploy a model to Azure Kubernetes Service][azureml-aks].
-* [Deploy a deep learning model for inference with GPU][azureml-gpu].
+* [Configure a Kubernetes cluster for ML model training or deployment][azureml-aks].
+* [Deploy a model with an online endpoint][azureml-deploy].
 * [High-performance serving with Triton Inference Server][azureml-triton].
 
 <!-- LINKS - external -->
@@ -434,7 +434,7 @@ For information on using Azure Kubernetes Service with Azure Machine Learning, s
 [aks-spark]: spark-job.md
 [gpu-skus]: ../virtual-machines/sizes-gpu.md
 [install-azure-cli]: /cli/azure/install-azure-cli
-[azureml-aks]: ../machine-learning/v1/how-to-deploy-azure-kubernetes-service.md
-[azureml-gpu]: ../machine-learning/how-to-deploy-inferencing-gpus.md
+[azureml-aks]: ../machine-learning/how-to-attach-kubernetes-anywhere.md
+[azureml-deploy]: ../machine-learning/how-to-deploy-managed-online-endpoints.md
 [azureml-triton]: ../machine-learning/how-to-deploy-with-triton.md
 [aks-container-insights]: monitor-aks.md#container-insights
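The `az aks nodepool add` invocation that the changed paragraph describes can be sketched as an argument list; this is a minimal sketch, not the article's exact command. The resource names (*gpunp*, *myAKSCluster*, *myResourceGroup*, *Standard_NC6*) come from the text, the helper name is ours, and the flag spellings follow the Azure CLI; running it for real requires the CLI and a live cluster, so only the list is built here.

```python
# Sketch: assemble the `az aks nodepool add` command described in the article.
# Names are taken from the paragraph above; flags follow the Azure CLI.
def build_gpu_nodepool_cmd(
    resource_group: str = "myResourceGroup",
    cluster_name: str = "myAKSCluster",
    pool_name: str = "gpunp",
    vm_size: str = "Standard_NC6",
    min_count: int = 1,
    max_count: int = 3,
) -> list[str]:
    return [
        "az", "aks", "nodepool", "add",
        "--resource-group", resource_group,
        "--cluster-name", cluster_name,
        "--name", pool_name,
        "--node-vm-size", vm_size,
        # Taint and custom header from the article: GPU-only scheduling,
        # and the specialized AKS GPU image (preview).
        "--node-taints", "sku=gpu:NoSchedule",
        "--aks-custom-headers", "UseGPUDedicatedVHD=true",
        "--enable-cluster-autoscaler",
        "--min-count", str(min_count),
        "--max-count", str(max_count),
    ]

cmd = build_gpu_nodepool_cmd()
print(" ".join(cmd))
```

Keeping the command as a list (rather than a single string) makes it straightforward to hand to `subprocess.run` without shell quoting issues.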

articles/machine-learning/.openpublishing.redirection.machine-learning.json

Lines changed: 5 additions & 15 deletions
@@ -1421,13 +1421,13 @@
     "redirect_document_id": false
   },
   {
-    "source_path_from_root": "/articles/machine-learning/service/how-to-deploy-inferencing-gpus.md",
-    "redirect_url": "/azure/machine-learning/how-to-deploy-inferencing-gpus",
-    "redirect_document_id": true
+    "source_path_from_root": "/articles/machine-learning/how-to-deploy-inferencing-gpus.md",
+    "redirect_url": "/azure/machine-learning/how-to-deploy-managed-online-endpoints",
+    "redirect_document_id": false
   },
   {
-    "source_path_from_root": "/articles/machine-learning/service/how-to-deploy-fpga-web-service.md",
-    "redirect_url": "/azure/machine-learning/how-to-deploy-fpga-web-service",
+    "source_path_from_root": "/articles/machine-learning/how-to-deploy-fpga-web-service.md",
+    "redirect_url": "/azure/machine-learning/v1/how-to-deploy-fpga-web-service",
     "redirect_document_id": true
   },
   {
@@ -1655,16 +1655,6 @@
     "redirect_url": "/azure/machine-learning/concept-automated-ml",
     "redirect_document_id": true
   },
-  {
-    "source_path_from_root": "/articles/machine-learning/service/concept-accelerate-with-fpgas.md",
-    "redirect_url": "/azure/machine-learning/how-to-deploy-fpga-web-service",
-    "redirect_document_id": false
-  },
-  {
-    "source_path_from_root": "/articles/machine-learning/service/concept-accelerate-inferencing-with-gpus.md",
-    "redirect_url": "/azure/machine-learning/how-to-deploy-inferencing-gpus",
-    "redirect_document_id": false
-  },
   {
     "source_path_from_root": "/articles/machine-learning/service/azure-machine-learning-release-notes.md",
     "redirect_url": "/azure/machine-learning/azure-machine-learning-release-notes",
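Every redirect entry in this file has the same three-key shape shown above. A hypothetical validator (the function name and the specific checks are ours, inferred from the entries in this diff) can catch the common mistakes before a malformed entry lands:

```python
# Sketch: sanity-check one redirect entry of the shape used in
# .openpublishing.redirection.machine-learning.json. The required keys
# come from the entries shown in the diff; the helper name is ours.
REQUIRED_KEYS = {"source_path_from_root", "redirect_url", "redirect_document_id"}

def validate_redirect(entry: dict) -> list[str]:
    """Return a list of problems found in one redirect entry (empty if OK)."""
    problems = []
    missing = REQUIRED_KEYS - entry.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    if not str(entry.get("source_path_from_root", "")).startswith("/articles/"):
        problems.append("source_path_from_root should start with /articles/")
    if not str(entry.get("redirect_url", "")).startswith("/azure/"):
        problems.append("redirect_url should start with /azure/")
    if not isinstance(entry.get("redirect_document_id"), bool):
        problems.append("redirect_document_id should be a boolean")
    return problems

# One of the entries added by this commit, expressed as a Python dict:
entry = {
    "source_path_from_root": "/articles/machine-learning/how-to-deploy-fpga-web-service.md",
    "redirect_url": "/azure/machine-learning/v1/how-to-deploy-fpga-web-service",
    "redirect_document_id": True,
}
print(validate_redirect(entry))  # → []
```

In practice such a check would run over `json.load(...)["redirections"]` for the whole file; the exact top-level key depends on the redirection schema in use.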

articles/machine-learning/reference-machine-learning-cloud-parity.md

Lines changed: 3 additions & 3 deletions
@@ -23,7 +23,7 @@ In the list of global Azure regions, there are several regions that serve specif
 * Azure Government regions **US-Arizona** and **US-Virginia**.
 * Azure China 21Vianet region **China-East-2**.
 
-Azure Machine Learning is still in development in Airgap Regions.
+Azure Machine Learning is still in development in air-gap Regions.
 
 The information in the rest of this document provides information on what features of Azure Machine Learning are available in these regions, along with region-specific information on using these features.
 
 ## Azure Government
@@ -76,7 +76,7 @@ The information in the rest of this document provides information on what featur
 | **Machine learning lifecycle** | | | |
 | [Model profiling](v1/how-to-deploy-profile-model.md) | GA | YES | PARTIAL |
 | [The Azure ML CLI 1.0](v1/reference-azure-machine-learning-cli.md) | GA | YES | YES |
-| [FPGA-based Hardware Accelerated Models](how-to-deploy-fpga-web-service.md) | GA | NO | NO |
+| [FPGA-based Hardware Accelerated Models](./v1/how-to-deploy-fpga-web-service.md) | GA | NO | NO |
 | [Visual Studio Code integration](how-to-setup-vs-code.md) | Public Preview | NO | NO |
 | [Event Grid integration](how-to-use-event-grid.md) | Public Preview | NO | NO |
 | [Integrate Azure Stream Analytics with Azure Machine Learning](../stream-analytics/machine-learning-udf.md) | Public Preview | NO | NO |
@@ -99,7 +99,7 @@ The information in the rest of this document provides information on what featur
 | **Inference** | | | |
 | Managed online endpoints | GA | YES | YES |
 | [Batch inferencing](tutorial-pipeline-batch-scoring-classification.md) | GA | YES | YES |
-| [Azure Stack Edge with FPGA](how-to-deploy-fpga-web-service.md#deploy-to-a-local-edge-server) | Public Preview | NO | NO |
+| [Azure Stack Edge with FPGA](./v1/how-to-deploy-fpga-web-service.md#deploy-to-a-local-edge-server) | Public Preview | NO | NO |
 | **Other** | | | |
 | [Open Datasets](../open-datasets/samples.md) | Public Preview | YES | YES |
 | [Custom Cognitive Search](how-to-deploy-model-cognitive-search.md) | Public Preview | YES | YES |

articles/machine-learning/how-to-deploy-fpga-web-service.md renamed to articles/machine-learning/v1/how-to-deploy-fpga-web-service.md

Lines changed: 11 additions & 11 deletions
@@ -15,9 +15,9 @@ ms.custom: contperf-fy21q2, devx-track-python, deploy, sdkv1, event-tier1-build-
 
 # Deploy ML models to field-programmable gate arrays (FPGAs) with Azure Machine Learning
 
-[!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
+[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
 
-In this article, you learn about FPGAs and how to deploy your ML models to an Azure FPGA using the [hardware-accelerated models Python package](/python/api/azureml-accel-models/azureml.accel) from [Azure Machine Learning](overview-what-is-azure-machine-learning.md).
+In this article, you learn about FPGAs and how to deploy your ML models to an Azure FPGA using the [hardware-accelerated models Python package](/python/api/azureml-accel-models/azureml.accel) from [Azure Machine Learning](../overview-what-is-azure-machine-learning.md).
 
 ## What are FPGAs?
 
@@ -32,7 +32,7 @@ You can reconfigure FPGAs for different types of machine learning models. This f
 |Processor| Abbreviation |Description|
 |---|:-------:|------|
 |Application-specific integrated circuits|ASICs|Custom circuits, such as Google's Tensor Processor Units (TPU), provide the highest efficiency. They can't be reconfigured as your needs change.|
-|Field-programmable gate arrays|FPGAs|FPGAs, such as those available on Azure, provide performance close to ASICs. They are also flexible and reconfigurable over time, to implement new logic.|
+|Field-programmable gate arrays|FPGAs|FPGAs, such as those available on Azure, provide performance close to ASICs. They're also flexible and reconfigurable over time, to implement new logic.|
 |Graphics processing units|GPUs|A popular choice for AI computations. GPUs offer parallel processing capabilities, making it faster at image rendering than CPUs.|
 |Central processing units|CPUs|General-purpose processors, the performance of which isn't ideal for graphics and video processing.|
 
@@ -50,7 +50,7 @@ Azure FPGAs are integrated with Azure Machine Learning. Azure can parallelize pr
 
 To optimize latency and throughput, your client sending data to the FPGA model should be in one of the regions above (the one you deployed the model to).
 
-The **PBS Family of Azure VMs** contains Intel Arria 10 FPGAs. It will show as "Standard PBS Family vCPUs" when you check your Azure quota allocation. The PB6 VM has six vCPUs and one FPGA. PB6 VM is automatically provisioned by Azure Machine Learning during model deployment to an FPGA. It is only used with Azure ML, and it cannot run arbitrary bitstreams. For example, you will not be able to flash the FPGA with bitstreams to do encryption, encoding, etc.
+The **PBS Family of Azure VMs** contains Intel Arria 10 FPGAs. It will show as "Standard PBS Family vCPUs" when you check your Azure quota allocation. The PB6 VM has six vCPUs and one FPGA. PB6 VM is automatically provisioned by Azure Machine Learning during model deployment to an FPGA. It's only used with Azure ML, and it can't run arbitrary bitstreams. For example, you won't be able to flash the FPGA with bitstreams to do encryption, encoding, etc.
 
 ## Deploy models on FPGAs
 
@@ -60,9 +60,9 @@ In this example, you create a TensorFlow graph to preprocess the input image, ma
 
 ### Prerequisites
 
-- An Azure subscription. If you do not have one, create a [pay-as-you-go](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) account (free Azure accounts are not eligible for FPGA quota).
+- An Azure subscription. If you don't have one, create a [pay-as-you-go](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) account (free Azure accounts aren't eligible for FPGA quota).
 
-- An Azure Machine Learning workspace and the Azure Machine Learning SDK for Python installed, as described in [Create a workspace](how-to-manage-workspace.md).
+- An Azure Machine Learning workspace and the Azure Machine Learning SDK for Python installed, as described in [Create a workspace](../how-to-manage-workspace.md).
 
 - The hardware-accelerated models package: `pip install --upgrade azureml-accel-models[cpu]`
 
@@ -151,7 +151,7 @@ Begin by using the [Azure Machine Learning SDK for Python](/python/api/overview/
     print(output_tensors)
     ```
 
-The following models are available listed with their classifier output tensors for inference if you used the default classifier.
+The following models are listed with their classifier output tensors for inference if you used the default classifier.
 
 + Resnet50, QuantizedResnet50
     ```python
@@ -178,7 +178,7 @@ Begin by using the [Azure Machine Learning SDK for Python](/python/api/overview/
 
 Before you can deploy to FPGAs, convert the model to the [ONNX](https://onnx.ai/) format.
 
-1. [Register](concept-model-management-and-deployment.md) the model by using the SDK with the ZIP file in Azure Blob storage. Adding tags and other metadata about the model helps you keep track of your trained models.
+1. [Register](../concept-model-management-and-deployment.md) the model by using the SDK with the ZIP file in Azure Blob storage. Adding tags and other metadata about the model helps you keep track of your trained models.
 
     ```python
     from azureml.core.model import Model
@@ -221,7 +221,7 @@ Before you can deploy to FPGAs, convert the model to the [ONNX](https://onnx.ai/
 
 ### Containerize and deploy the model
 
-Next, create a Docker image from the converted model and all dependencies. This Docker image can then be deployed and instantiated. Supported deployment targets include Azure Kubernetes Service (AKS) in the cloud or an edge device such as [Azure Data Box Edge](../databox-online/azure-stack-edge-overview.md). You can also add tags and descriptions for your registered Docker image.
+Next, create a Docker image from the converted model and all dependencies. This Docker image can then be deployed and instantiated. Supported deployment targets include Azure Kubernetes Service (AKS) in the cloud or an edge device such as [Azure Azure Stack Edge](../../databox-online/azure-stack-edge-overview.md). You can also add tags and descriptions for your registered Docker image.
 
 ```python
 from azureml.core.image import Image
@@ -295,7 +295,7 @@ Next, create a Docker image from the converted model and all dependencies. This
 
 #### Deploy to a local edge server
 
-All [Azure Data Box Edge devices](../databox-online/azure-stack-edge-overview.md) contain an FPGA for running the model. Only one model can be running on the FPGA at one time. To run a different model, just deploy a new container. Instructions and sample code can be found in [this Azure Sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models).
+All [Azure Azure Stack Edge devices](../../databox-online/azure-stack-edge-overview.md) contain an FPGA for running the model. Only one model can be running on the FPGA at one time. To run a different model, just deploy a new container. Instructions and sample code can be found in [this Azure Sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models).
 
 ### Consume the deployed model
 
@@ -358,7 +358,7 @@ converted_model.delete()
 
 ## Next steps
 
-+ Learn how to [secure your web services](./v1/how-to-secure-web-service.md) document.
++ Learn how to [secure your web services](how-to-secure-web-service.md) document.
 
 + Learn about FPGA and [Azure Machine Learning pricing and costs](https://azure.microsoft.com/pricing/details/machine-learning/).
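The rename above moves the article one directory deeper (into `v1/`), which is exactly why every relative link in these hunks gains one `../` segment. A hypothetical helper (the function name is ours; only the standard library is used) shows how such links can be recomputed mechanically rather than by hand:

```python
import posixpath

# Sketch: rewrite a relative doc link after its file moves to a new directory,
# as in the rename above (how-to-deploy-fpga-web-service.md moving into v1/).
def rebase_link(link: str, old_dir: str, new_dir: str) -> str:
    """Rewrite `link` (resolved relative to old_dir) so it resolves from new_dir."""
    target = posixpath.normpath(posixpath.join(old_dir, link))
    return posixpath.relpath(target, new_dir)

# The article moved from articles/machine-learning/ to articles/machine-learning/v1/:
print(rebase_link("../../includes/machine-learning-sdk-v1.md",
                  "articles/machine-learning",
                  "articles/machine-learning/v1"))
# → ../../../includes/machine-learning-sdk-v1.md
```

The same call reproduces the other link edits in this file, e.g. `how-to-manage-workspace.md` becoming `../how-to-manage-workspace.md`; `posixpath` is used instead of `os.path` so the output uses forward slashes on every platform.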
