
Commit 655e9b3 (1 parent: 45bca01)

removing fpga article per PM conversation

9 files changed: +5 additions, -378 deletions

articles/machine-learning/.openpublishing.redirection.machine-learning.json

Lines changed: 1 addition & 1 deletion

@@ -172,7 +172,7 @@
   },
   {
     "source_path_from_root": "/articles/machine-learning/v-fake/how-to-deploy-fpga-web-service.md",
-    "redirect_url": "/azure/machine-learning/how-to-deploy-fpga-web-service",
+    "redirect_url": "/azure/machine-learning/how-to-deploy-online-endpoints",
     "redirect_document_id": false
   },
   {

articles/machine-learning/reference-machine-learning-cloud-parity.md

Lines changed: 0 additions & 1 deletion

@@ -84,7 +84,6 @@ The information in the rest of this document provides information on what featur
 | **Machine learning lifecycle** | | | |
 | [Model profiling (SDK/CLI v1)](v1/how-to-deploy-profile-model.md) | GA | YES | PARTIAL |
 | [The Azure Machine Learning CLI v1](v1/reference-azure-machine-learning-cli.md) | GA | YES | YES |
-| [FPGA-based Hardware Accelerated Models (SDK/CLI v1)](./v1/how-to-deploy-fpga-web-service.md) | GA | NO | NO |
 | [Visual Studio Code integration](how-to-setup-vs-code.md) | Public Preview | NO | NO |
 | [Event Grid integration](how-to-use-event-grid.md) | Public Preview | NO | NO |
 | [Integrate Azure Stream Analytics with Azure Machine Learning](../stream-analytics/machine-learning-udf.md) | Public Preview | NO | NO |

articles/machine-learning/toc.yml

Lines changed: 0 additions & 2 deletions

@@ -1036,8 +1036,6 @@
       href: ./v1/how-to-deploy-inferencing-gpus.md
     - name: Azure AI Search
       href: ./v1/how-to-deploy-model-cognitive-search.md
-    - name: FPGA inference
-      href: ./v1/how-to-deploy-fpga-web-service.md
     - name: Hyperparameter tuning a model (v1)
       displayName: azurecli, AzureML
       href: ./v1/how-to-tune-hyperparameters.md

articles/machine-learning/v1/concept-azure-machine-learning-architecture.md

Lines changed: 1 addition & 1 deletion

@@ -204,7 +204,7 @@ An endpoint is an instantiation of your model into a web service that can be hos
 
 #### Web service endpoint
 
-When deploying a model as a web service, the endpoint can be deployed on Azure Container Instances, Azure Kubernetes Service, or FPGAs. You create the service from your model, script, and associated files. These are placed into a base container image, which contains the execution environment for the model. The image has a load-balanced, HTTP endpoint that receives scoring requests that are sent to the web service.
+When deploying a model as a web service, the endpoint can be deployed on Azure Container Instances or Azure Kubernetes Service. You create the service from your model, script, and associated files. These are placed into a base container image, which contains the execution environment for the model. The image has a load-balanced, HTTP endpoint that receives scoring requests that are sent to the web service.
 
 You can enable Application Insights telemetry or model telemetry to monitor your web service. The telemetry data is accessible only to you. It's stored in your Application Insights and storage account instances. If you've enabled automatic scaling, Azure automatically scales your deployment.
 
articles/machine-learning/v1/concept-model-management-and-deployment.md

Lines changed: 1 addition & 1 deletion

@@ -98,7 +98,7 @@ For more information on ONNX with Azure Machine Learning, see the [Create and ac
 
 ### Use models
 
-Trained machine learning models are deployed as web services in the cloud or locally. Deployments use CPU, GPU, or field-programmable gate arrays (FPGA) for inferencing. You can also use models from Power BI.
+Trained machine learning models are deployed as web services in the cloud or locally. Deployments use CPU or GPU for inferencing. You can also use models from Power BI.
 
 When using a model as a web service, you provide the following items:
 
articles/machine-learning/v1/how-to-consume-web-service.md

Lines changed: 1 addition & 1 deletion

@@ -21,7 +21,7 @@ ms.custom: UpdateFrequency5, devx-track-python, devx-track-csharp, cliv1, sdkv1,
 
 Deploying an Azure Machine Learning model as a web service creates a REST API endpoint. You can send data to this endpoint and receive the prediction returned by the model. In this document, learn how to create clients for the web service by using C#, Go, Java, and Python.
 
-You create a web service when you deploy a model to your local environment, Azure Container Instances, Azure Kubernetes Service, or field-programmable gate arrays (FPGA). You retrieve the URI used to access the web service by using the [Azure Machine Learning SDK](/python/api/overview/azure/ml/intro). If authentication is enabled, you can also use the SDK to get the authentication keys or tokens.
+You create a web service when you deploy a model to your local environment, Azure Container Instances, or Azure Kubernetes Service. You retrieve the URI used to access the web service by using the [Azure Machine Learning SDK](/python/api/overview/azure/ml/intro). If authentication is enabled, you can also use the SDK to get the authentication keys or tokens.
 
 The general workflow for creating a client that uses a machine learning web service is:
 
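The changed paragraph in how-to-consume-web-service.md describes sending data to a REST scoring endpoint with an authentication key. A minimal Python sketch of such a client, using only the standard library, might look as follows. The scoring URI, key, and the `{"data": ...}` payload envelope are hypothetical placeholders, not values from this commit; the actual payload shape depends on the `run()` function in the service's scoring script.

```python
import json
import urllib.request

def build_scoring_request(scoring_uri, input_data, api_key=None):
    # Assemble a POST request for a web service scoring endpoint.
    # The {"data": ...} envelope is an assumption; the real shape is
    # defined by the deployed service's scoring script.
    headers = {"Content-Type": "application/json"}
    if api_key:
        # Key-based auth header, used only if authentication is enabled.
        headers["Authorization"] = "Bearer " + api_key
    body = json.dumps({"data": input_data}).encode("utf-8")
    return urllib.request.Request(
        scoring_uri, data=body, headers=headers, method="POST"
    )

# Hypothetical endpoint and key; a real call would pass the request to
# urllib.request.urlopen() against the service's actual scoring URI.
request = build_scoring_request(
    "http://localhost:6789/score", [[1.0, 2.0, 3.0, 4.0]], api_key="example-key"
)
```

Separating request construction from sending keeps the auth and serialization logic testable without a live endpoint.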
