---
title: Deploy a registered R model to an online (real time) endpoint
titleSuffix: Azure Machine Learning
description: 'Learn how to deploy your R model to an online (real-time) managed endpoint'
ms.service: machine-learning
ms.date: 01/12/2023
ms.topic: how-to
author: wahalulu
ms.author: mavaisma
ms.reviewer: sgilley
ms.devlang: r
---

# How to deploy a registered R model to an online (real-time) endpoint

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]

In this article, you'll learn how to deploy an R model to a managed endpoint (Web API) so that your application can score new data against the model in near real-time.

## Prerequisites

- An [Azure Machine Learning workspace](quickstart-create-resources.md).
- The Azure [CLI and `ml` extension installed](how-to-configure-cli.md). Or use a [compute instance in your workspace](quickstart-create-resources.md), which has the CLI pre-installed.
- At least one custom environment associated with your workspace. Create [an R environment](how-to-r-modify-script-for-production.md#create-an-environment), or any other custom environment, if you don't have one.
- An understanding of the [R `plumber` package](https://www.rplumber.io/index.html).
- A model that you've trained and [packaged with `crate`](how-to-r-modify-script-for-production.md#crate-your-models-with-the-carrier-package), and [registered into your workspace](how-to-r-train-model.md#register-model).

## Create a folder with this structure

Create this folder structure for your project:

```
📂 r-deploy-azureml
├─📂 docker-context
│ ├─ Dockerfile
│ ├─ start_plumber.R
├─📂 src
│ ├─ plumber.R
├─ deployment.yml
├─ endpoint.yml
```

The contents of each of these files are shown and explained in this article.

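If you prefer to scaffold this layout from a terminal, one way to create the folders and empty placeholder files (their contents come later in this article) is:

```shell
# Create the project skeleton; run this from the directory where you keep your projects
mkdir -p r-deploy-azureml/docker-context r-deploy-azureml/src
touch r-deploy-azureml/docker-context/Dockerfile \
      r-deploy-azureml/docker-context/start_plumber.R \
      r-deploy-azureml/src/plumber.R \
      r-deploy-azureml/deployment.yml \
      r-deploy-azureml/endpoint.yml
```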
### Dockerfile

This is the file that defines the container environment. You'll also define the installation of any additional R packages here.

A sample **Dockerfile** looks like this:

```dockerfile
# REQUIRED: Begin with the latest R container with plumber
FROM rstudio/plumber:latest

# REQUIRED: Install the carrier package so the crated model (whether from a
# training job or uploaded) can be used
RUN R -e "install.packages('carrier', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"

# OPTIONAL: Install any additional R packages you may need for your model crate to run
RUN R -e "install.packages('<PACKAGE-NAME>', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"
RUN R -e "install.packages('<PACKAGE-NAME>', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"

# REQUIRED
ENTRYPOINT []

COPY ./start_plumber.R /tmp/start_plumber.R

CMD ["Rscript", "/tmp/start_plumber.R"]
```

Modify the file to add the packages you need for your scoring script.

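As an example, if your crated model used forecasting packages, the two OPTIONAL lines might become the following (the package names here are purely illustrative; install whatever packages your own crate calls):

```dockerfile
RUN R -e "install.packages('fable', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"
RUN R -e "install.packages('tsibble', dependencies = TRUE, repos = 'https://cloud.r-project.org/')"
```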
### plumber.R

> [!IMPORTANT]
> This section shows how to structure the **plumber.R** script. For detailed information about the `plumber` package, see the [`plumber` documentation](https://www.rplumber.io/index.html).

The file **plumber.R** is the R script where you'll define the function for scoring. This script also performs tasks that are necessary to make your endpoint work. The script:

- Gets the path where the model is mounted from the `AZUREML_MODEL_DIR` environment variable in the container.
- Loads a model object created with the `crate` function from the `carrier` package, which was saved as **crate.bin** when it was packaged.
- _Unserializes_ the model object.
- Defines the scoring function.

> [!TIP]
> Make sure that whatever your scoring function produces can be converted back to JSON. Some R objects are not easily converted.

```r
# plumber.R
# This script will be deployed to a managed endpoint to do the model scoring

# REQUIRED
# When you deploy a model as an online endpoint, AzureML mounts your model
# to your endpoint. Model mounting enables you to deploy new versions of the model without
# having to create a new Docker image.

model_dir <- Sys.getenv("AZUREML_MODEL_DIR")

# REQUIRED
# This reads the serialized model, with its respective predict/score method, that you
# registered. The loaded load_model object is a raw binary object.
load_model <- readRDS(paste0(model_dir, "/models/crate.bin"))

# REQUIRED
# Unserialize load_model to turn it back into a callable scoring function
scoring_function <- unserialize(load_model)

# REQUIRED
# << Readiness route vs. liveness route >>
# An HTTP server defines paths for both liveness and readiness. A liveness route is used to
# check whether the server is running. A readiness route is used to check whether the
# server's ready to do work. In machine learning inference, a server could respond 200 OK
# to a liveness request before loading a model. The server could respond 200 OK to a
# readiness request only after the model has been loaded into memory.

#* Liveness check
#* @get /live
function() {
  "alive"
}

#* Readiness check
#* @get /ready
function() {
  "ready"
}

# << The scoring function >>
# This is the function that is deployed as a web API that will score the model.
# Make sure that whatever your scoring function produces can be converted to JSON
# to be sent back as the API response.
# In this example, forecast_horizon (the number of time units to forecast) is the
# input to scoring_function, and the output is a tibble. Some of the output column
# types are converted so they serialize cleanly to JSON.

#* @param forecast_horizon
#* @post /score
function(forecast_horizon) {
  scoring_function(as.numeric(forecast_horizon)) |>
    tibble::as_tibble() |>
    dplyr::transmute(period = as.character(yr_wk),
                     dist = as.character(logmove),
                     forecast = .mean) |>
    jsonlite::toJSON()
}
```

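Given this scoring function, a request with a `forecast_horizon` of 2 would come back as a JSON array of records shaped roughly like the following (the field values shown are purely illustrative):

```json
[
  { "period": "2023-W03", "dist": "9.21", "forecast": 10042.7 },
  { "period": "2023-W04", "dist": "9.18", "forecast": 9856.3 }
]
```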
### start_plumber.R

The file **start_plumber.R** is the R script that gets run when the container starts, and it calls your **plumber.R** script. Use the following script as-is.

```r
entry_script_path <- paste0(Sys.getenv('AML_APP_ROOT'), '/', Sys.getenv('AZUREML_ENTRY_SCRIPT'))

pr <- plumber::plumb(entry_script_path)

args <- list(host = '0.0.0.0', port = 8000)

if (packageVersion('plumber') >= '1.0.0') {
  pr$setDocs(TRUE)
} else {
  args$swagger <- TRUE
}

do.call(pr$run, args)
```

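At startup, the entry-script location arrives through two environment variables, which the first line of the script joins into a full path. A quick shell illustration with made-up values (inside the deployment container, Azure Machine Learning sets the real ones):

```shell
# Hypothetical values; in the deployed container these are set by Azure Machine Learning
export AML_APP_ROOT="/var/azureml-app"
export AZUREML_ENTRY_SCRIPT="plumber.R"

# start_plumber.R builds the same path with paste0()
echo "${AML_APP_ROOT}/${AZUREML_ENTRY_SCRIPT}"    # prints: /var/azureml-app/plumber.R
```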
## Build container

These steps assume you have an Azure Container Registry associated with your workspace, which is created when you create your first custom environment. To see if you have a custom environment:

1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
1. Select your workspace if necessary.
1. On the left navigation, select **Environments**.
1. At the top, select **Custom environments**.
1. If you see custom environments, nothing more is needed.
1. If you don't see any custom environments, create [an R environment](how-to-r-modify-script-for-production.md#create-an-environment), or any other custom environment. (You *won't* use this environment for deployment, but you *will* use the container registry that is also created for you.)

Once you have verified that you have at least one custom environment, use the following steps to build a container.

1. Open a terminal window and sign in to Azure. If you're doing this from an [Azure Machine Learning compute instance](quickstart-create-resources.md#create-compute-instance), use:

    ```azurecli
    az login --identity
    ```

    If you're not on the compute instance, omit `--identity` and follow the prompt to open a browser window to authenticate.

1. Make sure you have the most recent versions of the CLI and the `ml` extension:

    ```azurecli
    az upgrade
    ```

1. If you have multiple Azure subscriptions, set the active subscription to the one you're using for your workspace. (You can skip this step if you only have access to a single subscription.) Replace `<SUBSCRIPTION-NAME>` with your subscription name. Also remove the brackets `<>`.

    ```azurecli
    az account set --subscription "<SUBSCRIPTION-NAME>"
    ```

1. Set the default workspace. If you're doing this from a compute instance, you can use the following command as-is. If you're on any other computer, substitute your resource group and workspace name instead. (You can find these values in [Azure Machine Learning studio](how-to-r-train-model.md#submit-the-job).)

    ```azurecli
    az configure --defaults group=$CI_RESOURCE_GROUP workspace=$CI_WORKSPACE
    ```

1. Make sure you are in your project directory.

    ```bash
    cd r-deploy-azureml
    ```

1. To build the image in the cloud, execute the following bash commands in your terminal. Replace `<IMAGE-NAME>` with the name you want to give the image.

    If your workspace is in a virtual network, see [Enable Azure Container Registry (ACR)](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr) for additional steps to add `--image-build-compute` to the `az acr build` command in the last line of this code.

    ```azurecli
    WORKSPACE=$(az config get --query "defaults[?name == 'workspace'].value" -o tsv)
    ACR_NAME=$(az ml workspace show -n $WORKSPACE --query container_registry -o tsv | cut -d'/' -f9-)
    IMAGE_TAG=${ACR_NAME}.azurecr.io/<IMAGE-NAME>

    az acr build ./docker-context -t $IMAGE_TAG -r $ACR_NAME
    ```

    > [!IMPORTANT]
    > It will take a few minutes for the image to be built. Wait until the build process is complete before proceeding to the next section. Don't close this terminal; you'll use it next to create the deployment.

The `az acr` command automatically uploads your **docker-context** folder, which contains the artifacts to build the image, to the cloud, where the image is built and hosted in an Azure Container Registry.

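The `cut -d'/' -f9-` in the snippet above extracts the registry name from the full Azure resource ID that `az ml workspace show` returns for the container registry. A quick illustration with a made-up resource ID:

```shell
# A hypothetical container registry resource ID, in the shape that
# `az ml workspace show --query container_registry` returns
RESOURCE_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.ContainerRegistry/registries/myregistry"

# Slash-delimited field 9 onward is the registry name
ACR_NAME=$(echo "$RESOURCE_ID" | cut -d'/' -f9-)
echo "$ACR_NAME"    # prints: myregistry
```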
## Deploy model

In this section of the article, you'll define and create an [endpoint and deployment](concept-endpoints.md) to deploy the model and image built in the previous steps to a managed online endpoint.

An *endpoint* is an HTTPS endpoint that clients, such as an application, can call to receive the scoring output of a trained model. It provides:

> [!div class="checklist"]
> - Authentication using "key & token" based auth
> - SSL termination
> - A stable scoring URI (endpoint-name.region.inference.ml.azure.com)

A *deployment* is a set of resources required for hosting the model that does the actual scoring. A **single** *endpoint* can contain **multiple** *deployments*. The load balancing capabilities of Azure Machine Learning managed endpoints allow you to send any percentage of traffic to each deployment. Traffic allocation can be used to do safe rollout of blue/green deployments by balancing requests between different instances.

### Create managed online endpoint

1. In your project directory, add the **endpoint.yml** file with the following code. Replace `<ENDPOINT-NAME>` with the name you want to give your managed endpoint.

    ```yml
    $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
    name: <ENDPOINT-NAME>
    auth_mode: aml_token
    ```

1. Using the same terminal where you built the image, execute the following CLI command to create an endpoint:

    ```azurecli
    az ml online-endpoint create -f endpoint.yml
    ```

1. Leave the terminal open to continue using it in the next section.

### Create deployment

1. To create your deployment, add the following code to the **deployment.yml** file.

    * Replace `<ENDPOINT-NAME>` with the endpoint name you defined in the **endpoint.yml** file
    * Replace `<DEPLOYMENT-NAME>` with the name you want to give the deployment
    * Replace `<MODEL-URI>` with the registered model's URI in the form of `azureml:modelname@latest`
    * Replace `<IMAGE-TAG>` with the value from:

        ```bash
        echo $IMAGE_TAG
        ```

    ```yml
    $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
    name: <DEPLOYMENT-NAME>
    endpoint_name: <ENDPOINT-NAME>
    code_configuration:
      code: ./src
      scoring_script: plumber.R
    model: <MODEL-URI>
    environment:
      image: <IMAGE-TAG>
      inference_config:
        liveness_route:
          port: 8000
          path: /live
        readiness_route:
          port: 8000
          path: /ready
        scoring_route:
          port: 8000
          path: /score
    instance_type: Standard_DS2_v2
    instance_count: 1
    ```

1. Next, in your terminal execute the following CLI command to create the deployment (notice that you're setting 100% of the traffic to this model):

    ```azurecli
    az ml online-deployment create -f deployment.yml --all-traffic --skip-script-validation
    ```

    > [!NOTE]
    > It may take several minutes for the service to be deployed. Wait until deployment is finished before proceeding to the next section.

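Filled in, a **deployment.yml** might look like the following sketch (the endpoint, deployment, model, and registry names are all illustrative placeholders; substitute your own values):

```yml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: r-deployment-forecast
endpoint_name: r-endpoint-forecast
code_configuration:
  code: ./src
  scoring_script: plumber.R
model: azureml:my-r-model@latest
environment:
  image: myregistry.azurecr.io/r-model-image
  inference_config:
    liveness_route:
      port: 8000
      path: /live
    readiness_route:
      port: 8000
      path: /ready
    scoring_route:
      port: 8000
      path: /score
instance_type: Standard_DS2_v2
instance_count: 1
```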
## Test

Once your deployment has been successfully created, you can test the endpoint using studio or the CLI:

# [Studio](#tab/azure-studio)

Navigate to the [Azure Machine Learning studio](https://ml.azure.com) and select **Endpoints** from the left-hand menu. Next, select the endpoint you created earlier.

Enter the following json into the **Input data to test real-time endpoint** textbox:

```json
{
    "forecast_horizon" : [2]
}
```

Select **Test**. You should see the following output:

:::image type="content" source="media/how-to-r-deploy-an-r-model/test-deployment.png" alt-text="Screenshot shows results from testing a model." lightbox="media/how-to-r-deploy-an-r-model/test-deployment.png":::

# [Azure CLI](#tab/cli)

### Create a sample request

In your project parent folder, create a file called **sample_request.json** and populate it with:

```json
{
    "forecast_horizon" : [2]
}
```

### Invoke the endpoint

Invoke the request. This example uses the endpoint name r-endpoint-forecast:

```azurecli
az ml online-endpoint invoke --name r-endpoint-forecast --request-file sample_request.json
```

---

## Clean up resources

Now that you've successfully scored with your endpoint, you can delete it so that you don't incur ongoing cost:

```azurecli
az ml online-endpoint delete --name r-endpoint-forecast
```

## Next steps

For more information about using R with Azure Machine Learning, see [Overview of R capabilities in Azure Machine Learning](how-to-r-overview-r-capabilities.md).
