articles/azure-cache-for-redis/cache-ml.md (4 additions, 4 deletions)
````diff
@@ -33,7 +33,7 @@ Azure Cache for Redis is performant and scalable. When paired with an Azure Mach
 > * `model` - The registered model that will be deployed.
 > * `inference_config` - The inference configuration for the model.
 >
-> For more information on setting these variables, see [Deploy models with Azure Machine Learning](../machine-learning/how-to-deploy-and-where.md).
+> For more information on setting these variables, see [Deploy models with Azure Machine Learning](/azure/machine-learning/how-to-deploy-managed-online-endpoints).

 ## Create an Azure Cache for Redis instance
````
````diff
@@ -119,7 +119,7 @@ def run(data):
         return error
 ```

-For more information on entry script, see [Define scoring code.](../machine-learning/how-to-deploy-and-where.md?tabs=python#define-an-entry-script)
+For more information on entry script, see [Define scoring code.](/azure/machine-learning/how-to-deploy-managed-online-endpoints)

 * **Dependencies**, such as helper scripts or Python/Conda packages required to run the entry script or model
````
````diff
@@ -144,7 +144,7 @@ These entities are encapsulated into an **inference configuration**. The inferen

 For more information on environments, see [Create and manage environments for training and deployment](../machine-learning/how-to-use-environments.md).

-For more information on inference configuration, see [Deploy models with Azure Machine Learning](../machine-learning/how-to-deploy-and-where.md?tabs=python#define-an-inference-configuration).
+For more information on inference configuration, see [Deploy models with Azure Machine Learning](/azure/machine-learning/how-to-deploy-managed-online-endpoints).

 > [!IMPORTANT]
 > When deploying to Functions, you do not need to create a **deployment configuration**.
````
````diff
 To create the Docker image that is deployed to Azure Functions, use [azureml.contrib.functions.package](/python/api/azureml-contrib-functions/azureml.contrib.functions) or the specific package function for the trigger you want to use. The following code snippet demonstrates how to create a new package with an HTTP trigger from the model and inference configuration:

 > [!NOTE]
-> The code snippet assumes that `model` contains a registered model, and that `inference_config` contains the configuration for the inference environment. For more information, see [Deploy models with Azure Machine Learning](../machine-learning/how-to-deploy-and-where.md).
+> The code snippet assumes that `model` contains a registered model, and that `inference_config` contains the configuration for the inference environment. For more information, see [Deploy models with Azure Machine Learning](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
````
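The packaging snippet that the last hunk's context paragraph refers to is not included in this diff. A minimal sketch of what such a call might look like, assuming `ws` is an authenticated `azureml.core.Workspace` and that `model` and `inference_config` are already defined as the note describes; it is not runnable without an Azure ML workspace and the `azureml-contrib-functions` package:

```python
# Sketch only: assumes `ws` (azureml.core.Workspace), `model` (a registered
# model), and `inference_config` already exist, per the note above.
from azureml.contrib.functions import package, HTTP_TRIGGER

# Build a Docker package that wraps the model in an HTTP-triggered Azure Function.
model_package = package(
    ws,                      # workspace to build the package in (assumed)
    [model],                 # registered model(s) to include in the image
    inference_config,        # entry script plus environment definition
    functions_enabled=True,  # produce an Azure Functions image
    trigger=HTTP_TRIGGER,    # HTTP trigger, as the surrounding text describes
)
model_package.wait_for_creation(show_output=True)
print(model_package.location)  # image location in the container registry
```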