articles/ai-studio/concepts/vulnerability-management.md
8 additions & 53 deletions
@@ -48,43 +48,18 @@ Azure AI Studio releases updates for supported images every two weeks to address
Patched images are released under a new immutable tag and an updated `:latest` tag. Choosing between the `:latest` tag and pinning to a particular image version is a tradeoff between security and environment reproducibility for your machine learning job.
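To make that tradeoff concrete, here is a minimal sketch. The image name and tags below are placeholders, not actual Azure AI Studio images: pin the base image to an immutable tag when reproducibility matters most, or track `:latest` to pick up each patched release automatically.

```dockerfile
# Placeholder image name: pinning to an immutable tag keeps the job environment
# reproducible, but you don't pick up the biweekly security patches automatically.
FROM mcr.microsoft.com/azureml/curated/example-image:20240301.v1

# Alternative (commented out): tracking :latest picks up each patched release,
# at the cost of the environment potentially changing between runs.
# FROM mcr.microsoft.com/azureml/curated/example-image:latest
```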
-<!--## Managing environments and container images
+## Managing environments and container images
-Q: Do we still need this section?
+In Azure AI Studio, Docker images are used to provide a runtime environment for [prompt flow deployments](../how-to/flow-deploy.md). The images are built from a base image that Azure AI Studio provides.
-Reproducibility is a key aspect of software development and machine learning experimentation. The [Azure Machine Learning environment](concept-environments.md) component's primary focus is to guarantee reproducibility of the environment where the user's code is executed. To ensure reproducibility for any machine learning job, earlier built images are pulled to the compute nodes without the need for rematerialization.
+Although Azure AI Studio patches base images with each release, whether you use the latest image might be a tradeoff between reproducibility and vulnerability management. It's your responsibility to choose the environment version that you use for your jobs or model deployments.
-Although Azure Machine Learning patches base images with each release, whether you use the latest image might be tradeoff between reproducibility and vulnerability management. It's your responsibility to choose the environment version that you use for your jobs or model deployments.
+By default, dependencies are layered on top of base images when you're building an image. After you install more dependencies on top of the Microsoft-provided images, vulnerability management becomes your responsibility.
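As a rough sketch of that layering (the base image name and package versions below are placeholders, not a specific Microsoft-provided image), anything installed on top of the `FROM` line is yours to scan and patch:

```dockerfile
# Placeholder base image; substitute the image your deployment actually uses.
FROM mcr.microsoft.com/azureml/curated/example-image:20240301.v1

# Dependencies layered here are not covered by the base-image patching cadence,
# so monitoring and remediating their vulnerabilities is your responsibility.
RUN pip install --no-cache-dir \
    requests==2.31.0 \
    pandas==2.2.1
```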
-By default, dependencies are layered on top of base images that Azure Machine Learning provides when you're building environments. You can also use your own base images when you're using environments in Azure Machine Learning. After you install more dependencies on top of the Microsoft-provided images, or bring your own base images, vulnerability management becomes your responsibility.
+Associated with your AI hub resource is an Azure Container Registry instance that functions as a cache for container images. Any image that materializes is pushed to the container registry. The workspace uses it when deployment is triggered for the corresponding environment.
-Associated with your Azure Machine Learning workspace is an Azure Container Registry instance that functions as a cache for container images. Any image that materializes is pushed to the container registry. The workspace uses it if experimentation or deployment is triggered for the corresponding environment.
+The AI hub doesn't delete any image from your container registry. You're responsible for evaluating the need for an image over time. To monitor and maintain environment hygiene, you can use [Microsoft Defender for Container Registry](/azure/defender-for-cloud/defender-for-container-registries-usage) to help scan your images for vulnerabilities. To automate your processes based on triggers from Microsoft Defender, see [Automate remediation responses](/azure/defender-for-cloud/workflow-automation).
-Azure Machine Learning doesn't delete any image from your container registry. You're responsible for evaluating the need for an image over time. To monitor and maintain environment hygiene, you can use [Microsoft Defender for Container Registry](../defender-for-cloud/defender-for-container-registries-usage.md) to help scan your images for vulnerabilities. To automate your processes based on triggers from Microsoft Defender, see [Automate remediation responses](../defender-for-cloud/workflow-automation.md).
-
-## Using a private package repository
-
-Q: Do we need this section?
-
-Azure Machine Learning uses Conda and Pip to install Python packages. By default, Azure Machine Learning downloads packages from public repositories. If your organization requires you to source packages only from private repositories like Azure DevOps feeds, you can override the Conda and Pip configuration as part of your base images and your environment configurations for compute instances.
-
-The following example configuration shows how to remove the default channels and add your own private Conda and Pip feeds. Consider using [compute instance setup scripts](./how-to-customize-compute-instance.md) for automation.
-# Configure Pip private indexes and ensure that the client trusts your host
-RUN pip config set global.index https://my.private.pypi.feed/repository/myfeed/pypi/ \
-    && pip config set global.index-url https://my.private.pypi.feed/repository/myfeed/simple/
-
-# In case your feed host isn't secured through SSL
-RUN pip config set global.trusted-host http://my.private.pypi.feed/
-```
-
-To learn how to specify your own base images in Azure Machine Learning, see [Create an environment from a Docker build context](how-to-use-environments.md#use-your-own-dockerfile). For more information on configuring Conda environments, see [Creating an environment file manually](https://docs.conda.io/projects/conda/en/4.6.1/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually) on the Conda site. -->
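The Conda half of the configuration referenced in the removed snippet above isn't reproduced in this diff. As a hedged sketch only (the feed URL is a placeholder, not a real repository), the equivalent Dockerfile layer might remove the default channels and point Conda at a private feed:

```dockerfile
# Hedged sketch, not the original snippet: drop the default Conda channels and
# add a placeholder private channel so packages resolve only from your own feed.
RUN conda config --remove channels defaults \
    && conda config --add channels https://my.private.conda.feed/repository/myfeed/
```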
## Vulnerability management on compute hosts
@@ -149,32 +124,12 @@ In the following conditions, cluster nodes don't scale down, so they can't get t
You're responsible for scaling down non-idle cluster nodes to get the latest OS VM image updates. Azure Machine Learning doesn't stop any running workloads on compute nodes to issue VM updates. Temporarily change the minimum nodes to zero and allow the cluster to reduce to zero nodes. -->
-### Managed online endpoints
+### Endpoints
-Managed online endpoints automatically receive OS host image updates that include vulnerability fixes. The update frequency of images is at least once a month.
+Endpoints automatically receive OS host image updates that include vulnerability fixes. The update frequency of images is at least once a month.
Compute nodes are automatically upgraded to the latest VM image version when that version is released. You don't need to take any action.
-<!-- ### Customer-managed Kubernetes clusters
-
-[Kubernetes compute](how-to-attach-kubernetes-anywhere.md) lets you configure Kubernetes clusters to train, perform inference, and manage models in Azure Machine Learning.
-
-Because you manage the environment with Kubernetes, management of both OS VM vulnerabilities and container image vulnerabilities is your responsibility.
-
-Azure Machine Learning frequently publishes new versions of Azure Machine Learning extension container images in Microsoft Artifact Registry. Microsoft is responsible for ensuring that new image versions are free from vulnerabilities. [Each release](https://github.com/Azure/AML-Kubernetes/blob/master/docs/release-notes.md) fixes vulnerabilities.
-
-When your clusters run jobs without interruption, running jobs might run outdated container image versions. After you upgrade the `amlarc` extension to a running cluster, newly submitted jobs start to use the latest image version. When you're upgrading the `amlarc` extension to its latest version, clean up the old container image versions from the clusters as required.
-
-To observe whether your Azure Arc cluster is running the latest version of `amlarc`, use the Azure portal. Under your Azure Arc resource of the type **Kubernetes - Azure Arc**, go to **Extensions** to find the version of the `amlarc` extension.
-
-## AutoML and Designer environments
-
-For code-based training experiences, you control which Azure Machine Learning environment to use. With AutoML and the designer, the environment is encapsulated as part of the service. These types of jobs can run on computes that you configure, to allow for extra controls such as network isolation.
-
-AutoML jobs run on environments that layer on top of Azure Machine Learning [base Docker images](https://github.com/Azure/AzureML-Containers).
-
-Designer jobs are compartmentalized into [components](concept-component.md). Each component has its own environment that layers on top of the Azure Machine Learning base Docker images. For more information on components, see the [component reference](./component-reference-v2/component-reference-v2.md). -->
-
## Next steps
<!-- * [Azure Machine Learning repository for base images](https://github.com/Azure/AzureML-Containers)