
Commit c3b3cb9

Merge pull request #213853 from MicrosoftDocs/release-ignite-mariner-v2
[Ship Room] Release ignite mariner v2
2 parents 6a1bb48 + 8530579 commit c3b3cb9

6 files changed, +281 −0 lines changed
articles/aks/TOC.yml

Lines changed: 2 additions & 0 deletions
@@ -248,6 +248,8 @@
      href: start-stop-nodepools.md
    - name: Resize node pools
      href: resize-node-pool.md
    - name: Use the Mariner container host
      href: use-mariner.md
    - name: Deploy AKS with Terraform
      href: /azure/developer/terraform/create-k8s-cluster-with-tf-and-aks
      maintainContext: true

articles/aks/cluster-configuration.md

Lines changed: 169 additions & 0 deletions
@@ -123,6 +123,175 @@ az aks nodepool add --name ephemeral --cluster-name myAKSCluster --resource-grou

If you want to create node pools with network-attached OS disks, you can do so by specifying `--node-osdisk-type Managed`.

## Mariner OS

Mariner can be deployed on AKS through the Azure CLI or ARM templates.

### Prerequisites

1. You need the latest version of the Azure CLI. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
2. You need the `aks-preview` Azure CLI extension to be able to select the Mariner 2.0 operating system SKU. Run `az extension remove --name aks-preview` to clear any previous versions, then run `az extension add --name aks-preview`.
3. If you don't already have kubectl installed, install it through the Azure CLI using `az aks install-cli` or follow the [upstream instructions](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/).
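If you script these prerequisite checks, something like the following sketch can confirm the extension is installed. The `has_aks_preview` helper is hypothetical, and the sample JSON shape is an assumption — compare it against the real `az extension list -o json` output on your machine.

```python
import json

# Hypothetical pre-flight check (not part of the official workflow): parse the
# output of `az extension list -o json` and look for the aks-preview extension.
def has_aks_preview(extension_list_json: str) -> bool:
    extensions = json.loads(extension_list_json)
    return any(ext.get("name") == "aks-preview" for ext in extensions)

# Sample output shape, assumed for illustration:
sample = '[{"name": "aks-preview", "version": "0.5.101"}]'
print(has_aks_preview(sample))  # True
```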
### Deploy an AKS Mariner cluster with Azure CLI

Use the following example commands to create a Mariner cluster.
```azurecli
az group create --name MarinerTest --location eastus

az aks create --name testMarinerCluster --resource-group MarinerTest --os-sku mariner

az aks get-credentials --resource-group MarinerTest --name testMarinerCluster

kubectl get pods --all-namespaces
```
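Once `kubectl get pods --all-namespaces` works, you can also confirm the nodes themselves report a Mariner image. A minimal sketch, assuming the usual `kubectl get nodes -o json` field layout (the trimmed sample below is illustrative, not real cluster output):

```python
import json

# Nodes report their OS under status.nodeInfo.osImage; on Mariner nodes that
# string contains "CBL-Mariner". Feed this the output of `kubectl get nodes -o json`.
def mariner_nodes(nodes_json: str) -> list:
    doc = json.loads(nodes_json)
    return [
        item["metadata"]["name"]
        for item in doc["items"]
        if "CBL-Mariner" in item["status"]["nodeInfo"]["osImage"]
    ]

# Trimmed sample shape, assumed for illustration:
sample = json.dumps({"items": [
    {"metadata": {"name": "aks-agentpool-0"},
     "status": {"nodeInfo": {"osImage": "CBL-Mariner/Linux"}}},
]})
print(mariner_nodes(sample))  # ['aks-agentpool-0']
```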
### Deploy an AKS Mariner cluster with an ARM template

To add Mariner to an existing ARM template, add `"osSKU": "mariner"` and `"mode": "System"` to `agentPoolProfiles`, and set the apiVersion to 2021-03-01 or newer (`"apiVersion": "2021-03-01"`). The following deployment uses the ARM template file `marineraksarm.yml`.
```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.1",
  "parameters": {
    "clusterName": {
      "type": "string",
      "defaultValue": "marinerakscluster",
      "metadata": {
        "description": "The name of the Managed Cluster resource."
      }
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": {
        "description": "The location of the Managed Cluster resource."
      }
    },
    "dnsPrefix": {
      "type": "string",
      "metadata": {
        "description": "Optional DNS prefix to use with hosted Kubernetes API server FQDN."
      }
    },
    "osDiskSizeGB": {
      "type": "int",
      "defaultValue": 0,
      "minValue": 0,
      "maxValue": 1023,
      "metadata": {
        "description": "Disk size (in GB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 will apply the default disk size for that agentVMSize."
      }
    },
    "agentCount": {
      "type": "int",
      "defaultValue": 3,
      "minValue": 1,
      "maxValue": 50,
      "metadata": {
        "description": "The number of nodes for the cluster."
      }
    },
    "agentVMSize": {
      "type": "string",
      "defaultValue": "Standard_DS2_v2",
      "metadata": {
        "description": "The size of the Virtual Machine."
      }
    },
    "linuxAdminUsername": {
      "type": "string",
      "metadata": {
        "description": "User name for the Linux Virtual Machines."
      }
    },
    "sshRSAPublicKey": {
      "type": "string",
      "metadata": {
        "description": "Configure all linux machines with the SSH RSA public key string. Your key should include three parts, for example 'ssh-rsa AAAAB...snip...UcyupgH azureuser@linuxvm'"
      }
    },
    "osType": {
      "type": "string",
      "defaultValue": "Linux",
      "allowedValues": [
        "Linux"
      ],
      "metadata": {
        "description": "The type of operating system."
      }
    },
    "osSKU": {
      "type": "string",
      "defaultValue": "mariner",
      "allowedValues": [
        "mariner",
        "Ubuntu"
      ],
      "metadata": {
        "description": "The Linux SKU to use."
      }
    }
  },
  "resources": [
    {
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2021-03-01",
      "name": "[parameters('clusterName')]",
      "location": "[parameters('location')]",
      "properties": {
        "dnsPrefix": "[parameters('dnsPrefix')]",
        "agentPoolProfiles": [
          {
            "name": "agentpool",
            "mode": "System",
            "osDiskSizeGB": "[parameters('osDiskSizeGB')]",
            "count": "[parameters('agentCount')]",
            "vmSize": "[parameters('agentVMSize')]",
            "osType": "[parameters('osType')]",
            "osSKU": "[parameters('osSKU')]",
            "storageProfile": "ManagedDisks"
          }
        ],
        "linuxProfile": {
          "adminUsername": "[parameters('linuxAdminUsername')]",
          "ssh": {
            "publicKeys": [
              {
                "keyData": "[parameters('sshRSAPublicKey')]"
              }
            ]
          }
        }
      },
      "identity": {
        "type": "SystemAssigned"
      }
    }
  ],
  "outputs": {
    "controlPlaneFQDN": {
      "type": "string",
      "value": "[reference(parameters('clusterName')).fqdn]"
    }
  }
}
```

Create this file on your system as `marineraksarm.yml` and fill it with the contents of the template above, then deploy it:

```azurecli
az group create --name MarinerTest --location eastus

az deployment group create --resource-group MarinerTest --template-file marineraksarm.yml --parameters clusterName=testMarinerCluster dnsPrefix=marineraks1 linuxAdminUsername=azureuser sshRSAPublicKey="<contents of your id_rsa.pub>"

az aks get-credentials --resource-group MarinerTest --name testMarinerCluster

kubectl get pods --all-namespaces
```
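Because Resource Manager rejects the whole deployment on a template syntax error (for example, a stray trailing comma in an `allowedValues` list), a quick local parse can save a round trip. This is a hedged sketch — the `check_mariner_template` helper is hypothetical, not part of the AKS tooling — and it relies on the template body being strict JSON despite the `.yml` file name:

```python
import json

# Hypothetical local sanity check before `az deployment group create`.
def check_mariner_template(path: str) -> None:
    with open(path) as f:
        template = json.load(f)  # raises json.JSONDecodeError on syntax errors
    cluster = template["resources"][0]
    # osSKU requires apiVersion 2021-03-01 or newer; ISO dates compare as strings.
    assert cluster["apiVersion"] >= "2021-03-01", "apiVersion too old for osSKU"
    pools = cluster["properties"]["agentPoolProfiles"]
    assert any(p.get("mode") == "System" for p in pools), "need a System-mode pool"
```

Run `check_mariner_template("marineraksarm.yml")` before deploying; it raises if the file fails to parse or the Mariner-specific settings are missing.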
## Custom resource group name

When you deploy an Azure Kubernetes Service cluster in Azure, a second resource group gets created for the worker nodes. By default, AKS will name the node resource group `MC_resourcegroupname_clustername_location`, but you can also provide your own name.

articles/aks/index.yml

Lines changed: 2 additions & 0 deletions
@@ -26,6 +26,8 @@ landingContent:
          url: intro-kubernetes.md
      - linkListType: whats-new
        links:
          - text: Mariner container host for AKS
            url: use-mariner.md
          - text: Vertical Pod Autoscaler (preview)
            url: vertical-pod-autoscaler.md
          - text: Workload identity (preview)

articles/aks/intro-kubernetes.md

Lines changed: 6 additions & 0 deletions
@@ -83,6 +83,12 @@ AKS supports the creation of Intel SGX-based, confidential computing node pools

For more information, see [Confidential computing nodes on AKS][conf-com-node].

### Mariner nodes

Mariner is an open-source Linux distribution created by Microsoft, and it's now available for preview as a container host on Azure Kubernetes Service (AKS). The Mariner container host provides reliability and consistency from cloud to edge across the AKS, AKS-HCI, and Arc products. You can deploy Mariner node pools in a new cluster, add Mariner node pools to your existing Ubuntu clusters, or migrate your Ubuntu nodes to Mariner nodes.

For more information, see [Use the Mariner container host on Azure Kubernetes Service (AKS)](use-mariner.md).

### Storage volume support

To support application workloads, you can mount static or dynamic storage volumes for persistent data. Depending on the number of connected pods expected to share the storage volumes, you can use storage backed by either:

articles/aks/use-mariner.md

Lines changed: 69 additions & 0 deletions
@@ -0,0 +1,69 @@
---
title: Use the Mariner container host on Azure Kubernetes Service (AKS)
description: Learn how to use the Mariner container host on Azure Kubernetes Service (AKS).
services: container-service
ms.topic: article
ms.date: 09/22/2022
---

# Use the Mariner container host on Azure Kubernetes Service (AKS)

Mariner is an open-source Linux distribution created by Microsoft, and it's now available for preview as a container host on Azure Kubernetes Service (AKS). The Mariner container host provides reliability and consistency from cloud to edge across the AKS, AKS-HCI, and Arc products. You can deploy Mariner node pools in a new cluster, add Mariner node pools to your existing Ubuntu clusters, or migrate your Ubuntu nodes to Mariner nodes. To learn more about Mariner, see the [Mariner documentation][mariner-doc].

## Why use Mariner

The Mariner container host on AKS uses a native AKS image that provides one place to do all Linux development. Every package is built from source and validated, ensuring your services run on proven components. Mariner is lightweight, including only the packages needed to run container workloads. It provides a reduced attack surface and eliminates patching and maintenance of unnecessary packages. At its base layer, Mariner has a Microsoft-hardened kernel tuned for Azure. Learn more about the [key capabilities of Mariner][mariner-capabilities].

## How to use Mariner on AKS

To get started using Mariner on AKS, see:

* [Creating a cluster with Mariner][mariner-cluster-config]
* [Add a Mariner node pool to your existing cluster][mariner-node-pool]
* [Ubuntu to Mariner migration][ubuntu-to-mariner]

## How to upgrade Mariner nodes

We recommend keeping your clusters up to date and secure by enabling automatic upgrades. To enable automatic upgrades, see:

* [Automatically upgrade an Azure Kubernetes Service (AKS) cluster][auto-upgrade-aks]
* [Deploy kured in an AKS cluster][kured]

To manually upgrade the node image on a cluster, run `az aks nodepool upgrade`:

```azurecli
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name myNodePool \
    --node-image-only
```
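To decide whether that manual upgrade is actually needed, you can compare the pool's current node image version (from `az aks nodepool show --query nodeImageVersion`) against the latest available one (from `az aks nodepool get-upgrades`). The following is an illustrative sketch only — `needs_node_image_upgrade` is a hypothetical helper, and the version string format shown is an assumption:

```python
# Node image versions end in a date stamp (format assumed for illustration,
# e.g. "AKSCBLMariner-V2gen2-2022.09.22"); the trailing YYYY.MM.DD portion
# compares correctly as a plain string.
def needs_node_image_upgrade(current: str, latest: str) -> bool:
    return current.rsplit("-", 1)[-1] < latest.rsplit("-", 1)[-1]

print(needs_node_image_upgrade(
    "AKSCBLMariner-V2gen2-2022.08.10",
    "AKSCBLMariner-V2gen2-2022.09.22",
))  # True
```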
## Regional availability

Mariner is available for use in the same regions as AKS.

## Limitations

Mariner currently has the following limitations:

* Mariner does not yet have image SKUs for GPU, ARM64, SGX, or FIPS.
* Mariner does not yet have FedRAMP, FIPS, or CIS certification.
* Mariner cannot yet be deployed through the Azure portal or Terraform.
* Some vulnerability-scanning tools may not support Mariner yet.
* The Mariner container host is a Gen 2 image. Mariner does not plan to offer a Gen 1 SKU.
* Node configurations are not yet supported.
* Mariner is not yet supported in GitHub Actions.
* Mariner does not support AppArmor. Support for SELinux can be manually configured.
* Some add-ons, extensions, and open-source integrations may not be supported yet on Mariner. Azure Monitor, Grafana, Helm, Key Vault, and Container Insights are confirmed to be supported.
* AKS diagnostics does not yet support Mariner.

<!-- LINKS - Internal -->
[mariner-doc]: https://microsoft.github.io/CBL-Mariner/docs/#cbl-mariner-linux
[mariner-capabilities]: https://microsoft.github.io/CBL-Mariner/docs/#key-capabilities-of-cbl-mariner-linux
[mariner-cluster-config]: cluster-configuration.md
[mariner-node-pool]: use-multiple-node-pools.md
[ubuntu-to-mariner]: use-multiple-node-pools.md
[auto-upgrade-aks]: auto-upgrade-cluster.md
[kured]: node-updates-kured.md

articles/aks/use-multiple-node-pools.md

Lines changed: 33 additions & 0 deletions
@@ -135,6 +135,38 @@ az aks nodepool add \
    --node-vm-size Standard_Dpds_v5
```

### Add a Mariner node pool

Mariner is an open-source Linux distribution available as an AKS container host. It provides high reliability, security, and consistency. Mariner includes only the minimal set of packages needed for running container workloads, which improves boot times and overall performance.

You can add a Mariner node pool into your existing cluster using the `az aks nodepool add` command and specifying `--os-sku mariner`.

```azurecli
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name marinerpool \
    --os-sku mariner
```

### Migrate Ubuntu nodes to Mariner

Use the following instructions to migrate your Ubuntu nodes to Mariner nodes.

1. Add a Mariner node pool into your existing cluster using the `az aks nodepool add` command and specifying `--os-sku mariner`.

   > [!NOTE]
   > When adding a new Mariner node pool, you need to add at least one as `--mode System`. Otherwise, AKS won't allow you to delete your existing Ubuntu node pool.

2. [Cordon the existing Ubuntu nodes][cordon-and-drain].
3. [Drain the existing Ubuntu nodes][drain-nodes].
4. Remove the existing Ubuntu nodes using the `az aks nodepool delete` command.

   ```azurecli
   az aks nodepool delete \
       --resource-group myResourceGroup \
       --cluster-name myAKSCluster \
       --name myNodePool
   ```
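After step 4, you can verify the migration by grouping the remaining nodes by OS image: it is complete when no node still reports an Ubuntu image. The following is an illustrative sketch, assuming the standard `kubectl get nodes -o json` field layout (the trimmed sample is not real cluster output):

```python
import json

# Hypothetical post-migration check: group node names by status.nodeInfo.osImage.
def nodes_by_os_image(nodes_json: str) -> dict:
    groups = {}
    for item in json.loads(nodes_json)["items"]:
        os_image = item["status"]["nodeInfo"]["osImage"]
        groups.setdefault(os_image, []).append(item["metadata"]["name"])
    return groups

# Trimmed sample shape, assumed for illustration:
sample = json.dumps({"items": [
    {"metadata": {"name": "aks-marinerpool-0"},
     "status": {"nodeInfo": {"osImage": "CBL-Mariner/Linux"}}},
]})
groups = nodes_by_os_image(sample)
print(any("Ubuntu" in os_image for os_image in groups))  # False
```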
### Add a node pool with a unique subnet

A workload may require splitting a cluster's nodes into separate pools for logical isolation. This isolation can be supported with separate subnets dedicated to each node pool in the cluster. This can address requirements such as having non-contiguous virtual network address space to split across node pools.

@@ -833,3 +865,4 @@ az group delete --name myResourceGroup2 --yes --no-wait
[use-labels]: use-labels.md
[cordon-and-drain]: resize-node-pool.md#cordon-the-existing-nodes
[internal-lb-different-subnet]: internal-lb.md#specify-a-different-subnet
[drain-nodes]: resize-node-pool.md#drain-the-existing-nodes
