
Commit 9ff7dfe

Merge pull request #106674 from TimShererWithAquent/us1669724h
Azure CLI syntax blocks.
2 parents c28c435 + a9c3ffd

13 files changed: +114 / -78 lines
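
The pattern repeated across all 13 files: Azure CLI commands move out of generic `console` fences (where they carried a `$` prompt and were often mixed with their output) into `azurecli` fences, with any output split into a separate `output` fence. Schematically, each touched block changes roughly like this (the `az <command>` here is just a placeholder):

-```console
-$ az <command>
+```azurecli
+az <command>
+```
+
+```output
 <command output>
 ```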

articles/aks/certificate-rotation.md

Lines changed: 3 additions & 3 deletions
@@ -47,13 +47,13 @@ AKS generates and uses the following certificates, Certificate Authorities, and

 Use [az aks get-credentials][az-aks-get-credentials] to sign in to your AKS cluster. This command also downloads and configures the `kubectl` client certificate on your local machine.

-```console
+```azurecli
 az aks get-credentials -g $RESOURCE_GROUP_NAME -n $CLUSTER_NAME
 ```

 Use `az aks rotate-certs` to rotate all certificates, CAs, and SAs on your cluster.

-```console
+```azurecli
 az aks rotate-certs -g $RESOURCE_GROUP_NAME -n $CLUSTER_NAME
 ```

@@ -69,7 +69,7 @@ Unable to connect to the server: x509: certificate signed by unknown authority (

 Update the certificate used by `kubectl` by running `az aks get-credentials`.

-```console
+```azurecli
 az aks get-credentials -g $RESOURCE_GROUP_NAME -n $CLUSTER_NAME --overwrite-existing
 ```
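
Taken together, the three snippets this file touches form one workflow; a minimal end-to-end sketch (assuming `$RESOURCE_GROUP_NAME` and `$CLUSTER_NAME` are already set, as in the article) is:

```azurecli
# Sign in to the cluster and fetch the kubectl client certificate.
az aks get-credentials -g $RESOURCE_GROUP_NAME -n $CLUSTER_NAME

# Rotate all certificates, CAs, and SAs on the cluster.
az aks rotate-certs -g $RESOURCE_GROUP_NAME -n $CLUSTER_NAME

# The pre-rotation kubectl credentials now fail with the x509 error shown
# above, so fetch fresh ones.
az aks get-credentials -g $RESOURCE_GROUP_NAME -n $CLUSTER_NAME --overwrite-existing
```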

articles/aks/cluster-container-registry-integration.md

Lines changed: 16 additions & 6 deletions
@@ -37,6 +37,7 @@ az acr create -n $MYACR -g myContainerRegistryResourceGroup --sku basic
 # Create an AKS cluster with ACR integration
 az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr $MYACR
 ```
+
 Alternatively, you can specify the ACR name using an ACR resource ID, which has the following format:

 `/subscriptions/\<subscription-id\>/resourceGroups/\<resource-group-name\>/providers/Microsoft.ContainerRegistry/registries/\<name\>`

@@ -54,17 +55,22 @@ Integrate an existing ACR with existing AKS clusters by supplying valid values f
 ```azurecli
 az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acrName>
 ```
+
 or,
-```
+
+```azurecli
 az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-resource-id>
 ```

 You can also remove the integration between an ACR and an AKS cluster with the following
+
 ```azurecli
 az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acrName>
 ```
+
 or
-```
+
+```azurecli
 az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acr-resource-id>
 ```

@@ -89,7 +95,7 @@ az aks get-credentials -g myResourceGroup -n myAKSCluster

 Create a file called **acr-nginx.yaml** that contains the following:

-```
+```yaml
 apiVersion: apps/v1
 kind: Deployment
 metadata:

@@ -114,16 +120,20 @@ spec:
 ```

 Next, run this deployment in your AKS cluster:
-```
+
+```console
 kubectl apply -f acr-nginx.yaml
 ```

 You can monitor the deployment by running:
-```
+
+```console
 kubectl get pods
 ```
+
 You should have two running pods.
-```
+
+```output
 NAME                                 READY   STATUS    RESTARTS   AGE
 nginx0-deployment-669dfc4d4b-x74kr   1/1     Running   0          20s
 nginx0-deployment-669dfc4d4b-xdpd6   1/1     Running   0          20s
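
A hedged aside on the `<acr-resource-id>` placeholder: rather than composing the `/subscriptions/...` string by hand, the ID can be looked up with `az acr show` (the `ACR_ID` variable name here is illustrative):

```azurecli
# Look up the registry's full ARM resource ID.
ACR_ID=$(az acr show -n $MYACR --query id -o tsv)

# Attach the registry to the cluster by resource ID.
az aks update -n myAKSCluster -g myResourceGroup --attach-acr $ACR_ID
```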

articles/aks/configure-kubenet.md

Lines changed: 5 additions & 3 deletions
@@ -80,7 +80,7 @@ Use *Azure CNI* when:

 - You have available IP address space.
 - Most of the pod communication is to resources outside of the cluster.
-- You dont want to manage the UDRs.
+- You don't want to manage the UDRs.
 - You need AKS advanced features such as virtual nodes or Azure Network Policy. Use [Calico network policies][calico-network-policies].

 For more information to help you decide which network model to use, see [Compare network models and their support scope][network-comparisons].

@@ -114,9 +114,11 @@ az ad sp create-for-rbac --skip-assignment

 The following example output shows the application ID and password for your service principal. These values are used in additional steps to assign a role to the service principal and then create the AKS cluster:

-```console
-$ az ad sp create-for-rbac --skip-assignment
+```azurecli
+az ad sp create-for-rbac --skip-assignment
+```

+```output
 {
   "appId": "476b3636-5eda-4c0e-9751-849e70b5cfad",
   "displayName": "azure-cli-2019-01-09-22-29-24",

articles/aks/scale-cluster.md

Lines changed: 1 addition & 3 deletions
@@ -22,9 +22,7 @@ az aks show --resource-group myResourceGroup --name myAKSCluster --query agentPo

 The following example output shows that the *name* is *nodepool1*:

-```console
-$ az aks show --resource-group myResourceGroup --name myAKSCluster --query agentPoolProfiles
-
+```output
 [
   {
     "count": 1,

articles/aks/troubleshooting.md

Lines changed: 5 additions & 5 deletions
@@ -55,7 +55,7 @@ The reason for the warnings on the dashboard is that the cluster is now enabled

 The easiest way to access your service outside the cluster is to run `kubectl proxy`, which proxies requests sent to your localhost port 8001 to the Kubernetes API server. From there, the API server can proxy to your service: `http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/node?namespace=default`.

-If you dont see the Kubernetes dashboard, check whether the `kube-proxy` pod is running in the `kube-system` namespace. If it isn't in a running state, delete the pod and it will restart.
+If you don't see the Kubernetes dashboard, check whether the `kube-proxy` pod is running in the `kube-system` namespace. If it isn't in a running state, delete the pod and it will restart.

 ## I can't get logs by using kubectl logs or I can't connect to the API server. I'm getting "Error from server: error dialing backend: dial tcp…". What should I do?

@@ -116,7 +116,7 @@ Naming restrictions are implemented by both the Azure platform and AKS. If a res
 * The AKS *MC_* resource group name combines resource group name and resource name. The auto-generated syntax of `MC_resourceGroupName_resourceName_AzureRegion` must be no greater than 80 chars. If needed, reduce the length of your resource group name or AKS cluster name.
 * The *dnsPrefix* must start and end with alphanumeric values and must be between 1-54 characters. Valid characters include alphanumeric values and hyphens (-). The *dnsPrefix* can't include special characters such as a period (.).

-## Im receiving errors when trying to create, update, scale, delete or upgrade cluster, that operation is not allowed as another operation is in progress.
+## I'm receiving errors when trying to create, update, scale, delete or upgrade cluster, that operation is not allowed as another operation is in progress.

 *This troubleshooting assistance is directed from aka.ms/aks-pending-operation*

@@ -293,7 +293,7 @@ If you are using a version of Kubernetes that does not have the fix for this iss
 In some cases, if an Azure Disk detach operation fails on the first attempt, it will not retry the detach operation and will remain attached to the original node VM. This error can occur when moving a disk from one node to another. For example:

 ```console
-[Warning] AttachVolume.Attach failed for volume pvc-7b7976d7-3a46-11e9-93d5-dee1946e6ce9 : Attach volume kubernetes-dynamic-pvc-7b7976d7-3a46-11e9-93d5-dee1946e6ce9" to instance /subscriptions/XXX/resourceGroups/XXX/providers/Microsoft.Compute/virtualMachines/aks-agentpool-57634498-0 failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status= Code=ConflictingUserInput Message=Disk /subscriptions/XXX/resourceGroups/XXX/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-7b7976d7-3a46-11e9-93d5-dee1946e6ce9 cannot be attached as the disk is already owned by VM /subscriptions/XXX/resourceGroups/XXX/providers/Microsoft.Compute/virtualMachines/aks-agentpool-57634498-1’.”
+[Warning] AttachVolume.Attach failed for volume "pvc-7b7976d7-3a46-11e9-93d5-dee1946e6ce9" : Attach volume "kubernetes-dynamic-pvc-7b7976d7-3a46-11e9-93d5-dee1946e6ce9" to instance "/subscriptions/XXX/resourceGroups/XXX/providers/Microsoft.Compute/virtualMachines/aks-agentpool-57634498-0" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status= Code="ConflictingUserInput" Message="Disk '/subscriptions/XXX/resourceGroups/XXX/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-7b7976d7-3a46-11e9-93d5-dee1946e6ce9' cannot be attached as the disk is already owned by VM '/subscriptions/XXX/resourceGroups/XXX/providers/Microsoft.Compute/virtualMachines/aks-agentpool-57634498-1'."
 ```

 This issue has been fixed in the following versions of Kubernetes:

@@ -343,12 +343,12 @@ This issue has been fixed in the following versions of Kubernetes:
 If you are using a version of Kubernetes that does not have the fix for this issue and your node VM is in a failed state, you can mitigate the issue by manually updating the VM status using one of the below:

 * For an availability set-based cluster:
-```console
+```azurecli
 az vm update -n <VM_NAME> -g <RESOURCE_GROUP_NAME>
 ```

 * For a VMSS-based cluster:
-```console
+```azurecli
 az vmss update-instances -g <RESOURCE_GROUP_NAME> --name <VMSS_NAME> --instance-id <ID>
 ```
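
The `<ID>` placeholder in the VMSS command is the scale set instance ID; one hedged way to find candidates (the column aliases are illustrative) is:

```azurecli
# List instance IDs and provisioning states for the node pool's scale set.
az vmss list-instances -g <RESOURCE_GROUP_NAME> --name <VMSS_NAME> \
  --query "[].{id:instanceId, state:provisioningState}" -o table
```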

articles/aks/use-multiple-node-pools.md

Lines changed: 27 additions & 15 deletions
@@ -89,9 +89,7 @@ az aks nodepool list --resource-group myResourceGroup --cluster-name myAKSCluste

 The following example output shows that *mynodepool* has been successfully created with three nodes in the node pool. When the AKS cluster was created in the previous step, a default *nodepool1* was created with a node count of *2*.

-```console
-$ az aks nodepool list --resource-group myResourceGroup --cluster-name myAKSCluster
-
+```output
 [
   {
     ...

@@ -144,9 +142,11 @@ az aks nodepool upgrade \

 List the status of your node pools again using the [az aks node pool list][az-aks-nodepool-list] command. The following example shows that *mynodepool* is in the *Upgrading* state to *1.15.7*:

-```console
-$ az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
+```azurecli
+az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
+```

+```output
 [
   {
     ...

@@ -230,9 +230,11 @@ az aks nodepool scale \

 List the status of your node pools again using the [az aks node pool list][az-aks-nodepool-list] command. The following example shows that *mynodepool* is in the *Scaling* state with a new count of *5* nodes:

-```console
-$ az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
+```azurecli
+az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
+```

+```output
 [
   {
     ...

@@ -280,9 +282,11 @@ az aks nodepool delete -g myResourceGroup --cluster-name myAKSCluster --name myn

 The following example output from the [az aks node pool list][az-aks-nodepool-list] command shows that *mynodepool* is in the *Deleting* state:

-```console
-$ az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
+```azurecli
+az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
+```

+```output
 [
   {
     ...

@@ -333,9 +337,11 @@ az aks nodepool add \

 The following example output from the [az aks node pool list][az-aks-nodepool-list] command shows that *gpunodepool* is *Creating* nodes with the specified *VmSize*:

-```console
-$ az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
+```azurecli
+az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
+```

+```output
 [
   {
     ...

@@ -371,8 +377,10 @@ It takes a few minutes for the *gpunodepool* to be successfully created.
 You now have two node pools in your cluster - the default node pool initially created, and the GPU-based node pool. Use the [kubectl get nodes][kubectl-get] command to view the nodes in your cluster. The following example output shows the nodes:

 ```console
-$ kubectl get nodes
+kubectl get nodes
+```

+```output
 NAME                                  STATUS   ROLES   AGE     VERSION
 aks-gpunodepool-28993262-vmss000000   Ready    agent   4m22s   v1.15.7
 aks-nodepool1-28993262-vmss000000     Ready    agent   115m    v1.15.7

@@ -427,8 +435,10 @@ kubectl apply -f gpu-toleration.yaml
 It takes a few seconds to schedule the pod and pull the NGINX image. Use the [kubectl describe pod][kubectl-describe] command to view the pod status. The following condensed example output shows the *sku=gpu:NoSchedule* toleration is applied. In the events section, the scheduler has assigned the pod to the *aks-gpunodepool-28993262-vmss000000* GPU-based node:

 ```console
-$ kubectl describe pod mypod
+kubectl describe pod mypod
+```

+```output
 [...]
 Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                  node.kubernetes.io/unreachable:NoExecute for 300s

@@ -559,9 +569,11 @@ az aks nodepool add \

 The following example output from the [az aks nodepool list][az-aks-nodepool-list] command shows that *tagnodepool* is *Creating* nodes with the specified *tag*:

-```console
-$ az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
+```azurecli
+az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
+```

+```output
 [
   {
     ...
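
Several of these hunks repeat the same list-then-read-the-state pattern; as a hedged shortcut, the CLI's JMESPath `--query` support can reduce each check to one table (column aliases illustrative):

```azurecli
# Summarize name, provisioning state, and node count for every pool.
az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster \
  --query "[].{name:name, state:provisioningState, count:count}" -o table
```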

articles/container-service/kubernetes/container-service-kubernetes-datadog.md

Lines changed: 5 additions & 5 deletions
@@ -21,22 +21,22 @@ It also assumes that you have the `az` Azure cli and `kubectl` tools installed.

 You can test if you have the `az` tool installed by running:

-```console
-$ az --version
+```azurecli
+az --version
 ```

 If you don't have the `az` tool installed, there are instructions [here](https://github.com/azure/azure-cli#installation).

 You can test if you have the `kubectl` tool installed by running:

 ```console
-$ kubectl version
+kubectl version
 ```

 If you don't have `kubectl` installed, you can run:

-```console
-$ az acs kubernetes install-cli
+```azurecli
+az acs kubernetes install-cli
 ```

 ## DataDog
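
A hedged aside: both tool checks can be collapsed into a single shell test, e.g.:

```console
# Verify both prerequisites are on PATH.
command -v az >/dev/null && command -v kubectl >/dev/null && echo "az and kubectl found"
```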

articles/container-service/kubernetes/container-service-kubernetes-oms.md

Lines changed: 27 additions & 13 deletions
@@ -24,8 +24,8 @@ It also assumes that you have the `az` Azure cli and `kubectl` tools installed.

 You can test if you have the `az` tool installed by running:

-```console
-$ az --version
+```azurecli
+az --version
 ```

 If you don't have the `az` tool installed, there are instructions [here](https://github.com/azure/azure-cli#installation).

@@ -34,21 +34,24 @@ Alternatively, you can use [Azure Cloud Shell](https://docs.microsoft.com/azure/
 You can test if you have the `kubectl` tool installed by running:

 ```console
-$ kubectl version
+kubectl version
 ```

 If you don't have `kubectl` installed, you can run:
-```console
-$ az acs kubernetes install-cli
+
+```azurecli
+az acs kubernetes install-cli
 ```

 To test if you have kubernetes keys installed in your kubectl tool you can run:
+
 ```console
-$ kubectl get nodes
+kubectl get nodes
 ```

 If the above command errors out, you need to install kubernetes cluster keys into your kubectl tool. You can do that with the following command:
-```console
+
+```azurecli
 RESOURCE_GROUP=my-resource-group
 CLUSTER_NAME=my-acs-name
 az acs kubernetes get-credentials --resource-group=$RESOURCE_GROUP --name=$CLUSTER_NAME

@@ -91,7 +94,7 @@ Once you have added your workspace ID and key to the DaemonSet configuration, yo
 on your cluster with the `kubectl` command-line tool:

 ```console
-$ kubectl create -f oms-daemonset.yaml
+kubectl create -f oms-daemonset.yaml
 ```

 ### Installing the Log Analytics agent using a Kubernetes Secret

@@ -102,17 +105,24 @@ To protect your Log Analytics workspace ID and key you can use Kubernetes Secret
 - secret template - secret-template.yaml
 - DaemonSet YAML file - omsagent-ds-secrets.yaml
 - Run the script. The script will ask for the Log Analytics Workspace ID and Primary Key. Insert that and the script will create a secret yaml file so you can run it.
-```
-#> sudo bash ./secret-gen.sh
+
+```console
+sudo bash ./secret-gen.sh
 ```

 - Create the secrets pod by running the following:
-```kubectl create -f omsagentsecret.yaml```
+
+```console
+kubectl create -f omsagentsecret.yaml
+```

 - To check, run the following:

+```console
+kubectl get secrets
 ```
-root@ubuntu16-13db:~# kubectl get secrets
+
+```output
 NAME                  TYPE                                  DATA      AGE
 default-token-gvl91   kubernetes.io/service-account-token   3         50d
 omsagent-secret       Opaque                                2         1d

@@ -130,7 +140,11 @@ To protect your Log Analytics workspace ID and key you can use Kubernetes Secret
 KEY: 88 bytes
 ```

-- Create your omsagent daemon-set by running ```kubectl create -f omsagent-ds-secrets.yaml```
+- Create your omsagent daemon-set by running the following:
+
+```console
+kubectl create -f omsagent-ds-secrets.yaml
+```

 ### Conclusion
 That's it! After a few minutes, you should be able to see data flowing to your Log Analytics dashboard.
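
As a hedged alternative to the `secret-gen.sh` helper above, the same secret can typically be created directly with `kubectl`; the `WSID`/`KEY` key names are assumptions about the secret's schema, not confirmed by this diff:

```console
# Create the omsagent secret directly (WSID/KEY key names assumed).
kubectl create secret generic omsagent-secret \
  --from-literal=WSID=<your-workspace-id> \
  --from-literal=KEY=<your-primary-key>
```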
