
Commit 405c362

Merge branch 'master' of https://github.com/MicrosoftDocs/azure-docs-pr into updateFileLin
2 parents 5811597 + dfcbe4e

22 files changed: +334 -174 lines

articles/aks/supported-kubernetes-versions.md

Lines changed: 130 additions & 14 deletions
@@ -12,32 +12,138 @@ ms.author: saudas
 
 # Supported Kubernetes versions in Azure Kubernetes Service (AKS)
 
-The Kubernetes community releases minor versions roughly every three months. These releases include new features and improvements. Patch releases are more frequent (sometimes weekly) and are only intended for critical bug fixes in a minor version. These patch releases include fixes for security vulnerabilities or major bugs impacting a large number of customers and products running in production based on Kubernetes.
+The Kubernetes community releases minor versions roughly every three months. These releases include new features and
+improvements. Patch releases are more frequent (sometimes weekly) and are intended only for critical bug fixes in a
+minor version. These patch releases include fixes for security vulnerabilities or major bugs impacting a large number
+of customers and products running in production based on Kubernetes.
 
-A new Kubernetes minor version is made available in [aks-engine][aks-engine] on day one. The AKS Service Level Objective (SLO) targets releasing the minor version for AKS clusters within 30 days, subject to the stability of the release.
+AKS aims to certify and release new Kubernetes versions within 30 days of an upstream release, subject to the stability
+of the release.
+
+## Kubernetes versions
+
+Kubernetes uses the standard [Semantic Versioning](https://semver.org/) scheme. Each version
+of Kubernetes follows this numbering scheme:
+
+```
+[major].[minor].[patch]
+
+Example:
+1.12.14
+1.12.15
+1.13.7
+```
+
+Each number in the version indicates general compatibility with the previous version:
+
+* Major versions change when incompatible API changes are made or backwards compatibility may be broken.
+* Minor versions change when functionality changes are made that are backwards compatible with other minor releases.
+* Patch versions change when backwards-compatible bug fixes are made.
+
+In general, users should run the latest patch release of the minor version they are on. For example, if
+your production cluster is on *1.13.6* and *1.13.7* is the latest available patch for the *1.13*
+series, you should upgrade to *1.13.7* as soon as you can to ensure your cluster is fully patched and supported.
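The version scheme above can be sketched in Python. This is a minimal illustration (the helper names are hypothetical, not part of any AKS tooling):

```python
# Minimal sketch: parse "[major].[minor].[patch]" strings so patch levels
# can be compared numerically rather than as text.

def parse_version(version):
    """Split a 'major.minor.patch' string into a tuple of ints."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def needs_patch_upgrade(current, latest):
    """True when 'current' is on the same minor series as 'latest'
    but behind on the patch level."""
    cur, new = parse_version(current), parse_version(latest)
    return cur[:2] == new[:2] and cur[2] < new[2]

print(needs_patch_upgrade("1.13.6", "1.13.7"))  # prints True: 1.13.6 should move to 1.13.7
```

Parsing to integer tuples matters because a plain string comparison would order *1.12.9* after *1.12.14*.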
 
 ## Kubernetes version support policy
 
 AKS supports four minor versions of Kubernetes:
 
-- The current minor version that is released upstream (n)
-- Three previous minor versions. Each supported minor version also supports two stable patches.
+* The current minor version that is released in AKS (N)
+* Three previous minor versions. Each supported minor version also supports two stable patches.
+
+This is known as "N-3": (N (latest release) - 3 (minor versions)).
+
+For example, if AKS introduces *1.13.x* today, support is provided for the following versions:
+
+New minor version    Supported Version List
+-----------------    ----------------------
+1.13.x               1.12.a, 1.12.b, 1.11.a, 1.11.b, 1.10.a, 1.10.b
+
+where *x*, *a*, and *b* are representative patch versions.
+
+For details on communications regarding version changes and expectations, see "Communications" below.
+
+When a new minor version is introduced, the oldest supported minor version and its patch releases are deprecated and
+removed. For example, if the current supported version list is:
+
+Supported Version List
+----------------------
+1.12.a, 1.12.b, 1.11.a, 1.11.b, 1.10.a, 1.10.b, 1.9.a, 1.9.b
+
+and AKS releases *1.13.x*, all *1.9.x* versions are removed and go out of support.
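The rolling "N-3" window described above can be sketched as follows (a hypothetical helper, not AKS tooling):

```python
# Minimal sketch: when a new minor version ships, keep the newest four
# minor series (N through N-3) and drop the oldest one.

def update_supported_minors(supported, new_minor):
    """'supported' is a list of minor-series strings, newest first,
    e.g. ["1.12", "1.11", "1.10", "1.9"]."""
    window = [new_minor] + supported
    return window[:4]  # N, N-1, N-2, N-3

print(update_supported_minors(["1.12", "1.11", "1.10", "1.9"], "1.13"))
# ['1.13', '1.12', '1.11', '1.10']  -- the 1.9 series drops out of support
```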
 
-For example, if AKS introduces *1.13.x* today, support is also provided for *1.12.a* + *1.12.b*, *1.11.c* + *1.11.d*, *1.10.e* + *1.10.f* (where the lettered patch releases are the two latest stable builds).
+> [!NOTE]
+> If customers are running an unsupported Kubernetes version, they will be asked to upgrade when
+> requesting support for the cluster. Clusters running unsupported Kubernetes releases are not covered by the
+> [AKS support policies](https://docs.microsoft.com/azure/aks/support-policies).
 
-When a new minor version is introduced, the oldest minor version and patch releases supported are retired. 30 days before the release of the new minor version and upcoming version retirement, an announcement is made through the [Azure update channels][azure-update-channel]. In the example above where *1.13.x* is released, the retired versions are *1.9.g* + *1.9.h*.
 
-When you deploy an AKS cluster in the portal or with the Azure CLI, the cluster is always set to the n-1 minor version and latest patch. For example, if AKS supports *1.13.x*, *1.12.a* + *1.12.b*, *1.11.c* + *1.11.d*, *1.10.e* + *1.10.f*, the default version for new clusters is *1.11.b*.
+In addition to the minor-version policy above, AKS supports the two latest *patch* releases of a given minor version. For
+example, given the following supported versions:
+
+Supported Version List
+----------------------
+1.12.1, 1.12.2, 1.11.4, 1.11.5
+
+if upstream Kubernetes releases 1.12.3 and 1.11.6 and AKS releases those patch versions, the oldest patch versions
+are deprecated and removed, and the supported version list becomes:
+
+Supported Version List
+----------------------
+1.12.*2*, 1.12.*3*, 1.11.*5*, 1.11.*6*
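The two-patch window can be sketched the same way (hypothetical helper, not AKS tooling):

```python
# Minimal sketch: keep only the two highest patch levels within each minor series.
from collections import defaultdict

def trim_patch_window(versions):
    """'versions' are 'major.minor.patch' strings; keep the two newest patches per minor."""
    by_minor = defaultdict(list)
    for v in versions:
        major, minor, patch = (int(p) for p in v.split("."))
        by_minor[(major, minor)].append(patch)
    kept = []
    for (major, minor), patches in by_minor.items():
        for patch in sorted(patches)[-2:]:
            kept.append("%d.%d.%d" % (major, minor, patch))
    return sorted(kept)

print(trim_patch_window(["1.12.1", "1.12.2", "1.12.3", "1.11.4", "1.11.5", "1.11.6"]))
# ['1.11.5', '1.11.6', '1.12.2', '1.12.3']
```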
+
+### Communications
+
+* For new **minor** versions of Kubernetes
+  * All users are notified of the new version and of the version that will be removed.
+  * Customers running the version **to be removed** are notified that they have **60 days** to upgrade to a
+    supported release (that is, a supported minor version).
+* For new **patch** versions of Kubernetes
+  * All users are notified of the new patch version and asked to upgrade to the latest patch release.
+  * Users have **30 days** to upgrade to a supported patch release before the oldest patch release is removed.
+
+AKS defines "released" as general availability, enabled in all SLO / Quality of Service measurements, and
+available in all regions.
+
+> [!NOTE]
+> Customers are notified of Kubernetes version releases and deprecations. When a minor version is
+> deprecated/removed, users are given 60 days to upgrade to a supported release. In the case of patch releases,
+> customers are given 30 days to upgrade to a supported release.
+
+Notifications are sent via:
+
+* [AKS Release notes](https://aka.ms/aks/releasenotes)
+* Azure portal notifications
+* [Azure update channel][azure-update-channel]
+
+### Policy exceptions
+
+AKS reserves the right to add or remove new/existing versions that have been identified to have one or more critical
+production-impacting bugs or security issues, without advance notice.
+
+Specific patch releases may be skipped, or rollout may be accelerated, depending on the severity of the bug or security issue.
+
+### Azure portal and CLI default versions
+
+When you deploy an AKS cluster in the portal or with the Azure CLI, the cluster is always set to the N-1 minor version
+and latest patch. For example, if AKS supports *1.13.x*, *1.12.a* + *1.12.b*, *1.11.a* + *1.11.b*, and *1.10.a* + *1.10.b*,
+the default version for new clusters is *1.12.b*.
+
+AKS defaults to N-1 (minor.latestPatch, for example *1.12.b*) to provide customers a known, stable, and patched version by default.
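The default-version rule (newest patch of the N-1 minor series) can be sketched as follows (a hypothetical helper, not AKS tooling):

```python
# Minimal sketch: pick the default cluster version, i.e. the newest patch
# of the second-newest (N-1) supported minor series.

def default_cluster_version(supported):
    """'supported' are 'major.minor.patch' strings across the supported minor series."""
    parsed = sorted(((int(a), int(b), int(c)) for a, b, c in
                     (v.split(".") for v in supported)), reverse=True)
    newest_minor = parsed[0][:2]
    # Skip every patch of the newest minor series, then take the highest remaining.
    n_minus_1 = next(v for v in parsed if v[:2] != newest_minor)
    return "%d.%d.%d" % n_minus_1

print(default_cluster_version(
    ["1.13.5", "1.12.7", "1.12.6", "1.11.9", "1.11.8", "1.10.13", "1.10.12"]))
# 1.12.7
```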
 
 ## List currently supported versions
 
-To find out what versions are currently available for your subscription and region, use the [az aks get-versions][az-aks-get-versions] command. The following example lists the available Kubernetes versions for the *EastUS* region:
+To find out what versions are currently available for your subscription and region, use the
+[az aks get-versions][az-aks-get-versions] command. The following example lists the available Kubernetes versions for
+the *EastUS* region:
 
 ```azurecli-interactive
 az aks get-versions --location eastus --output table
 ```
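If you need the version list in a script, `--output json` can be parsed instead of the table. The field names below ("orchestrators", "orchestratorVersion") are an assumption about the JSON shape; verify them against your CLI version's actual output:

```python
# Sketch: extract versions from `az aks get-versions --location eastus --output json`.
# The payload shape used here is an assumption, not guaranteed by this document.

def extract_versions(payload):
    """Return the orchestrator versions from an `az aks get-versions` JSON payload."""
    return [o["orchestratorVersion"] for o in payload["orchestrators"]]

# Abbreviated sample payload (illustrative only, not real CLI output):
sample = {
    "orchestrators": [
        {"orchestratorVersion": "1.12.7"},
        {"orchestratorVersion": "1.13.5"},
    ]
}
print(extract_versions(sample))
# ['1.12.7', '1.13.5']
```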
 
-The output is similar to the following example, which shows that Kubernetes version *1.13.5* is the most recent version available:
+The output is similar to the following example, which shows that Kubernetes version *1.13.5* is the most recent version
+available:
 
 ```
 KubernetesVersion Upgrades
@@ -55,20 +161,30 @@ KubernetesVersion Upgrades
 
 **What happens when a customer upgrades a Kubernetes cluster with a minor version that is not supported?**
 
-If you are on the *n-4* version, you are out of the SLO. If your upgrade from version n-4 to n-3 succeeds, then you are back in the SLO. For example:
+If you are on the *n-4* version, you are outside of support and will be asked to upgrade. If your upgrade from version
+n-4 to n-3 succeeds, you are back within our support policies. For example:
+
+- If the supported AKS versions are *1.13.x*, *1.12.a* + *1.12.b*, *1.11.c* + *1.11.d*, and *1.10.e* + *1.10.f* and you
+  are on *1.9.g* or *1.9.h*, you are outside of support.
+- If the upgrade from *1.9.g* or *1.9.h* to *1.10.e* or *1.10.f* succeeds, you are back within our support policies.
+
+Upgrades to versions older than *n-4* are not supported. In such cases, we recommend customers create new AKS clusters
+and redeploy their workloads.
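The supported-upgrade check described above can be sketched as follows (hypothetical helper, not AKS tooling):

```python
# Minimal sketch: an upgrade back into support is only possible from the
# n-4 minor series; anything older requires a new cluster.

def upgrade_path(cluster_minor, newest_minor):
    """Minor versions as (major, minor) tuples, e.g. (1, 9) and (1, 13)."""
    behind = newest_minor[1] - cluster_minor[1]   # assumes the same major version
    if behind <= 3:
        return "supported"
    if behind == 4:
        return "upgrade to n-3 to return to support"
    return "unsupported: create a new cluster and redeploy"

print(upgrade_path((1, 9), (1, 13)))   # n-4: one upgrade away from support
print(upgrade_path((1, 8), (1, 13)))   # older than n-4
```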
 
-- If the supported AKS versions are *1.13.x*, *1.12.a* + *1.12.b*, *1.11.c* + *1.11.d*, and *1.10.e* + *1.10.f* and you are on *1.9.g* or *1.9.h*, you are out of the SLO.
-- If the upgrade from *1.9.g* or *1.9.h* to *1.10.e* or *1.10.f* succeeds, you are back in the SLO.
+**What does 'Outside of Support' mean?**
 
-Upgrades to versions older than *n-4* are not supported. In such cases, we recommend customers create new AKS clusters and redeploy their workloads.
+'Outside of Support' means that the version you are running is outside of the supported versions list, and you will be
+asked to upgrade the cluster to a supported version when requesting support. Additionally, AKS does not make any
+runtime or other guarantees for clusters outside of the supported versions list.
 
 **What happens when a customer scales a Kubernetes cluster with a minor version that is not supported?**
 
 For minor versions not supported by AKS, scaling in or out continues to work without any issues.
 
 **Can a customer stay on a Kubernetes version forever?**
 
-Yes. However, if the cluster is not on one of the versions supported by AKS, the cluster is out of the AKS SLO. Azure does not automatically upgrade your cluster or delete it.
+Yes. However, if the cluster is not on one of the versions supported by AKS, the cluster is outside the AKS support
+policies. Azure does not automatically upgrade your cluster or delete it.
 
 **What version does the master support if the agent cluster is not in one of the supported AKS versions?**
 
articles/azure-cache-for-redis/cache-how-to-premium-vnet.md

Lines changed: 3 additions & 1 deletion
@@ -127,7 +127,7 @@ There are eight inbound port range requirements. Inbound requests in these range
 
 | Port(s) | Direction | Transport Protocol | Purpose | Local IP | Remote IP |
 | --- | --- | --- | --- | --- | --- |
-| 6379, 6380 |Inbound |TCP |Client communication to Redis, Azure load balancing | (Redis subnet) | (Redis subnet), Virtual Network, Azure Load Balancer |
+| 6379, 6380 |Inbound |TCP |Client communication to Redis, Azure load balancing | (Redis subnet) | (Redis subnet), Virtual Network, Azure Load Balancer <sup>2</sup> |
 | 8443 |Inbound |TCP |Internal communications for Redis | (Redis subnet) |(Redis subnet) |
 | 8500 |Inbound |TCP/UDP |Azure load balancing | (Redis subnet) |Azure Load Balancer |
 | 10221-10231 |Inbound |TCP |Internal communications for Redis | (Redis subnet) |(Redis subnet), Azure Load Balancer |
@@ -136,6 +136,8 @@ There are eight inbound port range requirements. Inbound requests in these range
 | 16001 |Inbound |TCP/UDP |Azure load balancing | (Redis subnet) |Azure Load Balancer |
 | 20226 |Inbound |TCP |Internal communications for Redis | (Redis subnet) |(Redis subnet) |
 
+<sup>2</sup> You can use the service tag 'AzureLoadBalancer' (Resource Manager) (or 'AZURE_LOADBALANCER' for classic deployments) when authoring the NSG rules.
+
 #### Additional VNET network connectivity requirements
 
 There are network connectivity requirements for Azure Cache for Redis that may not be initially met in a virtual network. Azure Cache for Redis requires all the following items to function properly when used within a virtual network.

articles/azure-databricks/howto-regional-disaster-recovery.md

Lines changed: 63 additions & 24 deletions
@@ -130,45 +130,84 @@ To create your own regional disaster recovery topology, follow these requirement
 Copy and save the following python script to a file, and run it in your Databricks command line. For example, `python scriptname.py`.
 
 ```python
-from subprocess import call, check_output import json
+from subprocess import call, check_output
+import json, os
 
 EXPORT_PROFILE = "primary"
 IMPORT_PROFILE = "secondary"
 
-# Get all clusters info from old workspace
-clusters_out = check_output(["databricks", "clusters", "list", "--profile", EXPORT_PROFILE]) clusters_info_list = clusters_out.splitlines()
+# Get all clusters info from old workspace
+clusters_out = check_output(["databricks", "clusters", "list", "--profile", EXPORT_PROFILE])
+clusters_info_list = clusters_out.splitlines()
+
+# Create a list of all cluster ids
+clusters_list = []
 
-# Create a list of all cluster ids
-clusters_list = [] for cluster_info in clusters_info_list: clusters_list.append(cluster_info.split(None, 1)[0])
+for cluster_info in clusters_info_list:
+    if cluster_info != '':
+        clusters_list.append(cluster_info.split(None, 1)[0])
 
 # Optionally filter cluster ids out manually, so as to create only required ones in new workspace
 
-# Create a list of mandatory / optional create request elements
-cluster_req_elems = ["num_workers","autoscale","cluster_name","spark_version","spark_conf"," node_type_id","driver_node_type_id","custom_tags","cluster_log_conf","sp ark_env_vars","autotermination_minutes","enable_elastic_disk"]
+# Create a list of mandatory / optional create request elements
+cluster_req_elems = ["num_workers","autoscale","cluster_name","spark_version","spark_conf","node_type_id","driver_node_type_id","custom_tags","cluster_log_conf","spark_env_vars","autotermination_minutes","enable_elastic_disk"]
+
+print(str(len(clusters_list)) + " clusters found in the primary site")
 
+print("---------------------------------------------------------")
 # Try creating all / selected clusters in new workspace with same config as in old one.
-cluster_old_new_mappings = {} for cluster in clusters_list: print "Trying to migrate cluster " + cluster
+cluster_old_new_mappings = {}
+i = 0
+for cluster in clusters_list:
+    i += 1
+    print("Checking cluster " + str(i) + "/" + str(len(clusters_list)) + " : " + cluster)
+    cluster_get_out = check_output(["databricks", "clusters", "get", "--cluster-id", cluster, "--profile", EXPORT_PROFILE])
+    print("Got cluster config from old workspace")
+
+    # Remove extra content from the config, as we need to build create request with allowed elements only
+    cluster_req_json = json.loads(cluster_get_out)
+    cluster_json_keys = list(cluster_req_json.keys())
+
+    # Don't migrate Job clusters
+    if cluster_req_json['cluster_source'] == u'JOB':
+        print("Skipping this cluster as it is a Job cluster : " + cluster_req_json['cluster_id'])
+        print("---------------------------------------------------------")
+        continue
 
-cluster_get_out = check_output(["databricks", "clusters", "get", "--cluster-id", cluster, "--profile", EXPORT_PROFILE])
-print "Got cluster config from old workspace"
+    for key in cluster_json_keys:
+        if key not in cluster_req_elems:
+            cluster_req_json.pop(key, None)
 
-# Remove extra content from the config, as we need to build create request with allowed elements only
-cluster_req_json = json.loads(cluster_get_out)
-cluster_json_keys = cluster_req_json.keys()
+    # Create the cluster, and store the mapping from old to new cluster ids
 
-for key in cluster_json_keys:
-    if key not in cluster_req_elems:
-        cluster_req_json.pop(key, None)
-
-# Create the cluster, and store the mapping from old to new cluster ids
-cluster_create_out = check_output(["databricks", "clusters", "create", "--json", json.dumps(cluster_req_json), "--profile", IMPORT_PROFILE])
-cluster_create_out_json = json.loads(cluster_create_out)
-cluster_old_new_mappings[cluster] = cluster_create_out_json['cluster_id']
+    # Create a temp file to store the current cluster info as JSON
+    strCurrentClusterFile = "tmp_cluster_info.json"
 
-print "Sent cluster create request to new workspace successfully"
+    # Delete the temp file if it exists
+    if os.path.exists(strCurrentClusterFile):
+        os.remove(strCurrentClusterFile)
 
-print "Cluster mappings: " + json.dumps(cluster_old_new_mappings)
-print "All done"
+    fClusterJSONtmp = open(strCurrentClusterFile, "w+")
+    fClusterJSONtmp.write(json.dumps(cluster_req_json))
+    fClusterJSONtmp.close()
+
+    cluster_create_out = check_output(["databricks", "clusters", "create", "--json-file", strCurrentClusterFile, "--profile", IMPORT_PROFILE])
+    cluster_create_out_json = json.loads(cluster_create_out)
+    cluster_old_new_mappings[cluster] = cluster_create_out_json['cluster_id']
+
+    print("Cluster create request sent to secondary site workspace successfully")
+    print("---------------------------------------------------------")
+
+# Delete the temp file if it exists
+if os.path.exists(strCurrentClusterFile):
+    os.remove(strCurrentClusterFile)
+
+print("Cluster mappings: " + json.dumps(cluster_old_new_mappings))
+print("All done")
+print("P.S. : Please note that all the new clusters in your secondary site are being started now!")
+print("       If you won't use those new clusters at the moment, please don't forget to terminate them to avoid charges")
 ```
 
 6. **Migrate the jobs configuration**
articles/iot-hub/iot-hub-node-node-schedule-jobs.md

Lines changed: 1 addition & 1 deletion
@@ -101,7 +101,7 @@ In this section, you create a Node.js console app that responds to a direct meth
 
 // Respond to the cloud app for the direct method
 response.send(200, function(err) {
-    if (!err) {
+    if (err) {
         console.error('An error occurred when sending a method response:\n' + err.toString());
     } else {
         console.log('Response to method \'' + request.methodName + '\' sent successfully.');
