
Commit ec88790

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into shgwpol

2 parents a10cb51 + 7ee23eb

35 files changed: +153 -46 lines

articles/aks/use-node-public-ips.md

Lines changed: 1 addition & 1 deletion

@@ -260,7 +260,7 @@ az provider register --namespace Microsoft.ContainerService

To trigger host port auto assignment, deploy a workload without any host ports and apply the `kubernetes.azure.com/assign-hostports-for-containerports` annotation with the list of ports that need host port assignments. Specify the annotation value as a comma-separated list of `port/protocol` entries, where the port is an individual port number defined in the pod spec and the protocol is `tcp` or `udp`.
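As a sketch of the annotation described above, the pod template metadata might look like the following (the port values are illustrative, not taken from the article):

```yaml
# Hypothetical pod template metadata: request host-port auto assignment
# for container ports 8080/tcp and 8443/tcp.
metadata:
  annotations:
    kubernetes.azure.com/assign-hostports-for-containerports: "8080/tcp,8443/tcp"
```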
- Ports will be assigned from the range `40000-59999` and will be unique across the cluster. The assigned ports will also be added to environment variables inside the pod so that the application can determine what ports were assigned.
+ Ports will be assigned from the range `40000-59999` and will be unique across the cluster. The assigned ports will also be added to environment variables inside the pod so that the application can determine what ports were assigned. The environment variable name takes the format `<deployment name>_PORT_<port number>_<protocol>_HOSTPORT`, for example `mydeployment_PORT_8080_TCP_HOSTPORT: 41932`.
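The environment-variable naming described in the added line can be sketched as follows; the deployment name `mydeployment`, port `8080`, and the fallback value are illustrative stand-ins for a cluster-assigned port:

```shell
# Sketch: read the auto-assigned host port for container port 8080/tcp
# inside the pod. The variable name follows
# <deployment name>_PORT_<port number>_<protocol>_HOSTPORT;
# "mydeployment" and the fallback value 41932 are illustrative.
HOSTPORT="${mydeployment_PORT_8080_TCP_HOSTPORT:-41932}"
echo "Host port for 8080/tcp: ${HOSTPORT}"
```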

Here is an example `echoserver` deployment, showing the mapping of host ports for ports 8080 and 8443:

Lines changed: 123 additions & 0 deletions

@@ -0,0 +1,123 @@

---
title: Monitor AKS hybrid clusters
ms.date: 01/10/2023
ms.topic: article
author: austonli
ms.author: aul
description: Collect metrics and logs of AKS hybrid clusters using Azure Monitor.
ms.reviewer: aul
---

# Azure Monitor container insights for Azure Kubernetes Service (AKS) hybrid clusters (preview)

> [!NOTE]
> Support for monitoring AKS hybrid clusters is currently in preview. We recommend using preview features only in safe testing environments.

[Azure Monitor container insights](./container-insights-overview.md) provides a rich monitoring experience for [AKS hybrid clusters (preview)](/azure/aks/hybrid/aks-hybrid-options-overview). This article describes how to set up Container insights to monitor an AKS hybrid cluster.

## Supported configurations

- Azure Monitor container insights supports monitoring only Linux containers.

## Prerequisites

- The prerequisites listed under the [generic cluster extensions documentation](../../azure-arc/kubernetes/extensions.md#prerequisites).
- A Log Analytics workspace. Azure Monitor Container Insights supports a Log Analytics workspace in the regions listed on the Azure [products by region page](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace using [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md), or the [Azure portal](../logs/quick-create-workspace.md).
- A [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the Azure Arc-enabled Kubernetes resource. If the Log Analytics workspace is in a different subscription, a [Log Analytics Contributor](../logs/manage-access.md#azure-rbac) role assignment is needed on the Log Analytics workspace.
- To view the monitoring data, a [Log Analytics Reader](../logs/manage-access.md#azure-rbac) role assignment on the Log Analytics workspace.
- The following endpoints need to be enabled for outbound access, in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md#meet-network-requirements).
- Azure CLI version 2.43.0 or higher.
- Azure CLI `k8s-extension` extension version 1.3.7 or higher.
- Azure CLI `resource-graph` extension version 2.1.0.
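The minimum-version checks above can be scripted; this sketch compares version strings with `sort -V`, and the installed version shown is a hypothetical stand-in for what `az version` would report:

```shell
# Sketch: compare an installed CLI version against the documented minimum
# (2.43.0 for Azure CLI). sort -V orders version strings numerically,
# so the minimum sorting first means the installed version satisfies it.
MIN_VERSION="2.43.0"
INSTALLED="2.44.1"   # hypothetical; in practice read this from `az version`
LOWEST=$(printf '%s\n%s\n' "${MIN_VERSION}" "${INSTALLED}" | sort -V | head -n1)
if [ "${LOWEST}" = "${MIN_VERSION}" ]; then
  echo "Azure CLI ${INSTALLED} meets the ${MIN_VERSION} minimum"
else
  echo "Azure CLI ${INSTALLED} is below the required ${MIN_VERSION}"
fi
```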
33+
## Onboarding

## [CLI](#tab/create-cli)

```azurecli
az login

az account set --subscription <cluster-subscription-name>

az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice" --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true
```
## [Azure portal](#tab/create-portal)

### Onboarding from the AKS hybrid resource pane

1. In the Azure portal, select the AKS hybrid cluster that you wish to monitor.

2. From the resource pane on the left, select the 'Insights' item under the 'Monitoring' section.

3. On the onboarding page, select the 'Configure Azure Monitor' button.

4. Choose the [Log Analytics workspace](../logs/quick-create-workspace.md) to send your metrics and logs data to.

5. Select the 'Configure' button to deploy the Azure Monitor Container Insights cluster extension.

### Onboarding from the Azure Monitor pane

1. In the Azure portal, navigate to the 'Monitor' pane, and select the 'Containers' option under the 'Insights' menu.

2. Select the 'Unmonitored clusters' tab to view the AKS hybrid clusters that you can enable monitoring for.

3. Select the 'Enable' link next to the cluster that you want to enable monitoring for.

4. Choose the Log Analytics workspace.

5. Select the 'Configure' button to continue.
## [Resource Manager](#tab/create-arm)

1. Download the Azure Resource Manager template and parameter files.

```bash
curl -L https://aka.ms/existingClusterOnboarding.json -o existingClusterOnboarding.json
curl -L https://aka.ms/existingClusterParam.json -o existingClusterParam.json
```

2. Edit the values in the parameter file.

- For `clusterResourceId` and `clusterRegion`, use the values on the Overview page for the LCM cluster.
- For `workspaceResourceId`, use the resource ID of your Log Analytics workspace.
- For `workspaceRegion`, use the location of your Log Analytics workspace.
- For `workspaceDomain`, use "opinsights.azure.com" for public cloud and "opinsights.azure.cn" for Azure China cloud.
- For `resourceTagValues`, leave empty if no resource tags are needed.
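A filled-in parameter file following the bullets above might look like this sketch; all values are placeholders, and the `$schema`/`contentVersion` wrapper is the standard ARM deployment-parameter shape rather than content taken from the downloaded file:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "clusterResourceId": { "value": "<cluster-resource-id>" },
    "clusterRegion": { "value": "<cluster-region>" },
    "workspaceResourceId": { "value": "<workspace-resource-id>" },
    "workspaceRegion": { "value": "<workspace-region>" },
    "workspaceDomain": { "value": "opinsights.azure.com" },
    "resourceTagValues": { "value": {} }
  }
}
```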
3. Deploy the ARM template.

```azurecli
az login

az account set --subscription <cluster-subscription-name>

az deployment group create --resource-group <resource-group> --template-file ./existingClusterOnboarding.json --parameters existingClusterParam.json
```

---
## Validation

### Extension details

To show the extension details:

```azurecli
az k8s-extension list --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice"
```
## Delete extension

To delete the extension:

```azurecli
az k8s-extension delete --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice" --name azuremonitor-containers --yes
```
## Known issues and limitations

- Windows containers are not currently supported.

articles/azure-monitor/toc.yml

Lines changed: 2 additions & 0 deletions

@@ -966,6 +966,8 @@ items:
    href: containers/container-insights-enable-aks.md
  - name: Azure Arc-enabled cluster
    href: containers/container-insights-enable-arc-enabled-clusters.md
+ - name: AKS hybrid cluster
+   href: containers/container-insights-enable-provisioned-clusters.md
  - name: Hybrid cluster
    href: containers/container-insights-hybrid-setup.md
  - name: Enable with Azure Policy

articles/azure-netapp-files/azacsnap-cmd-ref-delete.md

Lines changed: 2 additions & 2 deletions

@@ -12,7 +12,7 @@ ms.service: azure-netapp-files
ms.workload: storage
ms.tgt_pltfrm: na
ms.topic: reference
- ms.date: 04/21/2021
+ ms.date: 01/18/2023
ms.author: phjensen
---

@@ -37,7 +37,7 @@ The `-c delete` command has the following options:

- `--delete sync` when used with options `--dbsid <SID>` and `--hanabackupid <HANA backup id>` gets the storage snapshot name from the backup catalog for the `<HANA backup id>`, and then deletes the entry in the backup catalog _and_ the snapshot from any of the volumes containing the named snapshot.

- - `--delete sync` when used with `--snapshot <snapshot name>` will check for any entries in the backup catalog for the `<snapshot name>`, gets the SAP HANA backup ID and deletes both the entry in the backup catalog _and_ the snapshot from any of the volumes containing the named snapshot.
+ - `--delete sync` when used with options `--dbsid <SID>` and `--snapshot <snapshot name>` checks for any entries in the backup catalog for the `<snapshot name>`, gets the SAP HANA backup ID, and deletes both the entry in the backup catalog _and_ the snapshot from any of the volumes containing the named snapshot.

- `[--force]` (optional) *Use with caution*. This operation forces deletion without prompting for confirmation.

articles/cognitive-services/QnAMaker/How-To/multi-turn.md

Lines changed: 1 addition & 1 deletion

@@ -362,4 +362,4 @@ QnA Maker supports version control by including multi-turn conversation steps in

## Next steps

- * Learn more about contextual conversations from this [dialog sample](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/11.qnamaker) or learn more about [conceptual bot design for multi-turn conversations](/azure/bot-service/bot-builder-conversations).
+ * Learn more about contextual conversations from this [dialog sample](https://github.com/microsoft/BotBuilder-Samples/tree/main/archive/samples/csharp_dotnetcore/11.qnamaker) or learn more about [conceptual bot design for multi-turn conversations](/azure/bot-service/bot-builder-conversations).

articles/cognitive-services/Speech-Service/custom-speech-overview.md

Lines changed: 3 additions & 0 deletions

@@ -21,6 +21,9 @@ Out of the box, speech to text utilizes a Universal Language Model as a base model

A custom model can be used to augment the base model to improve recognition of domain-specific vocabulary by providing text data to train the model. It can also be used to improve recognition for the specific audio conditions of the application by providing audio data with reference transcriptions.

+ > [!NOTE]
+ > You pay to use Custom Speech models, but you are not charged for training a model. Usage includes hosting of your deployed custom endpoint in addition to using the endpoint for speech-to-text. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).

## How does it work?

With Custom Speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint.

articles/cognitive-services/Speech-Service/how-to-custom-speech-deploy-model.md

Lines changed: 3 additions & 0 deletions

@@ -17,6 +17,9 @@ zone_pivot_groups: speech-studio-cli-rest

In this article, you'll learn how to deploy an endpoint for a Custom Speech model. With the exception of [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a Custom Speech model.

+ > [!NOTE]
+ > You pay to use Custom Speech models, but you are not charged for training a model. Usage includes hosting of your deployed custom endpoint in addition to using the endpoint for speech-to-text. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).

You can deploy an endpoint for a base or custom model, and then [update](#change-model-and-redeploy-endpoint) the endpoint later to use a better trained model.

> [!NOTE]

articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md

Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@ zone_pivot_groups: speech-studio-cli-rest

In this article, you'll learn how to train a custom model to improve recognition accuracy from the Microsoft base model. The speech recognition accuracy and quality of a Custom Speech model will remain consistent, even when a new base model is released.

> [!NOTE]
- > You pay to use Custom Speech models, but you are not charged for training a model.
+ > You pay to use Custom Speech models, but you are not charged for training a model. Usage includes hosting of your deployed custom endpoint in addition to using the endpoint for speech-to-text. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).

Training a model is typically an iterative process. You first select a base model as the starting point for a new model. You train the model with [datasets](./how-to-custom-speech-test-and-train.md) that can include text and audio, and then you test. If the recognition quality or accuracy doesn't meet your requirements, you can create a new model with additional or modified training data, and then test again.
articles/external-attack-surface-management/index.md

Lines changed: 2 additions & 1 deletion

@@ -53,7 +53,8 @@ For security purposes, Microsoft collects users' IP addresses when they log in.

In a region-down scenario, customers should see no downtime, as Defender EASM uses technologies that replicate data to a backup region. Defender EASM processes customer data. By default, customer data is replicated to the paired region.

- The Microsoft compliance framework requires that all customer data be deleted within 180 days in accordance with [Azure subscription states](https://learn.microsoft.com/azure/cost-management-billing/manage/subscription-states) handling. This also includes storage of customer data in offline locations, such as database backups.
+ The Microsoft compliance framework requires that all customer data be deleted within 180 days of that organization no longer being a customer of Microsoft. This also includes storage of customer data in offline locations, such as database backups. Once a resource is deleted, it cannot be restored by our teams. The customer data will be retained in our data stores for 75 days; however, the actual resource cannot be restored. After the 75-day period, customer data will be permanently deleted.

## Next Steps
articles/lab-services/class-type-jupyter-notebook.md

Lines changed: 0 additions & 1 deletion

@@ -1,7 +1,6 @@

---
title: Set up a lab to teach data science with Python and Jupyter Notebooks | Microsoft Docs
description: Learn how to set up a lab to teach data science using Python and Jupyter Notebooks.
- author: emaher
ms.topic: how-to
ms.date: 01/04/2022
ms.service: lab-services
