
Commit b6b838f

Merge pull request #210319 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents 2a79647 + a985b82 commit b6b838f

File tree

8 files changed: +10 -17 lines changed


articles/azure-monitor/app/availability-azure-functions.md

Lines changed: 2 additions & 2 deletions
@@ -16,7 +16,7 @@ This article will cover how to create an Azure Function with TrackAvailability()
 
 ## Create a timer trigger function
 
-1. Create a Azure Functions resource.
+1. Create an Azure Functions resource.
     - If you already have an Application Insights Resource:
     - By default Azure Functions creates an Application Insights resource but if you would like to use one of your already created resources you will need to specify that during creation.
     - Follow the instructions on how to [create an Azure Functions resource](../../azure-functions/functions-create-scheduled-function.md#create-a-function-app) with the following modification:
@@ -186,4 +186,4 @@ You can use Logs(analytics) to view you availability results, dependencies, and
 ## Next steps
 
 - [Application Map](./app-map.md)
-- [Transaction diagnostics](./transaction-diagnostics.md)
+- [Transaction diagnostics](./transaction-diagnostics.md)
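
For illustration, a minimal Azure CLI sketch of the corrected step, assuming hypothetical resource names and region, and assuming you want to link an existing Application Insights resource rather than letting a new one be created:

```console
# Hypothetical names/region; --app-insights points the function app at an
# existing Application Insights resource instead of creating a new one.
az functionapp create \
  --name availability-func \
  --resource-group my-rg \
  --storage-account mystorageacct \
  --consumption-plan-location eastus \
  --functions-version 4 \
  --runtime dotnet \
  --app-insights my-existing-app-insights
```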

articles/azure-monitor/containers/container-insights-agent-config.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ The following table describes the settings you can configure to control data col
 | `[log_collection_settings.stdout] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stdout logs will not be collected. This setting is effective only if<br> `log_collection_settings.stdout.enabled`<br> is set to `true`.<br> If not specified in ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system"]`. |
 | `[log_collection_settings.stderr] enabled =` | Boolean | true or false | This controls if stderr container log collection is enabled.<br> When set to `true` and no namespaces are excluded for stdout log collection<br> (`log_collection_settings.stderr.exclude_namespaces` setting), stderr logs will be collected from all containers across all pods/nodes in the cluster.<br> If not specified in ConfigMaps, the default value is<br> `enabled = true`. |
 | `[log_collection_settings.stderr] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stderr logs will not be collected.<br> This setting is effective only if<br> `log_collection_settings.stdout.enabled` is set to `true`.<br> If not specified in ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system"]`. |
-| `[log_collection_settings.env_var] enabled =` | Boolean | true or false | This setting controls environment variable collection<br> across all pods/nodes in the cluster<br> and defaults to `enabled = true` when not specified<br> in ConfigMaps.<br> If collection of environment variables is globally enabled, you can disable it for a specific container<br> by setting the environment variable<br> `AZMON_COLLECT_ENV` to **False** either with a Dockerfile setting or in the [configuration file for the Pod](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) under the **env:** section.<br> If collection of environment variables is globally disabled, then you cannot enable collection for a specific container (that is, the only override that can be applied at the container level is to disable collection when it's already enabled globally.). |
+| `[log_collection_settings.env_var] enabled =` | Boolean | true or false | This setting controls environment variable collection<br> across all pods/nodes in the cluster<br> and defaults to `enabled = true` when not specified<br> in ConfigMaps.<br> If collection of environment variables is globally enabled, you can disable it for a specific container<br> by setting the environment variable<br> `AZMON_COLLECT_ENV` to **False** either with a Dockerfile setting or in the [configuration file for the Pod](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) under the **env:** section.<br> If collection of environment variables is globally disabled, then you cannot enable collection for a specific container (that is, the only override that can be applied at the container level is to disable collection when it's already enabled globally.). With the default `[log_collection_settings.env_var] enabled = true`, it's strongly recommended to secure access to the Log Analytics workspace. If sensitive data is stored in environment variables, securing the Log Analytics workspace is critical. |
 | `[log_collection_settings.enrich_container_logs] enabled =` | Boolean | true or false | This setting controls container log enrichment to populate the Name and Image property values<br> for every log record written to the ContainerLog table for all container logs in the cluster.<br> It defaults to `enabled = false` when not specified in ConfigMap. |
 | `[log_collection_settings.collect_all_kube_events] enabled =` | Boolean | true or false | This setting allows the collection of Kube events of all types.<br> By default the Kube events with type *Normal* are not collected. When this setting is set to `true`, the *Normal* events are no longer filtered and all events are collected.<br> It defaults to `enabled = false` when not specified in the ConfigMap |
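
For illustration, a hedged sketch of the container-level override described in the `env_var` row, assuming hypothetical deployment and container names:

```console
# Opt a single container out of environment-variable collection while the
# cluster-wide default [log_collection_settings.env_var] enabled = true stays in place.
kubectl set env deployment/my-app --containers=my-container AZMON_COLLECT_ENV=False
```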

articles/container-registry/container-registry-helm-repos.md

Lines changed: 2 additions & 2 deletions
@@ -295,15 +295,15 @@ If you previously set up your Azure container registry as a chart repository usi
 > * After you complete migration from a Helm 2-style (index.yaml-based) chart repository to OCI artifact repositories, use the Helm CLI and `az acr repository` commands to manage the charts. See previous sections in this article.
 > * The Helm OCI artifact repositories are not discoverable using Helm commands such as `helm search` and `helm repo list`. For more information about Helm commands used to store charts as OCI artifacts, see the [Helm documentation](https://helm.sh/docs/topics/registries/).
 
-### Enable OCI support
+### Enable OCI support (enabled by default in Helm v3.8.0)
 
 Ensure that you are using the Helm 3 client:
 
 ```console
 helm version
 ```
 
-Enable OCI support in the Helm 3 client. Currently, this support is experimental and subject to change.
+If you are using Helm v3.8.0 or higher, OCI support is enabled by default. If you are using a lower version, you can enable it by setting the environment variable:
 
 ```console
 export HELM_EXPERIMENTAL_OCI=1
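
For illustration, once OCI support is active, a minimal sketch of pushing a chart to an Azure container registry as an OCI artifact, assuming placeholder registry, chart, and credential values:

```console
# Placeholder registry, chart, and credentials. 'helm registry login' and
# 'helm push ... oci://' are the Helm 3.8+ commands for OCI-based registries such as ACR.
helm registry login myregistry.azurecr.io --username $USER_NAME --password $PASSWORD
helm push mychart-0.1.0.tgz oci://myregistry.azurecr.io/helm
```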

articles/container-registry/container-registry-tutorial-sign-build-push.md

Lines changed: 2 additions & 2 deletions
@@ -56,7 +56,7 @@ In this tutorial:
 
 # Download the plugin
 curl -Lo notation-azure-kv.tar.gz \
-    https://github.com/Azure/notation-azure-kv/releases/download/v0.3.0-alpha.1/notation-azure-kv_0.3.0-alpha.1_Linux_amd64.tar.gz
+    https://github.com/Azure/notation-azure-kv/releases/download/v0.3.1-alpha.1/notation-azure-kv_0.3.1-alpha.1_Linux_amd64.tar.gz
 
 # Extract to the plugin directory
 tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv
@@ -248,4 +248,4 @@ notation verify $IMAGE
 
 ## Next steps
 
-[Enforce policy to only deploy signed container images to Azure Kubernetes Service (AKS) utilizing **ratify** and **gatekeeper**.](https://github.com/Azure/notation-azure-kv/blob/main/docs/nv2-sign-verify-aks.md)
+[Enforce policy to only deploy signed container images to Azure Kubernetes Service (AKS) utilizing **ratify** and **gatekeeper**.](https://github.com/Azure/notation-azure-kv/blob/main/docs/nv2-sign-verify-aks.md)
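
For illustration, a quick sanity check after downloading and extracting the plugin; depending on your Notation client version the subcommand may be `plugin list` instead of `plugin ls`:

```console
# List installed Notation plugins; azure-kv should appear if the extraction succeeded.
notation plugin ls
```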

articles/event-grid/event-schema-resource-groups.md

Lines changed: 1 addition & 1 deletion
@@ -353,7 +353,7 @@ The following example shows the schema for a **ResourceActionSuccess** event. Th
 ```json
 [{
   "subject": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.EventHub/namespaces/{namespace}/AuthorizationRules/RootManageSharedAccessKey",
-  "source": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}"
+  "source": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}",
   "type": "Microsoft.Resources.ResourceActionSuccess",
   "time": "2018-10-08T22:46:22.6022559Z",
   "id": "{ID}",

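For illustration, a hedged Azure CLI sketch of subscribing to resource group events such as the **ResourceActionSuccess** event above, assuming placeholder resource IDs and endpoint:

```console
# Placeholder IDs and endpoint. Creates an Event Grid subscription scoped to a resource
# group and filtered to the ResourceActionSuccess event type shown in the schema.
az eventgrid event-subscription create \
  --name resource-action-sub \
  --source-resource-id "/subscriptions/{subscription-id}/resourceGroups/{resource-group}" \
  --endpoint https://contoso.example.com/api/events \
  --included-event-types Microsoft.Resources.ResourceActionSuccess
```
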
articles/iot-edge/development-environment.md

Lines changed: 0 additions & 7 deletions
@@ -81,7 +81,6 @@ Only the IoT Edge runtime is supported for production deployments, but the follo
 | ---- | ------------- | ------------------- | --------- |
 | IoT EdgeHub dev tool | iotedgehubdev | Windows, Linux, macOS | Simulating a device to debug modules. |
 | IoT Edge dev container | iotedgedev | Windows, Linux, macOS | Developing without installing dependencies. |
-| IoT Edge runtime in a container | iotedgec | Windows, Linux, macOS, ARM | Testing on a device that may not support the runtime. |
 
 ### IoT EdgeHub dev tool
 
@@ -97,12 +96,6 @@ The Azure IoT Edge dev container is a Docker container that has all the dependen
 
 For more information, see [Azure IoT Edge dev container](https://github.com/Azure/iotedgedev/wiki/quickstart-with-iot-edge-dev-container).
 
-### IoT Edge device container
-
-The IoT Edge device container is a complete IoT Edge device, ready to be launched on any machine with a container engine. The device container includes the IoT Edge runtime and a container engine itself. Each instance of the container is a fully functional self-provisioning IoT Edge device. The device container supports remote debugging of modules, as long as there is a network route to the module. The device container is good for quickly creating large numbers of IoT Edge devices to test at-scale scenarios or Azure Pipelines. It also supports deployment to kubernetes via helm.
-
-For more information, see [Azure IoT Edge device container](https://github.com/toolboc/azure-iot-edge-device-container).
-
 ## DevOps tools
 
 When you're ready to develop at-scale solutions for extensive production scenarios, take advantage of modern DevOps principles including automation, monitoring, and streamlined software engineering processes. IoT Edge has extensions to support DevOps tools including Azure DevOps, Azure DevOps Projects, and Jenkins. If you want to customize an existing pipeline or use a different DevOps tool like CircleCI or TravisCI, you can do so with the CLI features included in the IoT Edge dev tool.
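
For illustration, both developer tools retained in the table are distributed as Python packages; a minimal installation sketch (version pins omitted):

```console
# Install the IoT Edge dev tool and the IoT EdgeHub dev tool from PyPI.
pip install iotedgedev iotedgehubdev
```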

articles/machine-learning/how-to-kubernetes-inference-routing-azureml-fe.md

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ AzureML inference router handles autoscaling for all model deployments on the Ku
 > [!IMPORTANT]
 > * **Do not enable Kubernetes Horizontal Pod Autoscaler (HPA) for model deployments**. Doing so would cause the two auto-scaling components to compete with each other. Azureml-fe is designed to auto-scale models deployed by AzureML, where HPA would have to guess or approximate model utilization from a generic metric like CPU usage or a custom metric configuration.
 >
-> * **Azureml-fe does not scale the nuzmber of nodes in an AKS cluster**, because this could lead to unexpected cost increases. Instead, **it scales the number of replicas for the model** within the physical cluster boundaries. If you need to scale the number of nodes within the cluster, you can manually scale the cluster or [configure the AKS cluster autoscaler](../aks/cluster-autoscaler.md).
+> * **Azureml-fe does not scale the number of nodes in an AKS cluster**, because this could lead to unexpected cost increases. Instead, **it scales the number of replicas for the model** within the physical cluster boundaries. If you need to scale the number of nodes within the cluster, you can manually scale the cluster or [configure the AKS cluster autoscaler](../aks/cluster-autoscaler.md).
 
 Autoscaling can be controlled by `scale_settings` property in deployment YAML. The following example demonstrates how to enable autoscaling:
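
For illustration, a hedged sketch of the node-scaling options mentioned in the corrected note, assuming placeholder cluster and resource group names:

```console
# Placeholder names. Enables the AKS cluster autoscaler so node count grows or shrinks
# within the stated bounds (azureml-fe only scales model replicas); use 'az aks scale'
# to change the node count manually instead.
az aks update \
  --resource-group my-rg \
  --name my-aks-cluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```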

articles/site-recovery/azure-to-azure-about-networking.md

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ ms.author: v-pgaddala
 
 
 
-This article provides networking guidance when you're replicating and recovering Azure VMs from one region to another, using [Azure Site Recovery](site-recovery-overview.md).
+This article provides networking guidance for platform connectivity when you're replicating Azure VMs from one region to another, using [Azure Site Recovery](site-recovery-overview.md).
 
 ## Before you start
 