Commit b2d29f9

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into cdnfreshness3
2 parents 64ead59 + f4a88e8

112 files changed (+1787 / -843 lines)


articles/active-directory/governance/lifecycle-workflow-history.md

Lines changed: 4 additions & 5 deletions
@@ -21,13 +21,13 @@ Workflows created using Lifecycle Workflows allow for the automation of lifecycl
 
 ## Lifecycle Workflow History Summaries
 
-Lifecycle Workflows introduce a history feature based on summaries and details. These history summaries allow you to quickly get information about for who a workflow ran, and whether or not this run was successful or not. This is valuable because the large set of information given by audit logs might become too numerous to be efficiently used. To make a large set of information processed easier to read, Lifecycle Workflows provide summaries for quick use. You can view these history summaries in three ways:
+Lifecycle Workflows introduce a history feature based on summaries and details. These history summaries allow you to quickly see who a workflow ran for, and whether or not the run was successful. This is valuable because the information given by audit logs can become too voluminous to use efficiently. To make this processed information easier to read, Lifecycle Workflows provide summaries for quick use. You can view these history summaries in three ways:
 
-- **Users summary**: Shows a summary of users processed by a workflow, and which tasks failed, successfully, and totally ran for each specific user.
+- **Users summary**: Shows a summary of users processed by a workflow, with successful, failed, and total task information for each specific user.
 - **Runs summary**: Shows a summary of workflow runs in terms of the workflow, with successful, failed, and total task information noted for each run.
 - **Tasks summary**: Shows a summary of tasks processed by a workflow, and which tasks succeeded, failed, and ran in total in the workflow.
 
-Summaries allow you to quickly gain details about how a workflow ran for itself, or users, without going into further details in logs. For a step by step guide on getting this information, see [Check the status of a workflow (Preview)](check-status-workflow.md)
+Summaries allow you to quickly gain details about how a workflow ran, for itself or for users, without digging into the full logs. For a step-by-step guide on getting this information, see [Check the status of a workflow (Preview)](check-status-workflow.md).
 
 ## Users Summary information
 
@@ -113,7 +113,7 @@ Task detailed history information allows you to filter for specific information
 - **Completed date**: You can filter a specific range, from as short as 24 hours up to 30 days, of when the workflow ran.
 - **Tasks**: You can filter based on specific task names.
 
-Separating processing of the workflow from the tasks is important because, in a workflow, processing a user certain tasks could be successful, while others could fail. Whether or not a task runs after a failed task in a workflow depends on parameters such as enabling continue On Error, and their placement within the workflow. For more information, see [Common task parameters](lifecycle-workflow-tasks.md#common-task-parameters-preview).
+Separating processing of the workflow from the tasks is important because, when a workflow processes a user, certain tasks could succeed while others fail. Whether or not a task runs after a failed task in a workflow depends on parameters such as enabling continue On Error, and on its placement within the workflow. For more information, see [Common task parameters (preview)](lifecycle-workflow-tasks.md#common-task-parameters-preview).
 
 ## Next steps
 
@@ -123,4 +123,3 @@ Separating processing of the workflow from the tasks is important because, in a
 - [taskProcessingResult resource type](/graph/api/resources/identitygovernance-taskprocessingresult?view=graph-rest-beta&preserve-view=true)
 - [Understanding Lifecycle Workflows](understanding-lifecycle-workflows.md)
 - [Lifecycle Workflow templates](lifecycle-workflow-templates.md)
-

articles/aks/command-invoke.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ ms.date: 1/14/2022
 
 # Use `command invoke` to access a private Azure Kubernetes Service (AKS) cluster
 
-Accessing a private AKS cluster requires that you connect to that cluster either from the cluster virtual network, from a peered network, or via a configured private endpoint. These approaches require configuring a VPN, Express Route, deploying a *jumpbox* within the cluster virtual network, or creating a private endpoint inside of another virtual network. Alternatively, you can use `command invoke` to access private clusters without having to configure a VPN or Express Route. Using `command invoke` allows you to remotely invoke commands like `kubectl` and `helm` on your private cluster through the Azure API without directly connecting to the cluster. Permissions for using `command invoke` are controlled through the `Microsoft.ContainerService/managedClusters/runcommand/action` and `Microsoft.ContainerService/managedclusters/commandResults/read` roles.
+Accessing a private AKS cluster requires that you connect to that cluster either from the cluster virtual network, from a peered network, or via a configured private endpoint. These approaches require configuring a VPN or Express Route, deploying a *jumpbox* within the cluster virtual network, or creating a private endpoint inside of another virtual network. Alternatively, you can use `command invoke` to access private clusters without having to configure a VPN or Express Route. Using `command invoke` allows you to remotely invoke commands like `kubectl` and `helm` on your private cluster through the Azure API without directly connecting to the cluster. Permissions for using `command invoke` are controlled through the `Microsoft.ContainerService/managedClusters/runcommand/action` and `Microsoft.ContainerService/managedclusters/commandResults/read` actions.
 
 ## Prerequisites
 
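
For context, a minimal sketch of what the documented capability looks like in practice (the resource group and cluster names below are placeholders, not values from this commit):

```shell
# Remotely run a kubectl command on a private cluster through the Azure API.
az aks command invoke \
  --resource-group myResourceGroup \
  --name myPrivateCluster \
  --command "kubectl get pods -n kube-system"
```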

articles/aks/concepts-security.md

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ You can control access to the API server using Kubernetes role-based access cont
 ## Node security
 
 AKS nodes are Azure virtual machines (VMs) that you manage and maintain.
-* Linux nodes run optimized versions of Ubuntu or CBL-Mariner.
+* Linux nodes run optimized versions of Ubuntu or Mariner.
 * Windows Server nodes run an optimized Windows Server 2019 release using the `containerd` or Docker container runtime.
 
 When an AKS cluster is created or scaled up, the nodes are automatically deployed with the latest OS security updates and configurations.

articles/aks/howto-deploy-java-liberty-app.md

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ This article uses the Azure Marketplace offer for Open/WebSphere Liberty to acce
 
 * This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
 * If running the commands in this guide locally (instead of Azure Cloud Shell):
-  * Prepare a local machine with Unix-like operating system installed (for example, Ubuntu, CBL-Mariner, macOS, Windows Subsystem for Linux).
+  * Prepare a local machine with a Unix-like operating system installed (for example, Ubuntu, Mariner, macOS, Windows Subsystem for Linux).
   * Install a Java SE implementation (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)).
   * Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher.
   * Install [Docker](https://docs.docker.com/get-docker/) for your OS.

articles/aks/node-updates-kured.md

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ You need the Azure CLI version 2.0.59 or later installed and configured. Run `az
 
 ## Understand the AKS node update experience
 
-In an AKS cluster, your Kubernetes nodes run as Azure virtual machines (VMs). These Linux-based VMs use an Ubuntu or CBL-Mariner image, with the OS configured to automatically check for updates every day. If security or kernel updates are available, they are automatically downloaded and installed.
+In an AKS cluster, your Kubernetes nodes run as Azure virtual machines (VMs). These Linux-based VMs use an Ubuntu or Mariner image, with the OS configured to automatically check for updates every day. If security or kernel updates are available, they are automatically downloaded and installed.
 
 ![AKS node update and reboot process with kured](media/node-updates-kured/node-reboot-process.png)
 

articles/aks/workload-identity-migrate-from-pod-identity.md

Lines changed: 4 additions & 4 deletions
@@ -120,7 +120,7 @@ If your application is using managed identity and still relies on IMDS to get an
 To update or deploy the workload, add these pod annotations only if you want to use the migration sidecar. You inject the following [annotation][pod-annotations] values to use the sidecar in your pod specification:
 
 * `azure.workload.identity/inject-proxy-sidecar` - value is `true` or `false`
-* `azure.workload.identity/proxy-sidecar-port` - value is the desired port for the proxy sidecar. The default value is `8080`.
+* `azure.workload.identity/proxy-sidecar-port` - value is the desired port for the proxy sidecar. The default value is `8000`.
 
 When a pod with the above annotations is created, the Azure Workload Identity mutating webhook automatically injects the init-container and proxy sidecar to the pod spec.
 
@@ -148,7 +148,7 @@ spec:
       runAsUser: 0
     env:
     - name: PROXY_PORT
-      value: "8080"
+      value: "8000"
   containers:
   - name: nginx
     image: nginx:alpine
@@ -157,7 +157,7 @@ spec:
   - name: proxy
     image: mcr.microsoft.com/oss/azure/workload-identity/proxy:v0.13.0
     ports:
-    - containerPort: 8080
+    - containerPort: 8000
 ```
 
 This configuration applies to any scenario where a pod is being created. After updating or deploying your application, you can verify the pod is in a running state using the [kubectl describe pod][kubectl-describe] command. Replace the value `podName` with the name of your deployed pod.
@@ -210,4 +210,4 @@ This article showed you how to set up your pod to authenticate using a workload
 
 <!-- EXTERNAL LINKS -->
 [kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
-[kubelet-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs
\ No newline at end of file
+[kubelet-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs
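
As a rough illustration of the annotations this change documents, the following sketch requests the proxy sidecar on the new default port. The pod name, image, and label are illustrative, and it assumes the Azure Workload Identity webhook is installed in the cluster:

```shell
# Hypothetical pod requesting the migration proxy sidecar (default port 8000).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: migration-test
  labels:
    azure.workload.identity/use: "true"  # label the webhook commonly matches on
  annotations:
    azure.workload.identity/inject-proxy-sidecar: "true"
    azure.workload.identity/proxy-sidecar-port: "8000"
spec:
  containers:
  - name: nginx
    image: nginx:alpine
EOF
```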

articles/azure-functions/functions-bindings-azure-sql-output.md

Lines changed: 2 additions & 2 deletions
@@ -845,7 +845,7 @@ def main(req: func.HttpRequest, todoItems: func.Out[func.SqlRow]) -> func.HttpRe
 
     try:
         req_body = req.get_json()
-        rows = list(map(lambda r: json.loads(r.to_json()), req_body))
+        rows = func.SqlRowList(map(lambda r: func.SqlRow.from_dict(r), req_body))
     except ValueError:
         pass
 
@@ -926,7 +926,7 @@ def main(req: func.HttpRequest, todoItems: func.Out[func.SqlRow], requestLog: fu
 
     try:
         req_body = req.get_json()
-        rows = list(map(lambda r: json.loads(r.to_json()), req_body))
+        rows = func.SqlRowList(map(lambda r: func.SqlRow.from_dict(r), req_body))
     except ValueError:
         pass
 
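
A minimal sketch of the updated conversion pattern in context; the signature follows the sample, while the response handling here is illustrative rather than part of the commit:

```python
import json

import azure.functions as func


def main(req: func.HttpRequest, todoItems: func.Out[func.SqlRow]) -> func.HttpResponse:
    try:
        req_body = req.get_json()  # expects a JSON array of objects
        # Build SqlRow objects directly from each dict instead of
        # round-tripping through json.loads(r.to_json()).
        rows = func.SqlRowList(map(lambda r: func.SqlRow.from_dict(r), req_body))
        todoItems.set(rows)
    except ValueError:
        return func.HttpResponse("Request body must be a JSON array.", status_code=400)

    return func.HttpResponse(json.dumps(req_body), mimetype="application/json")
```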

articles/azure-monitor/app/app-map.md

Lines changed: 2 additions & 2 deletions
@@ -245,8 +245,8 @@ For the [official definitions](https://github.com/Microsoft/ApplicationInsights-
 ```
 
 Alternatively, *cloud role instance* can be helpful for scenarios where a cloud role name tells you the problem is somewhere in your web front end. But you might be running multiple load-balanced servers across your web front end. Being able to drill in a layer deeper via Kusto queries and knowing if the issue is affecting all web front-end servers or instances or just one can be important.
-intelligent view
-A scenario when you might want to override the value for cloud role instance could be if your app is running in a containerized environment. In this case, just knowing the individual server might not be enough information to locate a specific issue.
+
+Intelligent view: A scenario when you might want to override the value for cloud role instance could be if your app is running in a containerized environment. In this case, just knowing the individual server might not be enough information to locate a specific issue.
 
 For more information about how to override the cloud role name property with telemetry initializers, see [Add properties: ITelemetryInitializer](api-filtering-sampling.md#addmodify-properties-itelemetryinitializer).
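
As an aside, drilling in "a layer deeper" can look like the following sketch against the standard Application Insights `requests` table, which shows whether failures cluster on one instance or affect them all:

```kusto
requests
| where timestamp > ago(1h)
| summarize failedCount = countif(success == false), totalCount = count()
    by cloud_RoleName, cloud_RoleInstance
| order by failedCount desc
```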

articles/azure-monitor/app/asp-net-dependencies.md

Lines changed: 5 additions & 1 deletion
@@ -220,6 +220,10 @@ dependencies
 
 In the Log Analytics query view, `timestamp` represents the moment the TrackDependency() call was initiated, which occurs immediately after the dependency call response is received. To calculate the time when the dependency call began, you would take `timestamp` and subtract the recorded `duration` of the dependency call.
 
+### Does dependency tracking in Application Insights include logging response bodies?
+
+Dependency tracking in Application Insights does not include logging response bodies as it would generate too much telemetry for most applications.
+
 ## Open-source SDK
 
 Like every Application Insights SDK, the dependency collection module is also open source. Read and contribute to the code or report issues at [the official GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet).
@@ -273,4 +277,4 @@ A list of the latest [currently supported modules](https://github.com/microsoft/
 * Set up custom dependency tracking for [OpenCensus Python](./opencensus-python-dependency.md).
 * [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency)
 * See [data model](./data-model.md) for Application Insights types and data model.
-* Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
\ No newline at end of file
+* Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
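
For the `timestamp` note above, the subtraction can be expressed directly in a query; a sketch, assuming the classic Application Insights schema where `duration` is recorded in milliseconds:

```kusto
dependencies
| extend startTime = timestamp - (duration * 1ms)  // when the dependency call began
| project startTime, timestamp, duration, name
```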

articles/azure-monitor/app/correlation.md

Lines changed: 43 additions & 17 deletions
@@ -159,7 +159,21 @@ For a reference, you can find the OpenCensus data model on [this GitHub page](ht
 
 OpenCensus Python correlates W3C Trace-Context headers from incoming requests to the spans that are generated from the requests themselves. OpenCensus will correlate automatically with integrations for these popular web application frameworks: Flask, Django, and Pyramid. You just need to populate the W3C Trace-Context headers with the [correct format](https://www.w3.org/TR/trace-context/#trace-context-http-headers-format) and send them with the request.
 
-**Sample Flask application**
+Explore this sample Flask application. Install Flask, OpenCensus, and the extensions for Flask and Azure.
+
+```shell
+pip install flask opencensus opencensus-ext-flask opencensus-ext-azure
+```
+
+You will need to add your Application Insights connection string to the environment variable.
+
+```shell
+APPLICATIONINSIGHTS_CONNECTION_STRING=<appinsights-connection-string>
+```
+
+**Sample Flask Application**
 
 ```python
 from flask import Flask
@@ -170,7 +184,9 @@ from opencensus.trace.samplers import ProbabilitySampler
 app = Flask(__name__)
 middleware = FlaskMiddleware(
     app,
-    exporter=AzureExporter(),
+    exporter=AzureExporter(
+        connection_string='<appinsights-connection-string>',  # or set the environment variable APPLICATIONINSIGHTS_CONNECTION_STRING
+    ),
     sampler=ProbabilitySampler(rate=1.0),
 )
 
@@ -248,49 +264,59 @@ You can export the log data by using `AzureLogHandler`. For more information, see
 
 We can also pass trace information from one component to another for proper correlation. For example, consider a scenario where there are two components, `module1` and `module2`. Module1 calls functions in Module2. To get logs from both `module1` and `module2` in a single trace, we can use the following approach:
 
+
 ```python
 # module1.py
 import logging
 
 from opencensus.trace import config_integration
 from opencensus.trace.samplers import AlwaysOnSampler
 from opencensus.trace.tracer import Tracer
-from module2 import function_1
+from module_2 import function_1
 
-config_integration.trace_integrations(['logging'])
-logging.basicConfig(format='%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s')
+config_integration.trace_integrations(["logging"])
+logging.basicConfig(
+    format="%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s"
+)
 tracer = Tracer(sampler=AlwaysOnSampler())
 
 logger = logging.getLogger(__name__)
-logger.warning('Before the span')
-with tracer.span(name='hello'):
-    logger.warning('In the span')
-    function_1(tracer)
-logger.warning('After the span')
+logger.warning("Before the span")
 
-# module2.py
+with tracer.span(name="hello"):
+    logger.warning("In the span")
+    function_1(logger, tracer)
+logger.warning("After the span")
+```
 
+```python
+# module_2.py
 import logging
 
 from opencensus.trace import config_integration
 from opencensus.trace.samplers import AlwaysOnSampler
 from opencensus.trace.tracer import Tracer
 
-config_integration.trace_integrations(['logging'])
-logging.basicConfig(format='%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s')
+config_integration.trace_integrations(["logging"])
+logging.basicConfig(
+    format="%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s"
+)
+logger = logging.getLogger(__name__)
 tracer = Tracer(sampler=AlwaysOnSampler())
 
-def function_1(parent_tracer=None):
+
+def function_1(logger=logger, parent_tracer=None):
     if parent_tracer is not None:
         tracer = Tracer(
-                span_context=parent_tracer.span_context,
-                sampler=AlwaysOnSampler(),
-            )
+            span_context=parent_tracer.span_context,
+            sampler=AlwaysOnSampler(),
+        )
     else:
         tracer = Tracer(sampler=AlwaysOnSampler())
 
     with tracer.span("function_1"):
         logger.info("In function_1")
+
 ```
 
 ## Telemetry correlation in .NET
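
To exercise the correlation described above, you can send a `traceparent` header to the sample Flask app; a sketch using the W3C spec's example trace and span IDs and Flask's default port:

```shell
# traceparent = version - trace-id - parent span-id - trace flags (01 = sampled)
curl -H "traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01" \
  http://localhost:5000/
```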
