Commit 48c10dc

Merge pull request #101598 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)
2 parents 9c0a79a + 126195d commit 48c10dc

11 files changed (+22, -18 lines changed)

articles/aks/operator-best-practices-advanced-scheduler.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ ms.author: mlearned
  # Best practices for advanced scheduler features in Azure Kubernetes Service (AKS)

- As you manage clusters in Azure Kubernetes Service (AKS), you often need to isolate teams and workloads. The Kubernetes scheduler provides advanced features that let you control which pods can be scheduled on certain nodes, or how multi-pod applications can appropriately distributed across the cluster.
+ As you manage clusters in Azure Kubernetes Service (AKS), you often need to isolate teams and workloads. The Kubernetes scheduler provides advanced features that let you control which pods can be scheduled on certain nodes, or how multi-pod applications can be appropriately distributed across the cluster.

  This best practices article focuses on advanced Kubernetes scheduling features for cluster operators. In this article, you learn how to:

articles/aks/operator-best-practices-cluster-isolation.md

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@ A common approach to cluster isolation is to use physically separate AKS cluster
  ![Physical isolation of individual Kubernetes clusters in AKS](media/operator-best-practices-cluster-isolation/physical-isolation.png)

- Physically separate clusters usually have a low pod density. As each team or workload has their own AKS cluster, the cluster is often over-provisioned with compute resources. Often, a small number of pods is scheduled on those nodes. Unused capacity on the nodes can't be used for applications or services in development by other teams. These excess resources contribute to the additional costs in physically separate clusters.
+ Physically separate clusters usually have a low pod density. As each team or workload has their own AKS cluster, the cluster is often over-provisioned with compute resources. Often, a small number of pods are scheduled on those nodes. Unused capacity on the nodes can't be used for applications or services in development by other teams. These excess resources contribute to the additional costs in physically separate clusters.

  ## Next steps

articles/aks/operator-best-practices-scheduler.md

Lines changed: 1 addition & 1 deletion
@@ -76,7 +76,7 @@ The involuntary disruptions can be mitigated by using multiple replicas of your
  If a cluster is to be upgraded or a deployment template updated, the Kubernetes scheduler makes sure additional pods are scheduled on other nodes before the voluntary disruption events can continue. The scheduler waits before a node is rebooted until the defined number of pods are successfully scheduled on other nodes in the cluster.

- Let's look at an example of a replica set with five pods that run NGINX. The pods in the replica set as assigned the label `app: nginx-frontend`. During a voluntary disruption event, such as a cluster upgrade, you want to make sure at least three pods continue to run. The following YAML manifest for a *PodDisruptionBudget* object defines these requirements:
+ Let's look at an example of a replica set with five pods that run NGINX. The pods in the replica set are assigned the label `app: nginx-frontend`. During a voluntary disruption event, such as a cluster upgrade, you want to make sure at least three pods continue to run. The following YAML manifest for a *PodDisruptionBudget* object defines these requirements:

  ```yaml
  apiVersion: policy/v1beta1
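
Only the first line of that manifest falls inside the diff context. For reference, a minimal PodDisruptionBudget matching the requirements described above might look like the following sketch (the object name `nginx-pdb` is illustrative, not text from the commit):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb          # illustrative name
spec:
  minAvailable: 3          # keep at least three pods running during a voluntary disruption
  selector:
    matchLabels:
      app: nginx-frontend  # the label assigned to the replica set's pods
```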

articles/azure-functions/functions-monitoring.md

Lines changed: 8 additions & 8 deletions
@@ -27,7 +27,7 @@ For a function app to send data to Application Insights, it needs to know the in
  ### New function app in the portal

- When you [create your function app in the Azure portal](functions-create-first-azure-function.md), Application Insights integration is enabled by default. The Application Insights resource has the same name as your function app, and it's created either in the same region or in nearest region.
+ When you [create your function app in the Azure portal](functions-create-first-azure-function.md), Application Insights integration is enabled by default. The Application Insights resource has the same name as your function app, and it's created either in the same region or in the nearest region.

  To review the Application Insights resource being created, select it to expand the **Application Insights** window. You can change the **New resource name** or choose a different **Location** in an [Azure geography](https://azure.microsoft.com/global-infrastructure/geographies/) where you want to store your data.

@@ -70,7 +70,7 @@ You can see that both pages have a **Run in Application Insights** link to the A
  ![Run in Application Insights](media/functions-monitoring/run-in-ai.png)

- The following query is displayed. You can see that the invocation list is limited to the last 30 days. The list shows no more than 20 rows (`where timestamp > ago(30d) | take 20`). The invocation details list is for the last 30 days with no limit.
+ The following query is displayed. You can see that the query results are limited to the last 30 days (`where timestamp > ago(30d)`). In addition, the results show no more than 20 rows (`take 20`). In contrast, the invocation details list for your function is for the last 30 days with no limit.

  ![Application Insights Analytics invocation list](media/functions-monitoring/ai-analytics-invocation-list.png)
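
Piecing together the fragments quoted above, the displayed query is along these lines (a sketch; the assumption that it runs over the Application Insights `requests` table is mine, not the commit's):

```kusto
requests
| where timestamp > ago(30d)   // limit results to the last 30 days
| take 20                      // return no more than 20 rows
```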

@@ -94,7 +94,7 @@ The following areas of Application Insights can be helpful when evaluating the b
  | **[Performance](../azure-monitor/app/performance-counters.md)** | Analyze performance issues. |
  | **Servers** | View resource utilization and throughput per server. This data can be useful for debugging scenarios where functions are bogging down your underlying resources. Servers are referred to as **Cloud role instances**. |
  | **[Metrics](../azure-monitor/app/metrics-explorer.md)** | Create charts and alerts that are based on metrics. Metrics include the number of function invocations, execution time, and success rates. |
- | **[Live Metrics Stream](../azure-monitor/app/live-stream.md)** | View metrics data as it's created in real time. |
+ | **[Live Metrics Stream](../azure-monitor/app/live-stream.md)** | View metrics data as it's created in near real-time. |

  ## Query telemetry data

@@ -333,7 +333,7 @@ You can write logs in your function code that appear as traces in Application In
  Use an [ILogger](https://docs.microsoft.com/dotnet/api/microsoft.extensions.logging.ilogger) parameter in your functions instead of a `TraceWriter` parameter. Logs created by using `TraceWriter` go to Application Insights, but `ILogger` lets you do [structured logging](https://softwareengineering.stackexchange.com/questions/312197/benefits-of-structured-logging-vs-basic-logging).

- With an `ILogger` object, you call `Log<level>` [extension methods on ILogger](https://docs.microsoft.com/dotnet/api/microsoft.extensions.logging.loggerextensions#methods) to create logs. The following code writes `Information` logs with category "Function."
+ With an `ILogger` object, you call `Log<level>` [extension methods on ILogger](https://docs.microsoft.com/dotnet/api/microsoft.extensions.logging.loggerextensions#methods) to create logs. The following code writes `Information` logs with category "Function.<YOUR_FUNCTION_NAME>.User."

  ```cs
  public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogger logger)
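
Only the opening line of that sample falls inside the diff context. A minimal sketch of such a function, assuming the standard `Microsoft.Extensions.Logging` extension methods (the class name, message, and argument are illustrative):

```cs
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public static class LoggingSample
{
    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogger logger)
    {
        // LogInformation is one of the Log<level> extension methods on ILogger;
        // the {itemKey} placeholder becomes a structured-logging property.
        logger.LogInformation("Processing request for item {itemKey}", "sample-key");

        await Task.CompletedTask; // stand-in for real asynchronous work
        return new HttpResponseMessage(HttpStatusCode.OK);
    }
}
```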
@@ -557,7 +557,7 @@ namespace functionapp0915
  Don't call `TrackRequest` or `StartOperation<RequestTelemetry>` because you'll see duplicate requests for a function invocation. The Functions runtime automatically tracks requests.

- Don't set `telemetryClient.Context.Operation.Id`. This global setting causes incorrect correlation when many functions are running simultaneously. Instead, create a new telemetry instance (`DependencyTelemetry`, `EventTelemetry`) and modify its `Context` property. Then pass in the telemetry instance to the corresponding `Track` method on `TelemetryClient` (`TrackDependency()`, `TrackEvent()`). This method ensures that the telemetry has the correct correlation details for the current function invocation.
+ Don't set `telemetryClient.Context.Operation.Id`. This global setting causes incorrect correlation when many functions are running simultaneously. Instead, create a new telemetry instance (`DependencyTelemetry`, `EventTelemetry`) and modify its `Context` property. Then pass in the telemetry instance to the corresponding `Track` method on `TelemetryClient` (`TrackDependency()`, `TrackEvent()`, `TrackMetric()`). This method ensures that the telemetry has the correct correlation details for the current function invocation.

  ## Log custom telemetry in JavaScript functions
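
A short sketch of the recommended pattern, assuming a `TelemetryClient` instance is available (the dependency name and data below are illustrative):

```cs
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public static class TelemetryCorrelationSample
{
    public static void TrackCorrelatedDependency(TelemetryClient telemetryClient, string currentOperationId)
    {
        // Create a per-call telemetry instance instead of mutating the
        // client's shared Context, which leaks across concurrent invocations.
        var dependency = new DependencyTelemetry
        {
            Name = "MyDownstreamService", // illustrative
            Type = "HTTP",
            Data = "GET /items",
            Success = true
        };

        // Correlation details go on this instance's Context only.
        dependency.Context.Operation.Id = currentOperationId;

        telemetryClient.TrackDependency(dependency);
    }
}
```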

@@ -586,7 +586,7 @@ The `tagOverrides` parameter sets the `operation_Id` to the function's invocatio
  ## Dependencies

- Functions v2 automatically collects dependencies for HTTP requests, ServiceBus, and SQL.
+ Functions v2 automatically collects dependencies for HTTP requests, ServiceBus, EventHub, and SQL.

  You can write custom code to show the dependencies. For examples, see the sample code in the [C# custom telemetry section](#log-custom-telemetry-in-c-functions). The sample code results in an *application map* in Application Insights that looks like the following image:

@@ -598,13 +598,13 @@ To report an issue with Application Insights integration in Functions, or to mak
  ## Streaming Logs

- While developing an application, you often want to see what's being written to the logs in near-real time when running in Azure.
+ While developing an application, you often want to see what's being written to the logs in near real-time when running in Azure.

  There are two ways to view a stream of log files being generated by your function executions.

  * **Built-in log streaming**: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during [local development](functions-develop-local.md) and when you use the **Test** tab in the portal. All log-based information is displayed. For more information, see [Stream logs](../app-service/troubleshoot-diagnostic-logs.md#stream-logs). This streaming method supports only a single instance, and can't be used with an app running on Linux in a Consumption plan.

- * **Live Metrics Stream**: when your function app is [connected to Application Insights](#enable-application-insights-integration), you can view log data and other metrics in near-real time in the Azure portal using [Live Metrics Stream](../azure-monitor/app/live-stream.md). Use this method when monitoring functions running on multiple-instances or on Linux in a Consumption plan. This method uses [sampled data](#configure-sampling).
+ * **Live Metrics Stream**: when your function app is [connected to Application Insights](#enable-application-insights-integration), you can view log data and other metrics in near real-time in the Azure portal using [Live Metrics Stream](../azure-monitor/app/live-stream.md). Use this method when monitoring functions running on multiple-instances or on Linux in a Consumption plan. This method uses [sampled data](#configure-sampling).

  Log streams can be viewed both in the portal and in most local development environments.
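
For the built-in stream, one way to follow a deployed app's logs from a local terminal is with Azure Functions Core Tools (a sketch; `<APP_NAME>` is a placeholder):

```bash
# Stream logs from a deployed function app in near real-time.
func azure functionapp logstream <APP_NAME>
```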

articles/azure-functions/functions-versions.md

Lines changed: 1 addition & 1 deletion
@@ -139,7 +139,7 @@ In Visual Studio, you select the runtime version when you create a project. Azur
  ```

  > [!NOTE]
- > Azure Functions 3.x and .NET requires the `Microsoft.Sdk.NET.Functions` extension be at least `3.0.0`.
+ > Azure Functions 3.x and .NET requires the `Microsoft.NET.Sdk.Functions` extension be at least `3.0.0`.

  ###### Updating 2.x apps to 3.x in Visual Studio
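
In a project file, the requirement from the note above corresponds to a package reference along these lines (a sketch; a newer 3.x version would also satisfy it):

```xml
<ItemGroup>
  <!-- Functions 3.x on .NET requires at least version 3.0.0 of this package. -->
  <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.0" />
</ItemGroup>
```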

articles/azure-functions/set-runtime-version.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ ms.date: 11/26/2018
  # How to target Azure Functions runtime versions

- A function app runs on a specific version of the Azure Functions runtime. There are two major versions: [1.x and 2.x](functions-versions.md), with version 3.x in preview. By default, function apps that are created version 2.x of the runtime. This article explains how to configure a function app in Azure to run on the version you choose. For information about how to configure a local development environment for a specific version, see [Code and test Azure Functions locally](functions-run-local.md).
+ A function app runs on a specific version of the Azure Functions runtime. There are three major versions: [1.x, 2.x, and 3.x](functions-versions.md). By default, function apps are created in version 2.x of the runtime. This article explains how to configure a function app in Azure to run on the version you choose. For information about how to configure a local development environment for a specific version, see [Code and test Azure Functions locally](functions-run-local.md).

  ## Automatic and manual version updates
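
The version a function app targets is controlled by the `FUNCTIONS_EXTENSION_VERSION` app setting, which can be set from the Azure CLI, for example (a sketch with placeholder names):

```bash
# Pin the function app to the 3.x major version of the runtime.
az functionapp config appsettings set --name <APP_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --settings FUNCTIONS_EXTENSION_VERSION=~3
```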

articles/cost-management-billing/manage/understand-vm-reservation-charges.md

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ When you shut down a resource, the reservation discount automatically applies to
  ![Screenshot of one applied reservation and two matching VM instances](./media/understand-vm-reservation-charges/billing-reserved-vm-instance-application.png)

- 1. Any usage that's above the reservation line gets charged at the regular pay-as-you-go rates. You're not charge for any usage below the reservations line, since it has been already paid as part of reservation purchase.
+ 1. Any usage that's above the reservation line gets charged at the regular pay-as-you-go rates. You're not charged for any usage below the reservations line, since it has been already paid as part of reservation purchase.
  2. In hour 1, instance 1 runs for 0.75 hours and instance 2 runs for 0.5 hours. Total usage for hour 1 is 1.25 hours. You're charged the pay-as-you-go rates for the remaining 0.25 hours.
  3. For hour 2 and hour 3, both instances ran for 1 hour each. One instance is covered by the reservation and the other is charged at pay-as-you-go rates.
  4. For hour 4, instance 1 runs for 0.5 hours and instance 2 runs for 1 hour. Instance 1 is fully covered by the reservation and 0.5 hours of instance 2 is covered. You’re charged the pay-as-you-go rate for the remaining 0.5 hours.
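
The hour-by-hour arithmetic in these steps reduces to charging `max(0, total usage - reserved instance-hours)` at pay-as-you-go rates for each clock hour. A tiny sketch that reproduces the numbers above:

```python
def pay_as_you_go_hours(usage_hours: float, reserved_instances: int = 1) -> float:
    """Hours billed at pay-as-you-go rates within one clock hour."""
    return max(0.0, usage_hours - reserved_instances)

# Combined usage of instance 1 and instance 2 for hours 1-4 in the example.
for hour, usage in enumerate([0.75 + 0.5, 1 + 1, 1 + 1, 0.5 + 1], start=1):
    print(f"hour {hour}: {pay_as_you_go_hours(usage)} hours charged")
# hour 1: 0.25, hours 2-3: 1.0 each, hour 4: 0.5
```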

articles/data-explorer/data-factory-template.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ ms.date: 09/08/2019
  Azure Data Explorer is a fast, fully managed, data-analytics service. It offers real-time analysis on large volumes of data that stream from many sources, such as applications, websites, and IoT devices.

- Azure Data Factory is a fully managed, cloud-based, data-integration service. You can use it to populate your Azure Data Explorer database with data from your existing system. And it can help you save time when you're building analytics solutions.
+ To copy data from a database in Oracle Server, Netezza, Teradata, or SQL Server to Azure Data Explorer, you have to load huge amounts of data from multiple tables. Usually, the data has to be partitioned in each table so that you can load rows with multiple threads in parallel from a single table. This article describes a template to use in these scenarios.

  [Azure Data Factory templates](/azure/data-factory/solution-templates-introduction) are predefined Data Factory pipelines. These templates can help you get started quickly with Data Factory and reduce development time on data integration projects.

articles/machine-learning/how-to-manage-workspace-cli.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -157,7 +157,7 @@ To create a workspace that uses existing resources, you must provide the ID for
157157

158158
`"/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/microsoft.insights/components/<application-insight-name>"`
159159

160-
+ **Azure Key Vault**: `az keyvault show --name <key-vault-name> --query "ID"
160+
+ **Azure Key Vault**: `az keyvault show --name <key-vault-name> --query "ID"`
161161

162162
The response from this command is similar to the following text, and is the ID for your key vault:
163163
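
Following the pattern of the other resource IDs in this article, the returned value takes roughly this form (a sketch based on the standard `Microsoft.KeyVault/vaults` resource type, not text from the commit):

`"/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/Microsoft.KeyVault/vaults/<key-vault-name>"`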

articles/openshift/howto-aad-app-configuration.md

Lines changed: 2 additions & 2 deletions
@@ -86,8 +86,8 @@ For details on creating a new Azure AD application, see [Register an app with th
  ## Add API permissions

  1. In the **Manage** section click **API permissions**.
- 2. Click **Add permission** and select **Azure Active Directory Graph** then **Delegated permissions**
- 3. Expand **User** on the list below and make sure **User.Read** is enabled.
+ 2. Click **Add permission** and select **Azure Active Directory Graph** then **Delegated permissions**.
+ 3. Expand **User** on the list below and enable the **User.Read** permission. If **User.Read** is enabled by default, ensure that it is the **Azure Active Directory Graph** permission **User.Read**, *not* the **Microsoft Graph** permission **User.Read**.
  4. Scroll up and select **Application permissions**.
  5. Expand **Directory** on the list below and enable **Directory.ReadAll**
  6. Click **Add permissions** to accept the changes.
