articles/aks/operator-best-practices-advanced-scheduler.md (1 addition & 1 deletion)
@@ -12,7 +12,7 @@ ms.author: mlearned
# Best practices for advanced scheduler features in Azure Kubernetes Service (AKS)
- As you manage clusters in Azure Kubernetes Service (AKS), you often need to isolate teams and workloads. The Kubernetes scheduler provides advanced features that let you control which pods can be scheduled on certain nodes, or how multi-pod applications can appropriately distributed across the cluster.
+ As you manage clusters in Azure Kubernetes Service (AKS), you often need to isolate teams and workloads. The Kubernetes scheduler provides advanced features that let you control which pods can be scheduled on certain nodes, or how multi-pod applications can be appropriately distributed across the cluster.
This best practices article focuses on advanced Kubernetes scheduling features for cluster operators. In this article, you learn how to:
articles/aks/operator-best-practices-cluster-isolation.md (1 addition & 1 deletion)
@@ -50,7 +50,7 @@ A common approach to cluster isolation is to use physically separate AKS cluster
- Physically separate clusters usually have a low pod density. As each team or workload has their own AKS cluster, the cluster is often over-provisioned with compute resources. Often, a small number of pods is scheduled on those nodes. Unused capacity on the nodes can't be used for applications or services in development by other teams. These excess resources contribute to the additional costs in physically separate clusters.
+ Physically separate clusters usually have a low pod density. As each team or workload has their own AKS cluster, the cluster is often over-provisioned with compute resources. Often, a small number of pods are scheduled on those nodes. Unused capacity on the nodes can't be used for applications or services in development by other teams. These excess resources contribute to the additional costs in physically separate clusters.
articles/aks/operator-best-practices-scheduler.md (1 addition & 1 deletion)
@@ -76,7 +76,7 @@ The involuntary disruptions can be mitigated by using multiple replicas of your
If a cluster is to be upgraded or a deployment template updated, the Kubernetes scheduler makes sure additional pods are scheduled on other nodes before the voluntary disruption events can continue. The scheduler waits before a node is rebooted until the defined number of pods are successfully scheduled on other nodes in the cluster.
- Let's look at an example of a replica set with five pods that run NGINX. The pods in the replica set as assigned the label `app: nginx-frontend`. During a voluntary disruption event, such as a cluster upgrade, you want to make sure at least three pods continue to run. The following YAML manifest for a *PodDisruptionBudget* object defines these requirements:
+ Let's look at an example of a replica set with five pods that run NGINX. The pods in the replica set are assigned the label `app: nginx-frontend`. During a voluntary disruption event, such as a cluster upgrade, you want to make sure at least three pods continue to run. The following YAML manifest for a *PodDisruptionBudget* object defines these requirements:
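The manifest itself falls outside this hunk; a minimal sketch consistent with the description above (the `metadata.name` is illustrative, and the `policy/v1beta1` API version assumes a Kubernetes release of that era):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb        # illustrative name
spec:
  minAvailable: 3        # keep at least three pods running during voluntary disruptions
  selector:
    matchLabels:
      app: nginx-frontend
```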
articles/azure-functions/functions-monitoring.md (8 additions & 8 deletions)
@@ -27,7 +27,7 @@ For a function app to send data to Application Insights, it needs to know the in
### New function app in the portal
- When you [create your function app in the Azure portal](functions-create-first-azure-function.md), Application Insights integration is enabled by default. The Application Insights resource has the same name as your function app, and it's created either in the same region or in nearest region.
+ When you [create your function app in the Azure portal](functions-create-first-azure-function.md), Application Insights integration is enabled by default. The Application Insights resource has the same name as your function app, and it's created either in the same region or in the nearest region.
To review the Application Insights resource being created, select it to expand the **Application Insights** window. You can change the **New resource name** or choose a different **Location** in an [Azure geography](https://azure.microsoft.com/global-infrastructure/geographies/) where you want to store your data.
@@ -70,7 +70,7 @@ You can see that both pages have a **Run in Application Insights** link to the A
- The following query is displayed. You can see that the invocation list is limited to the last 30 days. The list shows no more than 20 rows (`where timestamp > ago(30d) | take 20`). The invocation details list is for the last 30 days with no limit.
+ The following query is displayed. You can see that the query results are limited to the last 30 days (`where timestamp > ago(30d)`). In addition, the results show no more than 20 rows (`take 20`). In contrast, the invocation details list for your function is for the last 30 days with no limit.
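The query itself falls outside this hunk; a sketch of an Application Insights query matching that description (the portal's exact column projection may differ):

```kusto
requests
| project timestamp, id, name, success, resultCode, duration, operation_Id
| where timestamp > ago(30d)
| order by timestamp desc
| take 20
```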
|**Servers**| View resource utilization and throughput per server. This data can be useful for debugging scenarios where functions are bogging down your underlying resources. Servers are referred to as **Cloud role instances**. |
|**[Metrics](../azure-monitor/app/metrics-explorer.md)**| Create charts and alerts that are based on metrics. Metrics include the number of function invocations, execution time, and success rates. |
- |**[Live Metrics Stream](../azure-monitor/app/live-stream.md)**| View metrics data as it's created in real time. |
+ |**[Live Metrics Stream](../azure-monitor/app/live-stream.md)**| View metrics data as it's created in near real-time. |
## Query telemetry data
@@ -333,7 +333,7 @@ You can write logs in your function code that appear as traces in Application In
Use an [ILogger](https://docs.microsoft.com/dotnet/api/microsoft.extensions.logging.ilogger) parameter in your functions instead of a `TraceWriter` parameter. Logs created by using `TraceWriter` go to Application Insights, but `ILogger` lets you do [structured logging](https://softwareengineering.stackexchange.com/questions/312197/benefits-of-structured-logging-vs-basic-logging).
- With an `ILogger` object, you call `Log<level>` [extension methods on ILogger](https://docs.microsoft.com/dotnet/api/microsoft.extensions.logging.loggerextensions#methods) to create logs. The following code writes `Information` logs with category "Function."
+ With an `ILogger` object, you call `Log<level>` [extension methods on ILogger](https://docs.microsoft.com/dotnet/api/microsoft.extensions.logging.loggerextensions#methods) to create logs. The following code writes `Information` logs with category "Function.<YOUR_FUNCTION_NAME>.User."
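The code sample falls outside this hunk; a minimal sketch of the `ILogger` pattern (the trigger type, queue name, and function name are illustrative):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class QueueTriggerFunction
{
    [FunctionName("QueueTrigger")]
    public static void Run(
        [QueueTrigger("myqueue-items")] string myQueueItem,
        ILogger log)
    {
        // Structured logging: {queueItem} becomes a named property on the
        // trace in Application Insights.
        log.LogInformation("C# Queue trigger function processed: {queueItem}", myQueueItem);
    }
}
```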
Don't call `TrackRequest` or `StartOperation<RequestTelemetry>` because you'll see duplicate requests for a function invocation. The Functions runtime automatically tracks requests.
- Don't set `telemetryClient.Context.Operation.Id`. This global setting causes incorrect correlation when many functions are running simultaneously. Instead, create a new telemetry instance (`DependencyTelemetry`, `EventTelemetry`) and modify its `Context` property. Then pass in the telemetry instance to the corresponding `Track` method on `TelemetryClient` (`TrackDependency()`, `TrackEvent()`). This method ensures that the telemetry has the correct correlation details for the current function invocation.
+ Don't set `telemetryClient.Context.Operation.Id`. This global setting causes incorrect correlation when many functions are running simultaneously. Instead, create a new telemetry instance (`DependencyTelemetry`, `EventTelemetry`) and modify its `Context` property. Then pass in the telemetry instance to the corresponding `Track` method on `TelemetryClient` (`TrackDependency()`, `TrackEvent()`, `TrackMetric()`). This method ensures that the telemetry has the correct correlation details for the current function invocation.
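A sketch of that per-instance pattern (assuming `telemetryClient` is a shared `TelemetryClient` and `context` is the function's `ExecutionContext`; the dependency details are illustrative):

```csharp
using Microsoft.ApplicationInsights.DataContracts;

var dependency = new DependencyTelemetry
{
    Name = "MyDownstreamCall",   // illustrative
    Type = "HTTP",
    Target = "example.com",
    Success = true
};
// Correlate this one telemetry item instead of mutating the global
// telemetryClient.Context.Operation.Id.
dependency.Context.Operation.Id = context.InvocationId.ToString();
telemetryClient.TrackDependency(dependency);
```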
## Log custom telemetry in JavaScript functions
@@ -586,7 +586,7 @@ The `tagOverrides` parameter sets the `operation_Id` to the function's invocatio
* **Built-in log streaming**: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during [local development](functions-develop-local.md) and when you use the **Test** tab in the portal. All log-based information is displayed. For more information, see [Stream logs](../app-service/troubleshoot-diagnostic-logs.md#stream-logs). This streaming method supports only a single instance, and can't be used with an app running on Linux in a Consumption plan.
articles/azure-functions/set-runtime-version.md (1 addition & 1 deletion)
@@ -8,7 +8,7 @@ ms.date: 11/26/2018
# How to target Azure Functions runtime versions
- A function app runs on a specific version of the Azure Functions runtime. There are two major versions: [1.x and 2.x](functions-versions.md), with version 3.x in preview. By default, function apps that are created version 2.x of the runtime. This article explains how to configure a function app in Azure to run on the version you choose. For information about how to configure a local development environment for a specific version, see [Code and test Azure Functions locally](functions-run-local.md).
+ A function app runs on a specific version of the Azure Functions runtime. There are three major versions: [1.x, 2.x, and 3.x](functions-versions.md). By default, function apps are created in version 2.x of the runtime. This article explains how to configure a function app in Azure to run on the version you choose. For information about how to configure a local development environment for a specific version, see [Code and test Azure Functions locally](functions-run-local.md).
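For reference, the target version is controlled by the `FUNCTIONS_EXTENSION_VERSION` app setting; one way to pin it (placeholder names) is with the Azure CLI:

```azurecli
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings FUNCTIONS_EXTENSION_VERSION=~2
```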
articles/cost-management-billing/manage/understand-vm-reservation-charges.md (1 addition & 1 deletion)
@@ -41,7 +41,7 @@ When you shut down a resource, the reservation discount automatically applies to
- 1. Any usage that's above the reservation line gets charged at the regular pay-as-you-go rates. You're not charge for any usage below the reservations line, since it has been already paid as part of reservation purchase.
+ 1. Any usage that's above the reservation line gets charged at the regular pay-as-you-go rates. You're not charged for any usage below the reservations line, since it has been already paid as part of reservation purchase.
2. In hour 1, instance 1 runs for 0.75 hours and instance 2 runs for 0.5 hours. Total usage for hour 1 is 1.25 hours. You're charged the pay-as-you-go rates for the remaining 0.25 hours.
3. For hour 2 and hour 3, both instances ran for 1 hour each. One instance is covered by the reservation and the other is charged at pay-as-you-go rates.
4. For hour 4, instance 1 runs for 0.5 hours and instance 2 runs for 1 hour. Instance 1 is fully covered by the reservation and 0.5 hours of instance 2 is covered. You’re charged the pay-as-you-go rate for the remaining 0.5 hours.
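The pattern behind these hourly numbers, written out (assuming a reservation for one instance, so one reserved hour per clock hour):

```
pay-as-you-go hours = max(0, total usage hours - reserved hours)
hour 1: max(0, 1.25 - 1.0) = 0.25
hour 4: max(0, 1.50 - 1.0) = 0.50
```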
articles/data-explorer/data-factory-template.md (1 addition & 1 deletion)
@@ -16,7 +16,7 @@ ms.date: 09/08/2019
Azure Data Explorer is a fast, fully managed, data-analytics service. It offers real-time analysis on large volumes of data that stream from many sources, such as applications, websites, and IoT devices.
- Azure Data Factory is a fully managed, cloud-based, data-integration service. You can use it to populate your Azure Data Explorer database with data from your existing system. And it can help you save time when you're building analytics solutions.
+ To copy data from a database in Oracle Server, Netezza, Teradata, or SQL Server to Azure Data Explorer, you have to load huge amounts of data from multiple tables. Usually, the data has to be partitioned in each table so that you can load rows with multiple threads in parallel from a single table. This article describes a template to use in these scenarios.
[Azure Data Factory templates](/azure/data-factory/solution-templates-introduction) are predefined Data Factory pipelines. These templates can help you get started quickly with Data Factory and reduce development time on data integration projects.
articles/openshift/howto-aad-app-configuration.md (2 additions & 2 deletions)
@@ -86,8 +86,8 @@ For details on creating a new Azure AD application, see [Register an app with th
## Add API permissions
1. In the **Manage** section click **API permissions**.
- 2. Click **Add permission** and select **Azure Active Directory Graph** then **Delegated permissions**
- 3. Expand **User** on the list below and make sure **User.Read** is enabled.
+ 2. Click **Add permission** and select **Azure Active Directory Graph** then **Delegated permissions**.
+ 3. Expand **User** on the list below and enable the **User.Read** permission. If **User.Read** is enabled by default, ensure that it is the **Azure Active Directory Graph** permission **User.Read**, *not* the **Microsoft Graph** permission **User.Read**.
4. Scroll up and select **Application permissions**.
5. Expand **Directory** on the list below and enable **Directory.ReadAll**
6. Click **Add permissions** to accept the changes.