articles/aks/best-practices-performance-scale-large.md (4 additions, 4 deletions)
@@ -3,7 +3,7 @@ title: Performance and scaling best practices for large workloads in Azure Kuber
titleSuffix: Azure Kubernetes Service
description: Learn the best practices for performance and scaling for large workloads in Azure Kubernetes Service (AKS).
ms.topic: conceptual
- ms.date: 11/03/2023
+ ms.date: 01/18/2024
---

# Best practices for performance and scaling for large workloads in Azure Kubernetes Service (AKS)

@@ -33,10 +33,10 @@ Kubernetes has a multi-dimensional scale envelope with each resource type repres
The control plane manages all the resource scaling in the cluster, so the more you scale the cluster within a given dimension, the less you can scale within other dimensions. For example, running hundreds of thousands of pods in an AKS cluster impacts how much pod churn rate (pod mutations per second) the control plane can support.

- The size of the envelope is proportional to the size of the Kubernetes control plane. AKS supports two control plane tiers as part of the Base SKU: the Free tier and the Standard tier. For more information, see [Freeand Standard pricing tiers for AKS cluster management][free-standard-tier].
+ The size of the envelope is proportional to the size of the Kubernetes control plane. AKS supports three control plane tiers as part of the Base SKU: Free, Standard, and Premium tier. For more information, see [Free, Standard, and Premium pricing tiers for AKS cluster management][pricing-tiers].

> [!IMPORTANT]
- > We highly recommend using the Standard tier for production or at-scale workloads. AKS automatically scales up the Kubernetes control plane to support the following scale limits:
+ > We highly recommend using the Standard or Premium tier for production or at-scale workloads. AKS automatically scales up the Kubernetes control plane to support the following scale limits:
>
> * Up to 5,000 nodes per AKS cluster
> * 200,000 pods per AKS cluster (with Azure CNI Overlay)

@@ -115,7 +115,7 @@ As you scale your AKS clusters to larger scale points, keep the following node p
articles/api-management/backends.md (7 additions, 1 deletion)
@@ -89,6 +89,10 @@ Starting in API version 2023-03-01 preview, API Management exposes a [circuit br
The backend circuit breaker is an implementation of the [circuit breaker pattern](/azure/architecture/patterns/circuit-breaker) to allow the backend to recover from overload situations. It augments general [rate-limiting](rate-limit-policy.md) and [concurrency-limiting](limit-concurrency-policy.md) policies that you can implement to protect the API Management gateway and your backend services.

+ > [!NOTE]
+ > * Currently, the backend circuit breaker isn't supported in the **Consumption** tier of API Management.
+ > * Because of the distributed nature of the API Management architecture, circuit breaker tripping rules are approximate. Different instances of the gateway do not synchronize and will apply circuit breaker rules based on the information on the same instance.
+

### Example

Use the API Management [REST API](/rest/api/apimanagement/backend) or a Bicep or ARM template to configure a circuit breaker in a backend. In the following example, the circuit breaker in *myBackend* in the API Management instance *myAPIM* trips when there are three or more `5xx` status codes indicating server errors in a day. The circuit breaker resets after one hour.
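The configuration itself isn't shown in this hunk. As a rough sketch of the REST call described above (the subscription, resource group, backend URL, and bearer token are placeholders, and the `circuitBreaker` property names reflect my reading of the 2023-03-01-preview backend contract rather than anything quoted in this diff):

```python
import requests  # assumes the 'requests' package and a valid ARM bearer token

# Placeholder identifiers; only the service and backend names come from the example text.
SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
SERVICE = "myAPIM"
BACKEND = "myBackend"
API_VERSION = "2023-03-01-preview"

url = (
    "https://management.azure.com"
    f"/subscriptions/{SUBSCRIPTION}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.ApiManagement/service/{SERVICE}"
    f"/backends/{BACKEND}?api-version={API_VERSION}"
)

# Circuit breaker rule: trip on three or more 5xx responses within one day,
# keep the breaker open for one hour before allowing traffic again.
body = {
    "properties": {
        "url": "https://backend.example.com/api",  # placeholder backend URL
        "protocol": "http",
        "circuitBreaker": {
            "rules": [
                {
                    "name": "myBreakerRule",
                    "failureCondition": {
                        "count": 3,
                        "interval": "P1D",  # ISO 8601: one day
                        "statusCodeRanges": [{"min": 500, "max": 599}],
                    },
                    "tripDuration": "PT1H",  # ISO 8601: one hour
                }
            ]
        },
    }
}

token = "<bearer-token>"  # e.g. obtained via azure-identity
resp = requests.put(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
resp.raise_for_status()
print(resp.json())
```

The `interval` of `P1D` and `tripDuration` of `PT1H` map to the "three or more 5xx in a day, reset after one hour" behavior in the example description.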
@@ -177,7 +181,9 @@ Use a backend pool for scenarios such as the following:
To create a backend pool, set the `type` property of the backend to `pool` and specify a list of backends that make up the pool.

> [!NOTE]
- > Currently, you can only include single backends in a backend pool. You can't add a backend of type `pool` to another backend pool.
+ > * Currently, you can only include single backends in a backend pool. You can't add a backend of type `pool` to another backend pool.
+ > * Because of the distributed nature of the API Management architecture, backend load balancing is approximate. Different instances of the gateway do not synchronize and will load balance based on the information on the same instance.
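For the backend pool described in this hunk, a minimal request-body sketch might look like the following. This is illustrative only: the member shape under `pool` and the relative backend IDs are assumptions, not something quoted in the diff; only "set `type` to `pool` and list the member backends" comes from the text.

```python
import json

# Hypothetical body for creating a backend of type "pool" via the same backends
# REST endpoint shown above. The "services" member shape and IDs are assumptions;
# real values would be the full resource IDs of existing single backends.
pool_body = {
    "properties": {
        "description": "Load-balanced pool of single backends",
        "type": "pool",
        "pool": {
            "services": [
                {"id": "/backends/backend-1"},  # placeholder member backend
                {"id": "/backends/backend-2"},  # placeholder member backend
            ]
        },
    }
}

print(json.dumps(pool_body, indent=2))
```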
articles/bastion/bastion-faq.md (4 additions, 0 deletions)
@@ -241,6 +241,10 @@ Make sure the user has **read** access to both the VM, and the peered VNet. Addi
|Microsoft.Network/virtualNetworks/subnets/virtualMachines/read|Gets references to all the virtual machines in a virtual network subnet|Action|
|Microsoft.Network/virtualNetworks/virtualMachines/read|Gets references to all the virtual machines in a virtual network|Action|

+ ### I am connecting to a VM using a JIT policy, do I need additional permissions?
+
+ If a user is connecting to a VM using a JIT policy, no additional permissions are needed. For more information on connecting to a VM using a JIT policy, see [Enable just-in-time access on VMs](../defender-for-cloud/just-in-time-access-usage.md).
+

### My privatelink.azure.com can't resolve to management.privatelink.azure.com

This may be due to the Private DNS zone for privatelink.azure.com linked to the Bastion virtual network causing management.azure.com CNAMEs to resolve to management.privatelink.azure.com behind the scenes. Create a CNAME record in their privatelink.azure.com zone for management.privatelink.azure.com to arm-frontdoor-prod.trafficmanager.net to enable successful DNS resolution.
articles/connectors/connectors-create-api-azureblobstorage.md (8 additions, 1 deletion)
@@ -5,7 +5,7 @@ services: logic-apps
ms.suite: integration
ms.reviewer: estfan, azla
ms.topic: how-to
- ms.date: 01/10/2024
+ ms.date: 01/18/2024
tags: connectors
---

@@ -39,6 +39,13 @@ The Azure Blob Storage connector has different versions, based on [logic app typ
1. Follow the trigger with the Azure Blob Storage managed connector action named [**Get blob content**](/connectors/azureblobconnector/#get-blob-content), which reads the complete file and implicitly uses chunking.

+ - Azure Blob Storage trigger limits
+
+ - The *managed* connector trigger is limited to 30,000 blobs in the polling virtual folder.
+ - The *built-in* connector trigger is limited to 10,000 blobs in the entire polling container.
+
+ If the limit is exceeded, a new blob might not be able to trigger the workflow, so the trigger is skipped.

## Prerequisites

- An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
articles/healthcare-apis/dicom/change-feed-overview.md (12 additions, 12 deletions)
@@ -1,17 +1,17 @@
---
- title: Overview of DICOM change feed - Azure Health Data Services
- description: In this article, you learn the concepts of DICOM change feed.
+ title: Change feed overview for the DICOM service in Azure Health Data Services
+ description: Learn how to use the change feed in the DICOM service to access the logs of all the changes that occur in your organization's medical imaging data. The change feed allows you to query, process, and act upon the change events in a scalable and efficient way.
author: mmitrik
ms.service: healthcare-apis
ms.subservice: dicom
ms.topic: conceptual
- ms.date: 10/9/2023
+ ms.date: 1/18/2024
ms.author: mmitrik
---

# Change feed overview

- The change feed provides logs of all the changes that occur in the DICOM® service. The change feed provides ordered, guaranteed, immutable, and read-only logs of these changes. The change feed offers the ability to go through the history of DICOM service and acts upon the creates and deletes in the service.
+ The change feed provides logs of all the changes that occur in the DICOM® service. The change feed provides ordered, guaranteed, immutable, and read-only logs of these changes. The change feed offers the ability to go through the history of DICOM service and acts upon the creates, updates, and deletes in the service.

Client applications can read these logs at any time in batches of any size. The change feed enables you to build efficient and scalable solutions that process change events that occur in your DICOM service.
@@ -38,7 +38,7 @@ Sequence | long | The unique ID per change event
StudyInstanceUid | string | The study instance UID
SeriesInstanceUid | string | The series instance UID
SopInstanceUid | string | The sop instance UID
- Action | string | The action that was performed - either `create` or `delete`
+ Action | string | The action that was performed - either `create`, `update`, or `delete`
Timestamp | datetime | The date and time the action was performed in UTC
State | string | [The current state of the metadata](#states)
Metadata | object | Optionally, the current DICOM metadata if the instance exists
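To make the field table in this hunk concrete, here is an illustrative Python model of one change event. It is not part of any official SDK; the names and types mirror the table (and the state values listed in the next hunk), and the exact casing of the JSON returned by the service may differ.

```python
from datetime import datetime
from typing import Any, Literal, Optional, TypedDict


class ChangeFeedEntry(TypedDict):
    """Illustrative shape of a single DICOM change feed event (mirrors the field table)."""

    Sequence: int                                     # unique ID per change event
    StudyInstanceUid: str                             # study instance UID
    SeriesInstanceUid: str                            # series instance UID
    SopInstanceUid: str                               # SOP instance UID
    Action: Literal["create", "update", "delete"]     # action performed on the instance
    Timestamp: datetime                               # when the action was performed, in UTC
    State: Literal["current", "replaced", "deleted"]  # current state of the metadata
    Metadata: Optional[dict[str, Any]]                # current DICOM metadata, if the instance exists
```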
@@ -48,12 +48,12 @@ Metadata | object | Optionally, the current DICOM metadata if the
State | Description
:------- | :---
current | This instance is the current version.
- replaced | This instance has been replaced by a new version.
- deleted | This instance has been deleted and is no longer available in the service.
+ replaced | This instance is replaced with a new version.
+ deleted | This instance is deleted and is no longer available in the service.

## Change feed

- The change feed resource is a collection of events that have occurred within the DICOM server.
+ The change feed resource is a collection of events that occurred within the DICOM server.

### Version 2
@@ -152,7 +152,7 @@ limit | int | The maximum value of the sequence number relative t
includeMetadata | bool | Indicates whether or not to include the DICOM metadata | `true` | | |

## Latest change feed

- The latest change feed resource represents the latest event that has occurred within the DICOM Server.
+ The latest change feed resource represents the latest event that occurred within the DICOM server.
@@ -208,15 +208,15 @@ includeMetadata | bool | Indicates whether or not to include the metadata | `tru
2. On some regular polling interval, the application performs the following actions:
* Fetches the latest sequence number from the `/changefeed/latest` endpoint
* Fetches the next set of changes for processing by querying the change feed with the current offset
- * For example, if the application has currently processed up to sequence number 15 and it only wants to process at most 5 events at once, then it should use the URL `/changefeed?offset=15&limit=5`
+ * For example, if the application processed up to sequence number 15 and it only wants to process at most five events at once, then it should use the URL `/changefeed?offset=15&limit=5`
* Processes any entries return by the `/changefeed` resource
* Updates its current sequence number to either:
1. The maximum sequence number returned by the `/changefeed` resource
2. The `offset` + `limit` if no change events were returned from the `/changefeed` resource, but the latest sequence number returned by `/changefeed/latest` is greater than the current sequence number used for `offset`

### Other potential usage patterns

- Change feed support is wellsuited for scenarios that process data based on objects that have changed. For example, it can be used to:
+ Change feed support is well-suited for scenarios that process data based on objects that are changed. For example, it can be used to:

* Build connected application pipelines like ML that react to change events or schedule executions based on created or deleted instance.
* Extract business analytics insights and metrics, based on changes that occur to your objects.
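As a companion to the polling pattern spelled out in the `@@ -208,15 +208,15 @@` hunk above, here is a minimal polling-loop sketch. It is not from the article: the base URL, token, response field casing, and `handle_change` function are placeholders or assumptions; only the `/changefeed/latest` and `/changefeed?offset=...&limit=...` endpoints and the offset-advancing rules come from the documented pattern.

```python
import time

import requests  # assumes the 'requests' package; auth and base URL below are placeholders

BASE_URL = "https://<your-dicom-service>/v2"   # placeholder DICOM service endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder access token
LIMIT = 5                                      # process at most five events per poll
POLL_SECONDS = 60


def handle_change(entry: dict) -> None:
    """Placeholder processing step; replace with real work (indexing, analytics, ...)."""
    print(entry.get("Sequence"), entry.get("Action"), entry.get("SopInstanceUid"))


offset = 0  # highest sequence number processed so far

while True:
    # Fetch the latest sequence number known to the service.
    latest = requests.get(f"{BASE_URL}/changefeed/latest", headers=HEADERS).json()

    # Fetch the next batch of changes relative to the current offset,
    # e.g. /changefeed?offset=15&limit=5 after processing sequence 15.
    changes = requests.get(
        f"{BASE_URL}/changefeed",
        params={"offset": offset, "limit": LIMIT, "includeMetadata": "true"},
        headers=HEADERS,
    ).json()

    if changes:
        for entry in changes:
            handle_change(entry)
        # Rule 1: advance to the maximum sequence number returned.
        offset = max(entry["Sequence"] for entry in changes)
    elif latest.get("Sequence", 0) > offset:
        # Rule 2: no events returned but newer ones exist, so skip ahead by offset + limit.
        offset += LIMIT

    time.sleep(POLL_SECONDS)
```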