articles/active-directory/governance/create-access-review.md (3 additions & 0 deletions)
@@ -95,6 +95,9 @@ If you are reviewing access to an application, then before creating the review,
If you choose either **Managers of users** or **Group owner(s)**, you can also specify a fallback reviewer. Fallback reviewers are asked to do a review when the user has no manager specified in the directory or if the group doesn't have an owner.
+ > [!NOTE]
+ > In a team or group access review, only the group owners (at the time the review starts) are considered reviewers. If the list of group owners changes while the review is in progress, newly added owners aren't added as reviewers, and the original owners remain reviewers. In a recurring review, however, changes to the group owners list are picked up in the next instance of that review.
+
>[!IMPORTANT]
> For PIM for Groups (Preview), you must select **Group owner(s)**. It is mandatory to assign at least one fallback reviewer to the review. The review will only assign active owner(s) as the reviewer(s). Eligible owners are not included. If there are no active owners when the review begins, the fallback reviewer(s) will be assigned to the review.
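For readers who automate access reviews rather than using the portal, the following is a minimal sketch of creating a review like the one described above through the Microsoft Graph access reviews API, with the group owners as reviewers and a fallback reviewer. It assumes a token with the `AccessReview.ReadWrite.All` permission; the group ID, fallback user ID, start date, and recurrence values are placeholders, not values from this article.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
token = "<bearer-token-with-AccessReview.ReadWrite.All>"  # placeholder
group_id = "<group-object-id>"                            # placeholder
fallback_user_id = "<fallback-reviewer-object-id>"        # placeholder

definition = {
    "displayName": "Quarterly review of group membership",
    # Review the transitive members of the group.
    "scope": {
        "@odata.type": "#microsoft.graph.accessReviewQueryScope",
        "query": f"/groups/{group_id}/transitiveMembers",
        "queryType": "MicrosoftGraph",
    },
    # Group owners act as reviewers.
    "reviewers": [
        {"query": f"/groups/{group_id}/owners", "queryType": "MicrosoftGraph"}
    ],
    # Fallback reviewer used when the group has no owner.
    "fallbackReviewers": [
        {"query": f"/users/{fallback_user_id}", "queryType": "MicrosoftGraph"}
    ],
    "settings": {
        "instanceDurationInDays": 14,
        "recurrence": {
            "pattern": {"type": "absoluteMonthly", "interval": 3},
            "range": {"type": "noEnd", "startDate": "2024-01-01"},
        },
    },
}

resp = requests.post(
    f"{GRAPH}/identityGovernance/accessReviews/definitions",
    headers={"Authorization": f"Bearer {token}"},
    json=definition,
)
resp.raise_for_status()
print("Created access review definition:", resp.json()["id"])
```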
b. In the **Reply URL** textbox, type the value provided by [Easy Metrics Auth0 Connector support team](mailto:[email protected]).
- c. In the **Sign on URL** textbox, type the URL:
- `https://azureapp.gcp-easymetrics.com`
+ c. In the **Sign on URL** textbox, type the value provided by [Easy Metrics Auth0 Connector support team](mailto:[email protected]).
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
@@ -100,4 +97,4 @@ In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
Once you configure Easy Metrics Auth0 Connector you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
articles/cosmos-db/gremlin/introduction.md (1 addition & 1 deletion)
@@ -33,7 +33,7 @@ The API for Gremlin has added benefits of being built on Azure Cosmos DB:
-**Elastically scalable throughput and storage**: Graphs in the real world need to scale beyond the capacity of a single server. Azure Cosmos DB supports horizontally scalable graph databases that can have an unlimited size in terms of storage and provisioned throughput. As the graph database scale grows, the data is automatically distributed using [graph partitioning](./partitioning.md).
- -**Multi-region replication**: Azure Cosmos DB can automatically replicate your graph data to any Azure region worldwide. Global replication simplifies the development of applications that require global access to data. In addition to minimizing read and write latency anywhere around the world, Azure Cosmos DB provides automatic regional failover mechanism. This mechanism can ensure the continuity of your application in the rare case of a service interruption in a region.
+ -**Multi-region replication**: Azure Cosmos DB can automatically replicate your graph data to any Azure region worldwide. Global replication simplifies the development of applications that require global access to data. In addition to minimizing read and write latency anywhere around the world, Azure Cosmos DB provides a service-managed regional failover mechanism. This mechanism can ensure the continuity of your application in the rare case of a service interruption in a region.
-**Fast queries and traversals with the most widely adopted graph query standard**: Store heterogeneous vertices and edges and query them through a familiar Gremlin syntax. Gremlin is an imperative, functional query language that provides a rich interface to implement common graph algorithms. The API for Gremlin enables rich real-time queries and traversals without the need to specify schema hints, secondary indexes, or views. For more information, see [query graphs by using Gremlin](tutorial-query.md).
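To illustrate the Gremlin syntax this section refers to, here is a minimal sketch using the open-source `gremlinpython` driver against an API for Gremlin account. The account name, database, graph, key, and vertex ID are placeholders; Azure Cosmos DB samples typically use the GraphSON 2.0 serializer shown here.

```python
from gremlin_python.driver import client, serializer

# All connection values below are placeholders.
gremlin_client = client.Client(
    "wss://<account-name>.gremlin.cosmos.azure.com:443/",
    "g",
    username="/dbs/<database>/colls/<graph>",
    password="<account-key>",
    message_serializer=serializer.GraphSONSerializersV2d0(),
)

# A simple traversal: the names of people that the 'thomas' vertex knows.
query = "g.V('thomas').out('knows').values('name')"
results = gremlin_client.submit(query).all().result()
print(results)

gremlin_client.close()
```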
articles/cosmos-db/high-availability.md (8 additions & 8 deletions)
@@ -90,7 +90,7 @@ Single-region accounts might lose availability after a regional outage. To ensur
Service-managed failover allows Azure Cosmos DB to fail over the write region of a multiple-region account in order to preserve availability at the cost of data loss, as described earlier in the [Durability](#durability) section. Regional failovers are detected and handled in the Azure Cosmos DB client. They don't require any changes from the application. For instructions on how to enable multiple read regions and service-managed failover, see [Manage an Azure Cosmos DB account using the Azure portal](./how-to-manage-database-account.md).
> [!IMPORTANT]
- > We strongly recommend that you configure the Azure Cosmos DB accounts used for production workloads to *enable service-managed failover*. This configuration enables Azure Cosmos DB to fail over the account databases to available regions automatically.
+ > We strongly recommend that you configure the Azure Cosmos DB accounts used for production workloads to *enable service-managed failover*. This configuration enables Azure Cosmos DB to fail over the account databases to available regions.
>
> In the absence of this configuration, the account will experience loss of write availability for the whole duration of the write region outage. Manual failover won't succeed because of a lack of region connectivity.
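The linked article covers enabling service-managed failover in the portal; as a hedged sketch, the same account setting can also be changed programmatically with the `azure-mgmt-cosmosdb` management SDK. The subscription ID, resource group, and account name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient
from azure.mgmt.cosmosdb.models import DatabaseAccountUpdateParameters

# Placeholders: subscription, resource group, and account name.
client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Turn on service-managed (automatic) failover for the account.
poller = client.database_accounts.begin_update(
    "<resource-group>",
    "<account-name>",
    DatabaseAccountUpdateParameters(enable_automatic_failover=True),
)
account = poller.result()
print(account.name, "enable_automatic_failover =", account.enable_automatic_failover)
```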
@@ -161,25 +161,25 @@ Multiple-region accounts experience different behaviors depending on the followi
### Additional information on read region outages
- * The affected region is automatically disconnected and marked offline. The [Azure Cosmos DB SDKs](nosql/sdk-dotnet-v3.md) redirect read calls to the next available region in the preferred region list.
+ * The affected region is disconnected and marked offline. The [Azure Cosmos DB SDKs](nosql/sdk-dotnet-v3.md) redirect read calls to the next available region in the preferred region list.
* If none of the regions in the preferred region list are available, calls automatically fall back to the current write region.
- * No changes are required in your application code to handle read region outages. When the affected read region is back online, it automatically syncs with the current write region and is available again to serve read requests.
+ * No changes are required in your application code to handle read region outages. When the affected read region is back online, it syncs with the current write region and is available again to serve read requests after it has fully caught up.
* Subsequent reads are redirected to the recovered region without requiring any changes to your application code. During both failover and rejoining of a previously failed region, Azure Cosmos DB continues to honor read consistency guarantees.
* Even in a rare and unfortunate event where an Azure write region is permanently irrecoverable, there's no data loss if your multiple-region Azure Cosmos DB account is configured with strong consistency. A multiple-region Azure Cosmos DB account has the durability characteristics specified earlier in the [Durability](#durability) section.
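To connect the bullets above to application code, here is a minimal sketch, assuming the Python `azure-cosmos` SDK and placeholder account values, of supplying a preferred region list so the SDK can redirect reads during a read region outage:

```python
from azure.cosmos import CosmosClient

# Placeholders: endpoint, key, database, container, item ID, and partition key.
client = CosmosClient(
    "https://<account-name>.documents.azure.com:443/",
    credential="<account-key>",
    # Reads are served from the first available region in this list; if it is
    # offline, the SDK retries against the next one, then the write region.
    preferred_locations=["East US", "West US"],
)

container = client.get_database_client("<database>").get_container_client("<container>")

# No outage-handling code is needed here; regional redirection happens in the SDK.
item = container.read_item(item="<item-id>", partition_key="<partition-key>")
print(item["id"])
```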
### Additional information on write region outages
- * During a write region outage, the Azure Cosmos DB account automatically promotes a secondary region to be the new primary write region when *automatic (service-managed) failover* is configured on the Azure Cosmos DB account. The failover occurs to another region in the order of region priority that you specify.
+ * During a write region outage, the Azure Cosmos DB account promotes a secondary region to be the new primary write region when *service-managed failover* is configured on the Azure Cosmos DB account. The failover occurs to another region in the order of region priority that you specify.
* Manual failover shouldn't be triggered and won't succeed in the presence of an outage of the source or destination region. The reason is that the failover procedure includes a consistency check that requires connectivity between the regions.
* When the previously affected region is back online, any write data that wasn't replicated when the region failed is made available through the [conflict feed](how-to-manage-conflicts.md#read-from-conflict-feed). Applications can read the conflict feed, resolve the conflicts based on the application-specific logic, and write the updated data back to the Azure Cosmos DB container as appropriate.
- * After the previously affected write region recovers, it becomes automatically available as a read region. You can switch back to the recovered region as the write region by using [PowerShell, the Azure CLI, or the Azure portal](how-to-manage-database-account.md#manual-failover). There is *no data or availability loss* before, while, or after you switch the write region. Your application continues to be highly available.
+ * After the previously affected write region recovers, it will become available as a read region. You can switch back to the recovered region as the write region by using [PowerShell, the Azure CLI, or the Azure portal](how-to-manage-database-account.md#manual-failover). There is *no data or availability loss* before, while, or after you switch the write region. Your application continues to be highly available.
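As a sketch of the conflict feed step mentioned in the bullets above, assuming the Python `azure-cosmos` SDK and placeholder names, an application could enumerate and resolve conflicts roughly like this:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account-name>.documents.azure.com:443/", "<account-key>")
container = client.get_database_client("<database>").get_container_client("<container>")

# Enumerate conflicts recorded for writes that weren't replicated before the outage.
for conflict in container.list_conflicts():
    print("conflict:", conflict["id"])
    # Application-specific resolution goes here: inspect the conflicting
    # versions, decide on (or merge) a winner, upsert the winning document,
    # and then remove the conflict entry, for example:
    # container.delete_conflict(conflict["id"], partition_key="<partition-key>")
```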
## SLAs
@@ -220,9 +220,9 @@ For single-region accounts, clients experience a loss of read and write availabi
| Write regions | Service-managed failover | What to expect | What to do |
| -- | -- | -- | -- |
- | Single write region | Not enabled | If there's an outage in a read region when you're not using strong consistency, all clients redirect to other regions. There's no read or write availability loss, and there's no data loss. When you use strong consistency, an outage in a read region can affect write availability if fewer than two read regions remain.<br/><br/> If there's an outage in the write region, clients experience write availability loss. If you didn't select strong consistency, the service might not replicate some data to the remaining active regions. This replication depends on the selected [consistency level](consistency-levels.md#rto). If the affected region suffers permanent data loss, you might lose unreplicated data. <br/><br/> Azure Cosmos DB restores write availability automatically when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/><br/> *Don't* trigger a manual failover during the outage, because it can't succeed. <br/><br/> When the outage is over, readjust provisioned RUs as appropriate. |
- | Single write region | Enabled | If there's an outage in a read region when you're not using strong consistency, all clients redirect to other regions. There's no read or write availability loss, and there's no data loss. When you're using strong consistency, the outage of a read region can affect write availability if fewer than two read regions remain.<br/><br/> If there's an outage in the write region, clients experience write availability loss until Azure Cosmos DB automatically elects a new region as the new write region according to your preferences. If you didn't select strong consistency, the service might not replicate some data to the remaining active regions. This replication depends on the selected [consistency level](consistency-levels.md#rto). If the affected region suffers permanent data loss, you might lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/><br/> *Don't* trigger a manual failover during the outage, because it can't succeed. <br/><br/> When the outage is over, you can move the write region back to the original region and readjust provisioned RUs as appropriate. Accounts that use the API for NoSQL can also recover the unreplicated data in the failed region from your [conflict feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
- | Multiple write regions | Not applicable | Recently updated data in the failed region might be unavailable in the remaining active regions. Eventual, consistent prefix, and session consistency levels guarantee a staleness of less than 15 minutes. Bounded staleness guarantees fewer than *K* updates or *T* seconds, depending on the configuration. If the affected region suffers permanent data loss, you might lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support more traffic. <br/><br/> When the outage is over, you can readjust provisioned RUs as appropriate. If possible, Azure Cosmos DB automatically recovers unreplicated data in the failed region. This automatic recovery uses the conflict resolution method that you configure for accounts that use the API for NoSQL. For accounts that use other APIs, this automatic recovery uses *last write wins*. |
+ | Single write region | Not enabled | If there's an outage in a read region when you're not using strong consistency, all clients redirect to other regions. There's no read or write availability loss, and there's no data loss. When you use strong consistency, an outage in a read region can affect write availability if fewer than two read regions remain.<br/><br/> If there's an outage in the write region, clients experience write availability loss. If you didn't select strong consistency, the service might not replicate some data to the remaining active regions. This replication depends on the selected [consistency level](consistency-levels.md#rto). If the affected region suffers permanent data loss, you might lose unreplicated data. <br/><br/> Azure Cosmos DB restores write availability when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/><br/> *Don't* trigger a manual failover during the outage, because it can't succeed. <br/><br/> When the outage is over, readjust provisioned RUs as appropriate. |
+ | Single write region | Enabled | If there's an outage in a read region when you're not using strong consistency, all clients redirect to other regions. There's no read or write availability loss, and there's no data loss. When you're using strong consistency, the outage of a read region can affect write availability if fewer than two read regions remain.<br/><br/> If there's an outage in the write region, clients experience write availability loss until Azure Cosmos DB elects a new region as the new write region according to your preferences. If you didn't select strong consistency, the service might not replicate some data to the remaining active regions. This replication depends on the selected [consistency level](consistency-levels.md#rto). If the affected region suffers permanent data loss, you might lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/><br/> *Don't* trigger a manual failover during the outage, because it can't succeed. <br/><br/> When the outage is over, you can move the write region back to the original region and readjust provisioned RUs as appropriate. Accounts that use the API for NoSQL can also recover the unreplicated data in the failed region from your [conflict feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
+ | Multiple write regions | Not applicable | Recently updated data in the failed region might be unavailable in the remaining active regions. Eventual, consistent prefix, and session consistency levels guarantee a staleness of less than 15 minutes. Bounded staleness guarantees fewer than *K* updates or *T* seconds, depending on the configuration. If the affected region suffers permanent data loss, you might lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support more traffic. <br/><br/> When the outage is over, you can readjust provisioned RUs as appropriate. If possible, Azure Cosmos DB recovers unreplicated data in the failed region. This recovery uses the conflict resolution method that you configure for accounts that use the API for NoSQL. For accounts that use other APIs, this recovery uses *last write wins*. |
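The "What to do" column repeatedly calls for adjusting provisioned RUs. As a hedged sketch with the Python `azure-cosmos` SDK (names and the RU/s value are placeholders, and the container is assumed to have dedicated throughput), that adjustment can look like this:

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account-name>.documents.azure.com:443/", "<account-key>")
container = client.get_database_client("<database>").get_container_client("<container>")

# Inspect the current provisioned throughput, then scale it up for the outage.
current = container.get_throughput()
print("current RU/s:", current.offer_throughput)

container.replace_throughput(20000)  # placeholder RU/s value

# When the outage is over, scale back down as appropriate, for example:
# container.replace_throughput(current.offer_throughput)
```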
articles/cosmos-db/how-to-manage-database-account.md (1 addition & 1 deletion)
@@ -80,7 +80,7 @@ After an Azure Cosmos DB account is configured for service-managed failover, the
:::image type="content" source="./media/how-to-manage-database-account/replicate-data-globally.png" alt-text="Screenshot showing the Replicate data globally menu.":::
- 1. On the **Service-Managed Failover** pane, make sure that **Enable Automatic Failover** is set to **ON**.
+ 1. On the **Service-Managed Failover** pane, make sure that **Enable Service-Managed Failover** is set to **ON**.
1. To modify the failover priority, drag a read region by using the three dots that appear on the left side of its row when you hover over it.
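Alongside the portal drag-and-drop described above, here is a hedged sketch of changing failover priorities with the `azure-mgmt-cosmosdb` management SDK; all names are placeholders, and priority 0 designates the write region.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient
from azure.mgmt.cosmosdb.models import FailoverPolicies, FailoverPolicy

client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Priority 0 is the write region; higher numbers set the read-region order.
policies = FailoverPolicies(
    failover_policies=[
        FailoverPolicy(location_name="East US", failover_priority=0),
        FailoverPolicy(location_name="West US", failover_priority=1),
    ]
)

poller = client.database_accounts.begin_failover_priority_change(
    "<resource-group>", "<account-name>", policies
)
poller.result()  # long-running operation; returns when the new priorities are applied
```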
0 commit comments