
Commit 6b39ca7

Merge pull request #186323 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/MicrosoftDocs/azure-docs (branch master)
2 parents 5d1b494 + 86cf8b9 commit 6b39ca7

File tree

9 files changed: +65 -78 lines changed


articles/active-directory/authentication/tutorial-enable-sspr.md

Lines changed: 4 additions & 0 deletions
@@ -29,6 +29,10 @@ In this tutorial you learn how to:
 > * Set up authentication methods and registration options
 > * Test the SSPR process as a user

+## Video tutorial
+
+You can also follow along in a related video: [How to enable and configure SSPR in Azure AD](https://www.youtube.com/embed/rA8TvhNcCvQ?azure-portal=true).
+
 ## Prerequisites

 To finish this tutorial, you need the following resources and privileges:

articles/active-directory/saas-apps/github-enterprise-managed-user-tutorial.md

Lines changed: 4 additions & 4 deletions
@@ -45,10 +45,10 @@ To configure the integration of GitHub Enterprise Managed User into Azure AD, yo

 1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
 1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. Navigate to **Enterprise Applications**.
 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **GitHub Enterprise Managed User** in the search box.
-1. Select **GitHub Enterprise Managed User** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. Type **GitHub Enterprise Managed User** in the search box.
+1. Select **GitHub Enterprise Managed User** from results panel and then click on the **Create** button. Wait a few seconds while the app is added to your tenant.


 ## Configure and test Azure AD SSO for GitHub Enterprise Managed User
@@ -133,4 +133,4 @@ In this section, you'll take the information provided from AAD above and enter t

 ## Next steps

-GitHub Enterprise Managed User **requires** all accounts to be created through automatic user provisioning, you can find more details [here](./github-enterprise-managed-user-provisioning-tutorial.md) on how to configure automatic user provisioning.
+GitHub Enterprise Managed User **requires** all accounts to be created through automatic user provisioning, you can find more details [here](./github-enterprise-managed-user-provisioning-tutorial.md) on how to configure automatic user provisioning.

articles/applied-ai-services/metrics-advisor/includes/quickstarts/csharp.md

Lines changed: 46 additions & 66 deletions
@@ -59,7 +59,7 @@ Build succeeded.
 If you are using an IDE other than Visual Studio you can install the Metrics Advisor client library for .NET with the following command:

 ```console
-dotnet add package Azure.AI.MetricsAdvisor --version 1.0.0
+dotnet add package Azure.AI.MetricsAdvisor --version 1.1.0
 ```

 > [!TIP]
@@ -151,64 +151,45 @@ Replace `connection_String` with your own SQL server connection string, and repl
 string sqlServerConnectionString = "<connection_String>";
 string sqlServerQuery = "<query>";

-var dataFeedName = "Sample data feed";
-var dataFeedSource = new SqlServerDataFeedSource(sqlServerConnectionString, sqlServerQuery);
-var dataFeedGranularity = new DataFeedGranularity(DataFeedGranularityType.Daily);
+var dataFeed = new DataFeed();
+dataFeed.Name = "Sample data feed";

-var dataFeedMetrics = new List<DataFeedMetric>()
-{
-    new DataFeedMetric("cost"),
-    new DataFeedMetric("revenue")
-};
-var dataFeedDimensions = new List<DataFeedDimension>()
-{
-    new DataFeedDimension("category"),
-    new DataFeedDimension("city")
-};
-var dataFeedSchema = new DataFeedSchema(dataFeedMetrics)
-{
-    DimensionColumns = dataFeedDimensions
-};
-
-var ingestionStartTime = DateTimeOffset.Parse("2020-01-01T00:00:00Z");
-var dataFeedIngestionSettings = new DataFeedIngestionSettings(ingestionStartTime);
-
-var dataFeed = new DataFeed()
-{
-    Name = dataFeedName,
-    DataSource = dataFeedSource,
-    Granularity = dataFeedGranularity,
-    Schema = dataFeedSchema,
-    IngestionSettings = dataFeedIngestionSettings,
-};
+dataFeed.DataSource = new SqlServerDataFeedSource(sqlServerConnectionString, sqlServerQuery);
+dataFeed.Granularity = new DataFeedGranularity(DataFeedGranularityType.Daily);

-Response<string> response = await adminClient.CreateDataFeedAsync(dataFeed);
+dataFeed.Schema = new DataFeedSchema();
+dataFeed.Schema.MetricColumns.Add(new DataFeedMetric("cost"));
+dataFeed.Schema.MetricColumns.Add(new DataFeedMetric("revenue"));
+dataFeed.Schema.DimensionColumns.Add(new DataFeedDimension("category"));
+dataFeed.Schema.DimensionColumns.Add(new DataFeedDimension("city"));

-string dataFeedId = response.Value;
+dataFeed.IngestionSettings = new DataFeedIngestionSettings(DateTimeOffset.Parse("2020-01-01T00:00:00Z"));

-Console.WriteLine($"Data feed ID: {dataFeedId}");

-// Note that only the ID of the data feed is known at this point. You can perform another
-// service call to GetDataFeedAsync or GetDataFeed to get more information, such as status,
-// created time, the list of administrators, or the metric IDs.
+Response<DataFeed> response = await adminClient.CreateDataFeedAsync(dataFeed);

-Response<DataFeed> response = await adminClient.GetDataFeedAsync(dataFeedId);
+DataFeed createdDataFeed = response.Value;

-DataFeed dataFeed = response.Value;
-
-Console.WriteLine($"Data feed status: {dataFeed.Status.Value}");
-Console.WriteLine($"Data feed created time: {dataFeed.CreatedTime.Value}");
+Console.WriteLine($"Data feed ID: {createdDataFeed.Id}");
+Console.WriteLine($"Data feed status: {createdDataFeed.Status.Value}");
+Console.WriteLine($"Data feed created time: {createdDataFeed.CreatedOn.Value}");

 Console.WriteLine($"Data feed administrators:");
-foreach (string admin in dataFeed.AdministratorEmails)
+foreach (string admin in createdDataFeed.Administrators)
 {
     Console.WriteLine($"  - {admin}");
 }

 Console.WriteLine($"Metric IDs:");
-foreach (DataFeedMetric metric in dataFeed.Schema.MetricColumns)
+foreach (DataFeedMetric metric in createdDataFeed.Schema.MetricColumns)
 {
-    Console.WriteLine($"  - {metric.MetricName}: {metric.MetricId}");
+    Console.WriteLine($"  - {metric.Name}: {metric.Id}");
+}
+
+Console.WriteLine($"Dimensions:");
+foreach (DataFeedDimension dimension in createdDataFeed.Schema.DimensionColumns)
+{
+    Console.WriteLine($"  - {dimension.Name}");
 }
 ```

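The updated snippet builds the feed by mutating a single `DataFeed` object rather than composing intermediate variables. Schematically, it describes a payload like the following (a plain-Python rendering for illustration only; the dictionary shape is hypothetical, not the SDK's wire format):

```python
def sample_data_feed():
    """Plain-data summary of the data feed assembled in the diff above."""
    return {
        "name": "Sample data feed",
        "granularity": "Daily",
        "schema": {
            "metrics": ["cost", "revenue"],      # values to monitor
            "dimensions": ["category", "city"],  # slices of each metric
        },
        "ingestion_start": "2020-01-01T00:00:00Z",
    }

print(sample_data_feed()["schema"]["metrics"])  # ['cost', 'revenue']
```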
@@ -254,29 +235,31 @@ Create an anomaly detection configuration to tell the service which data points
 string metricId = "<metricId>";
 string configurationName = "Sample anomaly detection configuration";

-var hardThresholdSuppressCondition = new SuppressCondition(1, 100);
-var hardThresholdCondition = new HardThresholdCondition(AnomalyDetectorDirection.Down, hardThresholdSuppressCondition)
+var detectionConfiguration = new AnomalyDetectionConfiguration()
 {
-    LowerBound = 5.0
+    MetricId = metricId,
+    Name = configurationName,
+    WholeSeriesDetectionConditions = new MetricWholeSeriesDetectionCondition()
 };

-var smartDetectionSuppressCondition = new SuppressCondition(4, 50);
-var smartDetectionCondition = new SmartDetectionCondition(10.0, AnomalyDetectorDirection.Up, smartDetectionSuppressCondition);
+var detectCondition = detectionConfiguration.WholeSeriesDetectionConditions;

-var detectionCondition = new MetricWholeSeriesDetectionCondition()
+var hardSuppress = new SuppressCondition(1, 100);
+detectCondition.HardThresholdCondition = new HardThresholdCondition(AnomalyDetectorDirection.Down, hardSuppress)
 {
-    HardThresholdCondition = hardThresholdCondition,
-    SmartDetectionCondition = smartDetectionCondition,
-    CrossConditionsOperator = DetectionConditionsOperator.Or
+    LowerBound = 5.0
 };

-var detectionConfiguration = new AnomalyDetectionConfiguration(metricId, configurationName, detectionCondition);
+var smartSuppress = new SuppressCondition(4, 50);
+detectCondition.SmartDetectionCondition = new SmartDetectionCondition(10.0, AnomalyDetectorDirection.Up, smartSuppress);
+
+detectCondition.ConditionOperator = DetectionConditionOperator.Or;

-Response<string> response = await adminClient.CreateDetectionConfigurationAsync(detectionConfiguration);
+Response<AnomalyDetectionConfiguration> response = await adminClient.CreateDetectionConfigurationAsync(detectionConfiguration);

-string detectionConfigurationId = response.Value;
+AnomalyDetectionConfiguration createdDetectionConfiguration = response.Value;

-Console.WriteLine($"Anomaly detection configuration ID: {detectionConfigurationId}");
+Console.WriteLine($"Anomaly detection configuration ID: {createdDetectionConfiguration.Id}");
 ```
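To make the configured conditions concrete, here is an illustrative plain-Python sketch of what a hard-threshold rule with `AnomalyDetectorDirection.Down` and `LowerBound = 5.0` flags. This is not the Metrics Advisor service algorithm, and the function name is hypothetical:

```python
def hard_threshold_anomalies(points, lower_bound=5.0):
    """Flag the points that fall below the lower bound (direction: Down)."""
    return [p for p in points if p < lower_bound]

print(hard_threshold_anomalies([7.2, 4.9, 6.1, 3.0]))  # [4.9, 3.0]
```

The suppress conditions in the diff then control how many such points (and what ratio of the window) must be anomalous before an anomaly is actually reported.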

 ### Create a hook
@@ -285,19 +268,16 @@ Metrics Advisor supports the `EmailNotificationHook` and `WebNotificationHook` c

 ```csharp
 string hookName = "Sample hook";
-var emailsToAlert = new List<string>()
-{
-    "[email protected]",
-    "[email protected]"
-};
+var emailHook = new EmailNotificationHook(hookName);

-var emailHook = new EmailNotificationHook(hookName, emailsToAlert);
+emailHook.EmailsToAlert.Add("[email protected]");
+emailHook.EmailsToAlert.Add("[email protected]");

-Response<string> response = await adminClient.CreateHookAsync(emailHook);
+Response<NotificationHook> response = await adminClient.CreateHookAsync(emailHook);

-string hookId = response.Value;
+NotificationHook createdHook = response.Value;

-Console.WriteLine($"Hook ID: {hookId}");
+Console.WriteLine($"Hook ID: {createdHook.Id}");
 ```

 ## Create an alert configuration

articles/cosmos-db/how-to-configure-private-endpoints.md

Lines changed: 2 additions & 0 deletions
@@ -689,6 +689,8 @@ The following limitations apply when you're using Private Link with an Azure Cos

 * A network administrator should be granted at least the `Microsoft.DocumentDB/databaseAccounts/PrivateEndpointConnectionsApproval/action` permission at the Azure Cosmos account scope to create automatically approved private endpoints.

+* Currently, you can't approve a rejected private endpoint connection. Instead, re-create the private endpoint to resume the private connectivity. The Cosmos DB private link service automatically approves the re-created private endpoint.
+
 ### Limitations to private DNS zone integration

 Unless you're using a private DNS zone group, DNS records in the private DNS zone are not removed automatically when you delete a private endpoint or you remove a region from the Azure Cosmos account. You must manually remove the DNS records before:

articles/postgresql/flexible-server/concepts-high-availability.md

Lines changed: 1 addition & 1 deletion
@@ -68,7 +68,7 @@ For flexible servers configured with high availability, these maintenance activi
 Unplanned outages include software bugs or infrastructure component failures that impact the availability of the database. If the primary server becomes unavailable, it is detected by the monitoring system and initiates a failover process. The process includes a few seconds of wait time to make sure it is not a false positive. The replication to the standby replica is severed and the standby replica is activated to be the primary database server. That includes the standby to recovery any residual WAL files. Once it is fully recovered, DNS for the same end point is updated with the standby server's IP address. Clients can then retry connecting to the database server using the same connection string and resume their operations.

 >[!NOTE]
-> Flexible servers configured with zone-redundant high availability provide a recovery point objective (RPO) of **Zero** (no data loss.The recovery tome objective (RTO) is expected to be **less than 120s** in typical cases. However, depending on the activity in the primary database server at the time of the failover, the failover may take longer.
+> Flexible servers configured with zone-redundant high availability provide a recovery point objective (RPO) of **Zero** (no data loss). The recovery time objective (RTO) is expected to be **less than 120s** in typical cases. However, depending on the activity in the primary database server at the time of the failover, the failover may take longer.

 After the failover, while a new standby server is being provisioned, applications can still connect to the primary server and proceed with their read/write operations. Once the standby server is established, it will start recovering the logs that were generated after the failover.
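Since failover repoints DNS for the same endpoint at the standby's IP address, client applications generally only need to retry the connection. A minimal, generic sketch (Python; `connect` is any caller-supplied function, not a specific driver API):

```python
import time

def connect_with_retry(connect, attempts=5, base_delay=1.0):
    """Retry a caller-supplied connect() while a failover completes.

    The same connection string keeps working because DNS for the
    endpoint is updated to the new primary's IP address.
    """
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # failover took longer than the retry budget
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
```

Within the typical RTO of less than 120 seconds, a handful of backoff retries is usually enough.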

articles/storage/common/sas-expiration-policy.md

Lines changed: 2 additions & 2 deletions
@@ -105,7 +105,7 @@ The SAS expiration period appears in the console output.

 To log the creation of a SAS that is valid over a longer interval than the SAS expiration policy recommends, first create a diagnostic setting that sends logs to an Azure Log Analytics workspace. For more information, see [Send logs to Azure Log Analytics](../blobs/monitor-blob-storage.md#send-logs-to-azure-log-analytics).

-Next, use an Azure Monitor log query to determine whether a SAS has expired. Create a new query in your Log Analytics workspace, add the following query text, and press **Run**.
+Next, use an Azure Monitor log query to monitor whether policy has been violated. Create a new query in your Log Analytics workspace, add the following query text, and press **Run**.

 ```kusto
 StorageBlobLogs | where SasExpiryStatus startswith "Policy Violated"
@@ -116,4 +116,4 @@ StorageBlobLogs | where SasExpiryStatus startswith "Policy Violated"
 - [Grant limited access to Azure Storage resources using shared access signatures (SAS)](storage-sas-overview.md)
 - [Create a service SAS](/rest/api/storageservices/create-service-sas)
 - [Create a user delegation SAS](/rest/api/storageservices/create-user-delegation-sas)
-- [Create an account SAS](/rest/api/storageservices/create-account-sas)
+- [Create an account SAS](/rest/api/storageservices/create-account-sas)
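The query surfaces requests the service has already logged as policy violations; the check itself is simple interval arithmetic, sketched here for illustration (not how Azure Storage implements it; the function name is hypothetical):

```python
from datetime import datetime, timedelta

def violates_sas_policy(signed_start, signed_expiry, max_interval):
    """True when a SAS validity interval exceeds the recommended period."""
    return (signed_expiry - signed_start) > max_interval

start = datetime(2021, 10, 1)
print(violates_sas_policy(start, start + timedelta(days=10), timedelta(days=7)))  # True
```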

articles/synapse-analytics/security/how-to-connect-to-workspace-from-restricted-network.md

Lines changed: 1 addition & 1 deletion
@@ -91,7 +91,7 @@ On the **Resource** tab:
 To access the linked storage with the storage explorer in Azure Synapse Analytics Studio workspace, you must create one private endpoint. The steps for this are similar to those of step 3.

 On the **Resource** tab:
-* For **Resource type**, select **Microsoft.Synapse/storageAccounts**.
+* For **Resource type**, select **Microsoft.Storage/storageAccounts**.
 * For **Resource**, select the storage account name that you created previously.
 * For **Target sub-resource**, select the endpoint type:
   * **blob** is for Azure Blob Storage.

articles/virtual-machines/linux/create-upload-centos.md

Lines changed: 3 additions & 3 deletions
@@ -417,7 +417,7 @@ Preparing a CentOS 7 virtual machine for Azure is very similar to CentOS 6, howe
 * Use a cloud-init directive baked into the image that will do this every time the VM is created:

 ```console
-echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
+echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
 cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF
 #cloud-config
 # Generated by Azure cloud image build
@@ -433,7 +433,7 @@ Preparing a CentOS 7 virtual machine for Azure is very similar to CentOS 6, howe
 filesystem: swap
 mounts:
 - ["ephemeral0.1", "/mnt"]
-- ["ephemeral0.2", "none", "swap", "sw", "0", "0"]
+- ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
 EOF
 ```
@@ -454,4 +454,4 @@ Preparing a CentOS 7 virtual machine for Azure is very similar to CentOS 6, howe

 ## Next steps

-You're now ready to use your CentOS Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
+You're now ready to use your CentOS Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
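The change to the swap entry above replaces the bare `sw` options field; the fourth field of a cloud-init `mounts` entry is a comma-joined fstab options string. A small sketch of how the updated field is composed (illustrative only):

```python
def swap_mount_entry():
    """Build the updated cloud-init mounts entry for the ephemeral swap partition."""
    options = [
        "sw",
        "nofail",                                 # boot proceeds if the device is missing
        "x-systemd.requires=cloud-init.service",  # mount only after cloud-init runs
        "x-systemd.device-timeout=2",             # give up on a missing device quickly
    ]
    return ["ephemeral0.2", "none", "swap", ",".join(options), "0", "0"]

print(swap_mount_entry()[3])
# sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2
```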

articles/virtual-machines/linux/suse-create-upload-vhd.md

Lines changed: 2 additions & 1 deletion
@@ -117,6 +117,7 @@ As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
 * Use a cloud-init directive baked into the image that will do this every time the VM is created:

 ```console
+echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
 cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF
 #cloud-config
 # Generated by Azure cloud image build
@@ -132,7 +133,7 @@ As an alternative to building your own VHD, SUSE also publishes BYOS (Bring Your
 filesystem: swap
 mounts:
 - ["ephemeral0.1", "/mnt"]
-- ["ephemeral0.2", "none", "swap", "sw", "0", "0"]
+- ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
 EOF
 ```
