Commit 04d2dec

Merge pull request #163081 from MicrosoftDocs/master
Merge master to live, Sunday 4 PM
2 parents: 5163ebd + 81a5968; commit 04d2dec

File tree: 5 files changed (+21, -14 lines)

articles/active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md

Lines changed: 2 additions & 0 deletions
@@ -28,6 +28,8 @@ When a user signs up for a Microsoft cloud service, a new Azure AD tenant is
 
 All of your users have a single *home* directory for authentication. Your users can also be guests in other directories. You can see both the home and guest directories for each user in Azure AD.
 
+:::image type="content" source="media/active-directory-how-subscriptions-associated-directory/trust-relationship-azure-ad.png" alt-text="Screenshot that shows the trust relationship between Azure subscriptions and Azure Active Directory tenants.":::
+
 > [!Important]
 > When you associate a subscription with a different directory, users that have roles assigned using [Azure role-based access control](../../role-based-access-control/role-assignments-portal.md) lose their access. Classic subscription administrators, including Service Administrator and Co-Administrators, also lose access.
 >

articles/azure-monitor/alerts/alerts-common-schema-definitions.md

Lines changed: 3 additions & 3 deletions
@@ -174,7 +174,7 @@ Any alert instance describes the resource that was affected and the cause of the
 ### Log alerts
 
 > [!NOTE]
-> For log alerts that have a custom email subject and/or JSON payload defined, enabling the common schema reverts email subject and/or payload schema to the one described as follows. This means that if you want to have a custom JSON payload defined, the webhook cannot use the common alert schema. Alerts with the common schema enabled have an upper size limit of 256 KB per alert. Search results aren't embedded in the log alerts payload if they cause the alert size to cross this threshold. You can determine this by checking the flag `IncludeSearchResults`. When the search results aren't included, you should use the `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get).
+> For log alerts that have a custom email subject and/or JSON payload defined, enabling the common schema reverts email subject and/or payload schema to the one described as follows. This means that if you want to have a custom JSON payload defined, the webhook cannot use the common alert schema. Alerts with the common schema enabled have an upper size limit of 256 KB per alert. Search results aren't embedded in the log alerts payload if they cause the alert size to cross this threshold. You can determine this by checking the flag `IncludedSearchResults`. When the search results aren't included, you should use the `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get).
 
 #### `monitoringService` = `Log Analytics`

@@ -246,7 +246,7 @@ Any alert instance describes the resource that was affected and the cause of the
 ]
 }
 ],
-"IncludeSearchResults": "True",
+"IncludedSearchResults": "True",
 "AlertType": "Metric measurement"
 }
 }

@@ -318,7 +318,7 @@ Any alert instance describes the resource that was affected and the cause of the
 }
 ]
 },
-"IncludeSearchResults": "True",
+"IncludedSearchResults": "True",
 "AlertType": "Metric measurement"
 }
 }
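The renamed flag matters for webhook consumers: per the note above, search results are embedded only when `IncludedSearchResults` is `"True"`; otherwise the payload carries links to the Log Analytics API. A minimal sketch of that check (the flag and link field names come from the diff above; `SearchResults` as the embedded-results key and the flat dict shape are assumptions for illustration):

```python
def locate_search_results(alert_context: dict):
    """Return ("embedded", results) when the log alert payload carries
    its search results, else ("link", url) pointing at the Log Analytics
    API. "SearchResults" is an assumed key; the flag and link names are
    from the common schema excerpt above."""
    if alert_context.get("IncludedSearchResults") == "True":
        return "embedded", alert_context.get("SearchResults")
    # Results were dropped (for example, the 256 KB cap was hit):
    # fall back to the query links carried in the payload.
    link = (alert_context.get("LinkToFilteredSearchResultsAPI")
            or alert_context.get("LinkToSearchResultsAPI"))
    return "link", link
```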

articles/azure-monitor/logs/monitor-workspace.md

Lines changed: 9 additions & 7 deletions
@@ -49,7 +49,7 @@ Ingestion operations are issues that occurred during data ingestion including no
 
 
 #### Operation: Data collection stopped
-Data collection stopped due to reaching the daily limit.
+"Data collection stopped due to daily limit of free data reached. Ingestion status = OverQuota"
 
 In the past 7 days, logs collection reached the daily set limit. The limit is set either as the workspace is set to "free tier", or daily collection limit was configured for this workspace.
 Note, after reaching the set limit, your data collection will automatically stop for the day and will resume only during the next collection day.

@@ -63,9 +63,7 @@ Or, you can decide to ([Manage your maximum daily data volume](./manage-cost-sto
 * Data collection rate is calculated per day, and will reset at the start of the next day, you can also monitor collection resume event by [Create an alert](./manage-cost-storage.md#alert-when-daily-cap-reached) on "Data collection resumed" Operation event.
 
 #### Operation: Ingestion rate
-Ingestion rate limit approaching\passed the limit.
-
-Your ingestion rate has passed the 80%; at this point there is not issue. Note, data collected exceeding the threshold will be dropped. </br>
+"The data ingestion volume rate crossed the threshold in your workspace: {0:0.00} MB per one minute and data has been dropped."
 
 Recommended Actions:
 * Check _LogOperation table for ingestion rate event

@@ -81,15 +79,15 @@ For further information: </br>
 
 
 #### Operation: Maximum table column count
-Custom fields count have reached the limit.
+"Data of type \<**table name**\> was dropped because number of fields \<**new fields count**\> is above the limit of \<**current field count limit**\> custom fields per data type."
 
 Recommended Actions:
 For custom tables, you can move to [Parsing the data](./parse-text.md) in queries.
 
 #### Operation: Field content validation
-One of the fields of the data being ingested had more than 32 Kb in size, so it got truncated.
+"The following fields' values \<**field name**\> of type \<**table name**\> have been trimmed to the max allowed size, \<**field size limit**\> bytes. Please adjust your input accordingly."
 
-Log Analytics limits ingested fields size to 32 Kb, larger size fields will be trimmed to 32 Kb. We don’t recommend sending fields larger than 32 Kb as the trim process might remove important information.
+A field larger than the size limit was processed by Azure logs, and the field was trimmed to the allowed field limit. We don’t recommend sending fields larger than the allowed limit as this will result in data loss.
 
 Recommended Actions:
 Check the source of the affected data type:

@@ -100,6 +98,8 @@ Check the source of the affected data type:
 
 ### Data collection
 #### Operation: Azure Activity Log collection
+"Access to the subscription was lost. Ensure that the \<**subscription id**\> subscription is in the \<**tenant id**\> Azure Active Directory tenant. If the subscription is transferred to another tenant, there is no impact to the services, but information for the tenant could take up to an hour to propagate."
+
 Description: In some situations, like moving a subscription to a different tenant, the Azure Activity logs might stop flowing into the workspace. In those situations, we need to reconnect the subscription following the process described in this article.
 
 Recommended Actions:

@@ -111,6 +111,8 @@ Recommended Actions:
 
 ### Agent
 #### Operation: Linux Agent
+"Two successive configuration applications from OMS Settings failed"
+
 Config settings on the portal have changed.
 
 Recommended Action
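Each operation above now documents the literal message that lands in the workspace, which makes the events routable by a consumer. A rough sketch that buckets a message into the operations above by substring match (the substrings are lifted from the documented messages; the category labels and matching strategy are my own, not from the article):

```python
def classify_operation(detail: str) -> str:
    """Bucket a workspace operation message into a category by
    substring match. Heuristic: needles come from the messages
    quoted in the diff above; labels are invented here."""
    rules = [
        ("Data collection stopped due to daily limit", "daily-cap-reached"),
        ("data ingestion volume rate crossed the threshold", "ingestion-rate"),
        ("custom fields per data type", "max-table-columns"),
        ("trimmed to the max allowed size", "field-content-validation"),
        ("Access to the subscription was lost", "activity-log-collection"),
        ("configuration applications from OMS Settings failed", "linux-agent"),
    ]
    for needle, label in rules:
        if needle in detail:
            return label
    return "other"
```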

articles/key-vault/managed-hsm/private-link.md

Lines changed: 4 additions & 1 deletion
@@ -11,7 +11,10 @@ ms.custom: devx-track-azurecli
 
 ---
 
-# Integrate Managed HSM with Azure Private Link
+# Integrate Managed HSM with Azure Private Link (preview)
+
+>[!NOTE]
+> The Azure private endpoints feature for Managed HSM is currently available as **a preview** in the following regions: **UK South, Europe West, Canada Central, Australia Central**, and **Asia East**. It will be available in all the [other regions](https://azure.microsoft.com/global-infrastructure/services/?products=key-vault&regions=all) in the next few days.
 
 Azure Private Link Service enables you to access Azure Services (for example, Managed HSM, Azure Storage, and Azure Cosmos DB etc.) and Azure hosted customer/partner services over a Private Endpoint in your virtual network.

articles/synapse-analytics/sql/develop-tables-statistics.md

Lines changed: 3 additions & 3 deletions
@@ -199,7 +199,7 @@ Another option you have is to specify the sample size as a percent:
 ```sql
 CREATE STATISTICS col1_stats
 ON dbo.table1 (col1)
-WITH SAMPLE = 50 PERCENT;
+WITH SAMPLE 50 PERCENT;
 ```
 
 #### Create single-column statistics on only some of the rows

@@ -227,7 +227,7 @@ You can also combine the options together. The following example creates a filte
 CREATE STATISTICS stats_col1
 ON table1 (col1)
 WHERE col1 > '2000101' AND col1 < '20001231'
-WITH SAMPLE = 50 PERCENT;
+WITH SAMPLE 50 PERCENT;
 ```
 
 For the full reference, see [CREATE STATISTICS](/sql/t-sql/statements/create-statistics-transact-sql?view=azure-sqldw-latest&preserve-view=true).

@@ -245,7 +245,7 @@ In this example, the histogram is on *product\_category*. Cross-column statistic
 CREATE STATISTICS stats_2cols
 ON table1 (product_category, product_sub_category)
 WHERE product_category > '2000101' AND product_category < '20001231'
-WITH SAMPLE = 50 PERCENT;
+WITH SAMPLE 50 PERCENT;
 ```
 
 Because a correlation exists between *product\_category* and *product\_sub\_category*, a multi-column statistics object can be useful if these columns are accessed at the same time.
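All three hunks make the same mechanical change: the `WITH SAMPLE n PERCENT` clause of `CREATE STATISTICS` takes no equals sign. As a quick sanity check on the pattern (Python used only to illustrate the rewrite; the statements themselves stay T-SQL):

```python
import re

def drop_sample_equals(sql: str) -> str:
    """Rewrite 'WITH SAMPLE = n PERCENT' to 'WITH SAMPLE n PERCENT',
    the form the three corrected examples above use."""
    return re.sub(r"WITH\s+SAMPLE\s*=\s*(\d+)\s+PERCENT",
                  r"WITH SAMPLE \1 PERCENT",
                  sql, flags=re.IGNORECASE)
```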
