Commit b9a72ac

Merge pull request #72329 from MicrosoftDocs/master
Merge master to live 3:00 AM
2 parents bcdae40 + 1d1699e

File tree

92 files changed: 1133 additions & 1002 deletions


articles/cosmos-db/create-sql-api-dotnet.md

Lines changed: 106 additions & 106 deletions
Large diffs are not rendered by default.
Image files changed (5.94 KB, -20.4 KB, 24.2 KB); previews not rendered.

articles/data-explorer/ingest-data-no-code.md

Lines changed: 5 additions & 5 deletions
@@ -6,7 +6,7 @@ ms.author: orspodek
 ms.reviewer: jasonh
 ms.service: data-explorer
 ms.topic: tutorial
-ms.date: 03/14/2019
+ms.date: 04/07/2019
 
 # Customer intent: I want to ingest data to Azure Data Explorer without one line of code, so that I can explore and analyze my data by using queries.
 ---
@@ -206,12 +206,12 @@ To map the activity logs' data to the table, use the following query:
 
 #### Activity log data update policy
 
-1. Create a [function](/azure/kusto/management/functions) that expands the collection of activity log records so that each value in the collection receives a separate row. Use the [`mvexpand`](/azure/kusto/query/mvexpandoperator) operator:
+1. Create a [function](/azure/kusto/management/functions) that expands the collection of activity log records so that each value in the collection receives a separate row. Use the [`mv-expand`](/azure/kusto/query/mvexpandoperator) operator:
 
 ```kusto
 .create function ActivityLogRecordsExpand() {
 ActivityLogsRawRecords
-| mvexpand events = Records
+| mv-expand events = Records
 | project
 Timestamp = todatetime(events["time"]),
 ResourceId = tostring(events["resourceId"]),
@@ -235,11 +235,11 @@ To map the activity logs' data to the table, use the following query:
 
 #### Diagnostic log data update policy
 
-1. Create a [function](/azure/kusto/management/functions) that expands the collection of diagnostic log records so that each value in the collection receives a separate row. Use the [`mvexpand`](/azure/kusto/query/mvexpandoperator) operator:
+1. Create a [function](/azure/kusto/management/functions) that expands the collection of diagnostic log records so that each value in the collection receives a separate row. Use the [`mv-expand`](/azure/kusto/query/mvexpandoperator) operator:
 ```kusto
 .create function DiagnosticLogRecordsExpand() {
 DiagnosticLogsRawRecords
-| mvexpand events = Records
+| mv-expand events = Records
 | project
 Timestamp = todatetime(events["time"]),
 ResourceId = tostring(events["resourceId"]),
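A note on the two "update policy" hunks in this file: creating the expand function is only half the wiring. In the full tutorial, each function is then attached to its destination table as an update policy so that rows ingested into the raw table are expanded automatically. A minimal sketch of that step, assuming a destination table named ActivityLogRecords (the table name is illustrative, not taken from this diff):

```kusto
// Sketch: attach the expand function as an update policy.
// Assumes the destination table is named ActivityLogRecords.
// Whenever rows land in ActivityLogsRawRecords, the policy runs
// ActivityLogRecordsExpand() and ingests its output into ActivityLogRecords.
.alter table ActivityLogRecords policy update
@'[{"IsEnabled": true, "Source": "ActivityLogsRawRecords", "Query": "ActivityLogRecordsExpand()", "IsTransactional": true}]'
```

The diagnostic log table would be wired up the same way with DiagnosticLogRecordsExpand().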

articles/data-explorer/time-series-analysis.md

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@ ms.author: orspodek
 ms.reviewer: mblythe
 ms.service: data-explorer
 ms.topic: conceptual
-ms.date: 10/30/2018
+ms.date: 04/07/2019
 ---
 
 # Time series analysis in Azure Data Explorer
@@ -132,7 +132,7 @@ demo_series3
 ```kusto
 demo_series3
 | project (periods, scores) = series_periods_detect(num, 0., 14d/2h, 2) //to detect the periods in the time series
-| mvexpand periods, scores
+| mv-expand periods, scores
 | extend days=2h*todouble(periods)/1d
 ```
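Context for this hunk: series_periods_detect returns its detected periods and scores as parallel dynamic arrays in a single row, and the renamed mv-expand operator unpacks them pairwise into one row per detected period. A self-contained sketch of the same mechanics, with invented literal values standing in for real detector output:

```kusto
// Hypothetical detector output: two periods (counted in 2h bins) and their scores.
print periods = dynamic([84, 12]), scores = dynamic([0.82, 0.76])
| mv-expand periods to typeof(long), scores to typeof(double) // pairwise expansion
| extend days = 2h * todouble(periods) / 1d                   // bin count to days
```

Each output row pairs one period with its score, which is what the extend in the hunk then converts to days.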

articles/data-explorer/write-queries.md

Lines changed: 10 additions & 10 deletions
@@ -6,7 +6,7 @@ ms.author: orspodek
 ms.reviewer: mblythe
 ms.service: data-explorer
 ms.topic: conceptual
-ms.date: 09/24/2018
+ms.date: 04/07/2019
 ---
 
 # Write queries for Azure Data Explorer
@@ -364,7 +364,7 @@ The following query returns data for the last 12 hours.
 //The first two lines generate sample data, and the last line uses
 //the ago() operator to get records for last 12 hours.
 print TimeStamp= range(now(-5d), now(), 1h), SomeCounter = range(1,121)
-| mvexpand TimeStamp, SomeCounter
+| mv-expand TimeStamp, SomeCounter
 | where TimeStamp > ago(12h)
 ```
 
@@ -614,12 +614,12 @@ StormEvents
 | project State, FloodReports
 ```
 
-### mvexpand
+### mv-expand
 
-[**mvexpand**](https://docs.microsoft.com/azure/kusto/query/mvexpandoperator):
+[**mv-expand**](https://docs.microsoft.com/azure/kusto/query/mvexpandoperator):
 Expands multi-value collection(s) from a dynamic-typed column so that each value in the collection gets a separate row. All the other columns in an expanded row are duplicated. It's the opposite of makelist.
 
-The following query generates sample data by creating a set and then using it to demonstrate the **mvexpand** capabilities.
+The following query generates sample data by creating a set and then using it to demonstrate the **mv-expand** capabilities.
 
 **\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAEAFWOQQ6CQAxF9yTcoWGliTcws1MPIFygyk9EKTPpVBTj4Z2BjSz%2f738v7WF06r1vD2xcp%2bCoNq9yHDFYLIsvvW5Q0FybqYCco2omqnyNTxHW7oPFckbwajFZhB%2bIsE1trNZ0gi1dpuRmQ%2baC%2bjuuthS7Fbwvi%2f%2bP8lpGvAMP7Wr3A6BceSu7AAAA)**\]**
 
@@ -629,7 +629,7 @@ let FloodDataSet = StormEvents
 | summarize FloodReports = makeset(StartTime) by State
 | project State, FloodReports;
 FloodDataSet
-| mvexpand FloodReports
+| mv-expand FloodReports
 ```
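The makeset-then-expand query in this hunk is the round trip described above ("It's the opposite of makelist"). A self-contained sketch of that inverse relationship, using a hypothetical inline table whose values are invented for illustration:

```kusto
// Invented sample rows, not from the article.
datatable(State: string, Event: string)
[
    "TEXAS", "Flood",
    "TEXAS", "Hail",
    "IOWA", "Flood",
]
| summarize Events = makelist(Event) by State // pack: one dynamic array per State
| mv-expand Events                            // unpack: back to one row per event
```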

### percentiles()
@@ -730,7 +730,7 @@ StormEvents
 | extend row_number = row_number()
 ```
 
-The row set is also considered as serialized if it's a result of: **sort**, **top**, or **range** operators, optionally followed by **project**, **project-away**, **extend**, **where**, **parse**, **mvexpand**, or **take** operators.
+The row set is also considered serialized if it's the result of a **sort**, **top**, or **range** operator, optionally followed by a **project**, **project-away**, **extend**, **where**, **parse**, **mv-expand**, or **take** operator.
 
 **\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAEAAsuyS%2fKdS1LzSsp5uWqUSguzc1NLMqsSlVIzi%2fNK9HQVEiqVAguSSxJBcvmF5XABRQSi5NBgqkVJal5KQpF%2beXxeaW5SalFCrZIHA1NAEGimf5iAAAA)**\]**
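A quick sketch of the serialization rule stated in the changed line, using the same Samples StormEvents table as the surrounding queries (this query is an illustration, not one from the article): sort serializes the row set, where preserves the serialization, so row_number() remains valid afterward.

```kusto
StormEvents
| sort by StartTime desc           // sort serializes the row set
| where State == "TEXAS"           // where preserves serialization
| extend row_number = row_number() // still valid here
| take 5
```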

@@ -807,7 +807,7 @@ range _day from _start to _end step 1d
 | extend d = tolong((_day - _start)/1d)
 | extend r = rand()+1
 | extend _users=range(tolong(d*50*r), tolong(d*50*r+100*r-1), 1)
-| mvexpand id=_users to typeof(long) limit 1000000
+| mv-expand id=_users to typeof(long) limit 1000000
 // Calculate DAU/WAU ratio
 | evaluate activity_engagement(['id'], _day, _start, _end, 1d, 7d)
 | project _day, Dau_Wau=activity_ratio*100
@@ -834,7 +834,7 @@ range _day from _start to _end step 1d
 | extend d = tolong((_day - _start)/1d)
 | extend r = rand()+1
 | extend _users=range(tolong(d*50*r), tolong(d*50*r+200*r-1), 1)
-| mvexpand id=_users to typeof(long) limit 1000000
+| mv-expand id=_users to typeof(long) limit 1000000
 | where _day > datetime(2017-01-02)
 | project _day, id
 // Calculate weekly retention rate
@@ -860,7 +860,7 @@ range Day from _start to _end step 1d
 | extend d = tolong((Day - _start)/1d)
 | extend r = rand()+1
 | extend _users=range(tolong(d*50*r), tolong(d*50*r+200*r-1), 1)
-| mvexpand id=_users to typeof(long) limit 1000000
+| mv-expand id=_users to typeof(long) limit 1000000
 // Take only the first week cohort (last parameter)
 | evaluate new_activity_metrics(['id'], Day, _start, _end, 7d, _start)
 | project from_Day, to_Day, retention_rate, churn_rate
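One mv-expand detail that recurs in these three hunks: the operator accepts an optional per-element cast (to typeof(long)) and a limit clause that caps the number of rows generated from each input row, which is what keeps these synthetic-data queries bounded. A minimal sketch on invented data:

```kusto
// Expand a generated 10-element array, casting each element to long
// and emitting at most 5 rows per input row.
print ids = range(1, 10, 1)
| mv-expand id = ids to typeof(long) limit 5
```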

articles/data-factory/concepts-integration-runtime.md

Lines changed: 2 additions & 2 deletions
@@ -3,7 +3,7 @@ title: Integration runtime in Azure Data Factory | Microsoft Docs
 description: 'Learn about integration runtime in Azure Data Factory.'
 services: data-factory
 documentationcenter: ''
-author: linda33wj
+author: nabhishek
 manager: craigg
 ms.reviewer: douglasl
 
@@ -13,7 +13,7 @@ ms.tgt_pltfrm: na
 
 ms.topic: conceptual
 ms.date: 06/14/2018
-ms.author: jingwang
+ms.author: abnarain
 
 ---

articles/data-factory/connector-azure-data-lake-storage.md

Lines changed: 2 additions & 2 deletions
@@ -100,7 +100,7 @@ To use service principal authentication, follow these steps:
 - **As sink**, in Storage Explorer, grant at least **Write + Execute** permission to create child items in the folder. Alternatively, in Access control (IAM), grant at least the **Storage Blob Data Contributor** role.
 
 >[!NOTE]
->To list folders starting from the account level, you need to set the permission of the service principal being granted to **storage account with "Execute" permission** or permission on IAM. This is true when you use the:
+>To list folders starting from the account level or to test the connection, the service principal must be granted **"Execute" permission on the storage account** in IAM. This applies when you use the:
 >- **Copy Data Tool** to author a copy pipeline.
 >- **Data Factory UI** to test the connection and navigate folders during authoring.
 >If you have concerns about granting permission at the account level, you can skip the connection test and enter the path manually during authoring. Copy activity will still work as long as the service principal is granted the proper permission on the files to be copied.
@@ -154,7 +154,7 @@ To use managed identities for Azure resources authentication, follow these steps
 - **As sink**, in Storage Explorer, grant at least **Write + Execute** permission to create child items in the folder. Alternatively, in Access control (IAM), grant at least the **Storage Blob Data Contributor** role.
 
 >[!NOTE]
->To list folders starting from the account level, you need to set the permission of the managed identity being granted to **storage account with "Execute" permission** or permission on IAM. This is true when you use the:
+>To list folders starting from the account level or to test the connection, the managed identity must be granted **"Execute" permission on the storage account** in IAM. This applies when you use the:
 >- **Copy Data Tool** to author a copy pipeline.
 >- **Data Factory UI** to test the connection and navigate folders during authoring.
 >If you have concerns about granting permission at the account level, you can skip the connection test and enter the path manually during authoring. Copy activity will still work as long as the managed identity is granted the proper permission on the files to be copied.

articles/data-factory/copy-activity-fault-tolerance.md

Lines changed: 2 additions & 2 deletions
@@ -3,7 +3,7 @@ title: Fault tolerance of copy activity in Azure Data Factory | Microsoft Docs
 description: 'Learn about how to add fault tolerance to copy activity in Azure Data Factory by skipping the incompatible rows.'
 services: data-factory
 documentationcenter: ''
-author: linda33wj
+author: dearandyxu
 manager: craigg
 ms.reviewer: douglasl
 
@@ -13,7 +13,7 @@ ms.tgt_pltfrm: na
 
 ms.topic: conceptual
 ms.date: 10/26/2018
-ms.author: jingwang
+ms.author: yexu
 
 ---
 # Fault tolerance of copy activity in Azure Data Factory
