Commit d4fedd7

Merge pull request #111350 from MicrosoftDocs/release-ignite-arcadia: Release ignite arcadia

2 parents: ef30da7 + 4f8a16a

File tree

776 files changed: +17604 additions, -327 deletions


.openpublishing.redirection.json

Lines changed: 5 additions & 0 deletions
@@ -50091,6 +50091,11 @@
       "redirect_url": "/azure/cognitive-services/speech-service",
       "redirect_document_id": false
     },
+    {
+      "source_path": "articles/hdinsight/spark/azure-synapse-analytics-job-definition.md",
+      "redirect_url": "../../synapse-analytics/spark/apache-spark-job-definitions.md",
+      "redirect_document_id": true
+    },
     {
       "source_path": "articles/media-services/latest/access-api-portal.md",
       "redirect_url": "/azure/media-services/latest/access-api-howto",
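The new entry maps a retired HDInsight article path to its Azure Synapse replacement. As a minimal sketch of how a redirection map like this might be consulted (the `find_redirect` helper and the inline excerpt are illustrative, not part of the docs build pipeline):

```python
import json

# Illustrative excerpt mirroring the entry added in this commit.
REDIRECTION_FILE = """
{
  "redirections": [
    {
      "source_path": "articles/hdinsight/spark/azure-synapse-analytics-job-definition.md",
      "redirect_url": "../../synapse-analytics/spark/apache-spark-job-definitions.md",
      "redirect_document_id": true
    }
  ]
}
"""

def find_redirect(source_path, entries):
    """Return the redirect target for a retired article path, or None."""
    for entry in entries:
        if entry["source_path"] == source_path:
            return entry["redirect_url"]
    return None

redirections = json.loads(REDIRECTION_FILE)
target = find_redirect(
    "articles/hdinsight/spark/azure-synapse-analytics-job-definition.md",
    redirections["redirections"],
)
print(target)  # the relative Synapse URL from the entry above
```

With `redirect_document_id: true`, the docs platform also carries the old document's identity over to the target, so bookmarks and cross-references keep resolving.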

.vscode/settings.json

Lines changed: 1 addition & 1 deletion
@@ -62,4 +62,4 @@
     "auditd"
   ],
   "git.ignoreLimitWarning": true
-}
+}

articles/azure-monitor/monitor-reference.md

Lines changed: 0 additions & 1 deletion
@@ -202,7 +202,6 @@ Other solutions are available for monitoring different applications and services
 | [SQL health check](insights/sql-assessment.md) | Assess the risk and health of your SQL Server environments. |
 | [Wire Data](insights/wire-data.md) | Consolidated network and performance data collected from Windows-connected and Linux-connected computers with the Log Analytics agent. |
 
-
 ## Third party integration
 
 | Solution | Description |

articles/data-factory/author-visually.md

Lines changed: 2 additions & 0 deletions
@@ -14,6 +14,8 @@ ms.date: 12/19/2019
 
 # Visual authoring in Azure Data Factory
 
+[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
+
 The Azure Data Factory user interface experience (UX) lets you visually author and deploy resources for your data factory without having to write any code. You can drag activities to a pipeline canvas, perform test runs, debug iteratively, and deploy and monitor your pipeline runs.
 
 Currently, the Azure Data Factory UX is only supported in Microsoft Edge and Google Chrome.

articles/data-factory/azure-integration-runtime-ip-addresses.md

Lines changed: 2 additions & 0 deletions
@@ -15,6 +15,8 @@ ms.date: 01/06/2020
 
 # Azure Integration Runtime IP addresses
 
+[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
+
 The IP addresses that Azure Integration Runtime uses depends on the region where your Azure integration runtime is located. *All* Azure integration runtimes that are in the same region use the same IP address ranges.
 
 > [!IMPORTANT]

articles/data-factory/compare-versions.md

Lines changed: 4 additions & 1 deletion
@@ -16,14 +16,17 @@ ms.author: makromer
 
 ---
 # Compare Azure Data Factory with Data Factory version 1
+
+[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
+
 This article compares Data Factory with Data Factory version 1. For an introduction to Data Factory, see [Introduction to Data Factory](introduction.md).For an introduction to Data Factory version 1, see [Introduction to Azure Data Factory](v1/data-factory-introduction.md).
 
 ## Feature comparison
 The following table compares the features of Data Factory with the features of Data Factory version 1.
 
 | Feature | Version 1 | Current version |
 | ------- | --------- | --------- |
-| Datasets | A named view of data that references the data that you want to use in your activities as inputs and outputs. Datasets identify data within different data stores, such as tables, files, folders, and documents. For example, an Azure Blob dataset specifies the blob container and folder in Azure Blob storage from which the activity should read the data.<br/><br/>**Availability** defines the processing window slicing model for the dataset (for example, hourly, daily, and so on). | Datasets are the same in the current version. However, you do not need to define **availability** schedules for datasets. You can define a trigger resource that can schedule pipelines from a clock scheduler paradigm. For more information, see [Triggers](concepts-pipeline-execution-triggers.md#triggers) and [Datasets](concepts-datasets-linked-services.md). |
+| Datasets | A named view of data that references the data that you want to use in your activities as inputs and outputs. Datasets identify data within different data stores, such as tables, files, folders, and documents. For example, an Azure Blob dataset specifies the blob container and folder in Azure Blob storage from which the activity should read the data.<br/><br/>**Availability** defines the processing window slicing model for the dataset (for example, hourly, daily, and so on). | Datasets are the same in the current version. However, you do not need to define **availability** schedules for datasets. You can define a trigger resource that can schedule pipelines from a clock scheduler paradigm. For more information, see [Triggers](concepts-pipeline-execution-triggers.md#trigger-execution) and [Datasets](concepts-datasets-linked-services.md). |
 | Linked services | Linked services are much like connection strings, which define the connection information that's necessary for Data Factory to connect to external resources. | Linked services are the same as in Data Factory V1, but with a new **connectVia** property to utilize the Integration Runtime compute environment of the current version of Data Factory. For more information, see [Integration runtime in Azure Data Factory](concepts-integration-runtime.md) and [Linked service properties for Azure Blob storage](connector-azure-blob-storage.md#linked-service-properties). |
 | Pipelines | A data factory can have one or more pipelines. A pipeline is a logical grouping of activities that together perform a task. You use startTime, endTime, and isPaused to schedule and run pipelines. | Pipelines are groups of activities that are performed on data. However, the scheduling of activities in the pipeline has been separated into new trigger resources. You can think of pipelines in the current version of Data Factory more as “workflow units” that you schedule separately via triggers. <br/><br/>Pipelines do not have “windows” of time execution in the current version of Data Factory. The Data Factory V1 concepts of startTime, endTime, and isPaused are no longer present in the current version of Data Factory. For more information, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md) and [Pipelines and activities](concepts-pipelines-activities.md). |
 | Activities | Activities define actions to perform on your data within a pipeline. Data movement (copy activity) and data transformation activities (such as Hive, Pig, and MapReduce) are supported. | In the current version of Data Factory, activities still are defined actions within a pipeline. The current version of Data Factory introduces new [control flow activities](concepts-pipelines-activities.md#control-flow-activities). You use these activities in a control flow (looping and branching). Data movement and data transformation activities that were supported in V1 are supported in the current version. You can define transformation activities without using datasets in the current version. |
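The Datasets and Pipelines rows above note that clock-based scheduling moved out of dataset **availability** and pipeline startTime/endTime into separate trigger resources. A minimal sketch of what such a schedule trigger definition can look like (the trigger name, times, and pipeline reference are placeholders, not taken from this commit):

```json
{
    "name": "DailyTrigger",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Day",
                "interval": 1,
                "startTime": "2020-05-20T00:00:00Z",
                "timeZone": "UTC"
            }
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "type": "PipelineReference",
                    "referenceName": "MyPipeline"
                }
            }
        ]
    }
}
```

Because the trigger is its own resource, one schedule can fan out to several pipelines, and a pipeline can be run by several triggers, which is the "workflow units" separation the table describes.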
