
Commit fbddfc7

Removed Storm Contents Phase 2
1 parent 341630f commit fbddfc7

File tree

1 file changed: +2 -2 lines changed


articles/hdinsight/hadoop/apache-hadoop-etl-at-scale.md

Lines changed: 2 additions & 2 deletions
@@ -4,7 +4,7 @@ description: Learn how extract, transform, and load is used in HDInsight with Ap
 ms.service: hdinsight
 ms.topic: how-to
 ms.custom: hdinsightactive,seoapr2020
-ms.date: 04/01/2022
+ms.date: 11/17/2022
 ---
 
 # Extract, transform, and load (ETL) at scale
@@ -66,7 +66,7 @@ Azure Data Lake Storage is a managed, hyperscale repository for analytics data.
 
 Data is usually ingested into Data Lake Storage through Azure Data Factory. You can also use Data Lake Storage SDKs, the AdlCopy service, Apache DistCp, or Apache Sqoop. The service you choose depends on where the data is. If it's in an existing Hadoop cluster, you might use Apache DistCp, the AdlCopy service, or Azure Data Factory. For data in Azure Blob storage, you might use Azure Data Lake Storage .NET SDK, Azure PowerShell, or Azure Data Factory.
 
-Data Lake Storage is optimized for event ingestion through Azure Event Hubs or Apache Storm.
+Data Lake Storage is optimized for event ingestion through Azure Event Hubs.
 
 ### Considerations for both storage options
 
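The ingestion paragraph retained in the second hunk names Apache DistCp as one option for copying data from an existing Hadoop cluster into Data Lake Storage. As a minimal sketch (not from this commit): the storage account, container, and paths below are hypothetical placeholders, and the `abfss://` destination assumes an ADLS Gen2 account reachable from a cluster with the ABFS driver configured.

```shell
# Hypothetical values -- substitute your own account, container, and paths.
STORAGE_ACCOUNT="mydatalake"     # assumed ADLS Gen2 storage account name
CONTAINER="ingest"               # assumed filesystem (container) name
SRC="hdfs:///data/clickstream"   # assumed source directory on the Hadoop cluster
DEST="abfss://${CONTAINER}@${STORAGE_ACCOUNT}.dfs.core.windows.net/raw/clickstream"

# Print the DistCp invocation; on a configured cluster you would run it directly.
echo hadoop distcp "${SRC}" "${DEST}"
```

On a real cluster the `echo` would be dropped and `hadoop distcp` run directly; authentication to the storage account (for example, via a cluster-assigned managed identity or an account key in `core-site.xml`) is assumed to be configured already.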

0 commit comments