Data Engineering ETL Project is a comprehensive project demonstrating data ingestion, ETL processing, and analytical querying using AWS Redshift, Apache Spark, and Sqoop. It covers the complete data pipeline: importing data from an RDS source into HDFS with Sqoop, transforming it with Spark, and executing analytical queries on an AWS Redshift cluster.
This project is designed to showcase a real-world ETL workflow for a data engineering assignment:
- Data Ingestion: Import data from an RDS (MySQL) database to HDFS using Sqoop.
- ETL Processing: Use Apache Spark for data transformation and loading.
- Analytical Queries: Execute complex analytical queries on an AWS Redshift cluster to derive insights.
The provided documents include detailed Redshift queries, cluster setup screenshots, Spark ETL code, and Sqoop data ingestion commands.
```mermaid
flowchart TD
    A[🗄️ RDS - MySQL] --> B[📥 Sqoop Import]
    B --> C[📁 HDFS]
    C --> D[🔄 Spark ETL Processing]
    D --> E[📤 Data Load]
    E --> F[AWS Redshift]
    F --> G[🔍 Analytical Queries]
```
```
DataEngineeringETL/
├── RedshiftQueries.pdf      # PDF containing analytical queries for the Redshift cluster
├── RedshiftSetup.pdf        # PDF with screenshots and details on setting up the Redshift cluster
├── SparkETLCode.ipynb       # Jupyter Notebook with Spark ETL code and transformation logic
├── SqoopDataIngestion.pdf   # PDF outlining the Sqoop import commands and HDFS data inspection
└── README.md                # Project documentation (this file)
```
- AWS Account: For setting up Redshift and S3.
- RDS MySQL Instance: Source of data.
- Hadoop Cluster: For HDFS (local or cloud-based).
- Apache Sqoop & Spark: Installed on your data processing cluster.
1. Data Ingestion with Sqoop:
   - Use the Sqoop commands detailed in `SqoopDataIngestion.pdf` to import tables from RDS into HDFS (a hedged example is sketched after this list).
   - Verify the data import using Hadoop FS commands.
2. ETL Processing with Spark:
   - Open `SparkETLCode.ipynb` in Jupyter Notebook.
   - Follow the ETL workflow to clean, transform, and load data (see the PySpark sketch after this list).
3. Redshift Cluster Setup:
   - Follow the guidelines in `RedshiftSetup.pdf` to create a Redshift cluster and configure databases/tables.
   - Execute the SQL queries from `RedshiftQueries.pdf` in the AWS Redshift Query Editor.
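As a reference for step 1, the sketch below shows the general shape of a Sqoop import driven from Python; the RDS endpoint, credentials, table name, and HDFS target directory are placeholders, and the authoritative commands are the ones in `SqoopDataIngestion.pdf`.

```python
# Hypothetical Sqoop import invoked from Python; the endpoint, credentials,
# table, and target directory are placeholders taken from this project's setup,
# not literal values from SqoopDataIngestion.pdf.
import subprocess

sqoop_import = [
    "sqoop", "import",
    "--connect", "jdbc:mysql://<rds-endpoint>:3306/<database>",
    "--username", "<username>",
    "--password", "<password>",
    "--table", "<table_name>",
    "--target-dir", "/user/hadoop/<table_name>",
    "--num-mappers", "1",
]
subprocess.run(sqoop_import, check=True)

# Verify the import landed in HDFS (the verification sub-step of step 1).
subprocess.run(["hdfs", "dfs", "-ls", "/user/hadoop/<table_name>"], check=True)
```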
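For step 2, the actual transformation logic lives in `SparkETLCode.ipynb`; the snippet below is only a minimal PySpark read, clean, and write sketch, with the input path, output bucket, and column names (`order_id`, `amount`) invented for illustration.

```python
# Minimal PySpark ETL sketch; paths and column names are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("DataEngineeringETL").getOrCreate()

# Extract: read the Sqoop output from HDFS.
raw_df = spark.read.option("header", "false").csv("hdfs:///user/hadoop/<table_name>")

# Transform: name columns, drop duplicates/nulls, and cast types (example transformations).
clean_df = (
    raw_df.toDF("order_id", "amount")
          .dropDuplicates()
          .na.drop()
          .withColumn("amount", F.col("amount").cast("double"))
)

# Load: write the transformed data out (e.g., to S3 for a later Redshift COPY).
clean_df.write.mode("overwrite").parquet("s3a://<bucket>/etl-output/")
```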
1. Run ETL: Execute the Spark ETL notebook (`SparkETLCode.ipynb`) to process and prepare the data.
2. Load & Query Data: Load the transformed data into Redshift and run analytical queries to generate insights (a hedged load-and-query sketch follows this list).
3. Review Documentation: Refer to the PDF files for detailed instructions on Redshift setup, query execution, and Sqoop data ingestion.
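One common pattern for the load-and-query step (assumed here, not prescribed by the project PDFs) is to `COPY` the Spark output from S3 into Redshift and then run the analytical queries; in the sketch below the cluster endpoint, credentials, IAM role, table, and query are all placeholders, and the real queries are in `RedshiftQueries.pdf`.

```python
# Hedged sketch of loading and querying Redshift with psycopg2; every identifier
# here is a placeholder, and the target table is assumed to already exist
# (created as described in RedshiftSetup.pdf).
import psycopg2

conn = psycopg2.connect(
    host="<cluster-endpoint>.redshift.amazonaws.com",
    port=5439,
    dbname="<database>",
    user="<user>",
    password="<password>",
)

with conn, conn.cursor() as cur:
    # Load the Spark output from S3 (assumes an attached IAM role with S3 read access).
    cur.execute("""
        COPY orders
        FROM 's3://<bucket>/etl-output/'
        IAM_ROLE '<iam-role-arn>'
        FORMAT AS PARQUET;
    """)

    # Example analytical query; the project's actual queries are in RedshiftQueries.pdf.
    cur.execute(
        "SELECT order_id, SUM(amount) AS total "
        "FROM orders GROUP BY order_id ORDER BY total DESC LIMIT 10;"
    )
    for row in cur.fetchall():
        print(row)

conn.close()
```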
If you find this project useful, please consider:
- Starring the repository ⭐
- Forking to contribute improvements or customizations
- Following for updates on similar data engineering projects
Your engagement is greatly appreciated and helps boost visibility!
This project is licensed under the MIT License.
- AWS & Azure: For providing robust cloud infrastructure.
- Data Engineering Community: For continuous inspiration and support.
Happy Data Engineering! 🚀🔧