fran-cornachione/README.md

# Hi, I'm Francesco

Data Engineer with experience designing and building scalable data pipelines and modern data architectures.

I have a solid understanding of Big Data processing and work with high-performance technologies such as Apache Spark, Delta Lake, Parquet, Apache Airflow, and DuckDB to process millions of rows efficiently.

I’ve worked extensively with Databricks as a unified platform for end-to-end data management, with hands-on experience across its ecosystem, including Unity Catalog, Delta Lake, Auto Loader, Asset Bundles, Lakeflow Declarative Pipelines, and Lakeflow Jobs.


## Tech Stack

### Databases

- Postgres
- MySQL
- DuckDB

### Cloud & Big Data

- AWS S3
- AWS Glue
- Apache Spark | PySpark

### Programming & Libraries

- Python
- Pandas
- Polars

### BI & Visualization

- Power BI
- Looker

### Version Control

- Git
- GitHub


## Get in Touch

## Popular Repositories

1. **DuckDB-ETL** (Jupyter Notebook): An ETL process made with SQL (DuckDB) and Python. Extracts data from .csv files, creates tables from those files, …

2. **Domestic-Violence-in-Argentina**: This project explores domestic violence data in Argentina, focusing on victim and perpetrator characteristics such as age, gender, and relationship, along with emergency call times. Using data visu…

3. **fran-cornachione**: Profile README.

4. **SpotifyETL** (Python): This project extracts data from the Spotify API, performs the necessary transformations, and stores the results as CSV files in a destination folder. Finally, it generates interactive dashboards us…

5. **salesforce_ingestion_pipeline_databricks**: Full workflow built in Databricks to ingest data incrementally from Salesforce using Databricks Asset Bundles (DAB).

6. **E2E_Databricks**: An end-to-end project built entirely in Databricks.
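The extract-transform-load pattern the repositories above share can be sketched in a few lines of plain Python. This is an illustrative toy, not code from any of the projects; the column names (`name`, `amount`) and the filter rule are invented for the example:

```python
# Hypothetical minimal ETL: extract rows from a source CSV, transform
# them, and load the result into a destination CSV.
import csv


def run_etl(src_path: str, dst_path: str) -> int:
    """Keep rows with a positive 'amount', uppercase 'name', return row count."""
    # Extract: read the source CSV, filtering as we go.
    with open(src_path, newline="") as src:
        rows = [r for r in csv.DictReader(src) if float(r["amount"]) > 0]

    # Transform: normalize the 'name' column.
    for r in rows:
        r["name"] = r["name"].upper()

    # Load: write the cleaned rows to the destination CSV.
    with open(dst_path, "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=["name", "amount"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```

In the real projects this shape scales up by swapping the stdlib `csv` module for DuckDB or Spark while the extract-transform-load structure stays the same.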