Data Engineer with experience designing and building scalable data pipelines and modern data architectures.
I have a solid understanding of Big Data processing and work with high-performance technologies such as Apache Spark, Delta Lake, Parquet, Apache Airflow, and DuckDB to process millions of rows efficiently.
I’ve worked extensively with Databricks as a unified platform for end-to-end data management, with deep hands-on experience across its ecosystem, including Unity Catalog, Delta Lake, Auto Loader, Asset Bundles, Lakeflow Declarative Pipelines, and Lakeflow Jobs.
