Incremental batch data pipeline using AWS S3 and AWS Glue with date-based partitioning.
Updated Jan 13, 2026 - Python
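The date-based partitioning named in that first entry can be sketched in plain Python: an incremental batch job lists only the partition prefixes added since its last successful run instead of rescanning the whole table. The bucket name, table name, and Hive-style `year=/month=/day=` layout below are assumptions for illustration, not details taken from the repository.

```python
from datetime import date, timedelta

def partition_prefixes(bucket: str, table: str, last_loaded: date, today: date):
    """Yield S3 prefixes for the date partitions that still need loading.

    With Hive-style year=/month=/day= partitioning, an incremental batch
    job (e.g. an AWS Glue job) processes only these prefixes rather than
    the entire table.
    """
    day = last_loaded + timedelta(days=1)
    while day <= today:
        yield (f"s3://{bucket}/{table}/"
               f"year={day:%Y}/month={day:%m}/day={day:%d}/")
        day += timedelta(days=1)

# Example: two days of backlog since the last successful run.
prefixes = list(partition_prefixes("my-data-lake", "events",
                                   date(2026, 1, 11), date(2026, 1, 13)))
```

In a real Glue job, each yielded prefix would become the load path for one partition; the `last_loaded` watermark would come from a job bookmark or a small state table.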
End-to-end metadata-driven data engineering framework built on Azure. Features dynamic SQL/REST API ingestion with range pagination, automated schema mapping, and event-driven orchestration. Implements CI/CD via GitHub Actions YAML workflows and automated failure alerting with Logic Apps. Built for scalability following data engineering best practices.
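The range pagination mentioned in that description can be sketched as a generic loop that advances an offset until the API returns a short page. Here `fetch_page` is a hypothetical stand-in for the real REST call (e.g. a `requests.get` with `offset`/`limit` query parameters); the simulated endpoint below exists only to make the sketch runnable.

```python
def ingest_with_range_pagination(fetch_page, page_size=100):
    """Pull every record from a paginated endpoint, one range at a time.

    fetch_page(offset, limit) must return a list of records and is
    expected to return fewer than `limit` records on the final page.
    """
    records, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        records.extend(page)
        if len(page) < page_size:   # short page means we reached the end
            return records
        offset += page_size

# Simulated endpoint holding 250 records: three pages of 100 + 100 + 50.
data = list(range(250))
fake_fetch = lambda offset, limit: data[offset:offset + limit]
result = ingest_with_range_pagination(fake_fetch, page_size=100)
```

A metadata-driven framework would typically read `page_size` and the endpoint URL from a control table rather than hard-coding them.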
End-to-end data engineering pipeline using Databricks, Amazon S3, Delta Lake and Unity Catalog, with full and incremental loads.
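The incremental-load half of that pipeline is commonly implemented with a high-water-mark filter: a full load takes every row, while an incremental load takes only rows changed since the last recorded watermark. This is a minimal sketch in plain Python rather than Delta Lake, and the `updated_at` column name is an assumption.

```python
def incremental_rows(rows, watermark):
    """Select rows changed since the last load (high-water-mark pattern).

    Rows are dicts with an `updated_at` ISO timestamp (assumed column
    name); the returned watermark is the max timestamp seen, so the next
    run resumes where this one left off.
    """
    fresh = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_watermark

source = [
    {"id": 1, "updated_at": "2026-01-10"},
    {"id": 2, "updated_at": "2026-01-13"},
]
fresh, wm = incremental_rows(source, "2026-01-12")
```

In Databricks the same filter would be a `WHERE updated_at > :watermark` predicate pushed down to the source, with the watermark persisted between runs.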
This project implements a comprehensive event-driven data pipeline for e-commerce transactional data processing using Databricks, PySpark, and Delta Lake. The pipeline handles multiple data sources with advanced data engineering patterns, including SCD2 (Type 2 Slowly Changing Dimensions), data validation, enrichment, and automated archiving.
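The SCD2 pattern that description names keeps history by closing the current version of a record and appending a new one instead of overwriting. A real pipeline would express this as a Delta Lake `MERGE`; the sketch below uses plain dictionaries, and every field name in it is illustrative.

```python
from datetime import date

def scd2_upsert(history, key, new_attrs, effective):
    """Apply one change to a Type 2 slowly changing dimension history.

    Each record carries valid_from, valid_to, and is_current so that
    past attribute values remain queryable after an update.
    """
    for rec in history:
        if rec["key"] == key and rec["is_current"]:
            if rec["attrs"] == new_attrs:   # unchanged: nothing to do
                return history
            rec["valid_to"] = effective     # close the old version
            rec["is_current"] = False
    history.append({"key": key, "attrs": new_attrs,
                    "valid_from": effective, "valid_to": None,
                    "is_current": True})
    return history

h = scd2_upsert([], "cust-1", {"city": "Oslo"}, date(2026, 1, 1))
h = scd2_upsert(h, "cust-1", {"city": "Bergen"}, date(2026, 1, 13))
```

After the second upsert the history holds two versions: the Oslo record closed on 2026-01-13 and the Bergen record flagged as current.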