iPivot

AWS Data engineer with Databricks

Hybrid

Princeton, United States

Mid level

Freelance

15-12-2025


Skills

Python, Unity Catalog, SQL, Data Engineering, CI/CD Pipelines, Monitoring, Azure Data Factory, Architecture, Azure, AWS, Cloud Platforms, Analytics, GCP, Databricks, PySpark, Kafka, Terraform

Job Specifications

AWS Data Engineer with Databricks

Princeton, NJ – Hybrid

Duration: Long Term

Due to project requirements, this opportunity is restricted to U.S. citizens.

Key Responsibilities

Design, develop, and optimize scalable data pipelines using Databricks, PySpark, and Delta Lake for batch and real-time processing.
Implement ELT processes, data quality checks, monitoring, and governance using tools like Unity Catalog, ensuring compliance and performance.
Collaborate with data scientists, analysts, and stakeholders to integrate data from diverse sources and support analytics/ML workflows.
Mentor junior engineers, lead cloud migrations, and manage CI/CD pipelines with IaC tools like Terraform.

Required Skills and Qualifications

Bachelor's in Computer Science or related field, with 5+ years in data engineering including strong Databricks experience.
Proficiency in PySpark, Python, SQL, Azure Data Factory, Kafka for streaming, and data modeling (e.g., medallion architecture).
Hands-on experience with cloud platforms (Azure/AWS/GCP), ETL/ELT, data lakes/warehouses, and performance optimization.

About the Company

iPivot | Pioneering AI Innovation | Your Trusted Technology Partner | Delivering Proven Excellence. iPivot has been at the forefront of technology for over 15 years. An MWBE-certified company, iPivot is a fast-growing IT services and consulting firm focused on delivering quality service to its clients. Today, the firm provides cloud services, technology consulting, engineering services, and staff augmentation to leading Fortune 500 corporations across the United States. Since 2010, iPivot has enjoyed a loyal following of clients.