Gravity IT Resources

Data Engineer

Hybrid

Charlotte metro, United States

$75/hour

Mid-level

Freelance

22-01-2026


Skills

SQL, Data Warehousing, SAP, Azure Data Factory, Decision-making, Architecture, Power BI, Azure, Snowflake, Databricks, PySpark, ETL Processes

Job Specifications

Contract Senior Data Engineer

Location: Hybrid (Charlotte, NC)

Duration: 6 months

Team Size: 7 (6 staff + lead)

Overview

We are seeking a Senior Data Engineer to support the stabilization and optimization of our data warehouse. This is a hands-on, contract role with approximately 50% coding and 50% design/consulting responsibilities. The ideal candidate will have strong experience in Databricks, Snowflake, and SAP ECC, with a background in supply-chain or manufacturing data preferred.

Primary goals:

Stabilize the bronze (raw extract) layer of the data warehouse.
Optimize silver/gold medallion layers for performance and reliability.
Reduce overnight ETL batch lag (current window: midnight → ~7 AM).
Consult on pipeline design and recommend efficiency improvements.
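The bronze-to-silver goal above can be illustrated with a minimal, pure-Python sketch of the cleansing step: deduplicate raw extracts, enforce types, and drop rows that fail quality rules. In the actual stack this logic would live in a Databricks notebook using PySpark; the field names and quality policy below are hypothetical.

```python
from datetime import datetime

# Hypothetical raw "bronze" rows as extracted from SAP (duplicates, bad types, nulls).
bronze = [
    {"order_id": "1001", "qty": "5",  "plant": "CLT1", "ts": "2026-01-22T02:15:00"},
    {"order_id": "1001", "qty": "5",  "plant": "CLT1", "ts": "2026-01-22T02:15:00"},  # duplicate
    {"order_id": "1002", "qty": "x",  "plant": "CLT1", "ts": "2026-01-22T03:40:00"},  # bad qty
    {"order_id": "1003", "qty": "12", "plant": None,   "ts": "2026-01-22T04:05:00"},  # missing plant
]

def to_silver(rows):
    """Deduplicate, enforce types, and drop rows failing basic quality checks."""
    seen, silver = set(), []
    for r in rows:
        key = (r["order_id"], r["ts"])
        if key in seen:
            continue  # drop exact-key duplicates
        seen.add(key)
        if not r["plant"] or not r["qty"].isdigit():
            continue  # quarantine rows failing quality rules (toy policy)
        silver.append({
            "order_id": int(r["order_id"]),
            "qty": int(r["qty"]),
            "plant": r["plant"],
            "ts": datetime.fromisoformat(r["ts"]),
        })
    return silver

silver = to_silver(bronze)
print(len(silver))  # → 1 (only order 1001 survives dedup + quality checks)
```

The same shape carries over to PySpark: `dropDuplicates`, casts, and filter predicates replace the hand-rolled loop, and the gold layer aggregates the resulting silver tables for reporting.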

Work model: Hybrid, approximately 3 days per week onsite

Key Responsibilities

Participate in a workshop to clarify detailed scope and stabilization priorities.
Collaborate with team members and stakeholders to design and implement efficient data pipelines.
Build structurally sound data models for the silver layer of the warehouse using SQL and PySpark.
Optimize ETL processes to enable near-real-time operational visibility.
Support onboarding and knowledge transfer to other engineers.

Technical priorities:

Databricks (notebooks, PySpark, SQL) – primary focus.
Snowflake – data warehousing and optimization.
SAP ECC / Oracle table structure knowledge – especially for supply chain/manufacturing data.
Azure Data Factory – extraction pipelines.
Power BI – dashboards and reporting; SSIS/SSRS not required.

Technical Stack & Architecture

ETL Extraction: Azure Data Factory from SAP HANA
Transformation: Databricks notebooks with PySpark/SQL
Data Storage: Snowflake
Consumption/Reporting: Power BI
Batch Window: 12:00–12:30 AM → ~7:00 AM

Operational challenge: Current pipelines deliver third-shift manufacturing data a day late, limiting timely decision-making. The candidate will help redesign the architecture for faster, more reliable processing.
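The day-late symptom can be reasoned about with a small scheduling sketch: a record produced mid-way through third shift misses the nightly midnight load and waits most of a day, while hourly micro-batches cut the wait to under an hour. The times below are illustrative, not taken from the actual pipeline.

```python
from datetime import datetime, timedelta

def wait_until_loaded(event, batch_starts):
    """Hours from when a record is produced until the next batch picks it up."""
    nxt = min(b for b in batch_starts if b >= event)
    return (nxt - event).total_seconds() / 3600

day = datetime(2026, 1, 22)
event = day.replace(hour=3, minute=30)  # hypothetical third-shift record at 03:30

# Nightly batch: one start per day at midnight, so the record waits for the NEXT midnight.
nightly = [day + timedelta(days=d) for d in range(2)]
# Hourly micro-batches across the same day.
hourly = [day + timedelta(hours=h) for h in range(25)]

print(wait_until_loaded(event, nightly))  # → 20.5 hours (next midnight)
print(wait_until_loaded(event, hourly))   # → 0.5 hours (next hourly run)
```

This is the core trade-off behind the redesign: shrinking the batch interval (or moving to incremental loads) directly shrinks the worst-case lag for third-shift data.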

Candidate Requirements

Experience: 5–10 years as a Data Engineer or in a similar role (5 years minimum).
Strong hands-on experience in SQL, PySpark, and Databricks.
Familiarity with Snowflake and SAP ECC.
Background in supply-chain or manufacturing data preferred.
Proven ability to consult on data pipeline design and performance optimization.
Capable of working independently and collaboratively in a hybrid/remote setup.

Interview Process

Virtual one-on-one (30 min, with camera) – hiring manager.
Cultural/fit interview (30 min, onsite if local).
Panel interview (technical deep dive, same day; 1–1.5 hrs total).

Total candidate time: 1.5–2 hours

About the Company

Gravity IT Resources provides the consulting expertise and IT talent that powers digital transformation. We employ consistent, innovative strategies across multiple practice areas to help our clients leverage technology to drive bottom-line results. We offer contract/staff augmentation, contract-to-hire, and direct hire/direct placement staffing services for IT professionals. These services are tailored to meet the specific needs of several market segments, including Commercial, Healthcare, and Government. We have sp...