- Company Name: Tential Solutions
- Job Title: Data Engineer
- Job Description:
Job title: Data Engineer
Role Summary: Design, build, and maintain scalable, cloud‑based data pipelines for a banking and credit analytics environment. Use Python, Java, SQL, Spark, Databricks, and AWS services to deliver reliable, cost‑efficient solutions that support risk, fraud, and portfolio analytics.
Expectations: Deliver robust ETL/ELT workflows that meet performance and reliability targets. Collaborate closely with consulting partners and client stakeholders to translate business requirements into technical designs. Maintain high code quality through testing, version control, and CI/CD practices. Provide ongoing monitoring, troubleshooting, and optimization of production data pipelines.
Key Responsibilities:
• Design and implement scalable ETL/ELT pipelines using Python, Java, and SQL.
• Build and tune Spark jobs (batch and/or streaming) on AWS EMR, leveraging Spark SQL, PySpark, or Scala.
• Create and manage data workflows on AWS (S3, Lambda, Glue, EMR, IAM, CloudWatch).
• Use Databricks to develop, schedule, and monitor notebooks, jobs, and Delta Lake tables.
• Design data models and structures to support banking and credit analytics use cases.
• Monitor pipeline performance, troubleshoot issues, and optimize cost and reliability.
• Collaborate with consultants, data analysts, and data scientists to gather requirements and deliver technical solutions.
• Apply best practices for code quality, testing, version control, and CI/CD.
• Contribute to documentation, coding standards, and reusable components.
Required Skills:
• Proficient in Python and Java for data engineering and ETL/ELT.
• Advanced SQL skills with complex queries and performance tuning.
• Production experience on AWS (EMR, S3, Lambda, Glue, IAM, CloudWatch).
• Hands‑on experience building and optimizing Spark jobs (PySpark, Spark SQL, or Scala).
• Expertise with Databricks (notebooks, clusters, jobs, Delta Lake).
• Solid understanding of data warehousing, data modeling, and structured/semi‑structured data best practices.
• Familiarity with Git, code reviews, branching strategies, and CI/CD pipelines.
• Strong communication skills and ability to work in consulting/client‑facing environments.
Required Education & Certifications:
• Bachelor’s degree in Computer Science, Data Engineering, Information Technology, or a related field.
• Optional certifications: AWS Certified Data Analytics – Specialty (formerly AWS Certified Big Data – Specialty), Databricks Certified Data Engineer.
District of Columbia, United States
Remote
01-01-2026