CloudTech Innovations

Databricks Data Engineer

On site

Toronto, Canada

Mid-level

Freelance

10-11-2025


Skills

Communication, Python, Scala, Unity Catalog, SQL, Data Governance, Data Engineering, Apache Spark, Encryption, CI/CD, DevOps, Docker, Problem-solving, Attention to detail, Machine Learning, Git, Azure, AWS, Agile, Analytics, GCP, Databricks, Terraform

Job Specifications

Job Title: Data Engineer – Databricks

Location: Onsite – Toronto, Canada

Employment Type: Contract

About the Role

We are seeking an experienced Data Engineer with a strong background in Databricks, Apache Spark, and modern cloud data platforms. The ideal candidate has over 5 years of experience designing, developing, and maintaining scalable data pipelines and lakehouse architectures in enterprise environments. You will work closely with solution architects, analysts, and cross-functional teams to build robust, high-performance data solutions supporting analytics and machine learning workloads.

Key Responsibilities

• Design and implement ETL/ELT pipelines using Databricks and Apache Spark for batch and streaming data.

• Develop and maintain Delta Lake architectures to unify structured and unstructured data.

• Collaborate with data architects, analysts, and data scientists to define and deliver scalable data solutions.

• Implement data governance, access control, and lineage using Unity Catalog, IAM, and encryption standards.

• Integrate Databricks with cloud services on AWS, Azure, or GCP (e.g., S3, ADLS, BigQuery, Glue, Data Factory, or Dataflow).

• Automate workflows using orchestration tools such as Airflow, dbt, or native cloud schedulers.

• Tune Databricks jobs and clusters for performance, scalability, and cost optimization.

• Apply DevOps principles for CI/CD automation in data engineering workflows.

• Participate in Agile ceremonies, providing updates, managing risks, and driving continuous improvement.

Required Qualifications

• 5+ years of professional experience in data engineering or data platform development.

• Hands-on experience with Databricks, Apache Spark, and Delta Lake.

• Experience with at least one major cloud platform — AWS, Azure, or GCP.

• Strong proficiency in Python or Scala for data processing and automation.

• Advanced knowledge of SQL, query performance tuning, and data modeling.

• Experience with data pipeline orchestration tools (Airflow, dbt, Step Functions, or equivalent).

• Understanding of data governance, security, and compliance best practices.

• Excellent communication skills and ability to work onsite in Toronto.

Preferred Skills

• Certifications in Databricks, AWS/Azure/GCP Data Engineering, or Apache Spark.

• Experience with Unity Catalog, MLflow, or data quality frameworks (e.g., Great Expectations).

• Familiarity with Terraform, Docker, or Git-based CI/CD pipelines.

• Prior experience in finance, legal tech, or enterprise data analytics environments.

• Strong analytical and problem-solving mindset with attention to detail.

About the Company

Comprehensive Staffing Solutions for Specialized Engineering Roles

Software Development Engineering: We offer staffing solutions with skilled software engineers proficient in a range of programming languages and technologies, equipped to develop top-tier, scalable, and efficient software solutions that align with your project needs.

Site Reliability Engineering (SRE): Our expert SRE staff specialize in ensuring the high availability and reliability of software systems. They excel in continuous integration and deployme...