CommuniTech Recruitment Group

Databricks Technical Lead. Fintech. Up to £1,000/day inside IR35. Greenfield project. 6-month rolling contract. Hybrid, 3 days a week in a Central London office.

Hybrid

London, United Kingdom

Senior

Freelance

25-01-2026

Skills

Leadership, Unity Catalog, SQL, Data Governance, Data Engineering, Apache Spark, Git, GitHub, CI/CD, DevOps, Version Control, Azure DevOps, Test, Architecture, Azure, Agile, Analytics, Data Science, Databricks, Kafka, GitHub Actions

Job Specifications

My client is a Fintech that is looking for a Databricks Technical Lead.

Team / department

The Data team is responsible for providing business solutions aimed at extracting value from large amounts of data. Its activities span a broad range, such as collecting market data and building related analysis tools, processing real-time data streams, data governance, and data science. The role will provide technical leadership in the adoption of Databricks, helping to assess its key features and providing guidance on its initial implementation in Azure.

Main responsibilities

Lead the Azure Databricks Implementation: Act as the technical lead for the proof-of-concept, evaluating Databricks capabilities against business requirements. Help define success criteria, design test scenarios, and deliver recommendations for full-scale adoption.
Design and Implement Lakehouse Architecture: Architect and build a modern data platform on Azure Databricks, leveraging Delta Lake and open standards. Ensure scalability, performance optimisation, and cost efficiency.
Hybrid Data Integration: Develop strategies and implement solutions to integrate on-premises data sources with Azure Databricks. Address connectivity, security, and performance challenges in hybrid environments.
Streaming and Near Real-Time Data Processing: Design and implement streaming pipelines using Databricks Structured Streaming and other available techniques. Evaluate and demonstrate near real-time capabilities for ingestion and transformation (a minimal sketch of this pattern follows this list).
Data Transformation and Workflow Design: Create robust, scalable ingestion and transformation workflows using Databricks notebooks and Spark SQL. Incorporate observability, logging, and error handling.
Data Lineage and Governance: Implement lineage tracking and governance using Unity Catalog or integrated tools. Ensure compliance with organisational security and regulatory standards. Work in close collaboration with the Data Governance team.
Data Quality and Consistency: Demonstrate how to apply schema enforcement, validation, and error handling across data pipelines to maintain high-quality data.
Collaborate on Data Modelling: Work with analytics and data science teams to design schemas optimised for advanced analytics and AI workloads within Databricks.
Contribute to Architectural Direction: Provide technical leadership and input into the overall data platform architecture, ensuring alignment with strategic goals.
Mentor and Review: Guide team members on Databricks best practices, review code, and promote knowledge sharing.
Participate in Agile Delivery: Engage in agile ceremonies and contribute to iterative delivery of PoC and subsequent implementation phases.
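
To make the streaming and data-quality responsibilities above concrete, here is a minimal PySpark sketch of a Structured Streaming pipeline writing to Delta Lake with an enforced schema and basic validation. It is an illustration only, not part of the specification: the paths, schema, and table names are hypothetical, and a production pipeline would also quarantine rejected records rather than simply filtering them out.

```python
# Minimal sketch of a Structured Streaming ingestion pipeline on Databricks.
# All paths, schemas, and table names below are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, current_timestamp
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

# In a Databricks notebook `spark` is provided automatically;
# the explicit call simply keeps the sketch self-contained.
spark = SparkSession.builder.getOrCreate()

# Explicit schema: enforcing it at read time rejects malformed records early.
trade_schema = StructType([
    StructField("trade_id", StringType(), False),
    StructField("symbol", StringType(), False),
    StructField("price", DoubleType(), True),
    StructField("event_time", TimestampType(), True),
])

raw = (
    spark.readStream
    .format("json")                # e.g. files landed by an upstream market-data feed
    .schema(trade_schema)
    .load("/mnt/landing/trades/")  # hypothetical landing path
)

# Basic validation plus an ingestion timestamp for observability.
valid = (
    raw.filter(col("price").isNotNull())
    .withColumn("ingested_at", current_timestamp())
)

# Near real-time micro-batches into a Delta table, with checkpointing for recovery.
(
    valid.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/trades_bronze")
    .trigger(processingTime="1 minute")
    .toTable("bronze.trades")      # hypothetical target table
)
```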

Required Skills and Experience

Azure Databricks Expertise: Proven experience designing and implementing solutions on Azure Databricks, including cluster configuration, workspace management, and optimisation for cost and performance. Ability to leverage Databricks features such as Delta Lake, Unity Catalog, and MLflow for data engineering workflows.
Lakehouse Architecture Design: Hands-on experience building and optimising lakehouse architectures using Databricks. Skilled in designing partitioning strategies, indexing, and compaction for performance at scale.
Hybrid Data Integration: Practical experience in integrating on-premises data sources with Azure-based platforms. Familiarity with secure connectivity patterns, data movement strategies, and performance considerations in hybrid environments.
Streaming Data: Knowledge of implementing real-time pipelines in Databricks using Structured Streaming and integrating with Kafka or Event Hubs.
Data Transformation & Lineage: Advanced skills in creating transformation pipelines using Databricks notebooks and Spark SQL. Experience implementing data lineage and observability within Databricks, leveraging tools such as Unity Catalog or integrating with external lineage solutions.
Distributed Data Processing: Deep understanding of Apache Spark within Databricks, including optimisation techniques for large-scale batch and streaming workloads.
SQL & Delta Lake: Strong SQL skills for data modelling and querying within Databricks, including experience with Delta Lake features like ACID transactions, schema enforcement, and time travel.
Version Control & CI/CD: Familiarity with Git-based workflows and implementing CI/CD for Databricks using tools like Azure DevOps or GitHub Actions.
Cloud Storage & Security: Expertise in Azure Data Lake Storage (ADLS Gen2) and integration with Databricks. Strong understanding of identity management, access control (e.g., Azure RBAC, Unity Catalog), and compliance in cloud environments.

Desirable Skills and Experience

Advanced Databricks Features: Implementation and performance tuning of BI and AI workloads.
Query Engines: Exposure …

About the Company

CommuniTech are an exciting name in tech recruitment, seamlessly connecting the client and candidate communities to deliver exceptional technical talent to tech-driven companies, ensuring that together they will thrive, exceed, and achieve. By striving to intertwine the communities, we get to know our clients and candidates better than ever before, providing recruitment solutions that deliver an individual experience tailored to your needs.