Job Specifications
Job Title: SC Cleared DevOps Engineer (Azure)
Contract Type: 12-month contract
Day Rate: Up to £400 per day (Inside IR35)
Location: Remote or hybrid (as agreed)
Start Date: January 2026
Clearance Required: Active SC Clearance (mandatory)
We are seeking an experienced SC Cleared DevOps Engineer with strong Databricks platform experience to design, build, deploy, and operate large-scale data and analytics solutions on the Databricks Data Intelligence Platform within Azure.
The role focuses on automation, CI/CD, infrastructure reliability, security, and cost optimisation, while supporting high-performance batch and streaming workloads built on PySpark and Delta Lake. The end client's identity remains confidential.
Required Skills & Experience
Proven experience as a DevOps Engineer on Azure
Strong hands-on experience with the Databricks Data Intelligence Platform
Experience building and maintaining CI/CD pipelines for cloud and data platforms
Solid understanding of Spark, PySpark, and Delta Lake from a platform and operational perspective
Experience with infrastructure-as-code tooling (e.g. Terraform)
Azure experience across ADLS Gen2, Key Vault, managed identities, and serverless services
Strong troubleshooting skills in distributed, cloud-based environments
Platform Engineering & DevOps
Design, build, and maintain CI/CD pipelines for Databricks code, jobs, and configuration across environments
Automate provisioning and configuration of Databricks and Azure infrastructure using infrastructure-as-code
Standardise workspace configuration, cluster policies, secrets, libraries, and access controls
Implement monitoring, logging, and alerting for platform health, job reliability, and pipeline performance (a minimal run-polling sketch follows this list)
Drive cost optimisation and FinOps practices through usage analysis and workload benchmarking
Support production operations, including incident management, root-cause analysis, and runbooks
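As a flavour of the monitoring and alerting work, here is a minimal sketch that polls recent Databricks job runs and flags failures via the Jobs 2.1 REST API. The workspace URL, token environment variable, and alerting target are placeholders for illustration, not details of this engagement:

    # Minimal sketch: surface failed Databricks job runs via the Jobs 2.1 API.
    # Workspace URL and token handling are placeholders.
    import os
    import requests

    WORKSPACE_URL = "https://adb-0000000000000000.0.azuredatabricks.net"  # placeholder
    TOKEN = os.environ["DATABRICKS_TOKEN"]  # assumed personal access token

    resp = requests.get(
        f"{WORKSPACE_URL}/api/2.1/jobs/runs/list",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"completed_only": "true", "limit": 25},
        timeout=30,
    )
    resp.raise_for_status()

    for run in resp.json().get("runs", []):
        state = run.get("state", {})
        if state.get("result_state") == "FAILED":
            # In practice this would feed an alerting channel (webhook, pager, etc.).
            print(f"Run {run['run_id']} of job {run.get('job_id')} failed: "
                  f"{state.get('state_message', 'no message')}")

In production this logic would typically sit behind a scheduler or be replaced by native Databricks job notifications; the sketch only illustrates the kind of automation the role involves.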
Databricks & Data Platform Support
Build and orchestrate Databricks pipelines using Notebooks, Jobs, and Workflows
Optimise Spark and Delta Lake workloads through cluster tuning, autoscaling, adaptive execution, and caching
Support development of PySpark-based ETL and streaming workloads
Manage Delta Lake tables, including schema evolution, ACID compliance, and time travel (illustrated in the sketch after this list)
Implement data governance, lineage, and access controls using Unity Catalog
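To illustrate the Delta Lake responsibilities above, here is a minimal PySpark sketch covering schema evolution (mergeSchema), time travel (versionAsOf), and adaptive query execution. The table path and column names are hypothetical:

    # Minimal sketch of the Delta Lake operations referenced above.
    # Assumes a Databricks cluster; the path and schema are placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Adaptive query execution, one of the standard Spark tuning levers
    # (enabled by default on recent runtimes).
    spark.conf.set("spark.sql.adaptive.enabled", "true")

    path = "abfss://lake@examplestorage.dfs.core.windows.net/tables/events"  # placeholder

    # Schema evolution: mergeSchema lets new columns flow into the table on append.
    df = spark.createDataFrame([(1, "click", "GB")], ["id", "event_type", "country"])
    df.write.format("delta").mode("append").option("mergeSchema", "true").save(path)

    # Time travel: read the table as of an earlier version for audit or rollback.
    v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
    v0.show()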
Azure Integration & Security
Integrate Databricks with Azure Data Lake Storage Gen2, Key Vault, and serverless Azure services (see the sketch after this list)
Enforce security best practices using managed identities, RBAC, and secrets management
Support secure, compliant deployments aligned with public sector security standards
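As an indicative example of the integration pattern above, the sketch below reads a Key Vault-backed secret from a Databricks secret scope and configures OAuth access to ADLS Gen2 for a service principal. The scope, key, storage account, tenant, and application IDs are all placeholders; dbutils and spark are provided by the Databricks runtime, and managed identities or Unity Catalog storage credentials are the preferred alternatives where available:

    # Minimal sketch: service-principal access to ADLS Gen2, with the client
    # secret pulled from a Key Vault-backed secret scope. All IDs are placeholders.
    storage_account = "examplestorage"                      # placeholder
    tenant_id = "00000000-0000-0000-0000-000000000000"      # placeholder
    app_id = "11111111-1111-1111-1111-111111111111"         # placeholder

    client_secret = dbutils.secrets.get(scope="kv-backed-scope", key="sp-client-secret")

    base = f"{storage_account}.dfs.core.windows.net"
    spark.conf.set(f"fs.azure.account.auth.type.{base}", "OAuth")
    spark.conf.set(f"fs.azure.account.oauth.provider.type.{base}",
                   "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
    spark.conf.set(f"fs.azure.account.oauth2.client.id.{base}", app_id)
    spark.conf.set(f"fs.azure.account.oauth2.client.secret.{base}", client_secret)
    spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{base}",
                   f"https://login.microsoftonline.com/{tenant_id}/oauth2/token")

    # Read from a placeholder container once the account is configured.
    df = spark.read.parquet(f"abfss://raw@{base}/landing/")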
Collaboration & Documentation
Collaborate with cloud architects, data engineers, and analysts on end-to-end solution design
Maintain clear technical documentation covering architecture, CI/CD, monitoring, and governance
Contribute to platform standards, reusable templates, and DevOps best practices
Preferred Qualifications
Experience supporting multiple Databricks workspaces governed through Unity Catalog
Knowledge of Azure analytics services such as Synapse or Power BI
Experience implementing FinOps/cost governance in cloud environments
Background working in regulated or public sector environments
Strong communication and cross-functional collaboration skills
About the Company
Since 2004, The Montash Group has worked to scale leading tech teams across the globe. Offering four unique service areas, The Montash Group is your specialist in niche tech requirements. We are industry-agnostic, supporting organisations of all sizes. Montash's core offering is locally sourcing and placing freelance and/or permanent specialists for your tech teams. Operating on a remote-first approach, Remobi augments existing or new tech teams nearshore, and our in-house solution teams provide consultation and E2...