Job Specifications
GCP Data Engineer
We are seeking experienced GCP Data Engineers to join a dynamic team on a freelance basis, supporting critical data migration and infrastructure projects.
You'll collaborate closely with a seasoned team (most with 5+ years in GCP) to design, deploy, and optimise data pipelines that handle large-scale data processing.
Key Responsibilities
Design, build, and maintain ETL/ELT pipelines for data ingestion, transformation, and loading into cloud data warehouses, processing large volumes of data efficiently.
Implement and manage data warehousing solutions, ensuring high performance, scalability, and reliability in production environments.
Deploy infrastructure as code (IaC) using Terraform to provision and manage GCP resources.
Develop and orchestrate workflows using Cloud Composer or Apache Airflow for scheduling and automation (a minimal DAG sketch follows this list).
Leverage Apache Beam/Dataflow or Spark/Dataproc for distributed data processing, including batch, micro-batch, and real-time streaming architectures.
Author advanced SQL queries and optimise relational database interactions, incorporating change data capture (CDC) where applicable.
Collaborate on data migrations, ensuring seamless transitions with a focus on data quality, testing, and modern DataOps practices (e.g. pipeline deployments, quality engineering).
Integrate with the broader data engineering ecosystem, recommending and adopting tools as needed to enhance efficiency.
Communicate effectively with internal stakeholders and external partners to align on requirements and deliver high-impact solutions.
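For context on the orchestration work, here is a minimal sketch of the kind of Cloud Composer/Airflow DAG involved, assuming Airflow 2.x with the Google provider installed; the project, dataset, and table names are placeholders rather than our actual environment.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

    # Hypothetical daily load: project, dataset, and table names below are
    # placeholders used purely for illustration.
    with DAG(
        dag_id="daily_orders_load",
        schedule_interval="@daily",
        start_date=datetime(2024, 1, 1),
        catchup=False,
    ) as dag:
        load_orders = BigQueryInsertJobOperator(
            task_id="load_orders_to_warehouse",
            configuration={
                "query": {
                    "query": "SELECT * FROM `example-project.staging.orders`",
                    "destinationTable": {
                        "projectId": "example-project",
                        "datasetId": "warehouse",
                        "tableId": "orders",
                    },
                    "writeDisposition": "WRITE_TRUNCATE",
                    "useLegacySql": False,
                }
            },
        )

In practice a production DAG would chain ingestion, transformation, and quality-check tasks rather than a single query, but the structure is the same.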
Required Qualifications
5+ years of experience as a Data Engineer or in a comparable engineering capacity, including at least 3 years of hands-on GCP work with production deployments (the team standard is a 2-year minimum, but we prioritise depth).
Strong proficiency in Python (other languages like Java or Scala are a bonus).
Advanced SQL knowledge, including query authoring, optimisation, and experience with relational databases.
Proven experience with cloud data warehouses, specifically BigQuery.
Hands-on production experience with GCP-native data tools, including:
Terraform for IaC.
Cloud Composer or Apache Airflow for orchestration and scheduling.
Apache Beam/Dataflow for data processing (see the pipeline sketch after this list).
Spark/Dataproc for large-scale analytics.
Demonstrated ability to build and deploy ELT/ETL pipelines handling large datasets in production GCP environments.
Familiarity with modern DataOps practices, including testing, CI/CD for pipelines, and IaC.
Experience with batch, micro-batch, and real-time streaming data architectures.
Excellent internal and customer-facing communication skills.
Fluent written and spoken English, with availability to start within the next 5 weeks.
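To illustrate the Beam/Dataflow side referenced above, here is a minimal sketch of a batch pipeline reading JSON from Cloud Storage and loading it into BigQuery; the bucket, project, and table names are placeholders, and a real pipeline would add schema handling, error routing, and tests.

    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def run():
        # Placeholder project, region, bucket, and table names for illustration only.
        options = PipelineOptions(
            runner="DataflowRunner",
            project="example-project",
            region="europe-west2",
            temp_location="gs://example-bucket/tmp",
        )
        with beam.Pipeline(options=options) as pipeline:
            (
                pipeline
                | "ReadEvents" >> beam.io.ReadFromText("gs://example-bucket/raw/events-*.json")
                | "ParseJson" >> beam.Map(json.loads)
                | "KeepCompleted" >> beam.Filter(lambda event: event.get("status") == "completed")
                | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                    "example-project:warehouse.events",
                    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                    create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
                )
            )

    if __name__ == "__main__":
        run()

Swapping DataflowRunner for DirectRunner lets the same pipeline run locally during development.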
This role is outside IR35. If you have the required skills and are interested in applying, please send your CV now for immediate consideration.
About the Company
Enabling Transformation. Igniting Growth. Orcan delivers end-to-end AWS projects that help businesses move forward with clarity and control. From cloud migration and optimisation to reimagining infrastructure and deploying hybrid architecture, we build scalable systems and provide expert consultancy to unlock real business value. We're not here to overpromise or slow your progress. We deliver reliably, consistently, and with a focus on what matters most, whether that's reducing operational costs or improving performance at scale.