Pontoon Solutions

Data Pipeline Engineer (On-prem to AWS migration)

Hybrid

Bristol, United Kingdom

£750/day

Freelance

20-11-2025


Skills

Communication, Adaptability, Python, SQL, Data Warehousing, Data Governance, Data Engineering, Apache Spark, Kafka, DevOps, Version Control, Git, Agile Methodologies, Decision-making, Research, AWS, Azure, GCP, Cloud Platforms, Analytics, Data Science, Terraform, Infrastructure as Code

Job Specifications

Data Pipeline Engineer (On-prem to AWS migration)

Utilities

Predominantly remote; occasional travel to Bristol

6 months+

£700 - £750 per day

In short: We require three Data Engineers to join a large on-prem to AWS migration programme, replicating data pipelines in readiness for a lift-and-shift to the cloud.

Essential: AWS and Redshift

In full:

Role purpose

Reporting to the Lead Data Engineer, the Data Engineer is responsible for designing and maintaining scalable data pipelines, ensuring data availability, quality, and performance to support analytics and operational decision-making. They contribute to the data engineering roadmap through research, technical vision, business alignment, and prioritisation.

The role holder is also expected to be a subject matter expert in data engineering, fostering collaboration, continuous improvement, and adaptability within the organisation. They act as a mentor and enabler of best practices, advocating for automation, modularity, and an agile approach to data engineering, and championing a culture of agility that encourages everyone to embrace Agile values and practices.

The role holder is also accountable for deputising for their line manager whenever necessary, and is expected to support the product owner community while driving a positive culture (primarily through role modelling) across the Technology department and the wider business.

Key accountabilities

Design, develop, and maintain scalable, secure, and efficient data pipelines, ensuring data is accessible, high-quality, and optimised for analytical and operational use.
Contribute to the data engineering roadmap, ensuring solutions align with business priorities, technical strategy, and long-term sustainability.
Optimise data workflows and infrastructure, leveraging automation and best practices to improve performance, cost efficiency, and scalability.
Collaborate with Data Science, Insight, and Governance teams to support data-driven decision-making, ensuring seamless integration of data across the organisation.
Implement and uphold data governance, security, and compliance standards, ensuring adherence to regulatory and organisational best practices.
Identify and mitigate risks, issues, and dependencies, ensuring continuous improvement in data engineering processes and system reliability, in line with the company risk framework.
Ensure quality is maintained throughout the data engineering lifecycle, delivering robust, scalable, and cost-effective solutions on time and within budget.
Monitor and drive the data pipeline lifecycle, from design and implementation through to optimisation and post-deployment performance analysis.
Support operational resilience, ensuring data solutions are maintainable, supportable, and aligned with architectural principles.
Engage with third-party vendors where required, ensuring external contributions align with technical requirements, project timelines, and quality expectations.
Continuously assess emerging technologies and industry trends, identifying opportunities to enhance data engineering capabilities.
Document and maintain clear technical processes, facilitating knowledge sharing and operational continuity within the data engineering function.

Knowledge and experience required

Excellent communication skills, with the ability to convey technical concepts to both technical and non-technical audiences.
Experience working with large-scale data processing and distributed computing frameworks.
Knowledge of cloud platforms such as AWS, Azure, or GCP, with hands-on experience in cloud-based data services.
Proficiency in SQL and Python for data manipulation and transformation.
Experience with modern data engineering tools, including Apache Spark, Kafka, and Airflow.
Strong understanding of data modelling, schema design, and data warehousing concepts.
Familiarity with data governance, privacy, and compliance frameworks (e.g., GDPR, ISO27001).
Hands-on experience with version control systems (e.g., Git) and infrastructure as code (e.g., Terraform, CloudFormation).
Understanding of Agile methodologies and DevOps practices for data engineering.

Please be advised that if you haven't heard from us within 48 hours, your application has unfortunately not been successful on this occasion. We may, however, keep your details on file for any suitable future vacancies and contact you accordingly. Pontoon is an employment consultancy and operates as an equal opportunities employer.

We use generative AI tools to support our candidate screening process. This helps us ensure a fair, consistent, and efficient experience for all applicants. Rest assured, all final decisions are made by our hiring team, and your application will be reviewed with care and attention.

About the Company

At Pontoon, we go beyond the traditional to build future-fit workforces for our clients. Backed by the Adecco Group, Pontoon is the preferred architect of workforce solutions that will transform the working world. We deliver MSP, RPO, Services Procurement, Direct Sourcing, Talent Consulting, and Total Talent solutions. We help our customers design their workforces to be fit for the future and exceed their business objectives. We are a people-first organisation, committed to co-creating innovative and advanced solutions f...