team.blue

Data Engineer

Remote

Paris, France

Senior

Full Time

10-11-2025


Skills

Communication, Leadership, SQL, Data Governance, MySQL, PostgreSQL, CI/CD, Docker, Kubernetes, Problem-solving, Team Management, SQL Server, Back-end Development, Marketing Analytics, Snowflake, Databricks, Unity Catalog, PySpark, E-commerce, Mathematics

Job Specifications

Our Business

team.blue is an ecosystem of successful brands working together across regions to provide customers with everything they need to succeed online. The group comprises 60+ successful brands, with a team of more than 3,000 experts serving its 3.5 million customers across Europe and beyond.

team.blue's brands are a mix of traditional hosting businesses, offering services ranging from domain names, email, shared hosting and e-commerce to server hosting solutions, and specialist SaaS providers offering adjacent products such as compliance, marketing tools and team collaboration products. This broad product offering makes it a one-stop partner for online businesses and entrepreneurs across Europe.

Position Overview

As a Senior Data Engineer, you will develop and maintain the central data warehouse (DWH) and help us extract the most value from our data while meeting our growing data needs. Your work will center on back-end development, focusing on building optimized data structures with best-in-class load performance. You will help us integrate newly acquired businesses into the central DWH quickly and efficiently, in addition to improving and expanding existing structures to enable an even more data-driven approach. To this end, your role will be crucial in supporting the Data & Analytics team in increasing the data available centrally and ensuring performant, easy access for end users. You will play a key role in transforming data into actionable insights that inform business strategies, improve customer processes, and drive overall performance.

Responsibilities:

Design, develop, and maintain efficient and scalable data pipelines to ingest, process, and transform large volumes of structured and unstructured data from various sources, ensuring timely and accurate data delivery to downstream systems and applications.
Design and implement data models and schemas to support analytical and operational requirements, ensuring data integrity, consistency, and efficiency.
Manage and optimize data platforms for performance, security, and scalability, including schema design, indexing, and query optimization.
Integrate data from various internal and external sources (APIs, databases, files, streams, queues, third-party systems, etc.) to enrich and enhance our data assets.
Understand or quickly learn business concepts in the domains of marketing, finance, private equity, M&A, and more.
Provide architectural leadership: design and guide the implementation of scalable, secure, and cost-efficient data architectures.
Champion data governance and best practices: apply data modeling, orchestration, observability, and governance principles (lineage, quality, metadata management), and establish and enforce best practices across teams.

Your Strengths:

Self-motivated, ambitious, and collaborative
Willing to take on project and team management whenever needed
Able to turn ideas into actionable steps and solutions
Strong communication skills, translating technical to business language
Adaptable, detail-oriented, and skilled in problem-solving
Capable of managing multiple projects and meeting deadlines in a fast-paced environment

Technical skills required:

Strong proficiency in SQL (advanced query optimization, stored procedures, indexing strategies) and data manipulation across multiple RDBMS platforms: SQL Server, Oracle, PostgreSQL, MySQL, and MariaDB.
Hands-on experience with one or more cloud data platforms like Databricks (PySpark, Delta Lake, Unity Catalog) and Google BigQuery (SQL, partitioning, materialized views).
Familiarity with object storage systems (S3, ADLS, GCS) and data formats (Parquet, ORC, Avro, JSON).
Deep hands-on knowledge of workflow orchestration tools such as Airflow, dbt, or similar.
Hands-on experience with data versioning, schema evolution, and metadata management in distributed systems.
Experience with containerization (Docker, Kubernetes), and CI/CD pipelines.
Excellent knowledge of data warehouse modelling techniques (dimensional model, star schema, snowflake design).

Education and work experience:

At least 7 years of relevant experience, working in data management / engineering, ideally in a related industry
Advanced degree in Computer Science or another STEM subject (Mathematics, Physics, Statistics, Engineering, etc.), or equivalent relevant work experience
Demonstrable track record in developing data intelligence solutions that are stable and high-performing, provide business value, and use resources efficiently

Right to Work

At any stage, please be prepared to provide proof of eligibility to work in the country you’re applying for. Unfortunately, we are unable to support relocation packages or visa sponsorship.

"Come as you are"

Everyone is welcome here. Diversity & Inclusion are at our core. Far above any technical competence, we value respect, openness, and trusted collaboration. We do not tolerate intolerance.

ESG

"At team.blue, our commitment to caring for the environment and each othe
