dataroots


dataroots.io

5 Jobs

138 Employees

About the Company

Our Vision:

At Dataroots, we harness the power of AI and data-driven solutions to revolutionize how companies operate. We believe in turning data into a competitive advantage in terms of processes, customer interactions, and legal compliance.

What We Do:

Data Strategy & Governance: Partnering with you for a data-driven future. We focus on enhancing data maturity, driving impactful outcomes, and ensuring compliance.

Artificial Intelligence: Combining domain expertise with advanced AI to optimize operations, mitigate risks, and enrich customer experiences.

Cloud Native Data Platforms: Specializing in robust, scalable cloud-native platforms for an efficient data ecosystem.

Our Culture:

Diversity: Championing diversity for creativity and innovation.

Knowledge Sharing: Promoting growth through open and collaborative learning.

Passion & Pride: Driven by our passion for tech excellence and pride in our work.

Trustworthiness: Building lasting, respectful partnerships.

Global Presence:

As part of the Talan Group, we leverage a global network to deliver top-notch expertise and support. Our presence enables us to provide innovative solutions and managed services worldwide.

Join Us:

We're looking for passionate individuals to join our journey in reshaping the AI and data landscape.

Get in Touch:

Let's connect! We're here for collaborations, questions, or a chat over coffee.

Listed Jobs

Company Name
dataroots
Job Title
Machine Learning Engineer
Job Description
**Job Title**
Machine Learning Engineer

**Role Summary**
Design, develop, and deploy end-to-end machine learning and deep-learning solutions within a DataOps/MLOps framework, collaborating with data engineers, cloud architects, and stakeholders to deliver client-centric, high-performance models.

**Expectations**
- 2+ years of experience with machine learning and big data.
- Master's or PhD in a quantitative field (e.g., Computer Science, Statistics, Mathematics, Physics).
- Demonstrated passion for advancing AI solutions and continuous learning.
- Fluent in English; spoken Dutch.

**Key Responsibilities**
- Build, train, and evaluate machine learning and deep-learning models using production-ready frameworks.
- Implement MLOps pipelines: model versioning, automated testing, CI/CD, monitoring, and rollback mechanisms.
- Deploy models to cloud platforms (AWS, Azure, or GCP) and manage the associated infrastructure.
- Collaborate with data engineers to build robust data pipelines and feature stores.
- Mentor junior team members and share knowledge across the organization.
- Translate complex technical concepts into clear communication for non-technical stakeholders.

**Required Skills**
- Proficient in Python (libraries such as scikit-learn, TensorFlow, PyTorch, or Keras) and SQL.
- Experience with ML versioning tools (e.g., MLflow, DVC).
- Familiarity with cloud services for ML: compute, storage, and managed ML services.
- Understanding of software design principles and type safety.
- Strong problem-solving abilities and a pragmatic approach to large-scale data challenges.
- Excellent communication and documentation skills.

**Required Education & Certifications**
- Master's or PhD in a quantitative discipline.
- Relevant cloud certifications (AWS Certified Machine Learning – Specialty, Azure AI Engineer Associate, or Google Cloud Professional Machine Learning Engineer) are advantageous but not mandatory.
Brussels, Belgium
Hybrid
Junior
12-01-2026
Company Name
dataroots
Job Title
Data platform engineer
Job Description
Job Title: Data Platform Engineer

Role Summary:
Design, build, and maintain scalable, resilient data platforms that enable data engineers, analysts, and scientists to transform raw data into actionable insights efficiently and securely. Own infrastructure, tooling, and standards across cloud, orchestration, governance, and observability to balance performance, cost, and reliability.

Expectations:
- Own the end-to-end data platform lifecycle, from cloud provisioning to pipeline execution and monitoring.
- Collaborate with cross-functional teams to translate data requirements into platform capabilities.
- Ensure high availability, data security, compliance, and observability while continuously optimizing for performance and cost.
- Drive DataOps and MLOps practices to deliver production-ready solutions.

Key Responsibilities:
- Design and implement scalable cloud-based data platforms (AWS, Azure, GCP, or hybrid).
- Build and maintain ELT pipelines using Airflow, Prefect, Dagster, Spark, dbt, or equivalent tools.
- Provision and manage infrastructure with Terraform, Docker, Kubernetes, and related tooling.
- Deploy, configure, and optimize data services such as Databricks, Snowflake, Azure Data Services, or AWS data services.
- Implement data governance, security, and compliance controls across the platform.
- Establish observability, monitoring, and alerting for data pipelines and infrastructure.
- Create and maintain CI/CD pipelines for data and infrastructure code.
- Troubleshoot platform issues, perform root-cause analysis, and implement preventive measures.
- Mentor and collaborate with data engineers, scientists, and analysts to promote best practices.

Required Skills:
- Strong data engineering and cloud architecture expertise.
- Proficiency with ELT pipeline tools: Airflow, Prefect, Dagster, Spark, dbt, etc.
- Software engineering skills: Git, Python, SQL.
- Cloud engineering: Terraform, Docker, Kubernetes, CI/CD.
- Experience with at least one major data platform (Databricks, Snowflake, Fabric, Azure Data Services, AWS data services).
- Knowledge of data governance, security, and compliance frameworks.
- Familiarity with observability tools (monitoring, logging, alerting).
- Excellent problem-solving, communication, and collaboration skills.
- Agile mindset and ability to adapt to evolving data needs.

Required Education & Certifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Software Engineering, or a related field.
- Cloud platform certifications (AWS Certified Data Analytics, Azure Data Engineer Associate, GCP Professional Data Engineer) preferred.
- Databricks or Snowflake certifications are a plus.
Ghent, Belgium
Hybrid
22-01-2026
Company Name
dataroots
Job Title
Analytics Engineer
Job Description
**Job Title**
Analytics Engineer

**Role Summary**
Design, build, and maintain analytics-ready data assets by translating business requirements into scalable data models and transformations. Collaborate closely with data engineers, analysts, and stakeholders to deliver high-quality, query-ready datasets for reporting and insights.

**Expectations**
- Deliver production-grade data models and transformations on time.
- Translate business questions into technical solutions while challenging unrealistic requirements.
- Maintain strong communication with non-technical stakeholders and technical teams.
- Keep abreast of cloud, tooling, and industry best practices.

**Key Responsibilities**
- Build and test data models using dbt or equivalent frameworks.
- Write and optimize complex SQL queries for data extraction and transformation.
- Develop and maintain data pipelines from source to data warehouse.
- Collaborate with data engineers on data ingestion, storage, and performance tuning.
- Document data models, assumptions, and lineage for transparency.
- Participate in code reviews and CI/CD processes to ensure quality and version control.
- Communicate findings and model designs to business stakeholders in clear, actionable language.

**Required Skills**
- 3+ years in data engineering or analytics roles involving end-to-end data projects.
- Advanced SQL proficiency; familiarity with Python scripting.
- Experience with dbt and BI tools (Tableau, Power BI, Looker, etc.).
- Strong grasp of data-modeling concepts (star, snowflake, dimensional, functional).
- Comfort with major cloud providers (AWS, GCP, Azure) and their data services.
- Knowledge of CI/CD principles and Git for version control.
- Fluent in English; knowledge of Dutch and/or French is a plus.
- Analytical mindset with strong problem-solving and communication skills.

**Required Education & Certifications**
- Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Optional certifications: Snowflake, BigQuery (GCP), dbt, or similar data platform credentials.
Louvain, Belgium
Hybrid
Junior
03-02-2026
Company Name
dataroots
Job Title
Junior Machine Learning Engineer
Job Description
**Job Title**
Junior Machine Learning Engineer

**Role Summary**
Entry-level engineer responsible for designing, building, and deploying machine-learning models in a consulting environment. Works with data engineers, cloud architects, and data strategists to deliver AI-driven solutions that meet client business objectives. Requires foundational knowledge of ML theory, programming in Python, and exposure to cloud platforms.

**Expectations**
- Develop and experiment with generative AI solutions, applying theoretical knowledge to practical projects.
- Collaborate with cross-functional teams to understand client problems and translate them into model specifications.
- Continuously learn new tools, frameworks, and industry trends through internal training and peer knowledge sharing.
- Communicate findings and model insights clearly to technical and non-technical stakeholders.

**Key Responsibilities**
- Build, train, and fine-tune ML models, ensuring they are scalable and production-ready.
- Use Python for data preprocessing, feature engineering, model development, and experimentation pipelines.
- Use cloud services (AWS, GCP, or Azure) and data platforms such as Spark or Databricks for data ingestion, processing, and model deployment.
- Participate in MLOps activities, including version control, CI/CD pipelines, and monitoring of deployed models.
- Collaborate with clients to define problems, gather requirements, and deliver actionable AI solutions.
- Mentor teammates and share knowledge across the team.

**Required Skills**
- Strong foundation in machine-learning theory and hands-on experience training and deploying at least one AI model.
- Proficient in Python programming.
- Familiarity with at least one major cloud platform (AWS, GCP, Azure) and basic cloud services (compute, storage, ML services).
- Understanding of Spark, Databricks, or similar big-data tools.
- Basic knowledge of MLOps/CI-CD pipelines for ML workflows.
- Comfortable learning new technologies and frameworks.
- Excellent communication skills in English; Dutch fluency required; French advantageous.

**Required Education & Certifications**
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or a related field.
- Relevant certifications in cloud platforms or ML (e.g., AWS Certified Machine Learning, GCP Specialist, Azure ML Engineer) preferred but not mandatory.
Leuven, Belgium
Hybrid
Junior
03-03-2026