Workonomics

www.workonomics.co.uk

4 Jobs

4 Employees

About the Company

WE HELP COMPANIES

We recruit for tech firms of all shapes & sizes.

Start-up | Scale-up | Grown-up

We help companies attract game-changing technical talent.


WE HELP CANDIDATES

We recruit engineers across a range of disciplines.

Software | Product | Infrastructure | Data | ML

We help talented technologists find fulfilling, mission-oriented work.

Listed Jobs

Company Name
Workonomics
Job Title
Software Engineer
Job Description
**Job Title:** Software Engineer – Product / Distributed Systems

**Role Summary:** Design, implement, and maintain backend services that deliver AI‑driven automation for enterprise clients. Focus on building scalable, reliable distributed systems and integrating large‑language‑model capabilities into real‑world operational workflows.

**Expectations:**
- Deliver production‑grade code with high reliability.
- Own feature implementation from requirement gathering to deployment.
- Work in a small, collaborative team with direct user interaction.
- Benefit from a compensation range that reflects experience level.

**Key Responsibilities:**
- Develop and test backend applications in a cloud‑native environment.
- Design and expose RESTful APIs and event‑driven integrations.
- Build scalable data pipelines, queue systems, and operational dashboards.
- Embed AI/LLM services to enhance automation workflows.
- Collaborate with product, UX, and customer teams to translate business needs into technical solutions.
- Monitor system health, troubleshoot incidents, and implement performance improvements.
- Participate in code reviews, architectural discussions, and continuous integration practices.

**Required Skills:**
- 3+ years of software development with a strong emphasis on distributed systems (Python, Node.js, Java, or equivalent).
- Deep understanding of API design, microservices, and high‑availability architecture.
- Hands‑on experience with AWS services (Lambda, ECS/EKS, SQS, SNS, DynamoDB, Step Functions).
- Proficiency in data pipeline construction, message queuing, and sensor or transactional data handling.
- Curiosity and willingness to experiment with AI/LLM technologies.
- Excellent communication skills to engage with enterprise customers and translate complex workflows into code.
- Familiarity with CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes).
- Ability to troubleshoot performance bottlenecks and design scalable solutions.
- Comfortable with version control (Git) and issue‑tracking tools.

**Required Education & Certifications:**
- Bachelor's (or Master's) degree in Computer Science, Software Engineering, or a related field.
- Optional: relevant certifications such as AWS Certified Developer – Associate, AWS Certified Solutions Architect – Associate, or similar cloud architecture credentials.
London, United Kingdom
On site
19-01-2026
Company Name
Workonomics
Job Title
Data Analyst
Job Description
Job Title: Data Analyst

Role Summary: Mid-level data analyst responsible for extracting, transforming, and analyzing large-scale datasets to support AI-driven market structure modeling. Works closely with engineering and product teams to develop data pipelines, build feature sets, and deliver actionable insights through dashboards.

Expectations:
- Deliver high-quality, clean data for machine learning experiments.
- Convert raw data into structured features for the web application.
- Translate complex analytical results into clear, actionable metrics for non‑technical stakeholders.

Key Responsibilities:
- Perform statistical analysis and data interpretation on terabyte‑scale datasets.
- Design, implement, and maintain data extraction, transformation, and loading (ETL) workflows.
- Build and maintain interactive dashboards (Tableau, Power BI, Hex, PostHog) to monitor key performance indicators.
- Collaborate with product and engineering teams to define new data requirements and feature engineering strategies.

Required Skills:
- Proficiency in Python for data manipulation, scripting, and feature engineering.
- Strong SQL skills for data exploration, querying, and analysis.
- Experience with data visualization tools (Tableau, Power BI, Hex, PostHog).
- Familiarity with modern data warehousing solutions (Snowflake, BigQuery, Redshift, MotherDuck, ClickHouse).
- Solid understanding of statistical methods and data interpretation.
- Bonus: knowledge of financial modeling and metrics, and experience with SQL‑based pipeline tools such as dbt or SQLMesh.
- Passion for open‑source contributions.

Required Education & Certifications:
- Bachelor's degree in Computer Science, Data Science, Statistics, or a related field (or equivalent professional experience).
- Relevant certifications in SQL, Python, or data analytics are a plus.
London, United Kingdom
Hybrid
04-02-2026
Company Name
Workonomics
Job Title
Analytics Engineer
Job Description
**Job title:** Analytics Engineer

**Role summary:** Mid‑level analytics engineer who will design, build, and maintain scalable data pipelines and models using SQLMesh, dbt, and Python. The role focuses on transforming raw data into reliable datasets, creating dashboards for internal stakeholders, and supporting machine‑learning research by preparing high‑quality data.

**Expectations:**
* Deliver accurate, test‑driven data models and pipelines that fuel the core web application and internal tools.
* Provide robust data preparation and feature engineering for ML experiments.
* Translate complex data insights into user‑friendly dashboards for non‑technical audiences.

**Key responsibilities:**
1. Design, implement, and maintain data models and ETL pipelines with SQLMesh and dbt.
2. Execute large‑scale data exploration and transformation: write complex SQL queries, optimize performance, and create new metrics.
3. Source, clean, and structure datasets for research teams; ensure data integrity, completeness, and consistency.
4. Build and maintain interactive dashboards (Power BI, Hex, PostHog, or similar) to track key metrics.
5. Collaborate with data scientists and product teams to iterate on data requirements and feature definitions.
6. Perform thorough data quality checks and maintain documentation of data processes and lineage.

**Required skills:**
* SQL – advanced modelling, query writing, and optimisation.
* dbt / SQLMesh – experience building, testing, and deploying data transformations.
* Python – for data manipulation, scripting, and automation.
* Dashboard & visualisation tools – Power BI, Hex, PostHog, or equivalent.
* Data warehousing – ClickHouse, Snowflake, BigQuery, or Redshift.
* Strong data quality focus: meticulous verification of accuracy, completeness, and consistency.
* Excellent communication for conveying technical findings to non‑technical stakeholders.
* Bonus: familiarity with financial modelling and metrics.
* Passion for open‑source communities.

**Required education & certifications:**
* Bachelor's (or higher) degree in Computer Science, Data Science, Statistics, Engineering, or a related technical field.
* No mandatory certifications, though data‑engineering or analytics certifications (e.g., Google Cloud Professional Data Engineer, SnowPro) are a plus.
London, United Kingdom
Hybrid
26-02-2026
Company Name
Workonomics
Job Title
Senior Site Reliability Engineer
Job Description
**Job title:** Senior Site Reliability Engineer

**Role Summary:** Senior SRE leading reliability and observability initiatives across a high‑throughput, real‑time decision platform. Drives infrastructure modernization (Terraform + Kubernetes), rebuilds the observability stack, and explores AI‑assisted operations to meet five‑nines reliability targets for a multi‑team SaaS product.

**Expectations:**
- Deliver scalable, secure, highly available infrastructure and observability.
- Own end‑to‑end incident response, post‑mortem analysis, and continuous improvement cycles.
- Collaborate cross‑functionally with product, security, and DevOps teams.
- Publish best‑practice guidelines and tooling for use by 150+ engineers.
- Demonstrate measurable improvements in reliability and observability coverage.

**Key Responsibilities:**
- Lead the migration from legacy CloudFormation/EC2 stacks to Terraform‑based, Kubernetes‑oriented infrastructure.
- Design and implement modular, reusable Terraform modules and Kubernetes operators.
- Build and maintain the observability architecture (logging, metrics, tracing, and alerting), moving from ELK to a modern stack (e.g., Loki, Prometheus/Thanos, Tempo, or equivalent).
- Set up comprehensive instrumentation for microservices in Python, Go, or JavaScript.
- Develop and enforce operational guardrails, SLO/SLA definitions, and error budgets.
- Pilot AI/ML tools to reduce toil (incident classification, root‑cause discovery, automated remediation).
- Mentor junior SREs and engineering teams on best practices.
- Participate in the on‑call rotation and lead post‑mortem documentation.

**Required Skills:**
- Deep expertise with AWS services (EKS, EC2, S3, RDS, etc.).
- Advanced Terraform skills: module design, state management, CI/CD integration.
- Kubernetes fundamentals (cluster operations, Helm, CRDs, RBAC, networking).
- Modern programming in Python, Go, or JavaScript (API clients, automation scripts).
- Proven experience building observability solutions (logs, metrics, traces).
- Incident response, root‑cause analysis, and SLO/SLA/error‑budget implementation.
- Familiarity with CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins).
- Strong command‑line, scripting, and debugging abilities.

**Bonus Skills:**
- Experience optimizing observability cost/performance (data retention strategies, sampling).
- Contributions to open‑source monitoring or reliability tools.
- AI/ML experimentation in an SRE context (e.g., incident chatbot, anomaly detection).
- Knowledge of chaos engineering and reliability testing.

**Required Education & Certifications:**
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent professional experience).
- Professional certifications such as AWS Certified Solutions Architect, AWS Certified DevOps Engineer – Professional, or Certified Kubernetes Administrator (CKA) preferred.
London, United Kingdom
Hybrid
Senior
13-03-2026