SEEKR

www.seekr.inc

2 Jobs

6 Employees

About the Company

SEEKR is a recruitment company that resides somewhere between a network and a community: it is partly a business and partly a social experiment. In an age of faceless electronic communication and metric-driven “talent acquisition”, is there a place for putting people back at the forefront of what this is all about?

Listed Jobs

Company Name
SEEKR
Job Title
Lead Backend Engineer
Job Description
Job title: Lead Backend Engineer

Role Summary: Architect, build, and operate the data foundation for a high-velocity cost-revenue intelligence platform that ingests millions of real-time events from autonomous AI agents. Own end-to-end pipelines, databases, APIs, and predictive analytics to deliver real-time margin insights for AI companies.

Expectations:
- Design and scale distributed, low-latency data pipelines that process millions of events daily.
- Own the architecture of storage, attribution graph engines, and analytical workloads.
- Deliver robust, secure, observable services that enable real-time dashboards and partner integrations.
- Drive data-driven innovation by integrating ML for anomaly detection, forecasting, and margin optimization.
- Lead technical decisions, mentor junior engineers, and set standards for reliability and performance.

Key Responsibilities:
- Build and maintain streaming data pipelines using Kafka/Cloudflare Workers or equivalent.
- Model and implement high-throughput databases (PostgreSQL, ClickHouse) and graph engines for event attribution.
- Design, document, and evolve REST/GraphQL APIs (or minimalist custom protocols) for external consumption.
- Deploy services on AWS/GCP using Docker, Kubernetes, and CI/CD pipelines.
- Implement observability, logging, tracing, and security controls across the stack.
- Collaborate with front-end, analytics, and ML teams to operationalize predictive models.
- Continuously improve performance, cost efficiency, and reliability of back-end services.
- Keep abreast of new data-engineering and AI tooling; propose optimal solutions.

Required Skills:
- 3+ years of designing and scaling distributed backend systems with high throughput.
- Mastery of Python or Go (preferred) for backend development and data processing.
- Strong experience with event-driven architectures and real-time streaming frameworks.
- Deep knowledge of SQL (PostgreSQL) and analytical databases (ClickHouse, BigQuery).
- Expertise in API design (REST, GraphQL) and performance optimization.
- Proficiency in cloud infrastructure (AWS or GCP), containerization, Kubernetes, and CI/CD.
- Solid understanding of security fundamentals, observability, and reliability engineering.
- Excellent communication, documentation, and problem-solving skills.
- Bonus: experience with ML pipelines, agentic AI domains, or scaling startups.

Required Education & Certifications:
- Bachelor’s or higher degree in Computer Science, Engineering, or a related field.
- Relevant certifications (e.g., AWS Certified Solutions Architect, GCP Professional Data Engineer) are a plus.
London, United Kingdom
Hybrid
Senior
02-10-2025
Company Name
SEEKR
Job Title
AI Engineer
Job Description
**Job title** AI Engineer

**Role Summary**
Design, develop, and deploy large language model (LLM) components for a commercial product. Work in a fast-paced startup environment with a focus on experimentation, iterative improvement, and delivering AI-driven features that meet customer needs.

**Expectations**
- Deliver production-ready AI solutions on schedule.
- Drive continuous improvement of model accuracy, latency, and cost efficiency.
- Collaborate with product, data science, and operations teams to align AI capabilities with business goals.
- Participate in code reviews, documentation, and knowledge sharing.
- Operate in a hybrid work setting with remote flexibility.

**Key Responsibilities**
- Build and maintain LLM pipelines (data ingestion, preprocessing, fine-tuning, inference).
- Fine-tune, evaluate, and optimize large language models on proprietary data.
- Design and implement APIs, microservices, and deployment workflows for AI features.
- Monitor and troubleshoot model performance, latency, and run-time costs.
- Experiment with prompt engineering, hyper-parameter tuning, and architectural variations.
- Integrate AI components into the broader product stack and CI/CD pipelines.
- Ensure compliance with data privacy, security, and AI ethics guidelines.
- Communicate results to stakeholders and translate business requirements into technical solutions.

**Required Skills**
- Strong programming in Python with experience in PyTorch, TensorFlow, or Hugging Face Transformers.
- Proven commercial experience building LLM-based products or services.
- Expertise in ML/AI operations: Docker, Kubernetes, CI/CD, and cloud ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML).
- Familiarity with data pipelines, ETL, streaming inference, and real-time performance monitoring.
- Ability to conduct prompt engineering, hyper-parameter optimization, and model compression.
- Proficiency in unit testing, automated testing, and continuous integration.
- Excellent problem-solving, debugging, and cross-functional communication skills.
- Understanding of AI ethics, bias mitigation, and data privacy best practices.

**Required Education & Certifications**
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, Data Science, or a related discipline.
- Optional certifications: AWS Certified Machine Learning – Specialty, Google Cloud Professional Machine Learning Engineer, Azure AI Engineer Associate, or equivalent.
London, United Kingdom
Hybrid
08-10-2025