Job Specifications
Backend Software Engineer — Data Ingestion
At Klaviyo, we love tackling tough engineering problems and look for builders who are passionate about designing, owning, and scaling systems end to end. We value engineers who take initiative, push technical boundaries, and enjoy learning new technologies to make an impact every day.
About the Role
We’re looking for a backend engineer with experience designing and optimizing large-scale data systems — particularly those that power high-volume data ingestion and processing. You’ll help evolve the backbone of Klaviyo’s data infrastructure, enabling billions of daily events to flow reliably through our platform and fuel analytics, personalization, and AI-driven insights.
You’ll own key components of our data ingestion and processing pipelines, helping shape how data is collected, stored, and made available across Klaviyo. This role blends software engineering, distributed systems, and data infrastructure — ideal for someone who loves solving complex scaling challenges.
Please note: This role is based in Boston, MA and requires a hybrid in-office component.
Tech Stack
You don’t need to know everything on day one, but familiarity with some of these technologies will help you ramp quickly:
Languages: Python (Node.js or Java experience is a plus)
Data Processing: Apache Spark, Apache Flink
Workflow Orchestration: Airflow
Streaming: Kafka, Apache Pulsar
Storage: MySQL, AWS S3, Redshift
Infrastructure: Kubernetes, AWS (EMR, Lambda, etc.)
How You’ll Make an Impact
Build and optimize scalable, fault-tolerant data ingestion pipelines that process billions of daily events.
Design and maintain real-time and batch data workflows, ensuring high availability and low latency.
Implement robust failure recovery and monitoring mechanisms to keep our systems reliable at scale.
Optimize distributed compute and storage systems for performance and cost efficiency.
Collaborate with product and infrastructure teams to deliver clean, actionable datasets that power analytics and machine learning.
Contribute to the technical direction of our data platform, driving improvements in reliability, observability, and scalability.
Mentor peers, share best practices, and help shape the future of Klaviyo’s data systems.
What We’re Looking For
4+ years of software engineering experience, including 2+ years focused on data-intensive or distributed systems.
Strong coding skills in Python and SQL (experience with backend development frameworks is a plus).
Hands-on experience with distributed data frameworks such as Spark or Flink.
Proven experience designing and maintaining ETL/ELT pipelines in the cloud (AWS preferred).
Familiarity with streaming systems (Kafka, Pulsar) and workflow orchestration tools (Airflow).
Understanding of data modeling, storage optimization, and data governance best practices.
Excellent problem-solving and collaboration skills; able to work effectively in fast-paced, cross-functional environments.
Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent experience.
Curiosity and excitement about leveraging AI tools and workflows to improve efficiency and scale.
About the Company
Klaviyo is the only CRM built for B2C Brands. Powered by its built-in data platform and AI insights, Klaviyo combines marketing automation, analytics, and customer service into one unified solution. With your data all in one place, you can know, engage, and grow your audience like never before.