StitcherAI

stitcher.ai

1 Job

5 Employees

About the Company

StitcherAI provides an essential system of record for enterprise IT Finance teams striving to maximize the value of their IT investments. To tackle today’s IT Finance challenges, organizations need accurate, actionable, business-aligned data and an engagement model that enables alignment and action across the enterprise. Traditional FinOps and IT Finance tools don’t deliver results, prompting many companies to build their own solutions, which often carry risks and limitations. StitcherAI addresses these gaps with its AI-powered system of record for finance: it creates business-aligned IT Finance datasets and delivers critical data directly to stakeholders, tools, and business processes, enabling meaningful action. Connect with us to discover the future of IT Finance!

Listed Jobs

Company Name
StitcherAI
Job Title
Back End Developer
Job Description
**Job Title**

Back End Developer (Staff Data Engineer)

**Role Summary**

Design, develop, and operate high-performance, cloud-native backend services and data pipelines for an AI-powered low-code cost analytics platform. Deliver scalable, performant solutions that integrate across multi-cloud environments, SaaS APIs, and diverse storage systems, while providing reliable REST APIs and supporting the product lifecycle from design to deployment.

**Expectations**

- Act with a “founder” mindset: proactive, end-to-end ownership, and a strong work ethic.
- Consistently deliver measurable results and help scale the product and team.
- Excel in cross-functional collaboration and communication, and adapt to shifting priorities in a startup culture.

**Key Responsibilities**

1. Build and maintain enterprise-scale data pipelines, ensuring reliability, performance, and cost-efficiency.
2. Design backend services with a focus on high throughput, low latency, and scalability using open-source technologies.
3. Orchestrate data workflows with Temporal, Airflow, or equivalent systems.
4. Integrate platform components with multiple cloud providers (AWS, Azure, GCP), SaaS APIs, and various storage formats (Parquet, CSV, Avro).
5. Develop, test, and deploy cloud-native microservices (REST APIs) in Docker/Kubernetes clusters.
6. Implement monitoring, logging, metrics, and CI/CD pipelines.
7. Collaborate with data scientists and product teams to expose analytics features through the low-code interface.
8. Tune the performance of distributed aggregations, transformations, clustering, partitioning, and storage strategies.

**Required Skills**

- 5+ years building and maintaining large-scale data platforms.
- 3+ years of Python and Rust proficiency in cloud-native development.
- Expertise with Pandas, Polars, and performance-critical data processing.
- Hands-on experience with distributed data technologies: Hadoop, Hive, Spark, EMR.
- Proven ability to orchestrate pipelines using Temporal, Airflow, or similar.
- Cloud integration: AWS, Azure, GCP, SaaS provider APIs, and storage systems.
- Backend development: REST APIs, JSON, gRPC.
- DevOps: Kubernetes, Docker, CI/CD, logging, metrics, cloud-native best practices.
- Optional: AI/ML forecasting, anomaly detection, GenAI model training/serving; familiarity with FinOps concepts.

**Required Education & Certifications**

- Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or equivalent professional experience.
- Relevant certifications (e.g., AWS Certified Solutions Architect, Certified Kubernetes Administrator, or similar) are advantageous but not mandatory.
Toronto, Canada
Remote
Mid level
18-12-2025