Ideogram

ideogram.ai

5 Jobs

46 Employees

About the Company

Ideogram is defining the future of design in the age of AI. Turn your ideas into stunning graphic designs in a matter of seconds. It's pronounced eye-dee-oh-gram.

Listed Jobs

Company Name
Ideogram
Job Title
Machine Learning Research Intern
Job Description
**Job Title:** Machine Learning Research Intern

**Role Summary:** Internship focused on advancing visual generative models for design applications. Work alongside research and engineering teams to implement, evaluate, and iterate on diffusion-based and transformer-based models, contributing code, experiments, and insights that inform product features.

**Expectations:**
- Complete high-quality, research-level implementations and experiments during the summer term.
- Translate novel ideas from recent publications into reproducible code.
- Produce clear documentation, reports, and presentations of results for cross-functional stakeholders.
- Engage in rapid prototyping and iterative testing to demonstrate feasibility for production.
- Maintain code quality standards and participate in peer reviews.

**Key Responsibilities:**
- Review and synthesize cutting-edge research papers on generative models, diffusion, and multimodal conditioning.
- Design, implement, and benchmark diffusion-based image generation and captioning models using JAX (preferred) or PyTorch.
- Develop and maintain experiment pipelines, including data preprocessing, hyper-parameter tuning, and metric collection.
- Collaborate with product and design teams to identify viable use cases and validate model outputs.
- Contribute to internal or open-source code repositories, ensuring reproducibility and scalability.
- Communicate findings through concise write-ups, technical demos, and slide decks.

**Required Skills:**
- Strong fundamentals in deep learning with a focus on generative modeling.
- Proficiency in Python; experience with JAX (preferred) and willingness to work with PyTorch.
- Ability to read and interpret research papers, then implement core concepts efficiently.
- Knowledge of transformer architectures and diffusion models.
- Excellent problem-solving, analytical, and debugging skills.
- Effective verbal and written communication in a cross-functional team environment.
- Passion for applying image generation techniques to creative design workflows.
- Portfolio of publications, open-source contributions, or research projects demonstrating depth.

**Required Education & Certifications:**
- Currently enrolled in a Ph.D. or Master's program in Computer Science, Machine Learning, Computer Vision, Graphics, NLP, or a closely related field.
- Exceptional undergraduate candidates with comparable research experience may also be considered.
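Purely as an illustration of the kind of work named above (diffusion-based image generation in JAX), the sketch below shows a single DDPM-style denoising training step. The toy MLP denoiser, dimensions, and hyper-parameters are placeholders for this example, not Ideogram's models or codebase.

```python
# Minimal sketch: one DDPM-style denoising training step in JAX.
# The MLP "denoiser" and all hyper-parameters are illustrative placeholders.
import jax
import jax.numpy as jnp

def init_params(key, dim=64, hidden=256):
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (dim + 1, hidden)) * 0.02,  # +1 input for the timestep
        "w2": jax.random.normal(k2, (hidden, dim)) * 0.02,
    }

def denoiser(params, x_t, t):
    # Predict the noise that was added to x_0, conditioned on the (normalized) timestep.
    inp = jnp.concatenate([x_t, t[:, None]], axis=-1)
    h = jax.nn.gelu(inp @ params["w1"])
    return h @ params["w2"]

def diffusion_loss(params, key, x0, alphas_bar):
    # Sample a timestep and noise, form the noised sample x_t, and regress the noise.
    k_t, k_eps = jax.random.split(key)
    t = jax.random.randint(k_t, (x0.shape[0],), 0, alphas_bar.shape[0])
    eps = jax.random.normal(k_eps, x0.shape)
    a_bar = alphas_bar[t][:, None]
    x_t = jnp.sqrt(a_bar) * x0 + jnp.sqrt(1.0 - a_bar) * eps
    pred = denoiser(params, x_t, t / alphas_bar.shape[0])
    return jnp.mean((pred - eps) ** 2)

@jax.jit
def train_step(params, key, x0, alphas_bar, lr=1e-4):
    loss, grads = jax.value_and_grad(diffusion_loss)(params, key, x0, alphas_bar)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params, loss

if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    betas = jnp.linspace(1e-4, 0.02, 1000)
    alphas_bar = jnp.cumprod(1.0 - betas)
    params = init_params(key)
    x0 = jax.random.normal(key, (8, 64))  # stand-in for image latents
    params, loss = train_step(params, key, x0, alphas_bar)
    print(f"toy denoising loss: {loss:.4f}")
```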
New York, United States
On site
Fresher
03-11-2025
Company Name
Ideogram
Job Title
Research Engineer / Research Scientist (Post-training)
Job Description
**Job Title:** Research Engineer / Research Scientist (Post-training)

**Role Summary:** Design, implement, and maintain end-to-end post-training pipelines for text-to-image foundation models. Drive measurable improvements via RLHF, RLAIF, and personalization while ensuring scalable, high-throughput fine-tuning and evaluation workflows.

**Expectations:**
- 5+ years of experience building ML models in JAX, PyTorch, or TensorFlow.
- Deep familiarity with generative foundations (Transformers, VAEs, denoising diffusion).
- Proven track record of ML innovation and practical deployment.
- Ability to debug and optimize models for performance and quality.
- Strong collaborative mindset with engineers and researchers.

**Key Responsibilities:**
1. Develop data strategy and pipelines for fine-tuning and evaluation of foundation models.
2. Implement and maintain high-throughput fine-tuning, RLHF, and RLAIF workflows.
3. Prototype personalization/curation techniques for text-to-image generation.
4. Conduct rigorous experiments, benchmark results, and interpret findings.
5. Debug, profile, and optimize model performance, including low-level optimizations when needed.
6. Collaborate closely with research and engineering teams on model architecture and deployment.

**Required Skills:**
- Proficiency in JAX, PyTorch, or TensorFlow; able to implement Transformer, VAE, or diffusion models from scratch.
- Experience with RLHF/RLAIF or similar reinforcement-learning-from-human-feedback pipelines.
- Strong debugging and profiling skills for ML models.
- Familiarity with Kubernetes and Docker is desirable.
- Optional: CUDA kernel development and low-level GPU optimization.

**Required Education & Certifications:**
- Bachelor's degree or higher in Computer Science, Electrical Engineering, Applied Mathematics, or a related field.
- Advanced degrees (Master's/Ph.D.) preferred but not required.
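As a hedged illustration of the RLHF/RLAIF work described above, the sketch below shows the pairwise Bradley-Terry loss commonly used to train a reward model from preference data, which is a typical first stage of such pipelines. The linear "reward model" over pre-computed features is a hypothetical placeholder, not Ideogram's pipeline.

```python
# Minimal sketch: pairwise Bradley-Terry preference loss for a toy reward model.
# The linear head over fixed multimodal features is an illustrative placeholder.
import jax
import jax.numpy as jnp

def reward(params, features):
    # Toy reward model: a linear head over pre-computed image/text features.
    return features @ params["w"] + params["b"]

def preference_loss(params, chosen_feats, rejected_feats):
    # Maximize log-sigmoid of reward(chosen) - reward(rejected) for each preference pair.
    margin = reward(params, chosen_feats) - reward(params, rejected_feats)
    return -jnp.mean(jax.nn.log_sigmoid(margin))

@jax.jit
def update(params, chosen_feats, rejected_feats, lr=1e-3):
    loss, grads = jax.value_and_grad(preference_loss)(params, chosen_feats, rejected_feats)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params, loss

if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    dim = 128
    params = {"w": jnp.zeros(dim), "b": jnp.zeros(())}
    k1, k2 = jax.random.split(key)
    chosen = jax.random.normal(k1, (16, dim))    # features of preferred outputs
    rejected = jax.random.normal(k2, (16, dim))  # features of rejected outputs
    params, loss = update(params, chosen, rejected)
    print(f"preference loss: {loss:.4f}")
```

In a full pipeline the trained reward model would then score generations during fine-tuning; this sketch only covers the preference-fitting step.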
Toronto, Canada
On site
Mid level
10-12-2025
Company Name
Ideogram
Job Title
Machine Learning Engineer, Applied AI
Job Description
**Job Title:** Machine Learning Engineer, Applied AI

**Role Summary:** Lead the application of cutting-edge generative AI models into production features for a creative design platform. Translate research-grade diffusion, transformer, and multimodal models into scalable, low-latency, cost-effective backend services that power text-to-image, image-to-text, and image-enhancement APIs. Own the end-to-end ML lifecycle, from data curation and training to evaluation, deployment, and monitoring, while collaborating closely with product, engineering, and infrastructure teams to deliver measurable improvements in quality, latency, and operational efficiency.

**Expectations:**
- Deliver production-ready ML systems that show clear gains in quality, latency, or cost.
- Operate with high ownership, rapid iteration, and cross-functional collaboration.
- Lead 0-to-1 AI initiatives and shape applied AI best practices.
- Maintain rigorous safety, monitoring, and reliability standards for deployed models.

**Key Responsibilities:**
- Design, develop, and maintain backend ML services in Python using PyTorch or JAX.
- Curate, label, and clean dataset pipelines; create balanced evaluation corpora and failure-mode analyses.
- Build benchmarks, define success metrics (e.g., FID, precision/recall, latency, cost), and run error analyses.
- Fine-tune generative models for multimodal creative use cases and deploy them to production APIs.
- Collaborate with product to set success criteria and iterate on features through short release cycles.
- Debug numerical stability issues across training and inference; optimize for throughput and latency.
- Monitor model performance post-deployment, implement safety checks, and drive continuous improvement.
- Lead cross-team efforts to integrate infrastructure, scaling, and observability into ML workflows.

**Required Skills:**
- 3+ years of experience building and shipping ML products.
- Strong Python programming skills; deep familiarity with PyTorch or JAX.
- Expertise in modern deep-learning architectures: transformers, diffusion models, multimodal encoders.
- Proven ability to design evaluation metrics, run benchmarks, and conduct error analyses.
- Experience with data curation, labeling pipelines, and dataset versioning.
- Solid understanding of scaling ML models for production (batching, quantization, GPU/TPU inference).
- Excellent communication and collaboration skills; ability to translate ML concepts for product stakeholders.
- Comfortable with rapid prototyping, A/B testing, and data-driven decision making.

**Required Education & Certifications:**
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, Data Science, or a related field.
- Relevant certifications (e.g., TensorFlow Professional, PyTorch Advanced) are a plus but not mandatory.
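To illustrate the production-scaling concerns mentioned above (batching and GPU/TPU inference), here is a small hedged sketch of padding variable-size request batches to a fixed shape before a jitted forward pass, a common trick for avoiding XLA recompilation. The `MAX_BATCH` constant, the linear "model", and `serve_batch` are hypothetical names for this example, not Ideogram's serving stack.

```python
# Minimal sketch: fixed-shape batching in front of a jitted forward pass.
# The placeholder linear "model" stands in for a real generative network.
import jax
import jax.numpy as jnp

MAX_BATCH = 8  # compile the forward pass once for this batch shape

@jax.jit
def forward(params, x):
    # Placeholder model: a single linear projection.
    return x @ params["w"]

def serve_batch(params, requests):
    """Pad a variable-size list of request vectors to MAX_BATCH, run the jitted
    forward pass, and return only the rows corresponding to real requests."""
    n = len(requests)
    x = jnp.stack(requests)
    pad = MAX_BATCH - n
    if pad > 0:
        x = jnp.concatenate([x, jnp.zeros((pad, x.shape[1]))], axis=0)
    out = forward(params, x)
    return out[:n]

if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    params = {"w": jax.random.normal(key, (64, 32)) * 0.02}
    reqs = [jax.random.normal(jax.random.PRNGKey(i), (64,)) for i in range(3)]
    outputs = serve_batch(params, reqs)
    print(outputs.shape)  # (3, 32): padded rows are dropped before returning
```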
Toronto, Canada
Hybrid
10-12-2025
Company Name
Ideogram
Job Title
Research Engineer / Research Scientist (Pre-training)
Job Description
**Job Title:** Research Engineer / Research Scientist (Pre-training)

**Role Summary:** Drive the frontier of visual generative models through large-scale pre-training of text-to-image foundation models. Shape objectives, algorithms, data strategies, and system architecture to produce production-ready models that power millions of users, while collaborating closely with research and engineering teams.

**Expectations:**
- Publish first-author research in top AI conferences (NeurIPS, ICML, ICLR, CVPR, ECCV, ICCV, ACL, EMNLP).
- Translate novel ideas into production-grade models with robust evaluation and reproducibility.
- Consistently meet project milestones and deliver actionable insights to product teams.
- Communicate complex findings clearly to both technical and non-technical audiences.

**Key Responsibilities:**
- Design and execute large-scale pre-training pipelines for text-to-image foundation models.
- Develop and refine training objectives, loss functions, and regularization techniques.
- Curate, clean, and augment massive multimodal datasets for optimal model performance.
- Optimize model efficiency through algorithmic and systems innovations (distributed training, memory optimizations).
- Evaluate model outputs across metrics (FID, CLIP similarity, user studies) and iterate rapidly.
- Collaborate with product, UX, and engineering to translate research into deployed features.
- Maintain high standards of code quality, reproducibility, and version control.
- Mentor junior researchers and contribute to an open, collaborative research culture.

**Required Skills:**
- 5+ years of AI research experience focused on foundation model training, fine-tuning, and experimentation.
- Proven track record of first-author papers at leading AI venues.
- Expertise in deep learning frameworks (PyTorch, JAX).
- Strong algorithmic thinking and experience with diffusion, GANs, or other generative modeling paradigms.
- Proficiency in Python (and preferably C++/CUDA) with solid software engineering fundamentals.
- Ability to design experiments, debug complex systems, and optimize performance.
- Excellent written and verbal communication skills.

**Required Education & Certifications:**
- Ph.D. or Master's degree in Computer Science, Machine Learning, or a related field.
- Demonstrated academic excellence through first-author publications at NeurIPS, ICML, ICLR, CVPR, ECCV, ICCV, ACL, or EMNLP.
- No professional certifications required.
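As a hedged illustration of the distributed-training work described above, the sketch below shows the basic data-parallel pattern in JAX: replicating parameters, sharding a batch across devices, and averaging gradients with a cross-device collective. The linear placeholder model, learning rate, and data are assumptions for this example only.

```python
# Minimal sketch: a data-parallel training step with jax.pmap and gradient averaging.
# The linear-regression objective is a stand-in for a real pre-training loss.
import functools
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    pred = x @ params["w"]
    return jnp.mean((pred - y) ** 2)

@functools.partial(jax.pmap, axis_name="devices")
def parallel_train_step(params, x, y):
    loss, grads = jax.value_and_grad(loss_fn)(params, x, y)
    # Average gradients across all devices before applying the update.
    grads = jax.lax.pmean(grads, axis_name="devices")
    params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
    return params, loss

if __name__ == "__main__":
    n_dev = jax.local_device_count()
    key = jax.random.PRNGKey(0)
    params = {"w": jnp.zeros((64, 1))}
    # Replicate parameters across devices and shard one global batch per device.
    params = jax.device_put_replicated(params, jax.local_devices())
    x = jax.random.normal(key, (n_dev, 16, 64))
    y = jnp.ones((n_dev, 16, 1))
    params, loss = parallel_train_step(params, x, y)
    print("per-device loss:", loss)
```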
Toronto, Canada
On site
Mid level
11-12-2025