Zendar

www.zendar.io

1 Job

64 Employees

About the Company

Zendar develops perception technology that enables autonomous systems to accurately understand and interact with the physical world. Zendar began as a pioneer in radar science, developing a first-of-its-kind "Distributed Aperture Radar" that enabled high-resolution 4D imaging (3D + velocity) with small, affordable radar sensors. Today, Zendar combines its innovative signal processing and sensor fusion techniques with artificial intelligence for highly accurate, real-time object mapping that enables autonomous systems to see the world and interact with it safely and effectively.

We are a highly diverse group of individuals driven by a shared mission: ensuring that the future of autonomy is both safe and accessible to all.

Working at Zendar

Zendar is committed to supporting our employees through professional growth, competitive benefits, and a fun, exciting, and rewarding work culture where individuals can thrive. The passion and expertise of our team drive our company vision. In addition to competitive compensation and equity, Zendar offers excellent benefits, including daily catered lunch; medical, dental, and vision coverage; life insurance; FSAs; and more.

Listed Jobs

Company Name
Zendar
Job Title
Machine Learning Research Intern (for PhDs) - World Modeling / Scene Understanding
Job Description
**Role Summary:**

Contribute to developing scene-level world models for autonomous systems in automotive and robotics, integrating multi-sensor data (radar, camera, lidar) to estimate occupancy, motion, and environmental dynamics. Focus on spatio-temporal deep learning, sensor fusion, and scalable perception solutions for real-time autonomy applications.

**Expectations:**

- Current PhD candidate in Computer Science, Robotics, or a related field.
- Demonstrated expertise in spatio-temporal modeling, scene-level world modeling, and sensor data analysis.
- Prior experience with unsupervised/semi-supervised learning and deep learning frameworks.

**Key Responsibilities:**

- Design and implement scene-level world models for occupancy, flow, and dynamics.
- Develop and evaluate spatio-temporal deep learning architectures for real-world sensor data.
- Analyze multi-modal datasets (radar, camera, lidar) to refine model representations.
- Define and validate metrics for model performance across edge cases.
- Collaborate with engineering teams to prototype and deploy perception models.

**Required Skills:**

- Proficiency in Python and deep learning frameworks (PyTorch, TensorFlow).
- Experience with spatio-temporal modeling, occupancy-flow estimation, and sensor fusion.
- Strong background in computer vision, robotics perception, or machine learning.
- Familiarity with real-world sensor data processing and autonomy applications.

**Required Education & Certifications:**

- Pursuing a PhD in Computer Science, Robotics, Electrical Engineering, or equivalent.
- No additional certifications required.
Berkeley, United States
On-site
Fresher
03-03-2026