Open Cube Ltd (AI CPU)


www.opencubeai.com

1 Job

1 Employee

About the Company

Open Cube Ltd delivers next-generation AI computing through PiCube, a distributed processor architecture designed to make high-performance AI accessible, efficient and scalable. Our approach departs from traditional GPU-based designs by combining a CISC-driven tensor processor with process-in-memory technology, enabling powerful AI computation on mature fabrication nodes with affordable, high-volume manufacturability—without the need for expensive HBM or advanced 3D packaging. PiCube-based accelerator cards and servers provide competitive performance for large language models (LLMs), multimodal workloads and edge-AI inference, while consuming significantly less energy. Open Cube collaborates internationally with universities, researchers and industry partners to explore new application domains and advance the possibilities of low-cost, low-energy AI computing.

Listed Jobs

Company Name
Open Cube Ltd (AI CPU)
Job Title
AI Hardware–Software Systems Engineer
Job Description
Role Summary:
Design, develop, and optimize system-level software for a custom AI computing architecture that blends a CISC-driven tensor processor with process-in-memory technology. The position involves porting large language model inference, creating high-performance AI operators, and extending a Python-based compiler ecosystem to support efficient execution on the bespoke instruction set and execution model.

Expectations:
- Lead and execute software porting and optimization for LLM inference on the new platform.
- Develop, benchmark, and refine AI operators and kernel libraries in close collaboration with ISA and compiler teams.
- Analyze performance, numerical accuracy, and data movement to produce actionable technical reports.
- Contribute to system design reviews, ensuring software aligns with hardware capabilities and performance goals.

Key Responsibilities:
1. Port LLM inference workloads to the custom AI architecture, optimizing for latency and throughput.
2. Design, implement, and tune AI operators and kernel libraries using C/C++ and assembly where necessary.
3. Extend and maintain Python-based compiler toolchains for operator mapping and execution.
4. Interface with low-level system components (OS, drivers, memory) to ensure correctness and efficiency.
5. Conduct profiling, performance analysis, and numerical debugging; document findings and recommend improvements.
6. Collaborate with senior hardware, firmware, and software engineers on system integration and design reviews.

Required Skills:
- Proficient in low-level systems programming: C/C++, assembly; familiarity with Linux development.
- Strong understanding of computer architecture, operating systems, and kernel development.
- Experience with compiler design, code generation, or runtime systems is a plus.
- Ability to profile, analyze, and optimize performance-critical code.
- Comfortable using version control (Git) and collaborating in cross-functional teams.

Required Education & Certifications:
- PhD (recent) in Computer Science, Electrical Engineering, or a related field; or a robust BSc/MSc in Computer Science with significant low-level or systems projects.
- No mandatory certifications, but knowledge of semiconductor or chip-development processes is advantageous.
Birmingham, United Kingdom
Hybrid
25-01-2026