Cerebras Systems builds the world's largest AI chip, 56 times larger than the largest GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute capacity, transforming key workloads with ultra-high-speed inference.
Thanks to this groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, more than 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and enabling greater intelligence through additional agentic computation.
About The Role
This team's principal responsibility is to rapidly bring up state-of-the-art open-source models, frameworks, and data engineering. Success in this role requires a systems-minded generalist who thrives in fast-paced bring-up environments and is comfortable working across the entire software stack. Your work will play a critical role in achieving unprecedented levels of performance, efficiency, and scalability for AI applications.
Responsibilities
Skills & Qualifications
What We Offer
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.

Cerebras Systems delivers the world's fastest AI inference. We are powering the future of generative AI. Follow us for model breakthroughs and real-time AI results.
We’re a team of pioneering computer architects, deep learning researchers, and engineers building a new class of AI supercomputers from the ground up.
Our flagship system, Cerebras CS-3, is powered by the Wafer Scale Engine 3—the world’s largest and fastest AI processor. CS-3s are effortlessly clustered to create the largest AI supercomputers on Earth, while abstracting away the complexity of traditional distributed computing.
From sub-second inference speeds to breakthrough training performance, Cerebras makes it easier to build and deploy state-of-the-art AI—from proprietary enterprise models to open-source projects downloaded millions of times.
Here’s what makes our platform different:
🔦 Sub-second reasoning – Instant intelligence and real-time responsiveness, even at massive scale
⚡ Blazing-fast inference – Up to 100x performance gains over traditional AI infrastructure
🧠 Agentic AI in action – Models that can plan, act, and adapt autonomously
🌍 Scalable infrastructure – Built to move from prototype to global deployment without friction
Cerebras solutions are available in the Cerebras Cloud or on-prem, serving leading enterprises, research labs, and government agencies worldwide.
👉 Learn more: www.cerebras.ai
Join us: https://cerebras.net/careers/