Cerebras

Principal Engineer, AI Inference Reliability

Cerebras  •  Canada (Remote)  •  4 months ago

Job Description

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.

Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields.

In late 2024, we launched Cerebras Inference, the fastest Generative AI inference service in the world, over 10 times faster than GPU-based hyperscale cloud inference. Since launch, we've scaled to meet surging demand from AI labs, enterprises, and a thriving developer community.

In October 2025, we announced our series G funding, raising $1.1 billion USD to accelerate the expansion of our products and services to meet global AI demand.

About the team

The Cerebras Inference team’s mission is to deliver the world’s most performant, secure, and reliable enterprise-grade AI service. We build and operate large-scale distributed systems that power AI inference at unprecedented speed and efficiency. Join us to help scale inference and accelerate AI.

About the role

We’re looking for a hands-on Reliability Tech Lead (IC) to own the mission of making Cerebras Inference the most reliable AI service in the world. You will drive reliability strategy and execution across our inference stack, from client SDKs and public-cloud multi-region deployments to wafer-scale systems in specialized data centers.

In this role, you will define SLOs and incident-response frameworks, design and implement reliability mechanisms at scale, and partner across hundreds of engineers to ensure our service meets world-class reliability standards.

If you are passionate about building and operating massive-scale, low-latency, high-reliability distributed systems, we want to hear from you.

Responsibilities:

  • Define and drive reliability strategy: establish SLOs and ensure alignment across engineering.
  • Design and implement reliability mechanisms: build and evolve systems for fault detection, graceful degradation, failover, throttling, and recovery across multiple regions and data centers.
  • Lead large-scale incident management: own postmortems, root-cause analysis, and prevention loops for reliability-related incidents.
  • Architect for reliability and observability: influence system design for redundancy, durability, and debuggability.
  • Develop reliability tooling: create internal tools and frameworks for chaos testing, load simulation, and distributed fault injection.
  • Collaborate broadly: work across software, infrastructure, and hardware teams to ensure reliability is embedded into every layer of our inference service.
  • Monitor and communicate reliability metrics: build dashboards and alerts that measure service health and provide actionable insights.
  • Mentor and influence: guide engineers and set best practices for designing, testing, and operating reliable large-scale systems.
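To give a flavor of the SLO and error-budget work described above, here is a minimal sketch of computing an availability error budget and the share remaining. All numbers and names are hypothetical, not taken from Cerebras systems:

```python
from dataclasses import dataclass

@dataclass
class SLO:
    """An availability SLO over a rolling request window (hypothetical example)."""
    target: float          # e.g. 0.999 means 99.9% of requests must succeed
    window_requests: int   # total requests observed in the window

    @property
    def error_budget(self) -> float:
        # Fraction of requests allowed to fail without breaching the SLO.
        return 1.0 - self.target

    def budget_remaining(self, failed_requests: int) -> float:
        # Share of the error budget still unspent; negative means the SLO
        # has been breached within this window.
        allowed_failures = self.error_budget * self.window_requests
        return 1.0 - failed_requests / allowed_failures

# Hypothetical 30-day window: 10M requests against a 99.9% availability target,
# i.e. roughly 10,000 failures allowed before the budget is exhausted.
slo = SLO(target=0.999, window_requests=10_000_000)
print(slo.error_budget)            # ~0.001
print(slo.budget_remaining(2_500)) # ~0.75, i.e. three quarters of the budget left
```

In practice, numbers like these feed the dashboards, burn-rate alerts, and incident thresholds the role owns; the sketch only illustrates the arithmetic behind them.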

Skills & Qualifications:

  • Bachelor's or master's degree in computer science or a related field.
  • 7+ years of experience in backend, infrastructure, or reliability engineering for large-scale distributed systems.
  • Strong programming skills in at least one popular backend programming language such as Python, C++, Go, or Rust.
  • Deep, hands-on experience with reliability principles: SLO/SLI/SLA design, incident response, and postmortem culture.
  • Excellent communication and cross-functional leadership skills.
  • Bonus: prior experience building large-scale AI infrastructure systems.

Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

  1. Build a breakthrough AI platform beyond the constraints of the GPU.
  2. Publish and open source their cutting-edge AI research.
  3. Work on one of the fastest AI supercomputers in the world.
  4. Enjoy job stability with startup vitality.
  5. Enjoy a simple, non-corporate work culture that respects individual beliefs.

Read our blog: Five Reasons to Join Cerebras in 2025.

Apply today and become part of the forefront of groundbreaking advancements in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.


About Cerebras

Cerebras Systems delivers the world's fastest AI inference, powering the future of generative AI. Follow us for model breakthroughs and real-time AI results.

We’re a team of pioneering computer architects, deep learning researchers, and engineers building a new class of AI supercomputers from the ground up.

Our flagship system, Cerebras CS-3, is powered by the Wafer Scale Engine 3—the world’s largest and fastest AI processor. CS-3s are effortlessly clustered to create the largest AI supercomputers on Earth, while abstracting away the complexity of traditional distributed computing.

From sub-second inference speeds to breakthrough training performance, Cerebras makes it easier to build and deploy state-of-the-art AI—from proprietary enterprise models to open-source projects downloaded millions of times.

Here’s what makes our platform different:

🔦 Sub-second reasoning – Instant intelligence and real-time responsiveness, even at massive scale

⚡ Blazing-fast inference – Up to 100x performance gains over traditional AI infrastructure

🧠 Agentic AI in action – Models that can plan, act, and adapt autonomously

🌍 Scalable infrastructure – Built to move from prototype to global deployment without friction

Cerebras solutions are available in the Cerebras Cloud or on-prem, serving leading enterprises, research labs, and government agencies worldwide.

👉 Learn more: www.cerebras.ai

Join us: https://cerebras.net/careers/

Industry
Hardware & Semiconductors
Company Size
501-1,000 employees
Headquarters
Sunnyvale, California