
Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.
Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 800 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.
Nebius Managed Kubernetes (mk8s) exists to give customers seamless access to the Nebius platform through the Kubernetes API, ensuring their AI workloads run quickly and reliably, with minimal operational overhead.
In this role, you will lead the mk8s Core Team, one of three sub-teams within mk8s, responsible for the service’s core backend, key platform integrations, and mission-critical add-ons.
At Nebius, team leaders are hands-on by design. We’re looking for someone who is currently a senior or staff-level engineer (or has been in the past) and is comfortable diving deep into complex systems. You will begin as an individual contributor to build context and technical ownership and then transition into leadership responsibilities as you ramp up.
In this position, you will be responsible for:
What you will bring to the table:
It would be an added bonus if you had:
What we offer
We’re growing and expanding our products every day. If you’re up to the challenge and as excited about AI and ML as we are, join us!

Nebius AI Cloud provides powerful full-stack infrastructure that enables AI developers and practitioners across startups, enterprises, and scientific institutions to build and deploy generative AI applications and rapidly deliver scientific breakthroughs by training and running ML models in a secure, high-performance, and cost-optimized cloud environment.