About the AI Division
The AI Division is a unique and dedicated group within Ceva, driving innovation in Machine Learning and Generative AI architectures for edge devices and cloud inference.
Our R&D domains span Neural Network Processors (NPUs), Vision DSPs, and advanced AI algorithms for applications across smartphones, tablets, automotive, surveillance cameras, and many other edge AI systems.
We combine cutting-edge hardware IP design with embedded software, SW development tools, and system-level solutions, enabling the next generation of intelligent and energy-efficient devices.
About the Role
As the R&D Technical Project Manager, you will lead large-scale, multidisciplinary programs developing Neural Network Processor (NPU) and AI IP products.
You will lead cross-functional engineering teams (30+ engineers) across architecture, algorithms, VLSI, embedded software, SW tools, and systems, ensuring seamless execution from Product Requirements Document (PRD) to production-ready delivery for leading SoC makers and OEM customers.
You will also act as the primary interface between customers, the product team, and internal R&D teams, balancing technical leadership, strategic execution, and organizational excellence in a global, multi-site environment.
Responsibilities
Be accountable for the successful planning and delivery, with quality, on time, and within budget, of all R&D programs and products in the AI Division, covering architecture, algorithms, hardware IP, embedded software, SW tools, and integration tasks.
Be accountable for the definition and deployment of the AI Division's program development processes.
Program Planning and Execution
Own end-to-end planning and delivery for complex AI processor IP programs that include hardware design, embedded SW, and SW development tools.
Build and manage integrated schedules across RTL, verification, firmware, drivers, toolchain, and documentation.
Drive execution across the concept, design, verification, bring-up, and customer release phases.
Manage the program plan of record (POR), program priorities, and the change control process.
Track dependencies between IP architecture, firmware readiness, SDK/toolchain availability, and customer enablement deliverables.
Define and manage program KPIs: schedule, quality metrics, and customer satisfaction.
Cross-Functional Leadership
Lead multi-disciplinary engineering teams: HW design, verification, modeling, firmware, SDK/toolchain, and system validation.
Collaborate closely with product management, system architects, and customer engineering to align technical priorities and milestones.
Coordinate with silicon partners and customers to support IP integration and bring-up.
Lead multi-site coordination, fostering collaboration among teams located across various global R&D centers.
Cultivate a high-performing R&D culture emphasizing technical innovation, accountability, and cross-functional synergy.
Software and Toolchain Integration
Ensure alignment between hardware deliverables and the corresponding software development environment.
Oversee development and release of software tools such as SDKs, compilers, profilers, debuggers, simulators, and evaluation platforms.
Manage releases of embedded firmware, drivers, the graph compiler, and related components.
Risk and Quality Management
Identify technical and schedule risks early, drive mitigation and contingency plans.
Ensure design readiness and quality reviews across hardware, software, and tools.
Ensure proper configuration management and version control across IP and SW components.
Ensure quality documentation.
Communication and Stakeholder Management
Act as the central communication hub across engineering, marketing, operations, and customers.
Provide regular executive-level status reports with clear visibility into risks, dependencies, and critical paths.
Program management processes
Oversee the related PMO activities to ensure definition, deployment, and continuous improvement of program management processes across the organization.
Advantage
This role provides a rare opportunity to lead breakthrough AI processor R&D from concept to silicon and system delivery, integrating cutting-edge ML algorithms, hardware IP, and software stacks for global customers. You will play a critical role in shaping the future of Generative AI and Neural Network computing at Ceva.

Ceva is the leader in innovative silicon and software IP solutions that enable smart edge products to connect, sense, and infer data more reliably and efficiently.
At Ceva, we are passionate about the smart edge. Providing the technology and market expertise our customers need to be successful is what we do best, and we’ve been doing it for over 30 years.
With the industry’s only portfolio of comprehensive communications and scalable edge AI IP, Ceva powers the connectivity, sensing, and inference in today’s most advanced smart edge products across consumer IoT, mobile, automotive, infrastructure, industrial, and personal computing. More than 18 billion of the world’s most innovative smart edge products from smartphones to drones to cellular base stations and more are powered by Ceva.
We create innovative technologies that help our customers turn great ideas into extraordinary products. We license our portfolio of wireless communications and scalable edge AI IP to our customers, breaking down barriers to entry and enabling them to bring new cutting-edge products to market faster, more reliably, efficiently, and economically.
Headquartered in Rockville, Maryland, Ceva has more than 400 employees worldwide, with design centers in Israel, Ireland, France, the United Kingdom, and the United States, and sales and support offices located in Europe, the U.S., and throughout Asia. To date, more than 17 billion CEVA-powered chips have been shipped worldwide, for a wide range of diverse end markets.
Ceva was created through the combination of the DSP IP licensing division of DSP Group, Inc. and Parthus Technologies plc (“Parthus”) in November 2002.