Meta

QA Engineering Lead, AI Native

Meta  •  Menlo Park, CA (Onsite)  •  17 days ago

Job Description

Meta is seeking a QA Engineering Lead with expertise in AI product and model testing to drive the quality vision for our next-generation AI-powered products. In this role, you will lead the design and execution of comprehensive test strategies for AI models spanning text, image, and voice, ensuring our solutions are robust, reliable, and ethically sound. You will work on products built with cutting-edge technology, serving billions of users worldwide, and play a pivotal role in shaping the future of AI quality at Meta.

Responsibilities
* Build and foster a quality-driven engineering environment that enables rapid, confident product releases, ensuring that quality is embedded throughout the development lifecycle
* Develop and implement robust evaluation processes for AI models, including prompt engineering, scenario-based, and adversarial testing for text, image, and voice AI systems
* Drive quality for products and features, assess risks, and ensure features ship with a high quality bar, balancing speed with user experience
* Plan, develop, and execute comprehensive test strategies across core Meta products and platforms, leveraging both manual and automated approaches
* Lead quality assurance efforts that align with product objectives, developing scalable solutions to support rapid product iteration and deployment
* Solve cross-platform engineering challenges and contribute impactful ideas to improve quality, reliability, and user experience across diverse product surfaces
* Implement and evolve QA processes to obtain effective test signals and scale testing efforts across multiple products, ensuring continuous improvement
* Define quality metrics and implement measurements to determine test effectiveness, testing efficiency, and overall product quality, using data-driven insights to guide decisions
* Partner with engineering and infrastructure teams to leverage automation for scalable solutions, preventing regressions and ensuring the reliability of products and AI models
* Apply Responsible AI practices, including safety, ethics, alignment, and explainability, by building safeguards and quality controls to validate AI outputs, ensuring transparency and compliance with ethical standards

Qualifications
* Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience
* 5+ years of experience in quality assurance, test engineering, and test automation
* 1+ years of hands-on experience testing AI-powered products (web, iOS, and/or Android) that generate or transform text, images, and/or voice, including end-to-end feature validation and user experience quality
* 1+ years of hands-on experience testing, debugging, and evaluating LLM/multimodal model behavior, including defining and applying quality standards for accuracy, relevance, grounding, safety/policy compliance, and cultural/locale sensitivity, and driving model-quality regressions to resolution
* Experience effectively utilizing AI technologies and tools (e.g., large language models, agents, etc.) to enhance QA workflows
* Experience collaborating cross-functionally and contributing to technical decisions through influence, communication, and execution
* Experience changing priorities quickly and adapting effectively in a fast-moving product development cycle
* Experience in Python, PHP, Java, C/C++, or an equivalent programming language
* Experience leading and executing black-box and white-box testing strategies (test planning, coverage, execution, and triage)
* Experience partnering with AI/ML research and engineering teams, and communicating effectively with technical and non-technical stakeholders at multiple levels
* Experience building AI-assisted test automation/test agents using LLMs and agent frameworks (e.g., internal or industry tools) to generate, execute, and maintain tests
* Experience using analytics to define, measure, and improve QA operational KPIs (e.g., defect escape rate, detection latency, automation coverage, flake rate)
* Experience designing and building test automation frameworks that leverage generative AI for test creation, prioritization, and maintenance

About Meta

Meta's mission is to build the future of human connection and the technology that makes it possible.

Our technologies help people connect, find communities, and grow businesses. When Facebook launched in 2004, it changed the way people connect. Apps like Messenger, Instagram and WhatsApp further empowered billions around the world. Now, Meta is moving beyond 2D screens toward immersive experiences like augmented and virtual reality to help build the next evolution in social technology.


For a full listing of our jobs, visit https://www.metacareers.com

Industry
IT & Software
Company Size
10,000+ employees
Headquarters
Menlo Park, CA
Year Founded
2004
Website
meta.com