Job Description
About the Team
The AIGC Safety team is at the forefront of protecting TikTok's community from AI-generated misinformation and harmful content. Our mission is to accurately detect AI-Generated Content, disclose it transparently to users, and moderate violative content that threatens platform integrity. As Generative AI makes creating realistic content more accessible, ensuring safe consumption of AIGC has never been more critical.
Responsibilities:
- Own the end-to-end product strategy and roadmap for AIGC Detection, Disclosure, and Moderation within TikTok's Integrity & Authenticity team.
- Drive improvements to AIGC detection models across video, photo, and audio—addressing current gaps in model coverage, recall, and precision.
- Enhance AIGC labeling and disclosure products to improve transparency for users.
- Stay current on best-in-class AI-generated content disclosure signals, including metadata and watermarking standards.
- Build recall strategies that proactively identify harmful AIGC before it reaches users.
- Collaborate with ML, Policy, Operations, and regional partners to integrate AIGC signals into key moderation systems and workflows.
- Navigate regulatory requirements (EU AI Act, DSA, US state laws) and represent TikTok's AIGC safety commitment externally.