How should companies approach AI governance when the laws are still taking shape?
In this episode of Privacy in Practice, Shane Witnov, AI Policy Director at Meta, joins us to share practical insights from years at the intersection of privacy, policy, and technology. He reveals how Meta stays ahead of developing regulations and uses existing privacy frameworks to build robust AI policies that support the safe, ethical use of the technology. Whether you're a privacy professional or a business leader, this episode is packed with actionable insights that will prepare your organization for the future of AI while protecting your users and brand.
The future of AI is now, but how can you ensure it’s used responsibly while also driving business growth? In this episode of Privacy in Practice, Shane Witnov, AI Policy Director at Meta, provides a behind-the-scenes look at how the company navigates the complex intersection of AI innovation and privacy. Shane reveals how Meta uses its proven privacy frameworks to govern AI at scale and stay ahead of emerging regulations, offering a blueprint that businesses of all sizes can follow. This episode shows that AI governance doesn’t have to be an obstacle; instead, it can be your next strategic advantage.
What You'll Learn:
- Why AI governance doesn’t have to mean starting from scratch: it can (and should) build on your existing privacy, security, and data use frameworks.
- How to use proven privacy frameworks to govern AI safely.
- Why open-source AI models can offer a better privacy solution than closed alternatives.
- How to set clear, actionable guidelines for safe AI use without banning existing tools.
- Why staying ahead of state-level AI bills is crucial for protecting your business.
- How to identify AI risks early with red-teaming and practical testing.
- Why transparency isn’t just about labels.
- How to build trust through real-world impact.
- And so much more!
Shane Witnov serves as AI Policy Director at Meta, where he focuses on the intersection of technology, privacy, and public policy, particularly in artificial intelligence. With a background in digital civil liberties and privacy law, he has been instrumental in guiding Meta's approach to AI governance and ethical implementation since joining the company in 2015. His expertise spans privacy compliance, AI ethics, and technological innovation, informed by his earlier work with organizations such as the Electronic Frontier Foundation and his experience in law and technology.
Connect with Shane Witnov here: LinkedIn
Connect with Kellie du Preez here: LinkedIn
Connect with Danie Strachan here: LinkedIn
If you enjoyed this episode, make sure to subscribe, rate, and review it.
Episode Highlights:
[00:06:43] Convergence Over Compliance: Building AI Governance That Scales
Effective AI governance is about more than simply meeting regulatory requirements. Shane explains how Meta's "Convergence-based" approach helps create scalable, user-focused privacy solutions. By prioritizing features based on the value they offer to users globally, rather than tailoring to niche or less-used legal requirements, businesses can build systems that serve both compliance needs and real user benefits. Shane highlights the internal question at Meta: “Are we building a toggle for 5 users or 20% of users?” This distinction is critical in determining whether a control should be globally prioritized or tailored for specific jurisdictions. The takeaway for privacy professionals is clear: don’t waste resources on solutions no one uses; instead, build solutions that provide value now and set your business up for future regulatory developments.
[00:15:45] Why AI Isn’t an Exception: Use the Frameworks You Already Have
Shane cautions against AI exceptionalism, the idea that AI requires entirely new governance structures. Instead, start with existing privacy and risk frameworks, and then layer in AI-specific considerations like robustness, reliability, and appropriate use. He stresses that Meta used its well-established privacy risk processes as the foundation for AI model evaluations and red teaming. This approach, which builds on years of work, offers privacy and compliance teams a practical and cost-effective way to start governing AI while evolving as new risks emerge. The message is clear: don't start from scratch; evolve your existing frameworks to meet the needs of emerging technologies.
[00:26:54] Bans Don’t Work, Clear Guidance Does
Many businesses fear AI's potential risks and react by banning tools like ChatGPT outright. Shane warns that this is a mistake. "If you don’t give guidance, your employees are probably using it anyway," he points out. Rather than prohibiting the tools, organizations should focus on providing clear, actionable guidelines for acceptable use. For example, using AI for internal tasks like summarizing meeting notes or drafting emails may be fine, while uploading customer data to these platforms is not. This approach empowers employees to use AI safely and responsibly, without stifling productivity or innovation. Whether you’re a privacy officer or a business leader, this segment provides a roadmap for creating clear boundaries and ensuring safe AI use.
[00:34:43] How to Start AI Governance with No Budget and No Team
No team? No budget? No problem! Shane offers a simple, three-step process for small businesses or startups to start implementing AI governance:
- Assign Someone to Oversee AI Use: This doesn’t need to be a full-time role, just someone who can monitor AI developments and risks.
- Run Low-Risk Pilot Programs: Start with non-critical workflows that can benefit from AI, and gradually scale up as you gather insights.
- Test with a Red-Team Mindset: Identify vulnerabilities and risks early on by testing AI tools before fully implementing them.
By following these steps, businesses can take meaningful action without needing large teams or massive budgets. Shane emphasizes that AI governance is about being iterative and thoughtful rather than perfect, which is especially important for smaller organizations working with limited resources.
[00:38:34] Transparency Isn’t Just Labels: It’s Context That Matters
Shane explains how transparency around AI usage is evolving. While labeling AI-generated content is one approach, it often doesn’t align with user concerns. For example, Meta’s attempt to label AI-edited images using metadata standards (like those from Photoshop) led to confusion and frustration among users, who didn’t care about the technical aspects of AI use; they just didn’t want to be misled. This highlights an important lesson for privacy leaders: transparency isn’t about disclosing every instance of AI use; it’s about providing meaningful context that aligns with users' expectations. By focusing on user impact rather than technical disclosures, organizations can build trust and ensure that transparency efforts are both meaningful and effective.
[00:21:00] From Focus Groups to Global Consensus: Listening as a Governance Tool
How do you know if your AI tools align with user values? Ask them. Shane explains how Meta uses a variety of methods, including global focus groups, UX research, and deliberative democracy forums, to gather input from real users about how AI should be governed. These forums bring ordinary users together to deliberate on ethical dilemmas after structured education on the issues, and they often reveal surprising alignment. For example, when presented with challenging questions, 70% of participants reached consensus on issues that initially seemed divisive. The key takeaway for privacy professionals is clear: building real-world input into your governance framework can help ensure that AI tools align with the needs and values of the people who use them.
Episode Resources: