Guest:
- Dr Gary McGraw, co-founder of the Berryville Institute of Machine Learning
Topics covered:
- Gary, you’ve been doing software security for many decades, so tell us: are we really behind on securing ML and AI systems?
- If not an SBOM for data or a “DBOM”, then what? Can data supply chain tools or just better data governance practices help?
- How would you threat model a system with ML in it or a new ML system you are building?
- What are the key differences and similarities between securing AI and securing a traditional, complex enterprise system?
- What are the key differences between securing the AI you built and the AI you buy or subscribe to?
- Which security tools and frameworks will solve all of these problems for us?
Resources:
- EP135 AI and Security: The Good, the Bad, and the Magical
- Gary McGraw books
- “An Architectural Risk Analysis Of Machine Learning Systems: Toward More Secure Machine Learning” paper
- “What to think about when you’re thinking about securing AI”
- Annotated ML Security bibliography
- Tay bot story (2016)
- “Can you melt eggs?”
- “Microsoft AI researchers accidentally leak 38TB of company data”
- “Random number generator attack”
- “Google's AI Red Team: the ethical hackers making AI safer”
- Introducing Google’s Secure AI Framework
Do you have something cool to share? Some questions? Let us know:
- Web cloud.withgoogle.com/cloudsecurity/podcast
- Mail cloudsecuritypodcast@google.com
- Twitter @CloudSecPodcast