Dare to Lead with Brené Brown

Dr. Joy Buolamwini on Unmasking AI: My Mission to Protect What Is Human in a World of Machines

85 min episode · 2 min read

Topics: Artificial Intelligence

AI-Generated Summary

Key Takeaways

  • Gender Shades Research: Buolamwini tested commercial facial recognition systems from IBM, Microsoft, and Amazon, finding that Microsoft achieved 100% accuracy on lighter-skinned males but only 80% on darker-skinned females; the weakest systems, at 68% accuracy for darker-skinned women, performed closer to a coin toss than to their near-perfect results for lighter-skinned men (see the audit sketch after this list).
  • Evocative Audits: Combining algorithmic testing with performance art creates accessible demonstrations of AI bias. Buolamwini's poem "AI, Ain't I a Woman?" showed IBM mislabeling Serena Williams as male and Microsoft describing Ida B. Wells as a small boy, making technical failures visceral and undeniable.
  • Creative Rights Framework: Artists and writers whose work trains generative AI systems deserve four protections: consent before use, compensation for contributions, control over how work is deployed, and credit for outputs. This framework addresses unauthorized use of copyrighted books and creative works in AI training datasets.
  • Institutional and Narrative Power: Buolamwini strategically uses MIT credentials for institutional access to policymakers while wielding narrative power through poetry and storytelling. This dual approach enabled her to present research to EU defense ministers, US Congress, and Davos while empowering Brooklyn tenants resisting facial recognition systems.
  • Black Feminist Epistemology in Tech: Lived experience constitutes valid knowledge in AI research. Buolamwini builds on work by Dr. Latanya Sweeney, who exposed racial bias in search engine arrest ads, and Dr. Safiya Noble, who documented racist image results, demonstrating how personal experience drives critical technical scholarship.
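
The Gender Shades numbers above come from disaggregating accuracy by the intersection of skin type and gender rather than reporting one aggregate score. Here is a minimal Python sketch of that audit pattern; the records, labels, and figures are hypothetical stand-ins, not data from the study, which used the Pilot Parliaments Benchmark and live commercial APIs.

```python
# Minimal sketch of a disaggregated ("intersectional") accuracy audit in the
# spirit of Gender Shades. All records below are hypothetical placeholders.
from collections import defaultdict

# Each record: (skin_type, true_gender, predicted_gender)
records = [
    ("lighter", "male",   "male"),
    ("lighter", "female", "female"),
    ("darker",  "male",   "male"),
    ("darker",  "female", "male"),    # the kind of error the audit surfaces
    ("darker",  "female", "female"),
]

totals = defaultdict(int)
correct = defaultdict(int)
for skin, true, pred in records:
    group = (skin, true)              # intersectional subgroup
    totals[group] += 1
    correct[group] += int(pred == true)

# Aggregate accuracy can look strong while one subgroup fails badly.
overall = sum(correct.values()) / len(records)
print(f"overall: {overall:.0%}")
for group in sorted(totals):
    print(f"{group[0]} {group[1]}: {correct[group] / totals[group]:.0%}")
```

The design point is the grouping key: scoring by (skin type, gender) jointly, rather than by either attribute alone, is what exposed failure rates that a single overall accuracy number had hidden.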

What It Covers

Dr. Joy Buolamwini, MIT researcher and founder of the Algorithmic Justice League, discusses her groundbreaking work exposing racial and gender bias in facial recognition AI systems, combining technical research with poetry to advocate for algorithmic justice.

Notable Moment

Buolamwini discovered facial recognition bias when software failed to detect her dark-skinned face during an MIT project but immediately recognized a white mask held over her face, sparking her career investigating how AI systems perpetuate discrimination against marginalized communities.
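
The episode does not name the software behind that moment; purely as a hedged illustration, the sketch below runs OpenCV's stock Haar-cascade face detector, a common off-the-shelf choice, and reports how many faces it finds in an image.

```python
# Hedged stand-in for the kind of off-the-shelf face detection described
# above; OpenCV's bundled Haar cascade is an assumption, not her actual stack.
import cv2

def count_faces(image_path: str) -> int:
    """Return how many faces the stock detector finds in an image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

# A detector that returns 0 for a person's face but 1 for a white mask held
# over it is exactly the failure mode the episode recounts.
```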
