Mozilla’s multi-award-winning podcast, IRL, is back for a seventh season all about the people leading the way in building a less extractive and more equitable AI industry, one where you and I come before profit.
Because there’s no taking the human out of the algorithm. A million human decisions and actions shape AI outcomes. From the data labelers who shape how AI interprets and reflects the world, to the trillion-dollar companies that decide whose ideas get built, to the platforms and technologies that you and I choose to — or are made to — use. We all shape AI.
That’s why, in Season 7, we wade through the thunderous hype to talk to technologists, regulators, activists, researchers, and artists about how to instill values of trust, safety, and equity in AI.
You’ll hear from data workers who are challenging moneyed employers, and from builders who are exploring what openness means for large language models like ChatGPT: the risks and the rewards. You’ll hear about how voice recognition tools and datasets can serve the needs of language communities through better data governance practices, and about how you and I can stop being crash test dummies for Big Tech. And you’ll hear from artists navigating a rapidly changing legal and artistic landscape, leveraging it for creativity, profit, and writing their own narratives.
Throughout the season, we also get specific about what trustworthy tech looks like: if we mean transparent, we say transparent. If we mean bias mitigation, we say that. Because for us to make better decisions across the AI pipeline, we need to get really specific about what we’re asking for.
I hope you’ll dive in and enjoy this season as much as we enjoyed making it. The first episode is about what it means to give the whole world access to build with large language models. Episodes are released every two weeks, so follow us wherever you get your podcasts, and use your decision-making power to help reclaim the internet.