
Mozilla Insights

The Insights team researches and develops recommendations on Trustworthy AI topics, with a focus on Transparency, Data Governance, and Equity. We highlight what can be done and who is working towards a healthier internet. Mozilla Insights is led by Kasia Odrozek and includes Solana Larsen (Editor), Kenrya Rankin (Editor), Stefan Baack (Research), Max Gahntz (Policy Research), Ramak Molavi (Research), Neha Ravella (PM) and Eeva Moore (Engagement).

Latest research

  • AI Intersections Database

    April 18, 2024
    AI bias & discrimination / AI fairness, accountability, and transparency / Community building / Digital inclusion / LGBTQIA+

    The AI Intersections Database maps intersections between the key social justice and human rights areas of our time and documented AI impacts and their manifestations in society.

  • Training Data for the Price of a Sandwich: Common Crawl’s Impact on Generative AI

    February 6, 2024
    AI bias & discrimination / AI fairness, accountability, and transparency

    Mozilla finds that Common Crawl's outsized role in the generative AI boom has improved transparency and competition, but is also contributing to biased and opaque generative AI models.

  • AI Transparency in Practice

    March 20, 2023

    AI Transparency in Practice builds on what works and what doesn't. Mozilla’s research on AI transparency (led by Ramak Molavi Vasse'i) includes practical advice from Thoughtworks.

Browse all projects (10)