The latest from our blog
-
Insights Aug. 2, 2023
The human decisions that shape generative AI: Who is accountable for what?
In this explainer, we tackle one of the most basic questions: What are some of the key moments of human decision-making in the development of generative AI products? This question forms the basis of our current research at Mozilla into the motivations and values that guide this development process.
Stefan Baack
-
Insights Aug. 1, 2023
Openness & AI: Fostering Innovation & Accountability in the EU’s AI Act
What is open source when it comes to AI? How should regulation treat open source to foster innovation without giving a free pass on necessary regulatory compliance? What are the contours of commercial deployment and the corresponding liability in open source software?
Mark Surman and Maximilian Gahntz
-
This is not a System Card: Scrutinising Meta’s Transparency Announcements
On June 29, Meta announced transparency- and research-related updates for Facebook and Instagram: more details on its ranking systems for content in “Feed, Reels, Stories and other surfaces”, improved user controls over their feeds, and tools for public interest research.
Claire Pershan, Ramak Molavi Vasse'i and Jesse McCrosky
-
Insights July 5, 2023
Advancing AI Accountability in the US
Last month, Mozilla submitted comments on AI accountability to the US National Telecommunications and Information Administration (NTIA), drawing on our experiences from five years of working on the question of what it takes to build trustworthy AI.
Maximilian Gahntz
-
Insights June 14, 2023
Today’s EU AI Act Vote Marks Progress, But Coming Months Are Crucial
As Mozilla has stated before, the AI Act has the potential to make AI across the EU more trustworthy — and the European Parliament vote today marks further progress toward robust safeguards against AI harms.
Mozilla