Consumer demand and market incentives alone will not produce technology that fully respects the needs of individuals and society. To improve the trustworthy AI landscape, we will also need policymakers to adopt a clear, socially and technically grounded vision for regulating artificial intelligence.

Many governments are already working to build more effective regulatory approaches, including through rich conversations happening in Europe. The momentum in this space all over the world is promising. But a growing list of questions remains about whether emerging policy visions will a) address intertwined challenges like bias, privacy, and the centralization of control in AI, and b) provide a practical approach that can be put into action.

Through Mozilla’s Fellowship program, we aim to empower people and bold ideas that can shape a more human-centered internet. Over the past year, many of our fellows have focused on the policy frameworks needed to develop a trusted AI ecosystem. The spotlights below feature the work of some of these fellows:

Chenai Chair | South Africa

Chenai is a fellow based in South Africa. Through her fellowship, she sought to assess the adequacy of privacy and data protection amid the uptake of artificial intelligence, from a gender perspective. The project used a feminist framework because of the existing gender inequalities in South African society and the ways in which AI-based innovation may deepen them.

The project also focused on bringing the societal concerns of AI beyond the economic and developmental discourse in South Africa. This work is important because it provides evidence of how the uptake of AI affects the privacy and data protection of women, gender-diverse people, and sexual minorities in the digital rights context.

Chenai’s impact, in her own words:

“Taking on a feminist lens to the issues raised has led me to a community of people who do not ask why gender but rather how do we critically engage with technology in a context of uneven power dynamics for a transformative society.”

How you can take action with Chenai:

Go to mydatarights.africa to: share examples you’ve come across; give feedback on the project; and learn about other opportunities to collaborate and contribute.

What Chenai is doing next:

“I am going to rest, as rest is resistance. I will be carrying on with my work focusing on gender, tech, and policy, working with civil society and policymakers to build awareness of AI, privacy, and data protection in the new year.”

Oleg Zhilin | Canada

Through his fellowship, Oleg worked on monitoring the Canadian election to investigate the impact of misinformation, political advertising, and other dynamics on public opinion and the online ecosystem. Since the election took place, he has been researching the impact on Canadians of online phenomena related to the pandemic.

The effects of online messaging on voters’ opinions are not the same everywhere. It’s important to understand these differences to promote sound policy decisions that will help build healthier media ecosystems.

Oleg’s impact, in his own words:

My fellowship work has contributed to seven research memos investigating various issues surrounding the Canadian elections that were released by the Digital Democracy Project. As part of the Media Ecosystem Observatory, my work investigated pandemic-related topics such as cross-partisan consensus, government messaging, and misinformation. Together with other researchers, I am currently working on measuring the prevalence of different perspectives related to vaccine hesitancy and clarifying how online misinformation enters the Canadian online ecosystem from other countries.

How you can take action with Oleg:

One of the main focus areas of Oleg’s work, and one of his main desires, is for the online environment to become healthier. In that spirit, taking a bit of time to read content critically before sharing is a habit we should all get into.

What Oleg is doing next:

Oleg is always looking for ways to integrate AI tools into his team’s research process without sacrificing interpretability, so he’ll be doing a lot of work on that front as well.

Karolina Iwanska | Poland

Karolina is a lawyer at the Panoptykon Foundation. Through her fellowship, she sought to challenge the dominant narrative, promoted by the advertising industry, that surveillance-driven advertising is the only way to fund online content; to examine existing alternatives; and to develop policy recommendations aimed at fostering a healthy and privacy-friendly online sphere.

This is important because surveillance-based advertising forces online publishers to compete for clicks, sacrificing our privacy and, increasingly, the quality of the content they produce. The internet without pervasive tracking is possible, but the current dominant market logic doesn’t allow alternatives to flourish, and regulation is needed to change the status quo.

How you can take action with Karolina:
  1. Read her report “To Track or Not To Track: Towards Privacy-Friendly and Sustainable Advertising.”
  2. Use it as a resource and a tool to challenge anyone who claims that there is no alternative to online tracking.
  3. Contact the online newspaper you support and ask them what they think.
What Karolina is doing next:

Karolina will continue working at the Panoptykon Foundation as a lawyer and policy analyst, leading its advocacy on the Digital Services Act package and researching how social media platforms use algorithms to curate, target, and recommend content to people.

What change has Karolina seen as a result of her fellowship?

“The discussion on online behavioural advertising no longer focuses on its magical workings or how profitable it is for publishers; rather, it increasingly underlines the many pathologies of this system.”

Frederike Kaltheuner | England

Frederike's fellowship project aimed to examine applications of AI systems that classify, judge, and evaluate people’s identities, feelings, and emotions, in order to uncover where and how such technology is currently deployed.

From pseudoscientific claims about AI’s ability to detect whether someone is gay or straight, to the emergence of emotion detection technology, one of the most overlooked policy challenges of AI is its use to make assumptions and judgements about who we are and who we will become. This is much more than an invasion of privacy: it’s an existential threat to human autonomy and to our ability to explore, develop, and express our identities.

Frederike’s impact, in her own words:

In many ways, 2020 has been the year of great AI disillusionment. From research (coming in Jan 2021) on alternative credit scoring applications to a collaboration with Meatspace on AI pseudoscience, snake oil and hype, my aim has been on refocusing public and policy debates away from ‘fixing’ AI, such as reducing bias and improving transparency, towards a better understanding of the problems and questions that AI is fundamentally ill-equipped to solve. My goal has been to translate some of the crucial debates that are taking place in tech and academic circles to a wider audience, especially policy makers and those tasked with regulating emerging technologies like AI.

How you can take action with Frederike:

As we find ourselves on the downward slope of the AI hype cycle, this is a unique moment to take stock, to look back and interrogate the underlying causes, dynamics and logics of technological hype.

