A new report by Mozilla Fellow Petra Molnar examines how facial recognition, biosurveillance, and other AI technologies are undermining the rights of people on the move around the world — and what can be done to fix this


Today, more people than ever before are migrating across borders, propelled by violence, economic turmoil, environmental catastrophes, and other crises. Meanwhile, governments are tightening their borders amid pandemic concerns and rising nationalism and xenophobia.

In this setting, governments are relying on AI-powered technologies at the border. But rather than bringing order, fairness, and dignity to a perilous space, the emerging technologies used to “manage” migration often trample human rights.

Today, Mozilla Fellow Petra Molnar is publishing new research into the harsh intersection of migration and AI technology. The research delves into the latest developments around the globe, from AI-powered lie detectors and facial recognition at borders, to automated drones surveilling the seas, to invasive biosurveillance measures in refugee camps.

The research is titled “Technological Testing Grounds: Migration Management Experiments and Reflections from the Ground Up.” The publication will supplement a report on racism, digital technologies, and borders by the UN Special Rapporteur on Discrimination, and accompany a panel discussion on November 10 titled “Discrimination at the Border”; RSVP here.

Says Molnar, a lawyer and anthropologist embedded at European Digital Rights as part of her Mozilla Fellowship: “AI technologies intended to ‘manage’ migration frequently make it more arbitrary, unjust, discriminatory, and invasive. People on the move -- an already vulnerable group -- are routinely having their rights violated by these technologies.”

Molnar adds: “Further, the AI deployed at borders is often inscrutable and unaccountable. It functions as a black box, with no insight into how or why it makes life-altering decisions. And people crossing borders often have no way to appeal these decisions.”


An image from Mozilla Fellow Petra Molnar’s field research. Credit: Kenya-Jade Pinto

The paper is organized into four sections: a survey of migration management technologies; an examination of how they violate human rights; the driving forces behind this troubling trend; and concrete recommendations for how policymakers, governments, and the private sector can rein in harmful AI technologies.

Molnar conducted her research over the course of a year, reporting from Belgium and Greece. The research is informed by interviews with dozens of refugees, asylum seekers, migrants without status, and other people on the move. Molnar also interviewed dozens of representatives from civil society organizations, governments, and the private sector, as well as academics.

Key findings from the report include:

  • People on the move often encounter AI technologies before they even reach the border. Unpiloted surveillance drones, iris scanning in refugee camps, social media scraping, and cellphone tracking are becoming commonplace.
  • AI technologies are deployed before they’re perfected. Pilot projects in Hungary, Latvia, and Greece monitor migrants’ faces for signs of lying, deploying AI-powered lie detector tests. However, these systems aren’t calibrated to account for cultural differences, trauma, or other factors. Governments and companies capitalize on the lack of oversight at borders to test new technologies that wouldn’t be welcome elsewhere in society.
  • Automated decision making is becoming more common. AI technologies, and not humans, are now sometimes determining whether or not someone receives a visa, or whether or not a person is detained. But it’s not clear who’s responsible for a bad decision: is it the coder who creates the algorithm, the immigration officer using it, or the algorithm itself?
  • A range of human rights are being trampled. Invasive data collection. Automated decision-making systems that perpetuate systemic racism. Algorithms that make decisions without explanation. Surveillance drones that force people into even more dangerous terrain. These are just a few of the human rights being trampled by AI technologies.
  • Profit is an outsized part of the equation. The private sector has a growing -- and lucrative -- role in collecting, using, and storing migration data. Companies like Palantir Technologies have contracts worth tens of millions of dollars with organizations and agencies like the World Food Programme and U.S. Immigration and Customs Enforcement (ICE). The result is a mercenary Border Industrial Complex.
  • Read more in the full report»

Key recommendations from the report include:

  • Governments should commit to independent and impartial human rights impact assessments (HRIAs) in which affected communities and civil society are adequately consulted.
  • Governments should adopt binding directives, regulations, and laws for these AI technologies that comply with internationally protected fundamental human rights obligations.
  • Governments should freeze all further efforts to procure, develop, or adopt any new automated decision-making system technology until these systems fully comply with internationally protected fundamental human rights frameworks.
  • Governments should commit to transparency, reporting publicly on what technology is being developed and used and why, for example through public registers.
  • Governments should create an independent body to oversee and review all use of existing and proposed automated technologies in migration management.
  • Read more in the full report»


More than ever, we need a movement to ensure the internet remains a force for good. Mozilla Fellows are web activists, open-source researchers and scientists, engineers, and technology policy experts who work on the front lines of that movement. Fellows develop new thinking on how to address emerging threats and challenges facing a healthy internet. Learn more at https://foundation.mozilla.org/fellowships/.