There is so much that has the potential to divide us. Politics. Ethnicity. Ideas. But a close look reveals an important truth: the things that divide us most are the artificial borders we erect, be they ideology, -isms and -obias, or those drawn on a map. Both as a concept and a built reality, artificial intelligence (AI) is poised to further separate us. But it won’t if we reform the ways it’s created and deployed around the world now.

As outlined by the United Nations, human rights are those inherent to all people, regardless of their race, gender, sexuality, nationality, ethnicity, language, religion, or any other status. Those rights specifically include the right to life and liberty, freedom from slavery and torture, freedom of opinion and expression, and the right to work and education. But AI, as pushed by Big Tech, is already violating these rights, with borders playing a major role.

Data workers in Kenya know this all too well. As Time reported last year, generative AI giant OpenAI crossed into their country and paid them less than $2 per hour to label text and visual examples of hate speech, sexual abuse, and other violence so that its AI systems could detect it. The low-paid workers called viewing and labeling that content “torture” that followed them home each day. And they are not alone.

Ensuring innovation without infringing human rights would represent a greater achievement than anything an AI system is likely to do.

Eliot Bendinelli, Director of the Corporate Exploitation Programme at Privacy International

As AI development leads to unsafe labor conditions and environmental destruction that disproportionately impact those living in the Global South, we see people leaving their homelands for, quite literally, greener pastures. Climate collapse and lack of (safe) economic opportunity are just two of the drivers of mixed migration — cross-border movements of migrants, including trafficking survivors and refugees fleeing conflict and persecution, all with unique legal statuses and vulnerabilities, moving along similar routes via similar means of travel.

But leaving their homes doesn’t end the impact of AI on their lives. In fact, migrants and refugees are often among the most heavily surveilled people in the world, as the governments of the countries they cross into increasingly employ AI to track their movements and scrutinize their fitness for entry. For example, in 2023, the European Commission budgeted €47 million to deploy an “automated border surveillance system” at the borders separating Greece from North Macedonia and Albania.

From unpiloted drones to iris scans to facial recognition to lie detector tests to automated decision-making about their path toward citizenship, migrants’ first run-in with this tech often comes before they even reach a border. And because these AI systems are opaque at their core, migrants may never learn the extent to which their rights were violated or the supposed reasons why — the black box of the algorithms prevents transparency. That’s if they even make it to their destination: because surveillance so often means an interrupted journey, studies show that refugees who know these systems are in use are more likely to choose more dangerous, circuitous routes to evade detection, and death often meets them on the road in lieu of liberation.

While the United Nations makes clear that all of the 150 million-plus people who live outside their home countries as migrants or refugees are “highly vulnerable to racism, xenophobia, and discrimination,” not everyone is treated the same. In some countries, like the United States where I’m based, migrants and refugees of color face additional challenges due to racism. For example, while Black immigrants made up just 6 percent of the nation’s undocumented immigrant population from 2003 through 2015, they accounted for 10 percent of the immigrants forced into removal proceedings in that period. And a 2023 study found that 35 percent of Black and 31 percent of Latine immigrants in the United States reported being discriminated against in public compared to U.S.-born people, while just 16 percent of white immigrants reported the same poor treatment.

Racism. Xenophobia. Climate Crisis. Exploitative Labor. Surveillance. Immigration. Legal Status. Human Rights. Everything is connected.

Those connections form the core of my research, as we examine how AI impacts our daily lives — and how we can impact that impact. Today, I’m chatting with Eliot Bendinelli (he/him), who serves as director of the Corporate Exploitation Programme at Privacy International, where he works at the intersection of technology and human rights, challenging exploitative use of tech, disrupting Big Tech’s mission to absorb the competition, and examining how AI is deployed in workplaces.

Here, Bendinelli and I expand this discussion to talk about the folly of moving fast and breaking things, why global problems require global solutions, and what would happen if we deleted the datasets used to train AI.

Portrait photograph of Eliot Bendinelli

Rankin: Why is it important to you to advocate for a world where AI supports our rights, rather than circumvents them?

Bendinelli: AI is in many ways similar to other technologies that have emerged over the last decade: it processes large amounts of data to produce an output that impacts the real world. As such, it should be subjected to the same rigorous oversight and safeguards we aim to apply to other systems.

The problem is, a small number of corporations are in a position to profoundly shape the development of this technology. They are the same companies that, enjoying a lack of regulation in the early 2000s, created Web 2.0: an internet of closed digital markets they dominate. That internet enabled a series of human rights harms: vast surveillance programs coupled with a lack of security safeguards that put people at even greater risk, election influence through profiling and targeted political ads, and rising inequality made worse by increased surveillance of often-targeted populations. These were the results of a technology developed in service of its creators’ interests rather than in service of society.

Considering this track record, it is crucial to ensure AI is developed, deployed, and used in conditions that meaningfully support human rights. Protection and safeguarding of human rights must be built in by default through strong regulation.

We want to build a future where technology supports, augments, and serves us rather than works against us. If a useful autonomous AI agent in your smartphone is the dream, then it must come with the guarantee that the knowledge it has access to won’t be used to surveil you or limit your ability to exercise your rights.

Rankin: Why is collaboration key to the fight?

Bendinelli: AI is more than ever a global technology. The hardware it relies on exists thanks to a globalized supply chain with strong interdependencies. Meanwhile, the data it is trained on comes from massive datasets, including heaps of data scraped from the internet and processed by workers in the Global South. These dependencies call for a globalized, intersectional approach to the challenges this technology raises: national security policy discourses, data protection and intellectual property questions, power concentration issues, and security and safety risks. Given the complexity of the technology and its infrastructure, those challenges can only be met through collaboration and joint effort. It’s crucial that we address AI challenges across the landscape rather than tackle them in silos.

Rankin: What are some ways AI can support human rights?

Bendinelli: AI is an exciting technology, and we should be free to imagine beneficial use cases and dream of a world where it shifts power and strengthens rights and autonomy. But it’s important to keep its ongoing impact on human rights in mind. Potential benefit must not become a reason to ignore the exploitation of people’s labor and data that enables AI’s development. Ensuring innovation without infringing human rights would represent a greater achievement than anything an AI system is likely to do. This means growing our ability to build technology collaboratively, while acknowledging and addressing concerns as they arise rather than running forward and launching potentially harmful systems.

Rankin: If you had a magic wand, what is the first thing you would do with it?

Bendinelli: I would delete all the datasets used to train AI (with an exception for community-led projects) and the models trained on them. Big Tech companies are consolidating their dominant position in the AI market because they enjoyed a first-mover advantage, scraping the internet with little regard for data protection and intellectual property law to develop their models. Destroying that advantage and forcing them to start from scratch amid active regulation and debate would oblige them to innovate in a way that is more respectful of human rights and subjected to greater scrutiny.

Rankin: That’s one way to hit the reset button! Until someone gets that wand in your hand, Business & Human Rights Resource Centre’s Technology Company Dashboards are a good resource for uncovering how the practices of tech companies around the world support — and threaten — human rights.

This post is part of a series that explores how AI impacts communities in partnership with people who appear in our AI Intersections Database (AIIDB). The AIIDB maps the spaces where social justice areas collide with AI impacts, and catalogs the people and organizations working at those intersections. Visit the AIIDB to learn more about the intersection of AI and Human Rights.

