A slate of new research and tools by Mozilla Fellow Divij Joshi, titled “AI Observatory,” examines the harms of automated decision making by the Indian government. The project also provides tools and ideas for pushing back.


Across the globe, government use of automated decision making systems (ADMS) is becoming more prevalent -- and more powerful. This is especially true in India, where government agencies are automating decisions with widespread and serious implications, from unjust policing, to surveillance, to discrimination.

Despite the influence of ADMS, there’s little room for citizen oversight. Indians have no meaningful transparency into what decisions are being made, how, or why. And when these systems trample on human rights or constitutional rights, Indians have little recourse.

Now, Mozilla Fellow Divij Joshi is publishing work to highlight this problem and to push for change. A slate of new research and tools titled “AI Observatory” unpacks the social, political and technological contexts in which ADMS exists in India; documents the real-world harms that result; and provides tools to mitigate these harms.

The project was launched December 18 at an event in Bangalore hosted by the Centre for Internet and Society and Hasgeek, titled “Automated Republic: Interrogating Government Use of Automated Decision-Making Systems.” Watch the recording here.

Says Joshi, a Bangalore-based researcher and lawyer: “Despite the influence of ADMS on billions of Indian lives, there is a disturbing lack of recognition or regulation around the systems currently in use. As documented in this toolkit, many decisions consequential to individuals and communities are being delegated to algorithmic systems that pose serious concerns related to democratic control, justice and self-determination. The development of these systems is taking place in a regulatory vacuum, resulting in a situation where important considerations of transparency, accountability and democratic control are not given their due regard.”

More about the project’s three parts:

The database. AI Observatory provides a comprehensive catalogue of ADMS currently in use by public agencies in India -- from predictive policing systems in Delhi, to smart city systems in Bengaluru, to welfare algorithms in Telangana, and beyond. In total, the database catalogues more than 70 instances of ADMS.

The harm analysis. AI Observatory unpacks real-world harms across several categories, from surveillance and profiling, to dispossession, to discrimination. The analysis provides in-depth case studies showing how ADMS can upend real lives, like the extralegal use of facial recognition technology by Indian police; the Samagra Vedika system locking people out of sorely needed welfare; and the use of the Aarogya Setu contact-tracing app to make opaque decisions about who can travel where.

Tools to fight back. Joshi identifies two major tools for curbing the harms of automated decision making: transparency and accountability. With these principles enshrined in law, citizens and independent watchdogs can determine if and how ADMS is misfiring. And agencies and companies can be held accountable for the harms that result.

--

More than ever, we need a movement to ensure the internet remains a force for good. Mozilla Fellows are web activists, open-source researchers and scientists, engineers, and technology policy experts who work on the front lines of that movement. Fellows develop new thinking on how to address emerging threats and challenges facing a healthy internet. Learn more at https://foundation.mozilla.org/fellowships/.