Have you ever been on the receiving end of a bureaucratic process that seemed to defy logic? I’m Mozilla Fellow Richard Whitt, back with part two on The Future of Personal AIs. Last time, we talked about the artificial intelligence (AI) that operates behind our online screens and in our environmental scenes. In this episode of Mozilla Explains, we’re discussing how AI operates in what I call bureaucratic unseens. Watch the episode below, or read on for more.

Bureaucratic unseens are the advanced systems that make decisions about us, usually without our input or even our knowledge, often on behalf of large institutions like governments or corporations. They influence what jobs we get, what bank loans we receive, even which government benefits we are eligible for – from voter registration to Social Security payments. Countless such decisions are happening every day, even if we are not aware of them.

Even though they’re presented as helpful or labor-saving, the AIs behind these ‘unseens’ represent large corporations or government agencies, collecting our data, analyzing us, and making hugely consequential decisions about our welfare. In the past, human beings were responsible for making these decisions. Today, computer algorithms rigidly prioritize process, rules, and procedure, often without human oversight or judgment. These systems can unintentionally employ highly biased algorithms and flawed data sets. And all this happens without us having any legitimate way to appeal or object.

Last time, we introduced the concept of a Personal AI (PAI), an AI that truly represents our best interests, rather than those of a data-hungry corporation. These AIs can be programmed to represent us as individual human beings, not data points. So they could protect our privacy, security, and identity, all while promoting our best interests.

Personal AIs can also be our trusted agents interacting with bureaucratic unseens — by communicating directly with Institutional AIs on our behalf.

For example, my PAI could ask a government or corporate AI for the specific reasons behind a particular decision, like whether I was approved for a loan, selected for a job interview, or had my prison sentence lengthened. My PAI would then be able to ask informed questions, assess the same evidence, query the outcome, and even directly challenge the analytical and data-based rationale for such decisions.

How do we get there from here?

We need the technology, the marketplaces, the standards, and the laws to align, so that governments and companies can accommodate these new citizen tools.

In particular, we need to work with our elected officials, regulators, and government agencies to establish two new rights: a “right to delegate,” letting us entrust important decisions about our online interactions to a trusted third party, and a “right to query,” letting us use our PAIs to question and challenge the institutional AI systems making important decisions about us and our families.

Want to go more in depth on this topic? You can read my paper Democratizing AI: Ensuring human autonomy over our computational “screens, scenes, and unseens”.