As a Mozilla Fellow, I’m looking for ways to challenge and subvert inaccurate narratives about AI. Help support my research and join in by taking this survey.


There is no shortage of people out there extolling the benefits and promise of AI, and there are almost as many people decrying its abuses and warning about the dystopian futures it could help create. There are also lots of people turning those dystopian predictions into depressingly viable businesses.

As unbridgeable as the gulf may be between the techno-solutionist-utopians and the sceptics, there does seem to be broad agreement that any AI we develop will be significantly safer if it’s developed and deployed in an open, transparent manner (there are exceptions, of course).

Unfortunately, many things get in the way of this open and transparent ideal, with the result that AI systems become opaque. According to Cathy O’Neil, it is this opacity, combined with the scale on which AI systems can operate, that turns them into ‘weapons of math destruction’ when used in sensitive contexts such as healthcare, social welfare, or financial services.

AI can be opaque for a number of reasons: on the technical side, we have the infamous ‘black box’ of deep neural networks; on the legal side, we have companies refusing to release details about systems due to vigorously guarded ‘trade secrets’. Beyond such technical and legal causes of opacity, however, there are more mundane, but no less damaging, ways in which AI systems are made opaque.

Although less technically mystifying and futuristic-sounding than the architecture of deep neural nets, and less legally intricate than the debates around trade secrets and public disclosure, some of the most prolific and insidious causes of AI opacity are hype, myths and inaccuracies.


--

So how is it that AI becomes opaque through myths, hype and inaccuracies? As an illustration, let’s start with the term ‘artificial intelligence’ itself. The term was first adopted by a group of researchers at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 (earlier terms included Alan Turing’s ‘intelligent machinery’).

However, even the participants of the Dartmouth workshop did not agree on the term ‘artificial intelligence’. Allen Newell and Herbert Simon, the only two participants who (along with Cliff Shaw) had actually built a rudimentary ‘thinking machine’ at the time, initially preferred the significantly less hype-prone term ‘complex information processing’. In a recent review of Stuart Russell’s new book about the dangers of ‘Superintelligence’, David Leslie wonders “what the fate of AI research might have looked like had Simon and Newell’s handle prevailed.” Indeed, we might ask whether the new European Commission would have proposed a new law on ‘complex information processing’ within the first 100 days of its mandate.

We could also wonder how things might sound if we replaced the term AI with more precise, context-specific phrases: instead of ‘optimising social services with cutting-edge AI’, would we be so enthusiastic about ‘firing human case workers and replacing them with expensive, opaque, proprietary correlation-based decision support systems’?

The terms we use to describe this technology shape our understanding of it. We come away with a very different impression of the state of technological development when we hear that “an AI system has discovered a new type of drug” than when we hear the more accurate statement that “a team of researchers have used machine learning to help speed up the discovery of a new type of drug.” We need to ask whose interests are served by overselling this technology, and what the fallout will be when it fails to meet those inflated expectations.

Another clear example of an insidious inaccuracy about AI can be found in how we represent it visually. Time and again, we find stories about AI in the media accompanied by bewildering pictures of robots.

Although we could have an interesting philosophical discussion about the precise meaning of the term ‘robot’, I think we can all agree that chatbots certainly don’t need to use keyboards. Often this misrepresentation is stupid in a funny way, like the linked tweet, but there is a far more complex conversation to be had about the gendered and sexualized representation of robots in many of these images (a problem that has also been highlighted in voice assistants).

The aim of my Mozilla Fellowship project is to deconstruct and counter these myths, narratives, and representations of AI. In some cases this can involve straightforwardly refuting a piece of incorrect information (“Neural nets work exactly like the human brain”), rebutting a fallacious argument (“We can accurately infer emotion from facial analysis”), or simply pointing to counterexamples that undermine ungrounded claims (“Regulation always kills innovation”).

In other cases, things will be more complex. There will never be, for example, one single correct definition of ‘artificial intelligence’, or one perfect term to replace it. At the same time, we can analyse the different meanings, and understand what interests are served by certain usages. There is also no simple way to counter the dodgy robot pictures, but perhaps we can achieve something through documenting them, analysing what they mean, and working with designers and artists to propose alternatives.

There is no shortage of AI hype, myths and inaccuracies out there, so what I want is input about which myths are most harmful in the here and now, and suggestions for how to counter them and who to work with to do so. My aim is not to reinvent the wheel, so if you know of someone already doing good work towards this aim, let me know and I can amplify their work. There are already experienced AI researchers helping to cut through the marketing/misinformation; people exposing the hidden labour and power dynamics behind the glossy storefront of automation; and even guidelines for how reporters can do a better job of informing the world about AI development.

To offer your input on which myths I should focus on, how to tackle them, and who to work with, take a look at this form. You’ll find a list of myths and inaccuracies about AI, and have the chance to suggest some of your own.

