Today’s sophisticated digital ads surveil us, marginalise communities, and carry serious environmental costs. How can we build a better online ecosystem?


Harriet Kingaby is a 2019-2020 Mozilla Fellow.

If you’re working on any of the topics outlined in this article, please get in touch at [email protected]. I am also running workshops in London, Delhi and Singapore. If you’re interested in attending, please let me know.


In environmental science, the ‘Tragedy of the Commons’ is a concept that concerns shared resources (such as a piece of ‘common’ land, or the air we breathe), which cannot be regulated by conscience and altruism alone. The theory goes that some individual users will always spoil or exploit the resource by acting in their own self-interest, contrary to the common good of all users. These ‘free-riders’ create costs for others, which must be discouraged through intervention. This theory underpins much environmental regulation, from preventing companies from dumping waste into streams and rivers, to encouraging cycling and walking to improve air quality.

The internet is also a shared resource, and the same patterns we see in the physical world apply online, too. Here, pollution occurs via an unscrupulous digital advertising ecosystem: big platforms dependent on an advertising business model exploit and twist the development of the online commons to prioritise commercial interest. Information evolves to suit advertising better, and fraudsters and megalomaniacs game the system. This commercial bias favours content designed to provoke a quick reaction or interaction, and content capable of reaching millions. The result? Journalism that reads more like clickbait, and fake news sites that spread misinformation like wildfire. Unethical organisations engineer ad fraud through fake clicks and bots, funding organised crime to the tune of billions.

The consequences of this online pollution are far-reaching, from the decline of quality journalism to new types of discrimination. Not only has this bias distorted the quality of information we find online, but it has also degraded user experience. A range of issues, from data privacy concerns to infuriating pop-ups and ad formats, has led to ad-blocking so prevalent that 48% of 16-34 year olds now use the software. But as people fight to exert control over their internet experience, they may be unwittingly exacerbating its decline: ad blockers deprive already stretched publishers of precious revenue, threatening the sustainability of quality journalism sites.


The current system could also be making it harder for marginalised communities to have their voices heard. Crude ‘block-lists’ (lists of keywords used to ensure ads don’t appear next to unsuitable content) are demonetising content from some communities altogether, causing minority publications to close or seek alternative funding models. Outvertising’s Jerry Daykin and Christopher Kenna point out that 73% of safe LGBTQ+ content is rendered unmonetisable under current blocklists, while a recent Vice investigation showed that keyword exclusion lists include generic terms like ‘lesbian’ or ‘muslim’ more often than terms such as ‘murder’. Citing ‘brand safety’, many brands are redrawing the lines around what they will and won’t be seen next to, sometimes with dire consequences for minority group publications and hard news. As Kenna puts it: “Optimising the internet for a straight, white audience.” Even brands with strong strategies around diversity and inclusion are struggling to ensure that the placement of their advertising delivers on their strategic priorities. Tools that allow them to quantify or measure the holistic harms and benefits of their advertising are lacking, and need to be developed.
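To see why these lists over-block, consider how a naive keyword filter works. The sketch below is a hypothetical illustration (the block-list terms, headlines and `is_monetisable` helper are invented for this example, not taken from any real brand-safety vendor): it simply rejects any page whose headline contains a blocked word, with no regard for context.

```python
# Minimal sketch of a crude keyword block-list, assuming a simple
# bag-of-words check. Terms and headlines are hypothetical examples.

BLOCKLIST = {"lesbian", "muslim", "murder"}

def is_monetisable(headline: str) -> bool:
    """Naive brand-safety check: block the page if any listed term appears,
    regardless of the context in which the word is used."""
    words = {w.strip(".,!?'\"").lower() for w in headline.split()}
    return words.isdisjoint(BLOCKLIST)

headlines = [
    "Lesbian couple open community bookshop",   # safe content, but blocked
    "Muslim charity wins volunteering award",   # safe content, but blocked
    "Ten gadgets to buy this holiday season",   # monetisable
]

for h in headlines:
    status = "MONETISABLE" if is_monetisable(h) else "BLOCKED"
    print(f"{status:12} | {h}")
```

A context-aware approach would instead classify the page as a whole, rather than penalising individual words.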

And it’s not only the online world that suffers. Greenpeace estimates that the global IT industry has a carbon footprint comparable to the aviation sector’s, just slightly less than that of the USA or China. Every spam email, autoplay video ad, or misplaced ad on a site serving climate change denial contributes to the ever-increasing amount of carbon dioxide in the atmosphere. Increased use of machine learning to tailor and target these adverts may well place an additional load on the system, unless countered with technology that optimises targeting and user experience, and combats fraud. OpenAI recently reported that “Since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5 month doubling time.” All of this is extremely problematic for companies that are increasingly held to account for inconsistencies between brand and behaviour, including corporate values, purpose, and sustainability targets. They, and the consumer groups that hold them to account, need better tools to help them assess risk and identify inconsistencies before implementing a technology or technique.
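To put that doubling time in perspective, here is a back-of-the-envelope sketch of what exponential growth at that rate implies. The numbers are purely arithmetic illustrations of the OpenAI figure quoted above; the baseline is an arbitrary unit of compute, not a measured energy figure.

```python
# Back-of-the-envelope: growth implied by a 3.5-month doubling time.
# Illustrative arithmetic only; the baseline of 1.0 is an arbitrary unit.

DOUBLING_TIME_MONTHS = 3.5

def growth_factor(months: float) -> float:
    """How many times larger compute demand becomes after `months`."""
    return 2 ** (months / DOUBLING_TIME_MONTHS)

for years in (1, 2, 5):
    factor = growth_factor(years * 12)
    print(f"After {years} year(s): roughly {factor:,.0f}x today's largest training runs")
```

At that pace a single year means roughly a tenfold increase, and five years a factor of well over a hundred thousand, which is why the energy sourcing of AI suppliers matters.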

To top this all off, many people face these challenges without adequate legal frameworks to protect their human rights, the environment, or their personal data. And 43% of the world’s population does not even have online access; onboarding them into the online world as ‘newbies’ in its current state could create challenges and opportunities not yet considered by governments, brands or civil society. As we stand on the brink of an AI revolution, in which smart cities, AR, facial recognition, voice-controlled devices and machine learning will change the shape of the online world, and the digital advertising that funds it, we have to ask ourselves: what should the future of the internet look like? And what role can advertisers play in making that vision a reality?

Fortunately, conversations about reimagining the internet are already in full swing. Tristan Harris of the Center for Humane Technology argues for a ‘downgrading tax’ to make business models based on addiction and other harmful behaviours untenable. Sleeping Giants co-founder Nandini Jammi highlights the need for brand safety to evolve well beyond current accepted criteria, and for tech platform CEOs to re-evaluate their stance on ‘freedom of speech’. Element AI is calling for a ‘human rights approach’ to designing and implementing AI in business.

Still more are working on the current advertising ecosystem. The Conscious Advertising Network has created manifestos which its members use to mitigate the inadvertent funding of hate speech, protect children’s welfare, and tackle ad fraud. CAN member Good Loop creates ‘opt-in’ advertisements that benefit charities, and Fenestra’s blockchain technology helps advertisers detect and mitigate fraud in real time. Meanwhile, Mozilla Fellow Richard Whitt, founder of GLIAnet, is developing a ‘personal AI’ to help us manage and broker our online experience. However, all of this needs to be supported by brands with responsible advertising strategies that seek to maximise the good ad money can do, rather than simply minimising the bad.

With a more proactive approach, the future could be incredibly bright. Advertising funds large swathes of the internet, and much of the great content, quality journalism and creative work we see online is possible because of this revenue. eMarketer estimated that $330bn was spent on digital advertising in 2019, rising to $385bn in 2020. What if we reimagined this money as a resource that could be used to rebuild the internet? How would we spend it: funding quality, challenging journalism? Rewarding platforms that promote debate, or genuine connection? Ensuring that minorities have a voice, and that creators get the exposure and revenue they need to thrive?

Obviously, I’m not talking about a world where corporate money and interests completely dictate the kind of content we see online. Alternative approaches such as micropayments based on attention, subscriptions, and other funding models can also ensure the existence of diverse, challenging and necessary content. But more thought about how and where we spend our advertising money would pay dividends for brands, consumers and society, creating more meaningful relationships and helping to rebuild a failing internet, in the same way that proactive initiatives by well-funded brands have boosted the sustainable cotton and renewable energy markets and detoxed denim production.

My work: minimising harm and maximising benefits

To do this, we first need to understand the potential harms and benefits of our advertising. My research focuses specifically on AI-enhanced advertising, so let’s start there. Targeting techniques that leverage machine learning, for example, might provide a more accurately personalised advertising experience for our customers, but use huge amounts of energy. A new campaign that combines facial recognition with ‘digital out of home’ might win awards, but cross the line in terms of personal privacy for some. We’ve already seen the kind of controversy that can be stirred up when brands cross that line: Burger King’s 2017 Google Assistant hijack was ingenious, but too intrusive for some.

The question is: how do we balance these potential harms and benefits so that our advertising money does more good than harm, becoming both more effective and ‘net positive’ in its impact on the world?

A net positive system ‘gives back to nature and society more than it takes’ over its life cycle, ensuring, in the words of Tim Berners-Lee, that the “pursuit of short-term profit is not at the expense of human rights, democracy, scientific fact or public safety”. Net positive cannot be achieved for AI-enhanced advertising via the current system, which all too often relies on dubious consent mechanisms, interruption and data harvesting. Instead, we must think about how and where we deploy our AI, and the design principles for doing so. My research focuses on the development of a benefits and harms matrix that will help organisations, from brands to consumer groups, to weigh potential benefits and harms against each other.

A harms and benefits matrix will help advertisers make better decisions about whether an AI-enhanced advertising technology or technique will create a net positive or a net negative impact for their brand and consumers. The potential benefits of AI technology are well communicated by the industry. However, more joined-up thinking is needed to help a marketer identify whether a job advert campaign on a social platform, optimised by machine learning, may reach more people but inadvertently contravene their diversity strategy. Or whether they should ask about renewable energy policy as a key priority when taking on a new supplier of AI technology, particularly if their audience is Gen Z, or their brand has science-based carbon targets. Understanding cultural and market variance is important too. AI might be great for optimising content for a particular audience in India, for example, but the brand should be aware of digital literacy issues, to ensure they’re not being exploitative.
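As a thought experiment, such a matrix could be expressed as a weighted scorecard. The sketch below is a hypothetical illustration, not the matrix my research will produce: the dimensions, weights and scores are invented to show how a campaign that performs well on reach could still come out net negative once diversity, energy use and privacy are weighed in.

```python
# Hypothetical sketch of a harms-and-benefits matrix as a weighted scorecard.
# Dimensions, weights and scores are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Dimension:
    name: str
    weight: float  # how much this dimension matters to the brand (0 to 1)
    score: float   # -1.0 (serious harm) to +1.0 (clear benefit)

def net_impact(dimensions: list[Dimension]) -> float:
    """Weighted sum across all dimensions; positive means net positive."""
    return sum(d.weight * d.score for d in dimensions)

# Example: a machine-learning-optimised job advert campaign on a social platform.
campaign = [
    Dimension("reach and relevance",        0.8, +0.7),
    Dimension("audience diversity",         0.9, -0.6),  # optimisation may skew delivery
    Dimension("energy use of ML targeting", 0.5, -0.4),
    Dimension("user privacy",               0.7, -0.2),
]

impact = net_impact(campaign)
verdict = "net positive" if impact > 0 else "net negative"
print(f"Net impact score: {impact:+.2f} ({verdict})")
```

In this example the campaign’s reach cannot offset the weighted harms, so the matrix would flag it for redesign before launch.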

On the flip side, transparency around harms and benefits can increase accountability. Consumer groups are increasingly engaging with debates around both advertising and AI, and are critical of the role played by private companies in ensuring consumer protections in their development. A tool such as this would allow them to ask informed questions of brands using this technology, and to challenge where appropriate.

Key to embedding these changes is a mindset shift, much like the one that has happened at purpose-driven corporations such as Unilever, or in the B Corp movement, where businesses see themselves as part of something bigger: stewards of a future in which we can all thrive, with access to an internet that’s open, diverse and accessible. Online advertising funds the internet, and with that great power comes great responsibility. The digital brands of the 21st century must embrace this responsibility, interrogating overblown promises around new technology, and implementing AI and advertising technology in ways that consider humans, the internet and the environment, to build a better online future for all.

