We’re turning our research and advocacy attention to deceptive user interfaces

What is deceptive design?

Deceptive design refers to manipulative online user experiences that steer people toward unintended and potentially harmful decisions, to the benefit of an online service. You’ve surely encountered deceptive design at some point - most likely while online shopping, planning a holiday, or reading the news. These tricks range from irritating nudges to accept website cookies and maze-like barriers to canceling a subscription, to false scarcity claims about products in your online basket or hotel reservations.

There are whole taxonomies illustrating deceptive design practices, compiled by academics, designers, and regulators. [1] But behind playful-sounding terms like “roach motel” or “sludge” lie powerful techniques with proven effectiveness. In the gaming industry, business is built on deceptive design tricks like in-game currency or loot boxes. In other industries, deceptive design can entail a free trial that leads to an automatic paid subscription, and layers of sludge then make terminating that subscription a nightmare.

Deceptive design certainly exists in the offline world too. (Who decided to turn the lights off in casinos [2] or put the candy bars at the checkout counter?) But the potential for personalized deception online through data collection and machine learning enhances the risk of harm. Individualized deceptive design practices online that influence our behavior in subtle ways are also more difficult to identify and document, posing a challenge to researchers and regulators.

Deceptive design harms some people more than others. Someone might be especially vulnerable to a design trick due to personal characteristics like age, health, wealth, or digital literacy; due to a temporary circumstance like employment status or lack of time; or due to a recent event like bereavement. And when the more vulnerable are targeted by deceptive design, it’s no coincidence. As the European Consumer Organisation BEUC explains, vulnerability in online commerce is deeply asymmetric and also dynamic: the seller can understand the consumer’s personal profile, potential biases and pressure points, and can construct an environment to exploit these.

A complex challenge with wide-ranging harms

The harms of deceptive design practices are wide-ranging, from privacy invasion (like when you are nudged into accepting trackers or sharing unnecessary information about yourself) to personalized pricing, which can be discriminatory. Tricks like “drip pricing” (initially revealing only part of the price) have been shown to have market-altering effects - pushing sellers to compete over misleading headline prices rather than actual prices.

Research by Mozilla has found that the design architecture of operating systems can push people away from choosing their own browser, which has cascading effects related to privacy and cybersecurity, not to mention competition in the industry. We’ve also done research on YouTube’s user controls, which give people a false sense of agency when navigating YouTube’s recommendation algorithm. This research has big implications for regulators, since deceptive design is a common circumvention technique, whereby companies meet a surface level of compliance without committing to meaningful change for end-users.

Speaking of regulation - where are we on that?

Deceptive design sits at the intersection of privacy, consumer welfare, competition, and behavioral science, which makes it difficult to address from a single angle.

Last spring, the European Data Protection Supervisor released guidelines on deceptive interfaces for social media platforms, clarifying potential violations of the EU General Data Protection Regulation (GDPR). Meanwhile, the EU’s Directorate-General for Justice and Consumers, the UK’s Competition and Markets Authority, the U.S. Federal Trade Commission, and a number of other consumer protection agencies and competition authorities have analyzed the effects of various deceptive design practices on individuals, businesses, and markets.

References to deceptive design practices are scattered across the UK Age Appropriate Design Code, the California Privacy Rights Act, and the EU Digital Services Act and Digital Markets Act, and the notion of subliminal techniques used to manipulate people appears in the proposed EU AI Act. [3] But deceptive design practices are slippery to catch, since it’s difficult to prove an intention to deceive. That moment when you made a choice you wouldn’t have otherwise - that moment is really only perceptible to you. Companies that A/B test the effects of different design decisions on user choice might have this kind of evidence, but it’s not in their interest to share their secret sauce. There is also a deeper regulatory challenge: the slow work of deceptive design, which encourages certain behaviors and entrenches existing preferences over time, is especially hard to measure.

What can you do?

First off, if you happen to be a designer, you are in a position of power, since you can choose to build more trustworthy user experiences. Mozilla is providing input as the World Wide Web Foundation’s Tech Policy Lab tackles deceptive design and builds trusted design patterns. If you’re a design teacher, even better - why not offer a course on behavioral science and ethical design? You might also consider joining the Responsible Computer Science Challenge community.

If you work in policy or consumer protection, ensure that existing legislation in your country is able to account for deceptive design practices online. The European Commission is currently updating its consumer protection framework and this is a great place for those in the EU to start.

And everyday internet users: If you see something, say something! Document your experiences with deceptive design and share them with your regulator (likely your consumer protection agency) or with experts. The FTC is increasingly interested in this in the U.S. The Dark Patterns Tipline encourages people to report their experiences, and the UXP2 Lab is collecting data for its research corpus. You could also tweet at Harry Brignull, one of the first designers to bring attention to the issue. And of course you should reach out to us at Mozilla. In 2023 and beyond, we’ll be expanding our research and advocacy focus to deceptive design practices online. Get in touch: [email protected].


[1] The term “dark patterns” is also sometimes used to describe these techniques, and it does appear in existing legislation. Mozilla and others avoid the term “dark patterns” since it reinforces the association of “dark” and “bad” tied to white supremacy. Further, deceptive design practices often involve bright and flashing colors, which makes the term potentially confusing.

[2] Natasha Schüll, author of the book Addiction by Design, argues that some tech companies have borrowed techniques from the gambling industry to keep users engaged.

[3] The EU AI Act proposal specifically aims to address subliminal techniques.