Tech giants and new startups are betting big on generative AI. Apps like ChatGPT, Google’s Gemini, and Microsoft Copilot are powerful: they can quickly write paragraphs, summarize long walls of text, and create images. But there are downsides, too: AI hallucinates facts, makes it easy to generate convincing misinformation, and carries a climate cost. And then there are the privacy implications.

In the past, Mozilla’s *Privacy Not Included experts have studied the privacy of popular apps, cars, and even “mature” toys. For their next venture, the group is studying ChatGPT and other generative AI products. The *PNI team is knee-deep in research and, even though the guide isn’t out yet, they’ve noticed a few recurring storylines. Here’s what stands out most to them.

AI App Privacy Worry #1: Constant Surveillance

Big tech companies are eager to include AI in their most popular products. Meta, for example, brags that users of its smart glasses can simply look at signs in a foreign language and translate them with ease. The idea of “AI but make it glasses” has its perks, but it comes with privacy worries.

“Wearable tech like Google Glass or more recently Meta’s Ray Ban sunglasses are always on the cusp of becoming popular,” says Jen Caltrider, *Privacy Not Included program director. “The problem is many useful features require constant listening and constant video processing. The ability to be in a foreign country and have a conversation with anyone or even the idea of walking down the street and asking your glasses ‘who is this person, I forget their name’ is useful but also terrifying.” Zoë MacDonald, a content creator on the *Privacy Not Included team, points to reports of Harvard students showing just how possible this is right now. “We could soon live in a world where everyone is a surveillance drone, not just with phones in our pockets but with recording devices on our faces,” says Jen.

AI App Privacy Worry #2: The Privacy “Gray” Area

You can’t always expect that a company is doing everything it can to protect your privacy. Sometimes, the things we assume are private might not be.

Zoë points out a disappointing truth. “Always check the fine print on things like DMs,” she says. “I’m always looking for how companies handle information that lives in a gray area. Not necessarily personal information, but information you might expect to be private. Direct Messages are a perfect example.” In the AI world, the “gray area” data that concerns the *Privacy Not Included team is prompt information. “Think of the information you’d enter into a chatbot or photos or documents you’d upload. It’s surprisingly difficult to find out how that information is processed, where it’s stored, or where it’s shared.” Companies may bury this information in their privacy policies and terms of service agreements, so it’s rarely easy for users to find, and that’s often by design.

AI App Privacy Worry #3: Transparency Overload

In some ways, AI applications like ChatGPT, Copilot and others aren’t very transparent about how they work. In other ways, they may be a little too transparent — at least when it comes to understanding the privacy you’re entitled to on each platform.

“From a consumer perspective, it can be difficult to understand what’s going on with these AI models,” says Jen. “ChatGPT, for example, has 18 privacy documentation links consisting of privacy policies, usage policies, terms of use documents and more.” Jen also points out two pieces of documentation that are popular in the AI landscape. One is a model card, essentially a white paper for a specific machine learning model. The other is a system card, a page explaining how a group of machine learning models, and sometimes non-AI software, work together in an AI system. All of this information surrounding an AI product can prove useful but also dizzying. As Jen puts it, “I’m more confused than when I started!”

AI Is Moving Fast, With Little Time To Audit

Jen and Zoë share similar worries when it comes to AI products and your privacy: by the time the team analyzes one AI product, there’s a new update that sends them back to the drawing board. Then there are the ethical questions. Some AI models have been trained on information pulled from across the internet, usually without permission (“Too often!” says Jen), which is why publications like The New York Times are suing OpenAI. Despite lacking proper permission, AI companies continue to ship, promote, and iterate on their products.

Constantly changing AI models and ethical questions around training are just two potential problem areas. A third is hallucination, the worry that AI will confidently state things that aren’t true. Zoë notes that Microsoft Copilot’s solution for this is a reminder in fine print at the bottom of its page telling the user to “check for mistakes.”

“My question is, where do I check for mistakes?” asks Jen. “Am I supposed to Google it? Go to another AI and ask? It’s almost farcical that they bill themselves as useful while also admitting that they’re potential BS artists.”

All three issues underscore the importance of proper AI auditing. Letting experts examine the ins and outs of AI systems could help ensure that these tools benefit society more than they harm it. Creating AI worth trusting will require a lot more than end users “checking for mistakes.”


Written By: Xavier Harding

Edited By: Audrey Hingle, Jen Caltrider, Tracy Kariuki, Zoë MacDonald

Art By: Shannon Zepeda

