Like many other big tech companies, Apple has officially announced its entrance into the generative AI race. At its worldwide developer conference, WWDC 2024, the company introduced Apple Intelligence, Apple's way of weaving AI into the operating systems of the iPhone, iPad and Mac. iOS 18 beta users can't access Apple's AI just yet, but over the next year users will be able to have AI rewrite or summarize paragraphs of text, generate images and even create custom emoji based on a short prompt, like a dinosaur on a surfboard wearing a tutu.

Almost as important as surfing dinosaurs are the privacy features Apple announced. For years now, Apple has touted itself as the privacy-conscious tech company, and it's clear the company hopes the same will be true for its AI era. Apple claims that many queries will happen locally, on-device. This approach is more privacy-preserving because your interactions with AI won't travel across the internet and end up in the hands of a big tech company, even one like Apple. If a query does leave your device, Apple says it will only ever live in a sealed-off, secure section of its cloud. (Apple calls this feature "Private Cloud Compute.") If it's a question that Apple's AI tools still struggle with, your phone will ask you if it can kick the question over to ChatGPT.

iOS 18 beta users may miss out on the fun for now, but what will the full release of iOS 18 have in store? How does Apple's AI offering stack up in terms of privacy? Is Apple doing all the right things when it comes to limiting how much of its AI users' personal information ends up out there? What else could Apple be doing to raise its users' level of privacy? We asked around Mozilla Foundation for folks' thoughts:

Jen Caltrider, Program Director of Mozilla’s *Privacy Not Included

Apple entering the AI field was inevitable. As the maker of Siri, it just makes sense that the company would want to up its game with generative AI. As one of the biggest companies in the world, Apple can hopefully use its power and reputation to help build better, more ethical generative AI applications. That part remains to be seen.

We still can't be sure if Apple is doing things the right way or not. They do make some promises that sound good and I hope they keep them. But the fact remains that AI tools must collect a lot of data to work properly: the more they know about you, the more useful they are. And when that personal information leaves your device and, thus, your control, you're expected to trust that Apple (or ChatGPT, or who knows who else down the road) will secure it, protect it and respect it. That is a lot of trust in a company that, while better than many, is still always looking out more for its bottom line than for your privacy and safety.

Why haven't we seen generative AI tools that attempt to teach users how to best protect their privacy? And I don't mean an annoying list of tips to follow. Instead, how about an AI that explains to users what happens when you share your personal info with a chatbot, or even a chatbot that doesn't retain personal info in the first place? So many users are learning how to use these tools right now, and most of the tips out there are about getting the best answers or the most productivity. I have yet to see an AI chatbot designed to help a user protect their privacy. Now that would be really cool.

As for Apple's AI tools, whether or not the company can improve upon the privacy of Apple Intelligence is up in the air. Apple's updated privacy policies and terms and conditions will be telling as the company adds generative AI to more of its products. I sure hope Apple makes giving consent super clear, makes opting out extremely easy, lets users delete their data with ease, and really thinks through which AI models it decides to include in its products.

Claire Pershan, Mozilla Foundation Advocacy Lead, EU

I struggle to keep up with the wave of AI-related changes to consumer tech products all around me — not just as an advocate for trustworthy AI but simply as a consumer! In some parts of the world, Apple products are so widely used that they’re integral to modern life. With that in mind, amongst the flood of AI announcements these days, Apple’s announcement feels like an especially big one.

Historically, Apple has been really strong on the privacy front. That's their thing and, frankly, it's what consumers deserve. Where Apple lags behind is transparency and auditability: how easily experts can inspect its products. Put another way, how many open source Apple apps can you name? (Editor's note: open source meaning software that lets anyone figuratively pop the hood and poke around.) While it sounds like the company is still committed to privacy and is touting some interesting new strategies, Apple is moving into new territory with this announcement. Apple's AI offering will put more of our data in the cloud, which inherently makes upholding a commitment to privacy and security more challenging. And Apple's famous lack of transparency and openness means we're expected to just trust them on this.

At the very least, Apple is making an effort to improve auditability with what the company calls "Verifiable Transparency." This could be huge for Apple's AI efforts, and maybe huge for secure processing of data in the cloud more generally. The company is even making some progress on openness: Apple recently published new ML models and datasets on the AI community platform Hugging Face, a much more collaborative, open source approach to creating AI.

But there is still more Apple could be doing. Trust is good, but control is better. Consumers should have a clear and confident say in when and how they use generative AI tools, and in how their data gets used for the sake of AI. Consumers should also demand that Apple be even more transparent and open about these new AI integrations. Transparency and openness are not just theoretically important; they're fundamental when it comes to consumer security and protection. I'd like to see Apple collaborate even more with the open source community.

I’d also like to see Apple stop urging people to buy new hardware every year, but that’s another story.

Nik Marda, Technical Lead, AI Governance at Mozilla Foundation

So far, many consumer AI products have required users to seek them out in some way. Apple's AI offering will bring AI directly to many consumers. That's a big step toward harnessing AI's potential, but it'll be crucial to ensure that these AI integrations are safe and private for everyone.

Apple has clearly thought a lot about privacy while grappling with how to run a complex AI system directly on your iPhone. The compromise the company engineered means that some of your data will have to go to more powerful servers in Apple's cloud when you use certain AI features. It will take some time to figure out just how secure Apple's Private Cloud Compute is, and how externally verifiable it is. Even though Apple designed it carefully, it seems to stop short of being truly open source, which would let external experts properly vet its security.

I'm excited to see Apple focus on advancing on-device AI. Apple's approach seems well designed, although some of your data will still leave your phone, which adds some risk. One of our Mozilla Builders projects, Llamafile, makes it possible to run advanced AI models as a single executable file on a local device. I'd like to see approaches like this used and expanded over time so that more AI applications run locally instead of needing the cloud.
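To make that concrete, here's a minimal sketch of what fully local AI looks like with Llamafile. It assumes you've already downloaded a llamafile from the project's releases (the filename below is just an example, check the repo for current models), made it executable and launched it. By default the llamafile serves an OpenAI-compatible API on localhost:8080, so the prompt in this Python snippet never leaves your machine:

```python
# Minimal sketch: talk to a llamafile running locally.
# Assumes you've already started one in another terminal, e.g.:
#   chmod +x llava-v1.5-7b-q4.llamafile   # example filename; see the Llamafile repo
#   ./llava-v1.5-7b-q4.llamafile          # serves http://localhost:8080 by default
import json
import urllib.request

# Llamafile exposes an OpenAI-compatible chat completions endpoint on your
# own machine, so the request below never touches the internet.
url = "http://localhost:8080/v1/chat/completions"
payload = {
    "model": "local",  # placeholder; the local server runs whatever model is baked in
    "messages": [
        {"role": "user", "content": "In one sentence, why does on-device AI help privacy?"}
    ],
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

print(reply["choices"][0]["message"]["content"])
```

Everything here, from the model weights to the inference, stays on your device, which is exactly the property we'd like to see more consumer AI products adopt.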

Apple's AI Offering Is Nearly Here And Touts Privacy Features. Here's The Good, The Bad & The Ugly

Written By: Xavier Harding

Edited By: Audrey Hingle, Kevin Zawacki, Tracy Kariuki

Art By: Shannon Zepeda

