2024 is a year of elections. About 50 countries, home to roughly half the world's population, go to the polls this year. 2024 is also a year of generative AI. While ChatGPT launched at the end of 2022, this year marks more widespread adoption, more impressive levels of capability, and a deeper understanding of the risks this technology poses to our democracies.
We have already seen applications of generative AI in the political sphere. In the U.S., deepfakes of Biden have been used in robocalls discouraging voting and in audio of him uttering transphobic comments, and fabricated images have shown Trump hugging Dr. Anthony Fauci. Elsewhere, generative AI has been deployed to manipulate elections in Argentina and Slovakia.
Recognizing these risks, major players in generative AI are taking a stance. OpenAI's usage policy explicitly disallows: "Engaging in political campaigning or lobbying, including generating campaign materials personalized to or targeted at specific demographics."
Further, in a recent blog post, the company states: "We’re still working to understand how effective our tools might be for personalized persuasion. Until we know more, we don’t allow people to build applications for political campaigning and lobbying."
Unfortunately, it appears that these policies are not enforced.
In a simple experiment using an ordinary ChatGPT Plus subscription, it took us only about five minutes to generate personalized campaign ads relevant to the U.S. election. The prompts we used are below, followed by the content that ChatGPT generated.
There are various ways that OpenAI might attempt to enforce these policies. First, they can use reinforcement learning to train the system not to engage in unwanted behavior. Similarly, a “system prompt” can tell the model what sorts of requests should be refused. The ease with which we were able to generate this material – no special prompting tricks needed – suggests that these approaches have not been applied to the campaign material policy. It is also possible that potentially violative use is flagged for review by moderators. OpenAI’s enterprise privacy policy, for example, states that they “may securely retain API inputs and outputs for up to 30 days to provide the services and to identify abuse” (emphasis added). We ran this experiment on January 15 – the day OpenAI published the blog post outlining its approach to elections. We have seen no response from OpenAI as of yet, but we will continue monitoring for any action. [1]
In any case, OpenAI is not the only provider of capable generative AI systems. There are already services specifically designed for political campaigns. One company even bills itself as an “ethical deepfake maker”.
As part of our work at Open Source Research & Investigations, Mozilla's new digital investigations lab, we plan to continue exploring this space: How effective is generative AI at creating more advanced political content, with compelling images and graphic design? How engaging and persuasive is this content, really? Can messages be effectively microtargeted to an individual’s specific beliefs and interests? Stay tuned as our research progresses. We hope OpenAI enforces its election policies – too often in the past, we've seen the harms enabled by unenforced policies on online platforms.
[1] At Mozilla, we believe that quashing independent public-interest research and punishing researchers is not a good look.
Jesse McCrosky is an independent researcher working with Mozilla’s Open Source Research and Investigations team.