By Jon Lloyd and Brandi Geurkink | July 19, 2019 | Advocacy
“The integrity of the upcoming European Elections and our democracies is at risk. We can’t carry on like this. The case for regulation is overwhelming.” That was a tweet sent by an MEP and leader of a key party in European Parliament just three months before the May elections.
In 146 characters, Guy Verhofstadt summarised the political appetite for the regulation of big tech, while ironically taking to Twitter to raise the alarm about the social media platforms’ impact on our democratic processes.
In anticipation of the 2019 European Parliamentary elections, major technology companies including Facebook, Google and Twitter joined Mozilla in signing up to a voluntary Code of Practice on Disinformation. The self-regulatory Code was the first-ever attempt at getting major technology companies to acknowledge the role that their technologies play in spreading misinformation online, and commit to concrete fixes (although it’s not without its critics).
Throughout 2018, Mozilla played a key role in shaping the contents of the Code of Practice and pushing other technology companies to do their part. But we knew that wouldn’t be enough. That’s why in the months leading up to the elections, as deadlines for promised “transparency tools” and product fixes crept closer, we ran a campaign to mobilise consumers and regulators to hold these big tech companies to account for what they promised.
We are telling you this story about how we won (and the times that we failed along the way!) with hopes that it will inspire greater demands for transparency and accountability of big tech.
This campaign reminded us of some of the key things that make any campaigns successful: playing to our own strengths; building connections beyond our traditional issue allies; working directly with those affected by a problem; and sticking firm to the mantra ‘actions speak louder than words.’
How it began
One of the Code’s core promises was that the big advertising platforms (Google, Facebook, and Twitter) would, for the first time, make all political ads running in the European Union accessible through an API (application programming interface). These APIs would enable independent researchers to analyse the content, targeting, spend, reach — and ultimately the impact — of online political advertising. That research helps illustrate who is using these platforms to influence people’s political views, what’s being said, and whether ‘bad actors’ or Facebook’s own algorithm are determining who sees the ads. The deadline for releasing these APIs (selected by the companies themselves in their ‘roadmaps’ for compliance with the Code of Practice) was in spring 2019.
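To make the stakes concrete, here is a minimal sketch of the kind of analysis a proper ad API should enable. The field names and records below are purely illustrative assumptions, not any platform’s actual schema — the point is that structured access to advertiser, spend, reach and targeting data lets researchers aggregate and compare political advertising activity.

```python
# Illustrative sketch: aggregating political ad data of the sort an
# ad transparency API should expose. The records and field names here
# are hypothetical, not any platform's real schema.
from collections import defaultdict

ads = [
    {"advertiser": "Party A", "spend_eur": 1200, "impressions": 90_000,
     "targeting": {"country": "DE", "age": "18-24"}},
    {"advertiser": "Party A", "spend_eur": 800, "impressions": 40_000,
     "targeting": {"country": "DE", "age": "55+"}},
    {"advertiser": "Party B", "spend_eur": 500, "impressions": 30_000,
     "targeting": {"country": "FR", "age": "25-34"}},
]

def summarise(ads):
    """Aggregate total spend and reach per advertiser."""
    totals = defaultdict(lambda: {"spend_eur": 0, "impressions": 0})
    for ad in ads:
        entry = totals[ad["advertiser"]]
        entry["spend_eur"] += ad["spend_eur"]
        entry["impressions"] += ad["impressions"]
    return dict(totals)

summary = summarise(ads)
# e.g. Party A: 2,000 EUR across 130,000 impressions
```

Without a working API exposing fields like these, even this basic picture of who is spending what to reach whom is out of researchers’ hands.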
Then in late January, two things happened within just days of each other: First, while Facebook should have been busy developing their ad API, they were actually blocking access to existing tools that allowed users and journalists at organisations like ProPublica (as well as WhoTargetsMe and Mozilla itself) to see how they were being targeted by political actors while using Facebook. Then, a few days later, Mark Zuckerberg published a… curious op-ed espousing his love for transparency. That set us into action.
What we did
First, we responded immediately. Our then-COO Denelle Dixon penned an open letter to the European Commission expressing concerns about the lack of publicly available information on political advertising on Facebook. And a week later, Mozilla, 40 civil society partners and 11,589 Mozilla community members signed an open letter to Facebook, published in Politico EU, calling on Facebook to fulfil its commitments in the Code.
Facebook’s Head of Product Rob Leathern responded immediately on Twitter, assuring us that Facebook would fulfil its commitments in the Code. We viewed Facebook’s reply with cautious optimism.
After launching our campaign targeting Facebook, we set out to collaborate with other organisations engaged in fighting misinformation around the election. Mozilla convened a cohort of 36 organisations fighting digital misinformation — made up of researchers, campaigners, technologists and other experts — in Paris.
We needed to find out how to build on the energy that we had created with our open letter, and figure out what other approaches activists across Europe were taking to combat disinformation before the upcoming elections. What we kept hearing time and again during the two-day convening was how much we needed to empower researchers trying to study how disinformation was targeted on digital platforms.
Researchers at the Paris convening and beyond kept stressing that the APIs that Facebook and Google had promised were likely to be insufficient — but their warnings weren’t fully reflected by the European Commission in their monthly “progress reports” assessing the platforms’ action.
So to build on the pressure we had already put on Facebook, we worked with a group of ten researchers to establish guidelines for what an effective ad API would look like. We timed their publication to land immediately before Facebook and Google were due to launch their ad APIs to the world, giving the European Commission a benchmark against which to judge their efforts.
(As it stands, the API that Google released passed when assessed against our criteria; Facebook’s failed. Read more about our researchers’ efforts here.)
What worked well
We approached this by leaning into our expertise as a “technical translator”.
We tried to add unique value to the anti-disinformation space, and step back to let other organisations lead where they had expertise. What we found here was that this boosted the impact of independent researchers with decision makers at the European Commission, showing the power of researchers to develop independent solutions that can keep governments and companies in check.
And we weren’t just criticising the ad APIs — by having the guidelines established by a group of researchers, we were able to point out what the best case would look like. This enabled the Commission to do a direct side-by-side comparison (also based on our own assessment). As a result of our efforts, transparency of political ads is now at the top of the Commission’s agenda.
We’ve also worked closely with journalists to ensure they truly understand this issue. Our goal was not just to get an API that researchers would find useful, but to also increase the capabilities of tech journalists to examine and develop their own opinions about the functionality of ad transparency tools.
The open letter double whammy
Our opening salvo was the one-two punch of open letters, firstly to the Commission and then to Facebook directly.
Firstly, our letter to the Commission was crucial: we could send it to civil society partners alongside our request to sign the second open letter to Facebook. It gave us credibility with our co-signatories — it showed that we had the technical know-how and wouldn’t pull our punches when it mattered most.
Secondly, we consciously brought on partners from outside the digital rights space. Disinformation affects many issues, from politics to climate change denial to anti-vax to bogus cancer cures. As a result, we had environmental organisations, human rights organisations, civil liberties organisations and more join us — in total, 41 organisations signed on to our open letter in just three days.
Thirdly, our open letter to Facebook highlighted the company’s hypocrisy: Mark Zuckerberg was publicly espousing transparency at the very moment his company was blocking transparency tools. Turning this contradiction around on Facebook made our case both stronger and more timely.
And lastly, we worked with our localisation team to get the open letter translated so we could send it to our email list at the same time we sent the letter in English. Not only did this mean that tens of thousands more people in the Mozilla community signed the open letter alongside our civil society partners, but our post-sign donate ask also raised $18,000 more. And because Facebook’s Rob Leathern responded on the day our email went out, we would have missed that moment had we not taken the extra day to localise our content.
Taking a step back again was crucial in mapping our space and seeing where we could add value. We were conscious of the expertise of our partner organisations and didn’t want to duplicate their strategies. What we found was that technical capacity was insufficient, transparency was lacking, and researchers weren’t united behind a common ask. This led us to our golden opportunity.
Working directly with researchers
When we write this now, it seems obvious — but working directly with the affected community to find out exactly what they needed was crucial. It united researchers behind a common ask (as of publication, 71 researchers have now signed on to the guidelines the initial cohort helped co-create).
And as we mentioned above, it gave the European Commission something to use as a baseline guide to score the companies against. Technical issues are difficult to understand and easy to spin if you’re Big Tech. They’re much harder to spin once you know what ‘good’ looks like.
Maintaining community engagement
It’s easy to get lost in the technical details here. But at its core, it comes down to fundamental rights of knowing who’s targeting you and why.
When we published the researcher guidelines, we wanted to know what our community — Mozilla’s supporters who had signed up to our email list and followed us on Twitter — thought made sense to do next: whether we should focus our limited resources on getting more signatories to the researcher guidelines, or on continuing to pressure Facebook and Google (thanks to an incredibly engaged researcher community, we were able to do both!). We ran this survey as a one-question email, and its response rate was among the highest of any email we sent this year. Not only that, by optimising the post-survey user journey, we also raised $19,500 just from asking our community a question. It was an affirmation that what we were doing was the Right Thing by both the research community and our traditional activist base.
Playing to our strengths
Every aspect of our strategy and campaign played to the skill-sets of our team. On this campaign, we were lucky enough to have campaigners with backgrounds in organising and fundraising. We had a campaign researcher with the know-how to translate tech company promises into real-world impacts. We had a campaign strategist with a background in the European Parliament who understood what would resonate with the European Commissioners and could translate researcher wishes into a set of guidelines. We had a localisation specialist to help translate campaigns and make them context-appropriate. We had dedicated comms support from a person who was able to pitch ideas to press and help amplify our campaign at its most crucial moments. We had a talented team of designers and developers to provide assets and get us online. And working with our Mozilla Corporation colleagues, we had policy, trust and security, engineering, and legal expertise. Finally, we had a manager who truly had our backs — she helped us clear blockers and fast-track work when we needed it.
Our campaign was built around what we were best at, with the resources we were lucky enough to have available. We made decisions to actively drop threads of work where we didn’t think we’d be able to do work we’d be proud of.
What we’d do differently next time
Not letting publicity promises dictate strategy
Our biggest regret: We let Facebook and Google’s public promises determine our own next steps.
Their ad APIs were always just around the corner, just about to be released or improved. In hindsight, we should have published our ad API guidelines much earlier. As it stands, Google and Facebook had more than six months from the launch of the Code of Practice on Disinformation to launch their ad APIs — Facebook published theirs ‘in time’, but it left much to be desired. Google launched its own so late it was functionally useless. Ultimately, we ran out of time to get the political ad transparency the platforms had promised before the election.
Internal strategic alignment
Not many people realise that Mozilla is actually two organisations (the Mozilla Corporation and the Mozilla Foundation). This separation of resources, strategies, and personnel is part of our unique strength as Mozilla, but it also makes strategic alignment and coordinated execution challenging at times. Sometimes it’s as simple as working out resources and scheduling, but at other times our institutional postures are different enough to create complexity. The Mozilla Foundation operates in a movement, and the Corporation in a market, after all. Finally settling on the ad API guidelines was a “happy accident” — a perfect confluence of work being undertaken by our own researcher and our own efforts to find out what researchers needed.
Taking a more global approach
When approaching other organisations to sign our open letter to Facebook, we had been using the EU Code of Practice on Disinformation as our hook. Some of the criticism we received asked why we were only focusing on Europe when several other countries, especially in the global south, also had elections around the same time. It’s a valid point — the issues around political ad transparency are not unique to Europe. And although the Code of Practice was only ‘valid’ for the European Union, in practice if the companies were able to fulfil their obligations in Europe, they’d be able to fulfil them anywhere.
Our next steps
Supporting new partners
We’re now working with campaigners based in Latin America who are running a similar campaign asking the platforms for political ad transparency in the lead-up to several elections in the region this year. We’ll continue to support them using our expertise as a technical translator, and by sharing our learnings on what worked (and what didn’t!).
Keeping the pressure on Facebook and Google
Even though the European Parliament elections are over, Facebook and Google must still adhere to the commitments they made in the Code. We’re committed to holding them accountable.
Taking it beyond the political
Ultimately, the definition of a political ad was left to each of the platforms. And whatever the definition, it was clear that some political ads would be missed. That’s why we’re looking at how broader approaches may be more effective in the long term.
Continuing to work with researchers
We’ve built a strong community of researchers, and we’re looking into other areas where we may be able to provide the expertise missing from different spaces. There are many areas within the tech platforms that are little understood by outside researchers, which makes it difficult or impossible for civil society to understand the impact of these tools on their communities and formulate effective calls for change when needed.