
4. Creating Regulations and Incentives

Goal: New and existing laws are used to make the AI ecosystem more trustworthy.

The development of new technologies is outpacing regulation in many countries. This means that new AI products and systems are being tested on millions of people without effective government oversight or governance. At the same time, many lawmakers are eager to enact new regulations to limit the power of tech companies. But questions remain as to whether those laws are technically grounded and effectively address the problems at hand.

To improve the trustworthy AI landscape, we will need policymakers to adopt a clear, socially and technically grounded vision for regulating AI. We will also need lawmakers to ensure that baseline consumer and privacy protections serve as a cornerstone of any AI regulatory regime. Policymakers will need to enforce or update existing laws and enact new ones in order to meet the rising challenges.

One note: We have intentionally chosen to focus on Europe’s regulatory landscape as a model for what positive change might look like. This is because some of the most interesting developments in data protections and governance are happening in Europe, and we consider the EU approach to regulating AI to be the most mature and promising. Its lawmaking could serve as a model for other countries that produce AI technologies. However, questions remain as to whether the EU model can and should be the model for other countries around the world, especially since AI expertise is unevenly distributed globally. Do countries that are not producing AI but are still using it need different sets of rules? Do developing economies have the resources to enforce rules? Given the global focus of Mozilla’s work, we will need to address these questions as we continue to develop this body of work. We invite global partners to contribute by surfacing positive models and examples from their countries of what effective AI governance should look like.

With this framing in mind, we believe that we should pursue the following short-term outcomes:

4.1 Governments develop the vision, skills, and capacities needed to effectively regulate AI, relying on both new and existing laws

Skills and capabilities

Most lawmakers do not have the skills and resources they need to craft effective policy on AI or big tech in general. Adding to the problem, civil society organizations don’t always have the technical capacity to do informed research on AI, limiting their ability to advise governments on key issues. The result is that policy debates related to AI and data are typically dominated by experts and lobbyists from big technology companies.

Some policymakers are hiring technologists to inform and shape tech policy, but they have limited budgets and may prioritize hiring staffers with other expertise over those with technical expertise. A report commissioned by the Ford Foundation and other members of the NetGain partnership on the flow of tech talent into public sector jobs finds that it is difficult to convince technologists to switch careers and join government offices because those offices cannot offer salaries and benefits comparable to industry.

Without staff who have technical expertise and can navigate the nuances of tech policy, AI policymaking risks being tilted in favor of industry voices. Tech companies have a disproportionate amount of power over how policy is made in countries like the US, and many of them promote the narrative that all AI is “inscrutable.” Different types of algorithms offer different levels of transparency, and not every algorithm is a “black box.” However, tech companies often use “the cultural logic of the ‘complicated inscrutable’ technology...to justify the close involvement of the AI industry in policy-making and regulation.”[1] Industry players involved in policymaking are the same group pushing for more invasive data collection.

Policymakers are strengthening their capacity by working with more technologists. Some governments are developing AI-specific centers of expertise, such as the UK’s Office for Artificial Intelligence. Other governments have created departments like the US Digital Service that enlist technologists to support the development of civic technologies and tools. Technologists in digital service centers could help advise policymakers on key tech and AI issues. An emerging field of “public interest tech” — supported by nonprofits like Mozilla, New America, and Ford Foundation — has also enabled technologists to influence tech policy decisions through fellowships like TechCongress, which places tech experts in a one-year fellowship with members of the US Congress or Congressional Committees. These programs aim to bridge the knowledge gap by temporarily offering much-needed tech expertise to congressional offices.

There’s evidence that policymakers are listening to technologists from civil society. But nonprofits don’t always have the technical capacity they need, and they are often up against tech lobbyists and experts representing the interests of big tech companies. In its work on political advertising and the 2019 EU elections, the European Commission put pressure on platforms like Facebook, Google, and Twitter to be more transparent about political advertising, aided by the technical expertise of nonprofits like Mozilla. When those companies failed to meet their commitments under the EU Code of Practice on Disinformation, Mozilla helped the Commission understand why and worked with researchers to make product recommendations. As policymakers get up to speed on these issues, we need to ensure that nonprofits with technical expertise are part of the conversation.

Policymakers know that they need tech expertise and are making strides by working with technologists. But there is still more that needs to be done. Areas to invest in the coming years include expanding cross-disciplinary university programs that combine public policy and tech, and growing the number of research institutions with a focus on AI. In addition, governments should continue to invest in the creation of technology centers of expertise that can be used across departments and ministries. Steps like these will help policymakers develop the skills and capacity they need to more effectively regulate AI.

Vision

Many governments are working toward building more effective regulatory regimes, starting with the first step of articulating a vision. While the growing momentum in this area is promising, many questions remain about whether emerging policy visions will both address intertwined challenges like bias, privacy, and the centralization of control in AI, and provide a practical approach that can be put into action.

At the highest level, governments and countries are working together to develop global governance frameworks for AI. In 2019, 42 countries took a critical step when they came together to endorse a global governance framework on AI, the OECD AI Principles. Subsequently, the G20 adopted a set of global AI Principles, largely based on the OECD framework. The G20 principles affirm that companies building AI must be fair and accountable, their decision-making should be transparent, and they must respect values like equality, privacy, diversity, and international labor rights. While frameworks like these are promising first steps, we are still far from a global consensus on what governance should look like in practice.

At the same time, countries are putting together their own governance frameworks to shape policymaking. In the EU, trailblazing privacy regulations like the GDPR have already transformed how companies work with user data to build AI, and member states will continue enforcing those laws. In a 2020 white paper, the European Commission lays out its vision for governing AI, recommending that companies be required to provide documentation showing that datasets are statistically fair and unbiased, provide documentation describing how the AI was developed and trained, and enable greater government oversight. The EU is focused on a risk-based approach to regulation, which would target “high risk” areas that have the greatest potential for harm, such as healthcare, immigration, or employment.

Compared to other countries’ frameworks, the EU’s vision for trustworthy AI is the most mature. It suggests that there is no one-size-fits-all approach to regulating AI, and that extra safeguards are needed for deploying and using AI in “high risk” situations. However, “risk” in this approach is defined as risk to an individual, which excludes a whole category of AI applications that pose major collective risks. While the EU covers these risks in separate pieces of legislation, other countries should contemplate imposing transparency, documentation, and auditability requirements on “high risk” AI applications that affect our broader democratic processes and institutions.

In total, over 60 countries have articulated their own visions for AI. In the US, the White House announced the American AI Initiative, which focuses on driving technological innovation and standards to protect a competitive edge in AI. However, regulations in the US have largely failed to keep pace with innovation, with American companies capitalizing on self-regulation. In 2019, the Algorithmic Accountability Act was introduced, which aims to boost federal oversight of data privacy and AI. In the UK, the Lords Select Committee put together a 2017 report suggesting five overarching principles for an AI code that reinforce data rights, transparency, and social good. China published principles for governing “responsible AI” in 2019, with a focus on human well-being, fairness, inclusivity, and safety. Australia laid out eight AI Ethics Principles that are voluntary and aspirational, intended to complement any AI regulations. In 2020, Singapore launched an updated version of its Model AI Governance Framework, which lays out the government’s mature vision for using data and AI responsibly.

As governments develop the skills and capacity they need to catch up with current AI innovations, they will be able to prescribe more technically grounded and effective visions for governing AI. As they do this, it will be important to look holistically at AI, including its role in the internet technologies that shape the lives of most of their citizens. It will also be important that their regulations incent companies to develop trustworthy approaches to AI. Some governments are much further along in this process than others, however, and we have yet to reach a global consensus on AI governance.

4.2 Progress toward trustworthy AI is made through wider enforcement of existing laws like the GDPR.

Policy tends to lag behind innovation, and the AI landscape is changing rapidly year to year. Given the scope of the risks and challenges posed by AI, designing a regulatory regime that can address all of these issues may feel daunting. The good news is that policymakers are not starting from scratch: Existing laws and regulations that protect data rights can be wielded in a meaningful way to address many of the challenges outlined in this paper.

The GDPR, which came into effect in 2018, is a prime example of an existing regulatory framework that can be used to address issues surrounding AI. For example, the GDPR has been used to pressure companies into taking data security seriously: Massive fines were levied against British Airways and Marriott for their data breaches, although both fines were later significantly reduced. The GDPR has also been used to tackle the surveillance economy and the rampant data collection that powers AI. In 2019, Google was fined €50 million for not disclosing to its users how data is collected across its various services for the purpose of serving them personalized ads. The penalty was the largest GDPR fine to date. Growing enforcement of the GDPR in areas like these has a downstream effect of making the AI developed by these companies more privacy-friendly and trustworthy.

There are also sections of the GDPR that relate more directly to AI, but they have not yet been widely applied and tested. For instance, Article 22 of the GDPR, “Automated individual decision-making, including profiling,” gives people the right not to be subject to decisions made solely by automated means that have a legal or similarly “significant impact” on them. This means that an algorithm can’t be used, say, to automatically decide whether someone qualifies for a loan. In addition, the GDPR does not explicitly say that citizens have a “right to explanation,” but people do have a right to obtain “meaningful information about the logic involved” in an automated decision that could have a legal or significant impact. This means that if someone’s loan application is rejected by a bank’s software, the bank may be required to provide general information about the input data used by the algorithm, or the parameters set in the algorithm.
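
To make this concrete, the sketch below (illustrative Python only; the scoring model, feature names, weights, and threshold are all hypothetical) shows the kind of information a lender might be able to surface about an automated loan decision: which inputs were used and how each one pushed the score up or down, without disclosing proprietary source code.

```python
# Illustrative sketch only: a hypothetical, simplified loan-scoring model and the
# kind of "meaningful information about the logic involved" a lender might surface
# to an applicant. Feature names, weights, and the threshold are invented.

from dataclasses import dataclass

# Hypothetical linear scoring model: each weight expresses how much that input
# contributes to the final score.
WEIGHTS = {
    "annual_income_eur": 0.00002,
    "existing_debt_eur": -0.00004,
    "years_employed": 0.05,
    "missed_payments_last_year": -0.4,
}
APPROVAL_THRESHOLD = 0.5

@dataclass
class Decision:
    approved: bool
    score: float
    # Per-input contributions: the general "logic involved" information
    # a regulator might expect the lender to be able to explain.
    contributions: dict

def decide(applicant: dict) -> Decision:
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature] for feature in WEIGHTS
    }
    score = sum(contributions.values())
    return Decision(approved=score >= APPROVAL_THRESHOLD, score=score,
                    contributions=contributions)

if __name__ == "__main__":
    decision = decide({
        "annual_income_eur": 32_000,
        "existing_debt_eur": 15_000,
        "years_employed": 3,
        "missed_payments_last_year": 2,
    })
    # The applicant could be told which inputs were used and how each one
    # pushed the score up or down, without seeing proprietary source code.
    print("Approved:", decision.approved)
    for feature, value in decision.contributions.items():
        print(f"  {feature}: {value:+.3f}")
```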

According to the European Data Protection Board’s interpretation of the law, the GDPR covers the creation and use of most algorithms. GDPR provisions that may apply to AI include the requirement that processing be fair, the principle of data minimization, and data protection impact assessments. Fair processing might require companies to “consider the likely impact of their use of AI on individuals and continuously reassess it.” However, it might be impossible for a company to identify AI bias or perform impact assessments if that AI system is not sufficiently transparent.

Privacy regulations aren’t the only laws that can be applied to the tech landscape in order to strengthen safe innovation in AI. Antitrust laws could be applied to help spur competition and innovation in AI. Currently, the market for AI is short on competition and innovation because only a handful of tech companies dominate it. Moreover, AI can accelerate the dominance of the few: Big tech companies have greater access to data, which allows them to develop better AI, which in turn allows them to collect even more data.

In the EU, authorities have not shied away from imposing fines on big tech companies based on competition law. Google was fined €1.5 billion for antitrust violations in the online ad market in 2019; authorities said Google was imposing unfair terms on companies that used its search bar on their websites in Europe. Recently, a renewed interest in antitrust law among legal scholars and regulators alike has presented an opportunity to strengthen competition policy.

Privacy protection laws like the GDPR are being adopted around the world, with Kenya and California passing similar laws in 2019. At the same time, the countries that pass such laws often lack independent and sufficiently resourced regulators to enforce them effectively. There is an opportunity to use these trends to drive a trustworthy AI agenda, but only if both government and civic actors take a proactive role. Organizations like the Digital Freedom Fund, a European impact litigation organization, or the ACLU in the US, could play a role in bringing forward relevant cases under data protection laws. Alternatively, data co-ops could form to collectively represent millions of people under a single umbrella, providing a way to enforce data rights en masse. If we can make this happen, aggressive, creative, and technically grounded enforcement of existing laws could be a way to move toward trustworthy AI.

4.3 Regulators have access to the data and expertise they need to scrutinize the trustworthiness of AI in consumer products and services.

As we’ve seen, privacy laws like the GDPR address many of the concerns people have around how companies are collecting and processing data. But such privacy regulations do not specifically describe how companies should make their AI more transparent and accountable to third parties, nor is such oversight mandated by law (yet). In order to mitigate potential harm, we will need to explore what kind of transparency should be required by regulators in order to audit AI systems.

One way to increase understanding of an AI system is through blunt transparency — sharing the algorithm’s source code. Complete transparency has a number of limitations, however: it often ignores systems of power, risks obscuring the system further by overwhelming people with too much information, and can promote a false sense of knowledge.[2] Calls for transparency often fall short unless they are paired with clear explanation and documentation mandates, along with mechanisms ensuring that this information will be used by different stakeholders to hold the system accountable.

Some companies regularly audit their own AI systems to ensure accuracy and flag potential risks, but thus far self-regulation has largely failed to mitigate harm. Under pressure from regulators, companies are now starting to build AI systems in a way that makes them easier to audit by third parties, such as researchers or government agencies. According to the EU’s 2020 AI White Paper, transparency in this context could mean many things: from opening up the training data of an algorithm, to documentation of a system’s robustness or accuracy, to more detailed record-keeping on the training methods and normative decisions made to build the AI system.

Depending on the context, companies may be compelled to release information about a model’s training data. Such information may include how the data was obtained, a description of why a particular dataset was selected, and proof that the data meets safety standards, that it is sufficiently broad and unbiased, and that personal data is protected. Companies may also be compelled to release detailed documentation about how the AI was designed, programmed, and deployed. Such documentation could include records on the programming of the AI: what traits or values the model was optimizing for, or what the weights were for each parameter at the outset. Documentation may also include records on the training methodologies, processes, and techniques used to build, test, and validate the AI system. It is important that documentation include explanations for why a dataset or method was selected — normative explanations are critical pieces of information regulators need to understand the AI development workflow.[3]
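
As a rough illustration of what such documentation might look like in machine-readable form, the sketch below defines a simple record covering data provenance, design choices, training methodology, and normative explanations. The field names and example values are hypothetical, loosely inspired by proposals such as model cards and datasheets for datasets, and are not a standard schema.

```python
# Illustrative sketch only: one possible machine-readable documentation record
# covering the items discussed above. Field names and example values are
# hypothetical placeholders, not a regulatory or industry standard.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetDocumentation:
    source: str                     # how the data was obtained
    selection_rationale: str        # why this dataset was chosen (normative explanation)
    known_limitations: list = field(default_factory=list)
    personal_data_safeguards: str = ""

@dataclass
class ModelDocumentation:
    optimization_objective: str     # what the model was optimizing for
    training_method: str            # how the model was trained, tested, and validated
    fairness_evaluation: str        # how bias or disparate impact was assessed
    dataset: DatasetDocumentation = None

if __name__ == "__main__":
    doc = ModelDocumentation(
        optimization_objective="minimize default risk under a fairness constraint",
        training_method="gradient-boosted trees, 5-fold cross-validation, held-out test set",
        fairness_evaluation="error rates compared across protected groups",
        dataset=DatasetDocumentation(
            source="loan applications collected 2015-2019 under customer consent",
            selection_rationale="most recent data reflecting current lending criteria",
            known_limitations=["under-represents applicants under 25"],
            personal_data_safeguards="records pseudonymized before training",
        ),
    )
    # A regulator or auditor could request this record alongside the system itself.
    print(json.dumps(asdict(doc), indent=2))
```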

In some contexts, companies or platforms may be compelled to develop data archives or public APIs that researchers, journalists, and other watchdogs can use to study patterns of discrimination or harm. Earlier in this paper, we discussed how platforms like Facebook, Twitter, and Google have developed open political ad libraries that provide detailed information about the advertisements appearing on their platforms, a first step towards empowering third parties to audit them. However, when Mozilla assessed Facebook’s Ad API ahead of the 2019 EU elections, researchers told us that the API did not allow them to download machine-readable data in bulk, nor was the data comprehensive and up to date. Such companies should provide clear, accurate, and meaningful information to researchers and governments about their use of AI, and should be held accountable by policymakers and third-party auditors.
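
For a sense of what “machine-readable data in bulk” means in practice, the sketch below pages through a hypothetical ad-archive API and stores every record as newline-delimited JSON that researchers could analyze at scale. The endpoint, parameters, and field names are invented placeholders, not any platform’s actual API.

```python
# Illustrative sketch only: the kind of bulk, machine-readable access researchers
# reported needing from political ad archives. The endpoint, query parameters, and
# response fields below are hypothetical, not a real platform API.

import json
import requests

ARCHIVE_URL = "https://example.org/ad-archive/api/v1/ads"  # hypothetical endpoint

def download_ads(country: str, out_path: str) -> int:
    """Page through the archive and store every record as newline-delimited JSON."""
    total = 0
    page_token = None
    with open(out_path, "w", encoding="utf-8") as out:
        while True:
            params = {"country": country, "page_size": 500}
            if page_token:
                params["page_token"] = page_token
            resp = requests.get(ARCHIVE_URL, params=params, timeout=30)
            resp.raise_for_status()
            payload = resp.json()
            for ad in payload.get("ads", []):
                out.write(json.dumps(ad) + "\n")
                total += 1
            page_token = payload.get("next_page_token")
            if not page_token:  # no more pages: the archive is exhaustively downloadable
                break
    return total

if __name__ == "__main__":
    count = download_ads("DE", "ads_de.jsonl")
    print(f"Downloaded {count} ad records")
```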

Much more work needs to be done to determine what effective transparency and oversight looks like for AI, and what kind of data different stakeholders will need for effective audits. Transparency is not an end in itself, but it is a crucial prerequisite for meaningful accountability of AI systems. Developers will need to build AI in a way that makes it easier to audit, and people and governments will need to put pressure on companies to provide the data required for audit. We want to see enhanced levels of transparency across the board for companies building AI: transparency in terms of detailed documentation, information about the source code and training data, normative explanations of how the system was built, and the release of data archives and libraries that help researchers study AI systems and governments hold them accountable. This is an area in which greater standardization and rulemaking is needed.

4.4 Governments develop programs to invest in and incent trustworthy AI.

As governments hone their vision of how to regulate AI, many recognize the need for policies and programs that boost investment in research and startups in this area. They are also looking for ways to use procurement guidelines to ensure governments use trustworthy AI and encourage the growth of responsible businesses. Investment and procurement both offer governments a way to proactively build up industry segments that reflect the values in their AI vision, a move that is just as important as regulating AI.

One way governments are investing in the trustworthy AI ecosystem is by developing an industrial policy that matches their policy goals and vision for AI. In 2018, the European Commission announced that it would boost its investment in AI to €1.5 billion by 2020 in order to keep pace with Asia and the US, and the German government announced it had set aside €3 billion for AI R&D. More recently, new proposals from the Commission suggest that Europe may increase its investment in AI to over €20 billion and is seeking to create a single European market for data. In the US, where private investment in AI is already high, the White House issued an Executive Order encouraging AI investment but offering no clear plan. China’s government, on the other hand, is investing heavily in AI: The government’s VC fund is planning to invest more than $30 billion in AI within state-owned companies, one Chinese province is investing $5 billion in AI tech, and the city of Tianjin is investing $16 billion in its local AI industry.

Another way governments support trustworthy AI is by developing a procurement strategy that matches their strategic vision for AI. So far, the software used by government agencies has not always demonstrated the level of transparency and accountability we might expect from any public use of technology. City governments and government agencies are often unable to properly assess the AI-enabled systems they want to procure, which has led them to invest in or buy “AI snake oil.” Because there are no clear rules about public oversight of tech vendor contracts, government agencies may procure and use tech that could impact millions of people without ever needing to notify the public.

Some governments have taken steps to create guidelines for government agency procurement of AI-powered tech. In the UK, the government published a “Guide to using AI in the Public Sector” based on its Data Ethics Framework to enable public agencies to adopt AI systems in a way that benefits society. These procurement guidelines aim to empower government agencies to buy trustworthy AI by helping them evaluate suppliers and establish rules for transparency. Recommendations include developing a strategy for addressing the limitations of training data and focusing on accountability and transparency throughout procurement.

In the US, New York City established the Automated Decision Systems (ADS) Task Force in 2018 to set up a process for reviewing the use of algorithms by city agencies. AI Now Institute developed a practical framework for city procurement of AI technologies in the form of Algorithmic Impact Assessments that recommends that cities inform the public of any proposed procurement, conduct internal agency self-assessments to make sure the agency has the capacity to assess fairness and disparate impact, and give researchers and auditors meaningful access to the AI system once it’s deployed.

On a more global scale, the Cities Coalition for Digital Rights, a coalition of 39 cities in the EU and the US, is taking steps to ensure that cities use technology in an open and transparent way. In its declaration, the coalition affirms several broad principles, including the transparency, accountability, and non-discrimination of algorithms. This means that the public “should have access to understandable and accurate information about the technological, algorithmic, and artificial intelligence systems that impact their lives,” and that they should be able to “question and change unfair, biased or discriminatory systems.” In the future, this coalition may serve as a testing ground for enacting better procurement standards and rules.

As we’ve illustrated, governments are in the process of developing their own procurement guidelines for AI, but these guidelines have largely not been implemented yet. One way to operationalize these guidelines is for government agencies to adopt them directly into the terms and conditions of procurement contracts. For instance, such contracts might require any AI-powered software to meet a gold standard in terms of transparency, auditability, and fairness. They may also include rules for public notice and review of the technology. In this way, government agencies and cities can use their buying power to support trustworthy AI products.

This is the area in which governments are least developed, and it could be a major opportunity for growth. Governments should seek to align their visions and frameworks for trustworthy AI with their industrial investment and tech procurement policies, thus creating incentives for better technologies and companies to emerge to meet rising demand.


Footnotes

[1] Corinne Cath, “Governing Artificial Intelligence: Ethical, Legal and Technical Opportunities and Challenges,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2133 (November 28, 2018): 20180080, https://doi.org/10.1098/rsta.2018.0080.

[2] Mike Ananny and Kate Crawford, “Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability,” New Media & Society (December 13, 2016), https://doi.org/10.1177/1461444816676645.

[3] Andrew D. Selbst and Solon Barocas, “The Intuitive Appeal of Explainable Machines,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, March 2, 2018), https://doi.org/10.2139/ssrn.3126971.
