Goal: The people building AI increasingly use trustworthy AI guidelines and technologies in their work.
Every era of computing tends to have norms that shape what is thought possible. We need to more clearly define what those norms are for AI, working closely with the people who are building AI, such as developers, designers, engineers, project managers, and data scientists.
At the core of this work will be efforts to ensure that people building AI are able to ask questions about responsibility and ethics at every stage in the AI research, product development, and deployment pipeline. We will need to support them on many levels — providing clear guidance, offering education and professional development, diversifying the workforce, and creating the right economic incentives. Importantly, all of this will need to be worked out iteratively and will evolve over time.
With this aim in mind, we believe that we should pursue the following short-term outcomes:
1.1 Best practices emerge in key areas of trustworthy AI, driving changes to industry norms.
Dozens of guidelines for “ethical AI” have been published in recent years, but they often focus on abstract, broad principles without clear steps for action. When they do point toward action, their ideas for operationalizing the principles vary widely across sectors. More work is needed to understand where these gaps appear.
A number of public and private sector organizations have published their own sets of ethical principles to guide how AI systems are built and deployed. Prominent examples include frameworks from the EU’s High-Level Expert Group, the Partnership on AI, the Organization for Economic Co-operation and Development (OECD), Google, SAP, the Association for Computing Machinery (ACM), Access Now, and Amnesty International. As this list suggests, these guidelines have come from all sectors of society — industry, government, and civil society.
Landscape scans of these frameworks published by Harvard’s Berkman Klein Center[1] and Nature Machine Intelligence[2] show that across different sectors, there is global convergence around common principles such as transparency, fairness, and human well-being. According to one analysis of 84 frameworks, the most commonly included principles were transparency (86.9% of frameworks), justice and fairness (81.0%), a duty not to commit harm (71.4%), responsibility (71.4%), privacy (56.0%), and human well-being (48.8%).[2]
Most guidelines agree on core principles, but there are major differences across sectors over what those principles mean and how they should be implemented. For instance, when it comes to transparency, nonprofits and data controllers tend to propose audits and oversight, whereas industry players propose technical solutions.[2] These fault lines show how the same set of principles can result in different — and sometimes oppositional — actions when put into practice.
As of now, few players have prescribed concrete steps that translate their guidelines into action on the ground. This is because many of the challenges raised by AI are domain-specific and will require coordinated work across sectors. How do developers interpret the guidelines in everyday technology and product development? What is the best way to ensure models are fair and unbiased? What steps must companies take to safeguard against harm? How do companies come to the difficult decision that an AI system should not be deployed at all? While we certainly need clear frameworks or guidelines to define what “good” looks like, we also need a concrete plan for putting those principles into action.
Our theory of change is meant to be a small step toward this action, a suggestion of what to do at a system-wide level based on the challenges we face in consumer tech. Over the coming years, we want to work with people to put broad principles into action through AI development checklists, education programs, and software tooling. Operationalizing principles in this way is key to shifting the norms of how AI is developed.
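To make the idea of software tooling concrete, here is a minimal sketch of a fairness check that a team could fold into its development checklist or test suite. The example predictions, group labels, and the 0.8 threshold are illustrative assumptions on our part, not a standard this document prescribes:

```python
# Minimal sketch of fairness tooling: a demographic parity check that a
# cross-functional team could run before shipping a model.
# The example predictions, group labels, and the 0.8 ("four-fifths")
# threshold are illustrative assumptions, not a prescribed standard.
from collections import defaultdict


def selection_rates(predictions, groups):
    """Return the fraction of positive predictions for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}


def demographic_parity_ratio(predictions, groups):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())


if __name__ == "__main__":
    # Hypothetical model decisions (1 = approve) for two demographic groups.
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

    ratio = demographic_parity_ratio(preds, groups)
    print(f"demographic parity ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("flag for review: selection rates diverge noticeably across groups")
```

Real tooling would need to cover many more metrics and contexts than this; the point is that a principle like fairness can become a repeatable, reviewable step in the development pipeline rather than an abstract aspiration.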
1.2 Engineers, product managers, and designers with trustworthy AI training and experience are in high demand across industry.
Engineers, product managers, designers, and other members of the cross-functional teams building AI wield a great degree of decision-making power. There are many initiatives underway aimed at helping developers think critically about their work, such as Mozilla’s Responsible Computer Science Challenge, but more work needs to be done to ensure the people who are building AI responsibly are in high demand from companies.
One way to get there is to start with students before they have joined the workforce. A controversial New York Times op-ed argues that academics have been “asleep at the wheel” when it comes to teaching ethics in tech. Indeed, the traditional approach to ethics education in computer science is far removed from engineers’ day-to-day experience. Students say they don’t connect with case studies and don’t always see how the material applies to their work. Further, many initiatives focus only on CS/Eng students when they should broaden to include other disciplines, such as information science, design, and management.
Some universities have moved toward making ethical computing courses required for CS/Eng students, and toward making these courses more practical. In a recent landscape analysis of 115 university courses in tech ethics, researchers conclude that while CS as a discipline has been slow to adopt ethical principles, it has made a great deal of progress in recent years. They recommend that students hear the message that “code is power” when they first start learning how to code, and that this message be reinforced throughout their coursework.[3] Others are calling for major overhauls of how such degrees are taught altogether. At MIT, the New Engineering Education Transformation (NEET) group has been testing an alternative approach that teaches ethics as a set of skills embedded in what engineers already know. Such classes train the next generation of AI developers to think not only about how they should design AI in future jobs, but also about whether those systems should be built at all.
While promising, many of these initiatives are directed only at CS/Eng students, and we have yet to see parallel efforts to integrate ethics into management or design education. Furthermore, these initiatives tend to focus on university coursework and leave out other forms of training, such as online education and coding bootcamps. In order to fundamentally change how cross-functional teams build AI, we will need to think more broadly about all the different occupational roles that shape the development pipeline and ensure that the people in those roles are empowered to think critically about their work. We will also need to make sure that people who follow non-traditional pathways into AI development are trained to ask critical questions about tech.
Companies are starting to recognize that in order to recruit and retain top talent, they will need to meet the rising demand for ethical tech, but we still have a long way to go. Skilled engineering graduates are already highly sought after and have more leverage with potential employers than their counterparts in many other industries. We aim to get to a point where tech companies are under pressure to demonstrate that they are building technology responsibly in order to attract top talent across disciplines: design, engineering, and management.
At the same time, there needs to be a major shift in company culture so that employees who are advocating for more responsible AI practices feel supported and empowered. Evidence suggests that the actions of internal advocates won’t have impact unless their work is aligned with organizational practices.[4] We think that these two components — critically minded engineers and organizational change — need to be in place in order to usher in system-wide changes.
1.3 Diverse stakeholders — including communities and people historically shut out of tech — are involved in the design of AI.
It’s not just the mindset of decision-makers that matters, but who is making those decisions. Tech has made strides in recent years to bring new and diverse voices into product development, but we are still far from where we need to be.
The diversity crisis in the tech industry — and its direct link to problems with bias in AI — has been well documented. It has been reported that women make up only 10 percent of people working on “machine intelligence” at Google and 15 percent of Facebook’s AI research group.[5] “It is not just that AI systems need to be fixed when they misrecognize faces or amplify stereotypes,” says a recent AI Now report. “It is that they can perpetuate existing forms of structural inequality even when working as intended.” It is crucial that the teams building AI are themselves diverse and represent a range of communities and perspectives.
Creating more diversity within developer communities, and specifically in who gets AI jobs and training, has a huge impact on how technology is built. A 2014 NCWIT report found that gender-diverse management teams performed better in terms of overall productivity and team dynamics. Further, the study found that companies that dominated the market did so by encouraging innovation that drew from a diverse knowledge base. Diverse teams are more likely to drive innovation and change.
Engineering teams should strive to reflect the diversity of the people who use the technology, along racial, gender, geographic, socioeconomic, and accessibility lines. Team members who reflect that diversity will be better attuned to the ways bias and discrimination manifest, and will also have more cultural context for how technologies might be received or interpreted in their community, region, or language.
Critically, changing the diversity of the people building AI will require making drastic changes to company culture. In its analysis of the diversity crisis in AI, AI Now concluded that a worker-driven movement aimed at addressing inequities holds the most promise for pushing for real change. Companies must foster an open culture in which the status quo can be questioned or challenged without fear of retaliation.
Companies will need to develop processes for consulting with diverse communities throughout the AI product life cycle, especially when the technology may have an adverse impact on a historically marginalized community or region. This will require teams to adopt a more participatory, open approach to how they do their work, using frameworks and tools such as participatory design, co-design, or design justice.[6] It may also require companies to adopt stricter rules to safeguard against harm. Companies may mandate, for example, that particular features be thoroughly tested with diverse user groups across geographic regions and languages before being deployed. Compliance frameworks exist for assessing risk and mitigating harm in AI, particularly when it comes to fairness and bias. But more work is required to ensure that companies not only adopt these narrow harm reduction processes, but also proactively develop new tools and processes for designing and deploying AI.
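As one hedged illustration of what such a pre-deployment safeguard could look like, the sketch below disaggregates a feature’s evaluation scores by locale and holds the release when any segment falls short. The segments, scores, and 0.90 quality bar are invented for illustration and do not describe any company’s actual process:

```python
# Illustrative pre-deployment gate: disaggregate evaluation results by user
# segment (here, locale) and hold the release if any segment falls below an
# agreed quality bar. Segment names, scores, and the 0.90 bar are hypothetical.
from typing import Dict, List

QUALITY_BAR = 0.90  # assumed minimum acceptable per-segment score


def failing_segments(scores: Dict[str, float], bar: float = QUALITY_BAR) -> List[str]:
    """Return the segments whose evaluation score falls below the bar."""
    return sorted(segment for segment, score in scores.items() if score < bar)


def release_decision(scores: Dict[str, float]) -> str:
    failures = failing_segments(scores)
    if failures:
        return "hold deployment; investigate segments: " + ", ".join(failures)
    return "all segments meet the bar; proceed with staged rollout"


if __name__ == "__main__":
    # Hypothetical per-locale accuracy of a feature measured before launch.
    eval_scores = {
        "en-US": 0.96,
        "es-MX": 0.93,
        "hi-IN": 0.87,  # below the bar: deployment should pause here
        "sw-KE": 0.84,
    }
    print(release_decision(eval_scores))
```

The specific metric and threshold would of course be negotiated per product and per community; the point is that a commitment to test across diverse user groups can be encoded as an explicit, auditable gate rather than left to informal judgment.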
1.4 There is increased investment in and procurement of trustworthy AI products, services and technologies.
Although there has been a rise in “impact investments” in socially responsible companies and startups, there is still a lot of work that needs to be done to ensure trustworthy AI products are getting the funding they need to become viable.
Impact investing — also known as socially responsible or ethical investing — is an investment approach that directs capital toward organizations and companies having a positive social or environmental impact on the world. It already represents a huge slice of investment in the US: the Forum for Sustainable and Responsible Investment estimates that in 2018, $12 trillion was invested in socially responsible investment funds, roughly 25% of all professionally managed assets in the US. Evidence shows that young investors overwhelmingly want to invest in socially responsible companies. Relatedly, the B Corp movement legally requires certified companies to “consider the impact of their decisions on their workers, customers, suppliers, community, and the environment.”
In tech, this wave of impact investing is increasingly shaping what kinds of companies get funded. We are already seeing tech investors pay more attention to data privacy, a cornerstone of developing AI responsibly. Nearly $10 billion was invested in privacy and security companies in 2019, with the largest rounds of funding going to startups like Rubrik, 1Password, and OneTrust.
We are also seeing larger tech companies pay more attention to privacy in their acquisition strategies, which is a step toward more trustworthy AI. In 2018, Apple acquired the privacy-conscious AI startup Silk Labs, which was building on-device machine learning software, and Cisco acquired the security startup Duo. In 2019, Microsoft acquired BlueTalon, a data privacy and governance service. Privacy is rapidly becoming a key part of a target company’s risk profile in any acquisition.
There is a clear opportunity now for such “impact investors” who care about building tech responsibly to shape the AI product landscape. In 2019 alone, 1,356 AI startups raised over $18.5 billion in funding, a new annual high for the sector. VC funds and angel investors that focus their portfolios on AI can themselves attract huge amounts of capital. Impact investors should build on the momentum that has been growing in recent years around responsible tech and continue to fund AI startups that are building tech ethically.
Similarly, big tech companies looking to acquire AI startups have a huge amount of power. By acquiring socially responsible startups and technologies, these companies can send a clear signal that building AI responsibly is not just a plus, and that failing to do so could be a major liability.