
Investment Impact Evaluation

Oct. 4, 2021

Written by Ayana Byrd & Kenrya Rankin



In 2016, the organization expressed a desire to focus on internet health concerns and, says a staffer, to “shift from being very funder driven to being an equal partner with our outside funders in the design of the work.” That meant using first the Movement Building strategy (2016) and then the Trustworthy AI Theory of Change (2019) to guide the selection of partners aligned with the Foundation’s strategy, rather than being driven by partner agendas. Note that this evaluation covers the beginning of 2016 through mid-2020, so this shift in Fellowships and Awards (F&A) investment strategy happened in stages during the evaluation period.

An analysis of the fellowships and awards funded (excluding projects connected to sponsorship funding) reveals a positive trend in investment in work that advances the trustworthy AI impact goal in some way (Table 1). In 2019, 35% of funds ($3,448,954) supported such work. In 2020, that share jumped to 63% ($4,482,477), and in 2021 it was 66% ($1,640,120). (Note that 2021 falls outside the evaluation period; data is shared for context.)

A look at issue areas associated with the work of funded fellows and awards by year since 2015—the year before Mozilla Foundation took up internet health as its impact goal—shows a shift in priorities (Table 2). (Note that these values don’t match those in Table 1, as just one primary issue-area tag is selected for each program’s internet health focus; while the work might touch on many issue areas, these are the primary ones.) Pre-2016, projects with a primary internet health issue area of digital inclusion represented the bulk of investment (42%; Table 3), with open innovation on its heels at 41%. But 2016 marks the start of a dramatic (if not consistent) shift of funds away from the digital inclusion space, which represented just 17% of total investment by 2020. Meanwhile, investment in work with trustworthy AI as the primary internet health issue area went from 0% in 2015 to 21% in 2020—the highest relative investment to date.
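
To make that tabulation concrete, here is a minimal sketch in Python with pandas of how per-year issue-area shares like those in Table 3 could be computed when each project carries exactly one primary tag. All column names and figures below are illustrative assumptions, not actual F&A data.

```python
import pandas as pd

# Illustrative grant records; every figure is made up. Each project carries
# exactly one primary internet health issue-area tag, per the note above.
grants = pd.DataFrame(
    [
        (2015, "digital inclusion", 420_000),
        (2015, "open innovation", 410_000),
        (2015, "web literacy", 170_000),
        (2020, "digital inclusion", 170_000),
        (2020, "trustworthy AI", 210_000),
        (2020, "open innovation", 620_000),
    ],
    columns=["year", "issue_area", "amount_usd"],
)

# Sum investment per year and issue area, then take each area's share of
# that year's total, which is the Table 3 presentation.
by_area = grants.groupby(["year", "issue_area"])["amount_usd"].sum()
shares = by_area / by_area.groupby(level="year").transform("sum") * 100
print(shares.round(1))
```

Because only the primary tag is counted, shares sum to 100% within each year, which is why these values won’t match Table 1’s impact-goal alignment figures.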

F&A began collecting data on movement intersections in 2020 (Table 4). A look at primary and secondary movement area intersections for all F&A projects funded in 2020 and 2021 (the latter falling outside the evaluation period) illustrates which movements garnered the greatest support in recent years. For 2020, open science (17%), open source (16%) and human rights (14%) formed the top three. In 2021, that shifted to education (35%), ethics (30%) and open source (15%).




Table 1. Fellowships and Awards Investments by Connection to Trustworthy AI Impact Goal (beginning 2019)

*2021 falls outside the evaluation period; data is shared for context.




Table 2. Fellowships and Awards Investments by Connection to Internet Health Issue Areas

Note: Excludes Mozilla Open Source Support (MOSS) Awards, which fund open source technologists working to broaden access, increase security and empower internet users. Where a value is blank, no funding was classified for that issue area during that year. 2021 falls outside the evaluation period; data is shared for context.




Table 3. Percentage of Fellowships and Awards Investments by Internet Health Issue Area


Note: Excludes MOSS. Where a value is blank, no funding was classified for that issue area during that year. 2021 falls outside the evaluation period; data is shared for context.




Table 4. Fellowships and Awards Investments by Movement Area Intersection (beginning 2020)

*2021 falls outside the evaluation period; data is shared for context.



Advancing the Overall Theory of Change

These data provide insight into a central question of this section: Does the Fellowships and Awards program advance or hinder Mozilla Foundation’s overall Theory of Change? The answer is that, when it comes to which fellow and awardee projects are funded, it does in fact advance the impact goal.

“The impact goal has made a huge difference just in terms of giving us more of a horizon to pin things to. We have more predictability about what the strategy is going to be and what we’re going to be interested in at six months or a year from now, which we didn’t have in the past.”


But it’s too soon to tell exactly how it’s advancing that goal—more data is needed. And time. The Foundation adopted the impact goal in 2019, three years into the period this evaluation covers. And the length of fellowships means it may take months or even years to see noticeable, measurable program impacts.

However, it is clear that Mozilla has the opportunity to better align its investment strategy and timeline with program end goals. The stakeholders interviewed for this report emphasized that this alignment rests on being more intentional and starting with the end in mind: asking not just “Who wants funding?” or even “What are we funding?” but starting from “What will change as a result of this work?” and “What will this project contribute to the world?”

"We have a theory of change that has short-term, medium-term outcomes. So we can look at that and say the short-term outcomes we want to see in the world are X, Y and Z. What do we need to fund to get to that?”

”The Theory of Change can help us identify the right people who are going to help us push those opportunities forward.”


Just under two-thirds (63%) of all F&A investments in 2020 supported work that advances the AI impact goal. If the ultimate target is 100% alignment, there are miles to go before that is reality. If it is not, the remaining 37% of funded people and projects could benefit from more cohesion around additional, well-articulated goals.

“If we're saying that there’s a percentage that's outside of the impact goal that we’re going to be investing in, I think that also needs to have a strategy and that also needs to be tied to strategy the same way that the AI work is.”


For projects whose goals align with the stated impact goal—and for those that don’t—it’s difficult to maximize impact without measuring it both qualitatively and quantitatively. A standardized final report for all fellowships and awards, one that poses questions that more directly evaluate impact, would go a long way; one possible shape is sketched after the quote below.

“If we required every project at the end of their final report to say how many users that project had had, or how many people had tested it, or whatever so that in six years we could say 100,000 people use projects Mozilla funded, that would be a win.”
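
As one possible shape for that instrument, here is a minimal sketch of a final-report record whose answers could be rolled up across projects and years. Every field name is a hypothetical assumption, not an actual Mozilla form.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FinalReport:
    """Hypothetical standardized final report for a fellowship or award."""
    project_id: str
    funding_year: int
    # Quantitative questions, answered at project close where possible.
    users_reached: Optional[int]       # e.g., downloads or active users; None = unknown
    people_who_tested: Optional[int]   # testers or pilot participants
    # Qualitative questions, mirroring the interviews quoted above.
    what_changed: str                  # "What changed as a result of this work?"
    contribution: str                  # "What did this project contribute to the world?"

def total_users(reports: list[FinalReport]) -> int:
    """Aggregate across projects, e.g., 'N people use projects Mozilla funded'.
    Unknown counts (None) contribute nothing to the total."""
    return sum(r.users_reached or 0 for r in reports)
```

The optional fields matter: as staffers note below, many recipients cannot report user counts at the moment a thing is released, so the form has to tolerate gaps rather than force numbers.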


But Mozilla faces very specific challenges in collecting data that covers long periods of time without running afoul of its commitment to privacy. So it will be crucial to craft a measurement and evaluation framework that not only meets collection needs but also protects the people directly impacted by the programs, without privileging certain kinds of data extraction. Staffers say that during the evaluation period, the process of collecting data has been “very all or nothing.” The goal should be to land in the useful in-between.

“We’ve struggled with impact measurement, particularly since the biggest impact often takes place after the fellowship/award period. We have also found that many funding recipients have a difficult time reporting on the number of people engaged in a thing just as that thing is being released into the world. We don’t want to collect data just for the sake of it—that’s not who we are. But we should focus on developing an evaluation framework that can be used over time and allows us—and others—to more fully understand immediate-, medium- and longer-term impacts.”
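
One way to land in that in-between, sketched here under our own assumptions rather than any existing Mozilla practice, is to store only coarse, bucketed ranges instead of exact counts, keeping trends comparable over time without precise tracking:

```python
# Bucket edges are arbitrary assumptions for illustration.
BUCKETS = [0, 100, 1_000, 10_000, 100_000]

def bucket(count: int) -> str:
    """Map an exact count to a coarse range before it is ever stored."""
    for low, high in zip(BUCKETS, BUCKETS[1:]):
        if count < high:
            return f"{low:,}-{high - 1:,}"
    return f"{BUCKETS[-1]:,}+"

print(bucket(3_500))    # 1,000-9,999
print(bucket(250_000))  # 100,000+
```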


There are some programs—including Mozilla Open Source Support (MOSS)—that already collect this information in a way that works at the program level. There is an opportunity to mine the wins surfaced by that process to help create a new F&A framework.

“We thought about how we measure the growth of a thing or the shrinkage of the thing over time to understand, ‘Did the funding have any impact on the overall scope of the thing?’ We ask questions about what infrastructure did you build, we ask questions about what maintenance did you do, what security holes did you plug, what kind of invisible work happened as a result of this funding that we might not have seen otherwise, that’s not just new features, or building a new thing, or launching a new thing? So I think we’ve been fairly thoughtful about putting together some attempts to measure that stuff. And it’s not super tested, so I think the next step would be rolling it out to more programs and testing and seeing if we get back what we want.”
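
Read as a set of questions, that MOSS approach implies a before-and-after record. Here is a minimal sketch of what rolling it out to more programs might look like; the field names are our assumptions, not the actual MOSS instrument.

```python
from dataclasses import dataclass, field

@dataclass
class ProjectSnapshot:
    """A point-in-time measure of a funded open source project (hypothetical)."""
    contributors: int
    active_users: int

@dataclass
class StewardshipReport:
    """Sketch of the growth and 'invisible work' questions quoted above."""
    before: ProjectSnapshot
    after: ProjectSnapshot
    infrastructure_built: list[str] = field(default_factory=list)
    maintenance_done: list[str] = field(default_factory=list)
    security_holes_plugged: int = 0

    def growth(self) -> float:
        """Relative change in users: did funding change the project's scope?"""
        baseline = max(self.before.active_users, 1)
        return (self.after.active_users - self.before.active_users) / baseline
```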