The truth is, I was disabled long before I realized it. I’ve been taking life saving medication since before I could put on my own shoes, and I spent so much time in the emergency department of my local hospital as a kid — alternately struggling to breathe and cradling a sprained joint — that I might as well have taped my posters of R&B singers to the walls.
It wasn’t until I had a few silver hairs that I came to understand that the chronic health conditions I’d lived with my whole life — and the ones I’ve developed over time — constituted a disability. That is, they constitute “a physical or mental impairment that substantially limits one or more major life activities,” as defined by the Americans with Disabilities Act, which is the law of the land in the United States where I live. Connecting those dots changed everything, from the way that I see the world to the way that I see myself. And because I grew up in a service-oriented home, it was only natural to expand my advocacy wheelhouse to include working with the disabled community, with the Disability Justice (DJ) framework created by Sins Invalid as an anchor.
As outlined by the Disability Activist Collective, DJ asserts that for the disabled to fully be part of society, we must have the freedom to be our whole selves (that is, all of our identities together) at all times. This goes beyond rights — the bare minimum that people in power are required to honor — and centers the things that cannot be given or taken away: our identities, our values, our culture. Our community. At its core, DJ is a cross-disability framework that values access, self-determination, and an expectation of difference in disability, identity, and culture, which means all are welcome, regardless of the type of disability.
We are not a small group, but we are often an invisible one. The World Health Organization (WHO) estimates that about 1.3 billion people live with a “significant” disability, accounting for 16 percent of the total population. But if you look around the place you likely spend most of your day, you would never know it; a Harvard Business Review report found that 76 percent of employees with disabilities have not disclosed their experience at work.
That’s not surprising. While it took me a bit to understand how much my health impacted me, one thing that didn’t escape me was the contempt much of the world has for the disabled. In truth, it’s frequently not our conditions that make parts of life tough — it’s the lack of community care that does it. Whether it’s people on social media deriding innovations created to assist the disabled as useless products for lazy people, or the ways we’re depicted in pop culture, this world isn’t exactly welcoming to disabled folks.
While AI could be used to better meet the needs of the disabled (which, by extension, would also meet the needs of the abled), there are currently many instances where it actively works against us. AI systems are only as unbiased as their training data, and training data is only as unbiased as the humans who create and curate it, which is to say, not very. In 2023, researchers published “Automated Ableism: An Exploration of Explicit Disability Biases in Artificial Intelligence as a Service (AIaaS) Sentiment and Toxicity Analysis Models,” which explored the bias embedded in several natural language processing (NLP) algorithms and models. The result: every single public model they tested “exhibited significant bias against disability,” classifying sentences as negative and toxic simply because they contained references to disability, ignoring context and the actual lived experiences of people.
And then there’s employment. It’s estimated that about 70 percent of companies around the world use AI-powered software to make hiring decisions and track employee productivity. The tools work by identifying and replicating patterns around who was previously hired, perpetuating the bias embedded in the system. For example, while a human recruiter can presumably interview someone who has been out of the workforce for the last year to focus on their health and understand that it doesn't impact their ability to do a job, the algorithm might dismiss them immediately for the gap in their work history. Nuance, you see, is nowhere to be found.
Our health is also on the line. Last year, journalists found that Medicare Advantage insurance plans use AI to determine what care they will cover for their 31 million elderly subscribers in the United States. The massive problem: the algorithms are a black box that can’t be peered into; it’s nearly impossible for patients to fight a denial when they don’t know why they were denied coverage in the first place.
The most terrifying example I’ve seen is the Allegheny County (Pennsylvania) Department of Human Services’ AI system, which the United States Department of Justice is currently investigating because residents allege that it incorrectly flags disabled parents — like me — as neglectful, leading to children being removed from their homes with no actual evidence of neglect.
It’s tempting to think that we’ve gone too far down the road to double back. But that stance is shortsighted and will not move us closer to equity. There’s a rallying cry in the DJ community: “Nothing about us, without us.” If we are at the table when AI systems are created and deployed, we can help account for the needs of all.
I sat down with Meier Galblum Haigh (they/them) to talk all things AI and disability. Galblum Haigh is founding executive director at Disability Culture Lab (DCL), a nonprofit media and narrative lab dedicated to shifting the narrative on disability from fear and pity to solidarity and liberation, and moving the needle on ableism from indifference to action (I serve as co-chair of the board). Galblum Haigh specializes in narrative change work for nonprofits, and their wins include helping elect Deb Haaland, now a Cabinet Secretary, to the United States Congress, launching She the People, supporting the Women’s March and Black Lives Matter movements, and passing legislation to protect workers and their families.
Here, Galblum Haigh and I talk about intersectionality, ways that AI can support the disabled, and how we can win the race to make AI more inclusive.
Meier Galblum Haigh. Photo credit: Kea Dupree-Alfred.
Rankin: Why is it important to you to dismantle ableism in all spaces, including AI?
Galblum Haigh: As Maya Angelou taught us, none of us are free until we are all free. And Audre Lorde taught us that none of us live single-issue lives.
Ableism is sinister because every system of oppression is disabling. Capitalism, racism, transphobia, patriarchy, colonialism, homophobia — all disabling. After we are disabled by corporations with unsafe conditions, racist doctors and cops, abusive partners, and unwanted pregnancy, we become even more vulnerable, and we blame ourselves because ableism tells us we have less value as disabled people.
Without careful regulation, AI will make ableism and oppression worse. It will encourage employers not to hire disabled workers, Black workers, trans workers, etc. It will (incorrectly) tell us autistic people are all white men, because it will amplify stereotypes that it is fed via its training data. Simply put: if folks with power “teach” AI to replicate their power, it will. It’s very important that we tune in right now and ensure that AI doesn’t further supercharge ableism, racism, colonialism, and transphobia.
Rankin: Why is collaboration important for effectively tackling issues at the intersection of AI and disability justice?
Galblum Haigh: AI is shifting the world so quickly. I know folks who’ve started AI businesses, advising nonprofits on how AI could help us move to a 32-hour work week. That would be huge for many disabled people. But I also know folks who’ve been laid off, or whose workplaces are reducing workforces instead, because AI means they can increase corporate profits by paying fewer salaries. That’s what AI will come down to — profits or workers. Will we democratize the gains or will just a few folks benefit?
Wealthy folks are already making a lot of money from AI, laying off people along the way. They don’t have our best interests at heart, and they are moving fast. If we don’t organize, quickly, they will win. It will take a lot of collaboration, because we are at a moment of huge transformation and a lot of money is on the line. That means corporations will fight with everything they have to bulldoze us. We can’t back down.
Rankin: What are some ways AI can support disabled communities?
Galblum Haigh: First, I think AI has the potential to allow a lot of organizations to more easily complete repetitive tasks and then offer four-day or 32-hour work weeks as standard practice, which could make work more accessible to many disabled people and caregivers (and disabled caregivers).
Second, a lot of disabled people face discrimination in the doctor's office, and I hear from many that the medical side of AI is very intriguing. We’ll have to see how that develops, of course, but the ability to investigate potential medications and diagnoses, to better understand what specialists to pursue for your symptoms — that’s huge.
Third, I think that like many folks, disabled people will also pursue the exciting opportunities AI offers around task efficiency and work productivity. In the United States, many disabled people pay a massive time tax, including things like completing administrative paperwork, just to exist. AI could offer ways to cut hours from some basic tasks, which could be transformative. I’m already hearing about folks using it, and look forward to the day when it might be reliable for things like legal support and research.
Rankin: If you had a magic wand, what is the first thing you would use it to fix, as it relates to AI and disability justice?
Galblum Haigh: My big magical fix would be putting the folks most impacted by AI in charge of regulating and visioning it. At this moment, the best way we can achieve that is by organizing, as I don’t think governments or Big Tech companies will just hand us their power!
Rankin: It looks like we’re gonna have to take it!
This post is part of a series that explores how AI impacts communities in partnership with people who appear in our AI Intersections Database (AIIDB). The AIIDB maps the spaces where social justice areas collide with AI impacts, and catalogs the people and organizations working at those intersections. Visit to learn more about the intersection of AI and Disability Justice.