In a recent provocation, I proposed a speculative design approach to the challenges of transparency, evaluation, and human agency in generative AI adoption. A panel discussion on January 19th explored a possible future world where every AI agent must have a social license to operate. During the workshop part of the online event, 79 participants imagined scenarios for using a social license when an AI agent is introduced into patient-clinician interactions in healthcare.

I expand on this work in this blog post, starting with an overview of state-of-the-art research on introducing generative AI in healthcare settings. I then provide perspectives from the public through the lens of workshop participants and experts I’ve interviewed, including clinicians piloting AI tools in their practice. Amid the frictions, gaps, and assumptions that surround generative AI adoption in critical domains, I hope these early findings can offer interdisciplinary practitioners building blocks for trustworthy AI.

The current landscape

A wealth of recent research explores the use of generative AI in clinical contexts. Applications range from helping clinicians with administrative tasks (such as writing clinical notes) to decision support (such as aiding diagnosis). Known risks in the clinical deployment of generative AI systems include privacy and security breaches, as well as a lack of coherence, transparency, and interpretability in model outcomes. For example, training data may not be verified for domain-specific accuracy, leading to a ‘garbage in, garbage out’ problem. Another complication arises when models developed in one country or jurisdiction are sold and used in an entirely different one.

Scholars have discussed the ethical and legal implications of adopting generative AI in clinical care. For example, how could developers measure and mitigate the risk of automation bias, i.e. overreliance on model outcomes given clinicians’ busy schedules? When AI contributes to patient injury, who will be held responsible? And if patients or clinicians cannot report concerns or inaccuracies in a way that fully captures their context, understanding the underlying root causes becomes even harder. Given these and other open questions, AI-based systems have had limited success transitioning from research studies to routine care, impeding AI's impact on healthcare.

The speculative design exercise

These are hard questions. That’s where speculative design comes in: it helps expand what is possible and collectively negotiate which possibilities are preferred over others. To make the discussion concrete, the workshop opened with a design fiction scenario:

“The year is 2048. You’ve just been awakened from a hibernation chamber you entered four years ago. As part of your reentry into the world, you are given a medical checkup. As you walk into the clinician’s office, you realize they have an AI assistant they are talking to. The clinician greets you and hands you the AI assistant’s social license to operate.”

The social license is an AI-powered interface for asking questions about the clinician’s AI assistant. It is a form of independent ombudsman, a living socio-technical contract between the AI agent, other stakeholders, and society at large. It also provides a platform for ongoing third-party validation (think an “AI *Privacy Not Included” or an “AI Consumer Reports”) that informs you about the potential risks of a specific AI agent.
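For AI builders wondering what might sit behind such an interface, here is a minimal sketch of a machine-readable record a social license could be grounded in. The structure and every field name are my own illustrative assumptions, not an established schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AuditEntry:
    auditor: str      # accredited third party, e.g. a consumer-protection reviewer
    audit_date: date
    scope: str        # what was reviewed: data handling, bias, safety, ...
    outcome: str      # e.g. "pass" or "conditional", plus a link to the report

@dataclass
class SocialLicense:
    """Hypothetical machine-readable record behind the conversational interface."""
    agent_id: str               # which AI agent this license covers
    operator: str               # who deploys the agent and is accountable for it
    permitted_uses: list[str]   # e.g. ["transcribe visit", "draft clinical note"]
    prohibited_uses: list[str]  # e.g. ["autonomous diagnosis"]
    data_provenance: list[str]  # sources and lineage of training data
    genealogy: list[str]        # development, ownership, and modification history
    audits: list[AuditEntry] = field(default_factory=list)
```

A conversational layer could then answer questions about provenance, ownership, or audits by grounding its responses in these fields rather than generating them freely.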

Participants in the workshop then engaged with the questions:

  • What do you see on the social license? How does it ask for and register consent?
  • What would you ask the AI agent’s social license? What answers do you imagine it providing?
  • What policy, infrastructure, or invention would allow you to trust the AI agent’s social license?

Analysing the emerging themes within the insights generated during the January 19th workshop

The findings

Workshop participants included artists, educators, AI practitioners, researchers, policymakers, people working across product and innovation roles, and others. We gathered truly global perspectives from India, Namibia, Thailand, Austria, Canada, the UK, and the U.S. A thematic analysis of the discussions across different breakout groups surfaced three high-level findings about this possible future world:

  1. Building consent infrastructure has broader implications for AI efficiency and safety
  2. Design choices should center the human agency of patients and clinicians
  3. AI infrastructure needs to build trust through different forms of checks and balances

In what follows, I summarise these themes and provide perspectives from what I’ve learned from clinicians who are piloting AI tools in their practice.

1. Building consent infrastructure has broader implications for AI efficiency and safety

In conducting expert interviews, I talked to clinicians piloting an AI tool that transcribes their conversation with a patient and generates a note. Notes include essential information such as prescribed medication and specific directions from the healthcare provider. Meaningful informed consent was foundational for the clinicians I interviewed. They described critical requirements for consent, including the use of clear language, assessing patients’ ability to understand the information, and giving them time to ask questions as well as to decline consent. As one of them shared, “You have the right to say no, this is your body." Another clinician worried that AI will be adopted in a way that "consent is not an option for us or the patient … patients can't say they want to opt out of electronic health records (and if they do) it doesn't matter." One clinician discussed the motivation for AI adoption at their institution, showing how their goals differ from the hospital’s (saving money, improving efficiency, and limiting liability): “as a healthcare provider, I want to have a conversation about ethics and morality, which is where this AI conversation needs to be.”

The goal of the speculative design workshop was to engage directly with these ethical and moral considerations. An AI agent that undergoes a certification process to obtain a social license nevertheless remains a black box to its users. Most workshop participants said they would want a social license to provide disclosures about data, data provenance, and a genealogy of the AI agent, including a history of its development, ownership, modification, and audits. Meaningful consent then needs to account for a patient’s understanding of how using an AI agent might impact them and their clinician. Participants talked at length about the limitations of present-day consent mechanisms like cookie banners and boilerplate terms-of-service agreements. Meaningful consent to using an AI agent needs to go beyond individual interactions that can be perceived as transactional. It is more helpful to recognize such interactions as relational, collective, and evolving over time: both people and technology are likely to change their behavior based on even a single interaction. For example, a clinician may adjust how they write a discharge note after following an AI agent’s recommendation, while the AI agent itself is updated based on data gathered during prior interactions with several different clinicians. A future consent infrastructure will need to account for this relationality, these temporal dynamics, and the collective as well as individual implications of consent.
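What could such a consent infrastructure look like in code? Below is a minimal sketch of an append-only consent ledger that treats consent as scoped, time-bound, and revisable rather than a one-off checkbox. Everything here, from the class names to the fields, is a hypothetical illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ConsentEvent:
    """One moment in an evolving consent relationship, not a one-off checkbox."""
    patient_id: str
    agent_id: str                    # the AI agent the consent applies to
    scope: str                       # e.g. "transcribe visit", "reuse note for model updates"
    granted: bool                    # consent can be given or withdrawn
    timestamp: datetime
    expires: datetime | None = None  # consent can lapse and be asked for again

@dataclass
class ConsentLedger:
    """Append-only history, so consent can be revisited, narrowed, or revoked."""
    events: list[ConsentEvent] = field(default_factory=list)

    def record(self, event: ConsentEvent) -> None:
        self.events.append(event)

    def is_granted(self, patient_id: str, agent_id: str,
                   scope: str, at: datetime) -> bool:
        # The most recent applicable event wins, capturing the temporal dynamics.
        for event in reversed(self.events):
            if (event.patient_id, event.agent_id, event.scope) == \
                    (patient_id, agent_id, scope):
                if event.expires is not None and at > event.expires:
                    return False
                return event.granted
        return False  # no record means no consent
```

An append-only history is one way to honor the relational framing: nothing is silently overwritten, and a patient’s decision can be narrowed, renewed, or revoked at any point.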

2. Design choices should center the human agency of patients and clinicians

All clinicians I talked to spoke at length about how AI-generated notes help them save time and focus on being present with their patients instead of worrying about administrative tasks. All of them also shared that they read each AI-generated clinical note to check it for completeness and accuracy. It matters to them that they can add contextual data the AI might have disregarded. They expressed concern, however, that not every clinician will do this. Design choices in the AI interface need to reflect that and learn from clinicians’ adjustments to the AI-generated notes. From the perspective of the clinicians I interviewed, the patient has almost no role in interacting with the AI; patients will trust their clinician and the institution to ensure that any AI system in use complies with high standards of care and accountability.
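One way an interface could learn from those adjustments is to capture the difference between the AI draft and the note the clinician actually signs, and treat it as a feedback signal. A minimal sketch, with the function name and framing being my own assumptions:

```python
import difflib

def clinician_adjustments(ai_draft: str, signed_note: str) -> list[str]:
    """Return the lines a clinician added to or removed from the AI draft."""
    diff = difflib.unified_diff(
        ai_draft.splitlines(),
        signed_note.splitlines(),
        fromfile="ai_draft",
        tofile="signed_note",
        lineterm="",
    )
    # Keep only real additions/removals, not the +++/--- file headers.
    return [
        line for line in diff
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
    ]
```

Additions would often carry the contextual data clinicians mentioned; deletions could flag inaccuracies worth routing back to whoever maintains the model.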

During the workshop scenario, participants got to interact with the social license of their clinician’s AI agent. They posed a number of questions related to the purpose, transparency, explainability, and implications of using the AI agent. One participant was concerned that patients and clinicians might not be as candid if they knew an AI system was listening and could take their words out of context. Multiple participants worried about whether they would retain the agency to decline decisions made by an AI agent, or to decline using the AI agent altogether. They talked about the difference in the quality of healthcare between those who use AI and those who don’t have access to AI tools. Participants requested an option to negotiate or to choose a meaningful alternative that is not a degradation of service.

3. AI infrastructure needs to build trust through different forms of checks and balances

For the clinicians I interviewed, the biggest concern in using generative AI tools was not liability or errors leading to medical malpractice. It was a data breach, and how that might impact the livelihoods of their patients. One clinician shared their concern that “data is not going to be de-identified in order to be useful in tracking patients over time.” Furthermore, all clinicians shared that patients aren't going to know the implications of using AI and what could potentially go wrong.

Workshop participants shared that a social license could reduce the burden on individual patients and clinicians and may improve access to healthcare. Some shared that the trust, accountability, and liability a clinician offers should be an integral part of a social license for an AI agent. Participants questioned which stakeholders the social license considers when answering questions about the AI agent. They talked about the need for different forms of checks and balances, including human oversight, independent review, algorithmic audits by an accredited authority, legal infrastructure, standardization, clearly defined responsibilities, and accountability across all involved stakeholders. For example, participants talked about the need for a human in the loop for AI decision-making, and for transparency about who those humans are.
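To ground that last point: an oversight trail might record, for every AI-assisted decision, who the human in the loop was and what they decided. The record below is a hypothetical sketch; none of the fields come from an existing standard:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class OversightRecord:
    """Hypothetical trail for one AI-assisted decision, keeping responsibility traceable."""
    decision_id: str
    agent_id: str
    recommendation: str         # what the AI agent proposed
    reviewer: str               # the named human in the loop, disclosable on request
    reviewer_role: str          # e.g. "attending physician"
    approved: bool              # the human may accept or override the agent
    rationale: str              # why the reviewer accepted or rejected it
    timestamp: datetime
    auditor: str | None = None  # accredited authority entitled to inspect this record
```

Records like this would give algorithmic auditors and ombudspeople something concrete to inspect, and would make the question of who those humans are answerable on request.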

Next steps

Speculative design methods and in-depth engagement with domain experts could create new ways of approaching and understanding complex challenges in generative AI adoption. I hope this work inspires AI builders to consider speculative design as a tool to co-create better products and services, reclaiming our ability to shape the futures we want. The preliminary findings from this work suggest new dimensions for improving transparency and safety in AI adoption in clinical health scenarios.

Watch the workshop recording here and reach out to me if you want to explore the findings in depth.

Thank you to everyone who joined the January 19th event and the truly interdisciplinary star panel: Gemma Petrie, Principal Researcher at Mozilla; Sophia Bazile, Futures Literacy and Foresight practitioner; and Richmond Y. Wong, Assistant Professor of Digital Media at Georgia Tech's School of Literature, Media, and Communication; as well as my co-facilitator Tyler (T) Munyua, Wrangler Program Assistant on the Mozilla Festival Team. This work wouldn’t be possible without your collective contributions and your willingness to engage in dialogue and explore positive futures for generative AI adoption in healthcare.