
[Case study] Journeying toward AI-assisted documentation in healthcare
Wednesday, June 5, 2024 • Designing with AI 2024
Speaker: Jennifer Kong

Summary

Documentation technology is the foundation of modern healthcare delivery. Convoluted, redundant, and excessive documentation is a pervasive problem that causes inefficiency across all aspects of the industry. At IncludedHealth, we are developing an AI-assisted documentation tool that summarizes and documents conversations between patients and their care providers. A care provider can push one button and have their entire patient encounter captured in a succinct, standardized format. The results of our pilot launch were staggering: within six months, we demonstrated a 64% reduction in time per encounter. Despite these promising results, challenges specific to the demands of the healthcare domain remain. As our team continues to develop solutions to meet these challenges, we gain even more clarity on what it takes to design a human-backed, AI-powered healthcare system.

Takeaways

From this session, you can expect to learn the following:

  • Developing AI design in healthcare requires close collaboration between end users and your data science team

  • Piloting GenAI solutions may be more effective than traditional prototyping

  • Trading accuracy for efficiency is a barrier to adopting GenAI tools in healthcare

  • GenAI design in healthcare requires establishing critical boundaries as well as a good understanding of cognitive processing

  • Other factors to consider when designing AI solutions for service-based industries include how training might be impacted, the tension between standardization and personalization of data output, and the need for more autonomy and control elements given the consequences of unpredictable output errors

Key Insights

  • Generative AI can reduce healthcare documentation time by up to 64% in low-risk chat encounter scenarios.

  • Limiting AI applications to verbal and text-based interactions reduces risk compared to video or phone encounters where non-verbal cues matter.

  • LLMs excel at summarization but struggle with capturing exact medical details and unspoken actions.

  • Balancing model accuracy and latency is critical to maintain business value through time savings.

  • Incremental, imperfect pilot releases provide better learnings than traditional iterative prototyping with AI tools.

  • Implicit user feedback mechanisms, like measuring the edit rate of AI-generated notes, help assess output quality without disrupting workflows.

  • User excitement about AI tools can decline due to cognitive biases such as novelty wearing off, frequency bias toward errors, and expectation bias.

  • Operational metrics, like note quality affecting performance reviews, can shape user attitudes toward AI tools more than raw efficiency gains.

  • Educating users consistently on AI’s augmentative role and setting realistic expectations improves tool adoption and satisfaction.

  • Human-centered design involving early collaboration between designers, data scientists, researchers, and quality assurance is essential for effective AI integration in healthcare.
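The edit-rate signal mentioned above can be made concrete in code. The talk defines the metric only as "the fraction of human-added characters" in the final note; this diff-based implementation, including the function name and the use of a character-level diff, is an assumption — a minimal sketch of one way such a metric could be computed, not IncludedHealth's actual implementation:

```python
import difflib

def edit_rate(ai_draft: str, final_note: str) -> float:
    """Approximate the fraction of human-added characters in the final note.

    Characters that survive unchanged from the AI-generated draft are found
    with a character-level diff; everything else in the final note counts as
    human-added. (Hypothetical sketch; the talk does not specify the diff.)
    """
    if not final_note:
        return 0.0
    matcher = difflib.SequenceMatcher(a=ai_draft, b=final_note, autojunk=False)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return 1.0 - matched / len(final_note)

# An untouched AI note needs no human edits; a note written from scratch
# is entirely human-added.
print(edit_rate("Patient reports mild headache.", "Patient reports mild headache."))  # 0.0
print(edit_rate("", "Patient reports mild headache."))  # 1.0
```

A metric like this can be logged passively on every saved note, which is what makes it an implicit feedback mechanism: it requires no extra action from the care provider.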

Notable Quotes

"Physicians can spend up to two hours on documentation for every one hour of patient interaction."

"We released several incremental but imperfect pilot solutions to inform usability and strategy rather than relying on typical prototyping."

"The current LLM models are great at summarizing, but they’re not so great at capturing exact details."

"AI is not a comprehensive silver bullet solution; we limited our scope to capturing notes for verbal interactions only."

"Our UI focused on saving time, making the workflow one button and enabling manual edits and regeneration for error recovery."

"We created an edit rate metric—the fraction of human-added characters—to measure how much human editing was needed."

"Users initially were excited about AI, but six months later, there was more dissatisfaction and frustration despite quantitative time savings."

"Errors that were funny at first became annoying, leading to frequency bias because users felt errors were more frequent than before."

"Employees were graded on note quality, and their scores declined by about 10% after using the AI tool, impacting morale."

"The plot twist was that AI was causing new problems impacting performance, morale, and satisfaction, showing people are the real key."

