Summary
Documentation technology is the foundation of modern healthcare delivery. Convoluted, redundant, and excessive documentation is a pervasive problem that causes inefficiency across the industry. At IncludedHealth, we are developing an AI-assisted documentation tool that summarizes and documents conversations between patients and their care providers. A care provider can push one button and have their entire patient encounter captured in a succinct, standardized format. The results of our pilot launch were staggering: within 6 months, we demonstrated a 64% reduction in time per encounter. Despite these promising results, challenges specific to the demands of the healthcare domain remain. As our team continues to develop solutions to meet these challenges, we gain even more clarity on what it takes to design a human-backed, AI-powered healthcare system.
Takeaways
From this session, you can expect to learn the following:
- Developing AI design in healthcare requires close collaboration between end users and your data science team
- Piloting GenAI solutions may be more effective than traditional prototyping
- Trading accuracy for efficiency is a barrier to adopting GenAI tools in healthcare
- GenAI design in healthcare requires establishing critical boundaries as well as a good understanding of cognitive processing
- Other factors to consider when designing AI solutions for service-based industries include how training might be impacted, the importance of standardization vs. personalization of data output, and the need for more autonomy and control elements given the consequences of unpredictable output errors
Key Insights
- Generative AI can reduce healthcare documentation time by up to 64% in low-risk chat encounter scenarios.
- Limiting AI applications to verbal and text-based interactions reduces risk compared to video or phone encounters where non-verbal cues matter.
- LLMs excel at summarization but struggle with capturing exact medical details and unspoken actions.
- Balancing model accuracy and latency is critical to maintain business value through time savings.
- Incremental, imperfect pilot releases provide better learnings than traditional iterative prototyping with AI tools.
- Implicit user feedback mechanisms, like measuring the edit rate of AI-generated notes, help assess output quality without disrupting workflows.
- User excitement about AI tools can decline due to cognitive biases such as novelty wearing off, frequency bias toward errors, and expectation bias.
- Operational metrics, like note quality affecting performance reviews, can shape user attitudes toward AI tools more than raw efficiency gains.
- Educating users consistently on AI’s augmentative role and setting realistic expectations improves tool adoption and satisfaction.
- Human-centered design involving early collaboration between designers, data scientists, researchers, and quality assurance is essential for effective AI integration in healthcare.
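The edit-rate feedback metric described above (the fraction of human-added characters in the final note) can be sketched in a few lines of Python. This is a hypothetical illustration, not IncludedHealth's implementation; the function name and the choice of `difflib.SequenceMatcher` are assumptions.

```python
import difflib

def edit_rate(ai_note: str, final_note: str) -> float:
    """Fraction of characters in the final note that a human added or changed.

    0.0 means the clinician accepted the AI draft verbatim; values near 1.0
    mean the note was largely rewritten. Hypothetical sketch, not the
    production metric.
    """
    if not final_note:
        return 0.0
    matcher = difflib.SequenceMatcher(a=ai_note, b=final_note)
    # Count characters of final_note that do not match the AI draft.
    added = sum(
        j2 - j1
        for tag, _i1, _i2, j1, j2 in matcher.get_opcodes()
        if tag in ("insert", "replace")
    )
    return added / len(final_note)
```

Tracking this ratio per encounter gives an implicit quality signal without asking users to rate each note.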
Notable Quotes
"Physicians can spend up to two hours on documentation for every one hour of patient interaction."
"We released several incremental but imperfect pilot solutions to inform usability and strategy rather than relying on typical prototyping."
"The current LLM models are great at summarizing, but they’re not so great at capturing exact details."
"AI is not a comprehensive silver bullet solution; we limited our scope to capturing notes for verbal interactions only."
"Our UI focused on saving time, making the workflow one button and enabling manual edits and regeneration for error recovery."
"We created an edit rate metric—the fraction of human-added characters—to measure how much human editing was needed."
"Users initially were excited about AI, but six months later, there was more dissatisfaction and frustration despite quantitative time savings."
"Errors that were funny at first became annoying, leading to frequency bias because users felt errors were more frequent than before."
"Employees were graded on note quality, and their scores declined by about 10% after using the AI tool, impacting morale."
"The plot twist was that AI was causing new problems impacting performance, morale, and satisfaction, showing people are the real key."