Summary
Enthusiasm for AI tools, especially large language models like ChatGPT, is everywhere, but what does it actually look like to deliver large-scale, user-facing experiences with these tools in a production environment? They are clearly powerful, but what do they need to work reliably and at scale? In this session, Sarah offers a perspective on the information architecture and user experience infrastructure organizations need to leverage AI effectively. She also shares three AI experiences currently live on Microsoft Learn:
- An interactive assistant that helps users post high-quality questions to a community forum
- A tool that dynamically creates learning plans based on goals the user shares
- A training assistant that clarifies, defines, and guides learners while they study
Through lessons learned from shipping these experiences over the last two years, UXers, IAs, and PMs will come away with a better sense of what they might need to make these hyped-up technologies work in real life.
Key Insights
- 'Everything' chatbots are overly ambiguous and difficult to optimize effectively.
- Targeted AI applications tailored to specific user tasks work better and reduce risk.
- The 'ambiguity footprint' helps product teams assess AI feature complexity along multiple axes.
- Application context, and whether an AI feature is critical or complementary, affects its ambiguity.
- Visible AI interfaces set different user expectations than subtle or invisible AI features.
- Prompt design strongly shapes AI behavior; very similar interfaces can deliver very different outputs.
- Dynamic context injection into models adds power but significantly increases development complexity (see the sketch after this list).
- Consistent, thorough evaluation is essential but often neglected in AI application development.
- Data privacy and ethical considerations restrict access to usage data, impeding evaluation efforts.
- Incrementally building AI capabilities on less ambiguous features trains the organizational muscles needed for more complex AI.
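To make the dynamic-context insight concrete, here is a minimal, hypothetical Python sketch (not from the talk; names like retrieve_learning_context and build_prompt are illustrative) of a task-scoped assistant that injects retrieved context into an otherwise static prompt:

```python
# Hypothetical sketch of dynamic context injection for a narrow task
# (a learning-plan assistant), rather than a single "everything" chatbot.

def retrieve_learning_context(user_goal: str) -> list[str]:
    """Stand-in for whatever retrieval layer supplies task-relevant context."""
    catalog = {
        "azure fundamentals": [
            "Module: Describe cloud concepts",
            "Module: Describe Azure architecture and services",
        ],
    }
    return catalog.get(user_goal.lower(), [])


def build_prompt(user_goal: str, context_items: list[str]) -> str:
    """Static task instructions plus dynamically injected context."""
    context_block = "\n".join(f"- {item}" for item in context_items) or "- (no catalog match)"
    return (
        "You are a learning-plan assistant. Only propose modules listed in the "
        "context below; if nothing fits, say so.\n"
        f"Available modules:\n{context_block}\n"
        f"User goal: {user_goal}\n"
        "Proposed plan:"
    )


if __name__ == "__main__":
    goal = "Azure fundamentals"
    prompt = build_prompt(goal, retrieve_learning_context(goal))
    print(prompt)  # would be sent to a model; outputs still need systematic evaluation
```

The sketch is deliberately small: the model call itself is the simple part, while the retrieval layer, prompt assembly, and evaluation around it account for most of the extra work the talk attributes to dynamic context.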
Notable Quotes
"You’re building three apps in a trench coat with a kind of iffy interface slapped on top of it."
"Chat really isn’t necessarily the best interface for lots of user tasks."
"We tend to see PMs and designers converging on a single everything chatbot, which I find insufficient."
"Ambiguity is inherent when working with AI, but that doesn’t mean you have to accept all of it."
"If you haven’t planned for evaluation, you end up eyeballing the results, which absolutely does not work."
"Responsible AI practices and legal reviews at Microsoft saved us from launching dangerous ambiguous features."
"The bigger and more ambiguous is not always better in AI applications."
"Very similar interfaces can conceal extremely different AI prompts, which shape the outputs."
"Dynamic context is more powerful but adds a ton of stuff to build, making AI development trickier."
"Many organizations just ask around and call it good when evaluating AI models, which is not sufficient."