Summary
Enthusiasm for AI tools, especially large language models like ChatGPT, is everywhere, but what does it actually look like to deliver large-scale, user-facing experiences with these tools in a production environment? They are clearly powerful, but what do they need to work reliably and at scale? In this session, Sarah offers a perspective on the information architecture and user experience infrastructure organizations need in order to leverage AI effectively. She also shares three AI experiences currently live on Microsoft Learn:

- An interactive assistant that helps users post high-quality questions to a community forum
- A tool that dynamically creates learning plans based on goals the user shares
- A training assistant that clarifies, defines, and guides learners while they study

Through lessons learned from shipping these experiences over the last two years, UXers, IAs, and PMs will come away with a better sense of what they might need to make these hyped-up technologies work in real life.
Key Insights
- "Everything" chatbots are overly ambiguous and difficult to optimize effectively.
- Targeted AI applications tailored to specific user tasks work better and reduce risk.
- The "ambiguity footprint" helps product teams assess AI feature complexity along multiple axes.
- Application context, and whether an AI feature is critical or complementary, affects its ambiguity.
- Visible AI interfaces set different user expectations than subtle or invisible AI features.
- Prompt design strongly shapes AI behavior: very similar interfaces can deliver very different outputs.
- Dynamic context injection into models adds power but significantly increases development complexity (see the sketch after this list).
- Consistent, thorough evaluation is essential but often neglected in AI application development.
- Data privacy and ethical considerations restrict access to usage data, impeding evaluation efforts.
- Incrementally building AI capabilities on less ambiguous features trains the organizational muscles needed for more complex AI.
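Neither the talk nor Microsoft Learn publishes its implementation, but a minimal sketch makes the prompt-design and dynamic-context insights concrete: the same chat interface behaves very differently depending on the system prompt, and injecting per-user context at request time is where much of the hidden build cost lives. Everything below (the fetch_user_context helper, the prompt text, the model name) is a hypothetical illustration using an OpenAI-style chat-completions client, not the talk's actual code.

```python
# Minimal sketch of dynamic context injection (hypothetical names throughout).
# Assumes the `openai` Python package; any chat-completion API works similarly.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The prompt, not the UI, is what shapes behavior: two identical-looking
# chat boxes with different system prompts are effectively different products.
SYSTEM_PROMPT = (
    "You are a training assistant embedded in a learning platform. "
    "Answer only from the supplied course context. If the context does not "
    "cover the question, say so instead of guessing."
)

def fetch_user_context(user_id: str) -> str:
    """Hypothetical lookup of the learner's current module, goals, and progress.

    In a real system this is the expensive part: retrieval, freshness,
    privacy filtering, and formatting all have to be built and maintained.
    """
    return "Current module: Intro to Azure Functions. Goal: AZ-204 exam."

def answer(user_id: str, question: str) -> str:
    context = fetch_user_context(user_id)
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        # Injected dynamically on every request.
        {"role": "system", "content": f"Context for this learner:\n{context}"},
        {"role": "user", "content": question},
    ]
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

print(answer("learner-42", "What is a trigger?"))
```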
Notable Quotes
"You’re building three apps in a trench coat with a kind of iffy interface slapped on top of it."
"Chat really isn’t necessarily the best interface for lots of user tasks."
"We tend to see PMs and designers converging on a single everything chatbot, which I find insufficient."
"Ambiguity is inherent when working with AI, but that doesn’t mean you have to accept all of it."
"If you haven’t planned for evaluation, you end up eyeballing the results, which absolutely does not work."
"Responsible AI practices and legal reviews at Microsoft saved us from launching dangerous ambiguous features."
"The bigger and more ambiguous is not always better in AI applications."
"Very similar interfaces can conceal extremely different AI prompts, which shape the outputs."
"Dynamic context is more powerful but adds a ton of stuff to build, making AI development trickier."
"Many organizations just ask around and call it good when evaluating AI models, which is not sufficient."
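The two evaluation quotes above name a failure mode ("eyeballing", "ask around and call it good") without showing the alternative. A minimal, repeatable check might look like the sketch below: a small golden set of questions with required terms, scored on every prompt or context-pipeline change. All names and test cases here are hypothetical illustrations, not from the talk.

```python
# Minimal sketch of a repeatable evaluation pass over a small golden set,
# as an alternative to eyeballing outputs. All names are hypothetical.
golden_set = [
    {"question": "What is a trigger?", "must_contain": ["event", "function"]},
    # Off-topic question: the assistant should refuse rather than guess.
    {"question": "Who won the 2024 election?", "must_contain": ["does not cover"]},
]

def evaluate(answer_fn) -> float:
    """Score an answering function against the golden set; returns the pass rate."""
    passed = 0
    for case in golden_set:
        output = answer_fn(case["question"]).lower()
        if all(term in output for term in case["must_contain"]):
            passed += 1
    return passed / len(golden_set)

def stub_answer(question: str) -> str:
    # Stand-in for the real assistant call; swap in the production client.
    return "A trigger is the event that causes a function to run."

# Run on every prompt or context change, not just once before launch.
print(f"pass rate: {evaluate(stub_answer):.0%}")  # -> pass rate: 50%
```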