Summary
Enthusiasm for AI tools, especially large language models like ChatGPT, is everywhere, but what does it actually look like to deliver large-scale, user-facing experiences with these tools in a production environment? Clearly they're powerful, but what does it take to make them work reliably and at scale? In this session, Sarah provides a perspective on the information architecture and user experience infrastructure organizations need to effectively leverage AI. She also shares three AI experiences currently live on Microsoft Learn:
- An interactive assistant that helps users post high-quality questions to a community forum
- A tool that dynamically creates learning plans based on goals the user shares
- A training assistant that clarifies, defines, and guides learners while they study
Through lessons learned from shipping these experiences over the last two years, UXers, IAs, and PMs will come away with a better sense of what they might need to make these hyped-up technologies work in real life.
Key Insights
- "Everything chatbots" are overly ambiguous and difficult to optimize effectively.
- Targeted AI applications tailored to specific user tasks work better and reduce risk.
- The "ambiguity footprint" helps product teams assess AI feature complexity along multiple axes.
- Application context, and whether an AI feature is critical or complementary, affects its ambiguity.
- Visible AI interfaces set different user expectations than subtle or invisible AI features.
- Prompt design strongly shapes AI behavior: very similar interfaces can deliver very different outputs (see the first sketch after this list).
- Dynamic context injection into models adds power but significantly increases development complexity.
- Consistent, thorough evaluation is essential but often neglected in AI application development (see the second sketch after this list).
- Data privacy and ethical considerations restrict access to usage data, impeding evaluation efforts.
- Incrementally building AI capabilities on less ambiguous features trains the organizational muscles needed for more complex AI.
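
The prompt-design and dynamic-context insights above are easier to see in code. The sketch below is illustrative only, not Microsoft Learn's implementation; the function names and injected fields are assumptions made up for this example. It shows how two features can sit behind near-identical chat interfaces while running very different prompts, and how dynamic context injection pulls per-user data into the prompt at request time.

```python
# Minimal sketch (not Microsoft Learn's actual code): two features can share
# one chat-style interface while running very different prompts, and dynamic
# context injection means fetching per-user data at request time.

STATIC_PROMPT = (
    "You are a study assistant. Define unfamiliar terms in plain language "
    "and point the learner back to the module they are working through."
)

def build_dynamic_prompt(user_goal: str, completed_modules: list[str]) -> str:
    """Inject per-user context into the system prompt at request time.

    Every value interpolated here is something the app must fetch, validate,
    and keep fresh -- the extra machinery that makes dynamic context costly.
    """
    return (
        "You are a learning-plan assistant.\n"
        f"The user's stated goal: {user_goal}\n"
        f"Modules already completed: {', '.join(completed_modules) or 'none'}\n"
        "Propose the next three modules and explain each choice in one sentence."
    )

# Same interface, very different behavior behind it:
print(build_dynamic_prompt("pass the AZ-900 exam", ["Cloud concepts"]))
```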
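On evaluation, a minimal harness like the one below is one way to avoid "eyeballing the results." Everything here is hypothetical: `generate` stands in for whatever model call a team actually ships, and the keyword checks against a small golden set are placeholder scoring, far simpler than a production evaluation would be.

```python
# Minimal evaluation sketch: score a fixed golden set on every prompt or
# model change, rather than spot-checking a few outputs by hand.

GOLDEN_SET = [
    {"input": "What does 'idempotent' mean?", "must_contain": ["same result"]},
    {"input": "Suggest a next module after Cloud concepts.", "must_contain": ["module"]},
]

def generate(prompt: str) -> str:
    # Placeholder for the real model call (e.g., a hosted LLM request).
    return "Calling an idempotent API twice gives the same result."

def run_eval() -> float:
    """Return the fraction of golden-set cases whose output passes all checks."""
    passed = 0
    for case in GOLDEN_SET:
        output = generate(case["input"]).lower()
        if all(term in output for term in case["must_contain"]):
            passed += 1
    return passed / len(GOLDEN_SET)

if __name__ == "__main__":
    print(f"pass rate: {run_eval():.0%}")
```

The design point is less the scoring method than the habit: a versioned test set run consistently makes regressions visible when prompts, context, or models change.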
Notable Quotes
"You’re building three apps in a trench coat with a kind of iffy interface slapped on top of it."
"Chat really isn’t necessarily the best interface for lots of user tasks."
"We tend to see PMs and designers converging on a single everything chatbot, which I find insufficient."
"Ambiguity is inherent when working with AI, but that doesn’t mean you have to accept all of it."
"If you haven’t planned for evaluation, you end up eyeballing the results, which absolutely does not work."
"Responsible AI practices and legal reviews at Microsoft saved us from launching dangerous ambiguous features."
"The bigger and more ambiguous is not always better in AI applications."
"Very similar interfaces can conceal extremely different AI prompts, which shape the outputs."
"Dynamic context is more powerful but adds a ton of stuff to build, making AI development trickier."
"Many organizations just ask around and call it good when evaluating AI models, which is not sufficient."