Summary
Enthusiasm for AI tools, especially large language models like ChatGPT, is everywhere, but what does it actually look like to deliver large-scale user-facing experiences using these tools in a production environment? Clearly they're powerful, but what do they need to make them work reliably and at scale? In this session, Sarah provides a perspective on some of the information architecture and user experience infrastructure organizations need to effectively leverage AI. She also shares three AI experiences currently live on Microsoft Learn:
- An interactive assistant that helps users post high-quality questions to a community forum
- A tool that dynamically creates learning plans based on goals the user shares
- A training assistant that clarifies, defines, and guides learners while they study
Through lessons learned from shipping these experiences over the last two years, UXers, IAs, and PMs will come away with a better sense of what they might need to make these hyped-up technologies work in real life.
Key Insights
- "Everything" chatbots are overly ambiguous and difficult to optimize effectively.
- Targeted AI applications tailored to specific user tasks work better and reduce risk.
- The "ambiguity footprint" helps product teams assess AI feature complexity along multiple axes.
- Application context, and whether an AI feature is critical or complementary, affects its ambiguity.
- Visible AI interfaces set different user expectations than subtle or invisible AI features do.
- Prompt design strongly shapes AI behavior: very similar interfaces can deliver very different outputs.
- Dynamic context injection into models adds power but significantly increases development complexity.
- Consistent, thorough evaluation is essential but often neglected in AI application development.
- Data privacy and ethical considerations restrict access to usage data, impeding evaluation efforts.
- Incrementally building AI capabilities on less ambiguous features trains the organizational muscles needed for more complex AI.
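To make the "dynamic context injection" insight above concrete: injecting retrieved context into a prompt is conceptually simple, but every moving part (selecting documents, ordering them, fitting them to a context budget) is extra machinery to build and evaluate, compared with a static prompt. A minimal sketch in Python, with hypothetical names and an illustrative character budget (not Microsoft Learn's actual implementation):

```python
def build_prompt(question: str, context_docs: list[str],
                 max_context_chars: int = 2000) -> str:
    """Assemble a prompt by injecting retrieved context ahead of the user's question.

    Each step here (selection, truncation, formatting) is something a static
    prompt avoids having to build, test, and evaluate.
    """
    # Take documents in order until the character budget would be exceeded.
    selected: list[str] = []
    used = 0
    for doc in context_docs:
        if used + len(doc) > max_context_chars:
            break
        selected.append(doc)
        used += len(doc)

    context_block = "\n---\n".join(selected)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}\n"
    )
```

Even this toy version forces decisions (what to drop when context overflows, how to separate documents) that must be evaluated systematically rather than eyeballed, which is the evaluation point the insights return to.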
Notable Quotes
"You’re building three apps in a trench coat with a kind of iffy interface slapped on top of it."
"Chat really isn’t necessarily the best interface for lots of user tasks."
"We tend to see PMs and designers converging on a single everything chatbot, which I find insufficient."
"Ambiguity is inherent when working with AI, but that doesn’t mean you have to accept all of it."
"If you haven’t planned for evaluation, you end up eyeballing the results, which absolutely does not work."
"Responsible AI practices and legal reviews at Microsoft saved us from launching dangerous ambiguous features."
"The bigger and more ambiguous is not always better in AI applications."
"Very similar interfaces can conceal extremely different AI prompts, which shape the outputs."
"Dynamic context is more powerful but adds a ton of stuff to build, making AI development trickier."
"Many organizations just ask around and call it good when evaluating AI models, which is not sufficient."