Summary
It seems like every company is adding a conversational AI chatbot to their website lately, but how do you actually go about making these experiences valuable and intuitive? Savannah Carlin will present a case study on a conversational AI chatbot—Marqeta Docs AI—that she designed for a developer documentation site in the fintech industry. She will share her insights, mistakes, and perspectives on how to use AI in a meaningful, seamless way, especially for companies like Marqeta that operate in highly regulated industries with strict compliance standards.

The talk will use specific examples and visuals to show what makes conversational AI interactions uniquely challenging and the design patterns that can address those challenges. These include managing user expectations, handling errors or misunderstandings within the conversation, and ensuring that users can quickly judge the quality of a bot’s response. You’ll gain a deeper understanding of the intricacies involved in designing interactions for AI, along with practical advice you can apply in your own design processes.

Take-aways
- What to consider before you add AI to your product to ensure it will be valuable, usable, and safe for its intended workflows
- The interactions that are unique to conversational AI experiences and the design patterns that work for them
- Common challenges in designing conversational AI experiences and how to overcome them
Key Insights
- Clearly define the primary use case before building a generative AI tool to ensure its relevance and usefulness.
- High-quality, thoroughly vetted training data is foundational for trustworthy AI outputs, especially in regulated domains like FinTech.
- Properly framing the initial state of the chatbot guides users to ask relevant questions and reduces irrelevant or out-of-scope interactions.
- Loading indicators should be subtle; show loading dots before text appears and let the text rendering itself serve as feedback.
- Supporting easy scrolling and navigation between prompt input and AI output helps users refine their queries effectively.
- Error handling in AI chatbots shifts from fixing technical failures to coaching users on better prompt writing.
- Transparency about AI accuracy, including disclaimers and source citations, is crucial to build and maintain user trust.
- Accessibility must be integrated from the start, including keyboard navigation and screen reader support, especially given long conversational outputs.
- User feedback mechanisms like thumbs up/down and source link interactions produce valuable data to iteratively improve chatbot performance.
- Narrowing the chatbot’s scope at launch reduces risks and false outputs, with future plans to expand question domains carefully.
Notable Quotes
"You need to be very clear on the primary use case and what this tool will help someone do."
"If you have any doubts about the quality of the training data, do not proceed. Do not pass, go."
"When someone arrives at this tool, how do they know what to use it for? Framing the interaction is key."
"Loading dots should only appear before any letters show up; the text appearing itself acts as a loading indicator."
"People often forget what they typed and want to scroll back to their prompt to refine it."
"Error states are less about technical errors and more about helping people write prompts effectively."
"Being transparent about the tool’s accuracy and limitations is especially crucial in FinTech due to regulations."
"Every single output should have at least three source links to allow users to verify information."
"We made sure every person could navigate the chatbot using a keyboard alone to support accessibility."
"People were asking questions they might never have emailed us about, decreasing friction to learn more."