Summary
It seems like every company is adding a conversational AI chatbot to their website lately, but how do you actually go about making these experiences valuable and intuitive? Savannah Carlin will present a case study on a conversational AI chatbot—Marqeta Docs AI—that she designed for a developer documentation site in the fintech industry. She will share her insights, mistakes, and perspectives on how to use AI in a meaningful, seamless way, especially for companies like Marqeta that operate in highly regulated industries with strict compliance standards. The talk will use specific examples and visuals to show what makes conversational AI interactions uniquely challenging and the design patterns that can address those challenges. These include managing user expectations, handling errors or misunderstandings within the conversation, and ensuring that users can quickly judge the quality of a bot’s response. You’ll gain a deeper understanding of the intricacies involved in designing interactions for AI, along with practical advice you can apply in your own design processes.
Take-aways
- What to consider before you add AI to your product to ensure it will be valuable, usable, and safe for its intended workflows
- The interactions that are unique to conversational AI experiences and the design patterns that work for them
- Common challenges in designing conversational AI experiences and how to overcome them
Key Insights
- Clearly define the primary use case before building a generative AI tool to ensure its relevance and usefulness.
- High-quality, thoroughly vetted training data is foundational for trustworthy AI outputs, especially in regulated domains like FinTech.
- Properly framing the initial state of the chatbot guides users to ask relevant questions and reduces irrelevant or out-of-scope interactions.
- Loading indicators should be subtle; show loading dots only before any text appears and let the text rendering itself serve as feedback (see the sketch after this list).
- Supporting easy scrolling and navigation between prompt input and AI output helps users refine their queries effectively.
- Error handling in AI chatbots shifts from fixing technical failures to coaching users on better prompt writing.
- Transparency about AI accuracy, including disclaimers and source citations, is crucial to build and maintain user trust.
- Accessibility must be integrated from the start, including keyboard navigation and screen reader support, especially given long conversational outputs.
- User feedback mechanisms like thumbs up/down and source link interactions produce valuable data to iteratively improve chatbot performance.
- Narrowing the chatbot’s scope at launch reduces risks and false outputs, with future plans to expand question domains carefully.
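The loading-indicator pattern above can be sketched in a few lines. The code below is illustrative only, not the Marqeta Docs AI implementation: it assumes a browser DOM and a hypothetical streamResponse async iterable that yields text chunks from the chatbot backend.

```typescript
// Minimal sketch of the "dots only until the first character" pattern.
// Assumptions (not from the talk): a browser DOM and a hypothetical
// `streamResponse` that yields text chunks from the chatbot backend.
async function renderBotReply(
  container: HTMLElement,
  streamResponse: AsyncIterable<string>,
): Promise<void> {
  const message = document.createElement("div");
  message.className = "bot-message";
  container.appendChild(message);

  // Show loading dots only while no text has arrived yet.
  const dots = document.createElement("span");
  dots.className = "loading-dots";
  dots.textContent = "…";
  message.appendChild(dots);

  let firstChunk = true;
  for await (const chunk of streamResponse) {
    if (firstChunk) {
      // Once letters start appearing, the streaming text itself
      // acts as the progress indicator, so the dots are removed.
      dots.remove();
      firstChunk = false;
    }
    message.append(chunk);
  }
}
```

Once the first token renders, no separate spinner competes with the answer; the growing text is the feedback.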
Notable Quotes
"You need to be very clear on the primary use case and what this tool will help someone do."
"If you have any doubts about the quality of the training data, do not proceed. Do not pass, go."
"When someone arrives at this tool, how do they know what to use it for? Framing the interaction is key."
"Loading dots should only appear before any letters show up; the text appearing itself acts as a loading indicator."
"People often forget what they typed and want to scroll back to their prompt to refine it."
"Error states are less about technical errors and more about helping people write prompts effectively."
"Being transparent about the tool’s accuracy and limitations is especially crucial in FinTech due to regulations."
"Every single output should have at least three source links to allow users to verify information."
"We made sure every person could navigate the chatbot using a keyboard alone to support accessibility."
"People were asking questions they might never have emailed us about, decreasing friction to learn more."