Summary
It seems like every company is adding a conversational AI chatbot to their website lately, but how do you actually go about making these experiences valuable and intuitive? Savannah Carlin will present a case study on a conversational AI chatbot—Marqeta Docs AI—that she designed for a developer documentation site in the fintech industry. She will share her insights, mistakes, and perspectives on how to use AI in a meaningful, seamless way, especially for companies like Marqeta that operate in highly regulated industries with strict compliance standards. The talk will use specific examples and visuals to show what makes conversational AI interactions uniquely challenging and the design patterns that can address those challenges. These include managing user expectations, handling errors or misunderstandings within the conversation, and ensuring that users can quickly judge the quality of a bot's response. You'll gain a deeper understanding of the intricacies involved in designing interactions for AI, along with practical advice you can apply in your own design processes.
Take-aways
- What to consider before you add AI to your product to ensure it will be valuable, usable, and safe for its intended workflows
- The interactions that are unique to conversational AI experiences and the design patterns that work for them
- Common challenges in designing conversational AI experiences and how to overcome them
Key Insights
- Clearly define the primary use case before building a generative AI tool to ensure its relevance and usefulness.
- High-quality, thoroughly vetted training data is foundational for trustworthy AI outputs, especially in regulated domains like FinTech.
- Properly framing the initial state of the chatbot guides users to ask relevant questions and reduces irrelevant or out-of-scope interactions.
- Loading indicators should be subtle; show loading dots before text appears and let the text rendering itself serve as feedback.
- Supporting easy scrolling and navigation between prompt input and AI output helps users refine their queries effectively.
- Error handling in AI chatbots shifts from fixing technical failures to coaching users on better prompt writing.
- Transparency about AI accuracy, including disclaimers and source citations, is crucial to build and maintain user trust.
- Accessibility must be integrated from the start, including keyboard navigation and screen reader support, especially given long conversational outputs.
- User feedback mechanisms like thumbs up/down and source link interactions produce valuable data to iteratively improve chatbot performance.
- Narrowing the chatbot's scope at launch reduces risks and false outputs, with future plans to expand question domains carefully.
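The loading-indicator insight above amounts to a small state machine: show animated dots only until the first token of the streamed response arrives, after which the appearing text itself serves as the progress feedback. A minimal sketch of that logic in TypeScript (the names `StreamIndicator`, `onToken`, and the states are illustrative assumptions, not from the talk):

```typescript
// Loading-indicator pattern for a streaming chatbot response:
// dots are shown only in the gap between submitting a prompt
// and the first token arriving; streamed text then takes over
// as the feedback.
type IndicatorState = "idle" | "dots" | "streaming" | "done";

class StreamIndicator {
  state: IndicatorState = "idle";

  // Called when the user submits a prompt.
  start(): void {
    this.state = "dots"; // no text yet, so show loading dots
  }

  // Called for each chunk of streamed response text.
  onToken(): void {
    if (this.state === "dots") {
      this.state = "streaming"; // hide dots as soon as text appears
    }
  }

  // Called when the stream ends.
  finish(): void {
    this.state = "done";
  }

  showDots(): boolean {
    return this.state === "dots";
  }
}
```

A UI would render the dots whenever `showDots()` is true and otherwise simply append the incoming text.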
Notable Quotes
"You need to be very clear on the primary use case and what this tool will help someone do."
"If you have any doubts about the quality of the training data, do not proceed. Do not pass, go."
"When someone arrives at this tool, how do they know what to use it for? Framing the interaction is key."
"Loading dots should only appear before any letters show up; the text appearing itself acts as a loading indicator."
"People often forget what they typed and want to scroll back to their prompt to refine it."
"Error states are less about technical errors and more about helping people write prompts effectively."
"Being transparent about the tool’s accuracy and limitations is especially crucial in FinTech due to regulations."
"Every single output should have at least three source links to allow users to verify information."
"We made sure every person could navigate the chatbot using a keyboard alone to support accessibility."
"People were asking questions they might never have emailed us about, decreasing friction to learn more."