Summary
It seems like every company is adding a conversational AI chatbot to their website lately, but how do you actually go about making these experiences valuable and intuitive? Savannah Carlin will present a case study on a conversational AI chatbot, Marqeta Docs AI, that she designed for a developer documentation site in the fintech industry. She will share her insights, mistakes, and perspectives on how to use AI in a meaningful, seamless way, especially for companies like Marqeta that operate in highly regulated industries with strict compliance standards.

The talk will use specific examples and visuals to show what makes conversational AI interactions uniquely challenging and the design patterns that can address those challenges. These include managing user expectations, handling errors or misunderstandings within the conversation, and ensuring that users can quickly judge the quality of a bot's response. You'll gain a deeper understanding of the intricacies involved in designing interactions for AI, along with practical advice you can apply in your own design processes.

Take-aways
- What to consider before you add AI to your product to ensure it will be valuable, usable, and safe for its intended workflows
- The interactions that are unique to conversational AI experiences and the design patterns that work for them
- Common challenges in designing conversational AI experiences and how to overcome them
Key Insights
- Clearly define the primary use case before building a generative AI tool to ensure its relevance and usefulness.
- High-quality, thoroughly vetted training data is foundational for trustworthy AI outputs, especially in regulated domains like FinTech.
- Properly framing the initial state of the chatbot guides users to ask relevant questions and reduces irrelevant or out-of-scope interactions.
- Loading indicators should be subtle; show loading dots before text appears and let the text rendering itself serve as feedback.
- Supporting easy scrolling and navigation between prompt input and AI output helps users refine their queries effectively.
- Error handling in AI chatbots shifts from fixing technical failures to coaching users on better prompt writing.
- Transparency about AI accuracy, including disclaimers and source citations, is crucial to build and maintain user trust.
- Accessibility must be integrated from the start, including keyboard navigation and screen reader support, especially given long conversational outputs.
- User feedback mechanisms like thumbs up/down and source link interactions produce valuable data to iteratively improve chatbot performance.
- Narrowing the chatbot's scope at launch reduces risks and false outputs, with future plans to expand question domains carefully.
Notable Quotes
"You need to be very clear on the primary use case and what this tool will help someone do."
"If you have any doubts about the quality of the training data, do not proceed. Do not pass, go."
"When someone arrives at this tool, how do they know what to use it for? Framing the interaction is key."
"Loading dots should only appear before any letters show up; the text appearing itself acts as a loading indicator."
"People often forget what they typed and want to scroll back to their prompt to refine it."
"Error states are less about technical errors and more about helping people write prompts effectively."
"Being transparent about the tool’s accuracy and limitations is especially crucial in FinTech due to regulations."
"Every single output should have at least three source links to allow users to verify information."
"We made sure every person could navigate the chatbot using a keyboard alone to support accessibility."
"People were asking questions they might never have emailed us about, decreasing friction to learn more."