This video is featured in the AI and UX playlist.
Summary
AI-enabled systems that are responsible and human-centered will be powerful partners to humans. That partnership depends on building systems that people are willing to be responsible for and that the people using them can trust. Carol will share guidance, grounded in UX research, for operationalizing the work of making responsible, human-centered AI systems. She will cover methods UX teams can use to identify bias, prevent harm, and support human-machine teaming by designing interactions that provide appropriate evidence of a system's capabilities and integrity. Once these dynamic systems are out in the world, critical oversight activities are needed for them to remain effective. This session introduces each of these complex topics and provides references for further exploration of these issues.
Key Insights
- Responsible AI must keep humans in ultimate control, ensuring accountability and trust.
- AI systems reflect inherent human biases present in training data and algorithms, making complete objectivity impossible.
- Speculative design and harm anticipation help prevent or mitigate unintended negative consequences in AI systems.
- Human-machine teaming requires clear responsibilities, transparency, and trust calibration to avoid overtrust or distrust.
- Continuous and critical oversight is essential to manage AI's dynamic, evolving nature and changing contexts over time.
- Diverse teams with psychological safety are more likely to spot biases, innovate, and engage in ethical conversations.
- Technology ethics frameworks like the Montreal Declaration enable teams to navigate ethical dilemmas beyond personal opinions.
- Real-world AI systems are dynamic and can change with retraining and new data; stable expectations are unrealistic.
- Designing for usability includes making unsafe actions difficult and safe actions easy, supporting safe human control.
- AI systems must allow humans to override and unplug them, especially in high-stakes decisions involving life, health, and reputation.
Notable Quotes
"Responsible systems are systems that keep humans in control."
"Data is a function of our history; it is not inherently neutral and always flawed."
"We want to reduce unintended or harmful bias and make sure we are aware of that bias."
"AI will ensure appropriate human judgment, not replace it."
"Trust is personal, complex, and transient; it cannot be easily measured."
"Humans are still better at many activities and we need to prioritize for that."
"If you don’t understand how the system works, why would you use it or want it critical to your business?"
"Speculation is a key aspect of this work to keep people safe and prevent harm."
"These systems aren’t stable like the old software on a CD; they are dynamic and constantly evolving."
"Adopting technology ethics gives teams a way to talk about issues rather than just opinions."
More Videos

"Dave Malouf dropped that term Design Ops on me in 2017."
Bria Alexander & Louis Rosenfeld, Welcome
September 8, 2022

"Pharmaceutical research is all about failing ideas as quickly and cheaply as possible."
Mike Oren, Why Pharmaceutical's Research Model Should Replace Design Thinking
March 28, 2023

"You would never implement an algorithm on a doctor without consulting them, yet in hospitality, workers face algorithmic managers daily."
Jodi Forlizzi, Design and AI Innovation
June 5, 2024

"If you ever feel uncomfortable or subjected to bullying, please let us know. It’s very important to keep the vibes positive."
Bria Alexander, Opening Remarks
October 3, 2023

"Facial coding AI isn't smart enough to know if someone is smiling because they're happy or uncomfortable."
Michael Weir, Mixed Methods and Behavioural Science (Videoconference)
May 26, 2023

"We created one-pagers — lightweight, skimmable documents — to help people from all areas quickly understand research insights."
Molly Fargotstein, Multipurpose Communication & UX Research Marketing (Videoconference)
September 12, 2019

"Tools and practices are not neutral; they are cross-contextual and situational."
Verónica Urzúa & Jorge Montiel, The B-side of the Research Impact
March 12, 2021

"I worked at an interactive information rich dynamic customer relationship management firm — it was my way of explaining what we did."
Husani Oakley, Theme Two Intro
June 6, 2023

"You get real coded components and layouts that developers can drag, drop, and extend immediately without rebuilding."
George Abraham & Stefan Ivanov, Design Systems To-Go: Reimagining Developer Handoff, and Introducing App Builder (Part 2)
October 1, 2021