This video is featured in the AI and UX playlist.
Summary
AI-enabled systems that are responsible and human-centered will be powerful partners to humans. That partnership depends on building systems that people are willing to be responsible for and that users find trustworthy. Carol will share guidance, grounded in UX research, for operationalizing the work of making responsible, human-centered AI systems. She will cover methods UX teams can use to identify bias, prevent harm, and support human-machine teaming by designing appropriate evidence of system capabilities and integrity into the interaction. Once these dynamic systems are out in the world, critical oversight activities are needed for them to remain effective. This session introduces each of these complex topics and provides references for further exploration.
Key Insights
• Responsible AI must keep humans in ultimate control, ensuring accountability and trust.
• AI systems reflect inherent human biases present in training data and algorithms, making complete objectivity impossible.
• Speculative design and harm anticipation help prevent or mitigate unintended negative consequences in AI systems.
• Human-machine teaming requires clear responsibilities, transparency, and trust calibration to avoid overtrust or distrust.
• Continuous and critical oversight is essential to manage AI's dynamic, evolving nature and changing contexts over time.
• Diverse teams with psychological safety are more likely to spot biases, innovate, and engage in ethical conversations.
• Technical ethics frameworks like the Montreal Declaration enable teams to navigate ethical dilemmas beyond personal opinions.
• Real-world AI systems are dynamic and can change with retraining and new data; stable expectations are unrealistic.
• Designing for usability includes making unsafe actions difficult and safe actions easy, supporting safe human control.
• AI systems must allow humans to override and unplug them, especially in high-stakes decisions involving life, health, and reputation.
Notable Quotes
"Responsible systems are systems that keep humans in control."
"Data is a function of our history; it is not inherently neutral and always flawed."
"We want to reduce unintended or harmful bias and make sure we are aware of that bias."
"AI will ensure appropriate human judgment, not replace it."
"Trust is personal, complex, and transient; it cannot be easily measured."
"Humans are still better at many activities and we need to prioritize for that."
"If you don’t understand how the system works, why would you use it or want it critical to your business?"
"Speculation is a key aspect of this work to keep people safe and prevent harm."
"These systems aren’t stable like the old software on a CD; they are dynamic and constantly evolving."
"Adopting technology ethics gives teams a way to talk about issues rather than just opinions."