This video is featured in the AI and UX playlist.
Summary
AI-based modeling plays an ever-increasing role in our daily lives. Our personal data and preferences are being used to build models that serve up anything from movie recommendations to medical services with a precision designed to make life decisions easier. But what happens when those models are not precise enough, or are unusable because of incomplete data, incorrect interpretations, or questionable ethics? What's more, how do we reconcile the dynamic and iterative nature of AI models (which are developed by a few) with the expansive nature of the human experience (which is defined by many) so that we optimize and scale UX for AI applications equitably? This talk will push our understanding of the sociotechnical systems that surround AI technologies, including those developing the models (i.e., model creators) and those for whom the models are created (i.e., model customers). What considerations should we keep top of mind? How might the research practice evolve to help us critique and deliver inclusive and responsible AI systems?
Key Insights
- Healthcare AI models that adjust for race can unintentionally perpetuate systemic racial disparities.
- AI hiring tools trained on historical data risk reinforcing bias against underrepresented groups.
- Dark patterns in AI-driven interfaces manipulate users into unintended behaviors, eroding trust.
- Deepfakes use AI-generated media to impersonate and deceive, threatening democratic processes and reputations.
- Facial recognition algorithms consistently perform worse on darker-skinned females, as evidenced by Joy Buolamwini's "Coded Bias" research.
- AI biases stem not only from datasets and code but also from entrenched human and systemic prejudices.
- Ethical lapses in human research, like the Tuskegee Syphilis Study, highlight the non-negotiable principle to do no harm.
- Women are severely underrepresented in AI technical and research roles, impacting AI inclusivity.
- New data privacy laws (GDPR, CPRA) enforce accountability in how organizations collect and use customer data.
- UX researchers have a pivotal role in questioning training data, testing for dark patterns, and ensuring inclusive participant representation.
Notable Quotes
"Imagine going to the hospital for an emergency but the attending doctor doesn’t believe you’re in pain because your symptoms don’t match the AI’s expected outcomes."
"Algorithms that adjust for race, despite evidence that race is not a reliable proxy for genetic differences, actually embed and advance racial disparities in health."
"Candidates don’t actually trust AI algorithms over humans when it comes to hiring decisions."
"Dark patterns aim to get you to behave in a certain way or make uninformed decisions, often to boost customer metrics."
"Deepfakes can distort facts, spread disinformation, and inflict psychological harm on victims."
"All top commercially available facial recognition software performs worse on darker females."
"Biases that are out of sight are biases that are out of mind; human and systemic biases deeply influence AI outcomes."
"Doing no harm is a core tenet of user experience research, requiring us to treat participants with beneficence and justice."
"Only 20% of machine learning technical roles and 12% of AI researchers globally are women."
"It isn’t possible to implement life-changing AI without the human component."