This video is featured in the AI and UX playlist.
Summary
AI-based modeling plays an ever-increasing role in our daily lives. Our personal data and preferences are being used to build models that serve up anything from movie recommendations to medical services with a precision designed to make life decisions easier. But, what happens when those models are not precise enough or are unusable because of incomplete data, incorrect interpretations, or questionable ethics? What's more, how do we reconcile the dynamic and iterative nature of AI models (which are developed by a few) with the expansive nature of the human experience (which is defined by many) so that we optimize and scale UX for AI applications equitably? This talk will push our understanding of the sociotechnical systems that surround AI technologies, including those developing the models (i.e., model creators) and those for whom the models are created (i.e., model customers). What considerations should we keep top of mind? How might the research practice evolve to help us critique and deliver inclusive and responsible AI systems?
Key Insights
- Healthcare AI models that adjust for race can unintentionally perpetuate systemic racial disparities.
- AI hiring tools trained on historical data risk reinforcing bias against underrepresented groups.
- Dark patterns in AI-driven interfaces manipulate users into unintended behaviors, eroding trust.
- Deepfakes use AI-generated media to impersonate and deceive, threatening democratic processes and reputations.
- Facial recognition algorithms consistently perform worse on darker-skinned women, as evidenced by Joy Buolamwini's "Coded Bias" research.
- AI biases stem not only from datasets and code but also from entrenched human and systemic prejudices.
- Ethical lapses in human research, like the Tuskegee Syphilis Study, highlight the non-negotiable principle to do no harm.
- Women are severely underrepresented in AI technical and research roles, which undermines AI inclusivity.
- New data privacy laws (GDPR, CPRA) enforce accountability in how organizations collect and use customer data.
- UX researchers have a pivotal role in questioning training data, testing for dark patterns, and ensuring inclusive participant representation.
Notable Quotes
"Imagine going to the hospital for an emergency but the attending doctor doesn’t believe you’re in pain because your symptoms don’t match the AI’s expected outcomes."
"Algorithms that adjust for race, despite evidence that race is not a reliable proxy for genetic differences, actually embed and advance racial disparities in health."
"Candidates don’t actually trust AI algorithms over humans when it comes to hiring decisions."
"Dark patterns aim to get you to behave in a certain way or make uninformed decisions, often to boost customer metrics."
"Deepfakes can distort facts, spread disinformation, and inflict psychological harm on victims."
"All top commercially available facial recognition software performs worse on darker females."
"Biases that are out of sight are biases that are out of mind; human and systemic biases deeply influence AI outcomes."
"Doing no harm is a core tenet of user experience research, requiring us to treat participants with beneficence and justice."
"Only 20% of machine learning technical roles and 12% of AI researchers globally are women."
"It isn’t possible to implement life-changing AI without the human component."