This video is featured in the AI and UX playlist.
Summary
In the fast-approaching reality of an AI-driven world, the responsible and ethical implementation of technology is paramount. In this session, we will dive into the crucial role of DesignOps practitioners in driving ethical AI practices. We'll tackle the challenge of ensuring AI systems align with user values, respect privacy, and avoid biases, while unleashing their potential for innovation. As a UX strategist and DesignOps practitioner, I understand the significance of integrating ethical considerations into AI development, and I bring a unique perspective on how DesignOps can shape the future of AI by fostering responsible innovation. This session challenges the status quo by highlighting the intersection of DesignOps and ethics, advancing the conversation in our field and sparking thought-provoking discussions. Attendees will gain valuable insights into the role of DesignOps in navigating the ethical landscape of AI. They will learn practical strategies and best practices for integrating ethical frameworks into their AI development processes. By exploring real-world examples and case studies, attendees will be inspired to push the boundaries of responsible AI and make a positive impact in their organizations. Join me in this exciting session to chart the course for ethical AI, challenge conventional thinking, and explore the immense potential of DesignOps in driving responsible innovation.
Key Insights
- Rushing AI deployment creates tech debt that compounds faster and causes more brand damage than traditional software issues.
- DataWorks Plus facial recognition software caused a wrongful felony arrest due to untested bias and accuracy problems.
- Multidisciplinary teams (legal, UX, machine learning engineers, researchers, domain experts, and ethicists) are essential for ethical AI development.
- Ethical AI requires asking pointed questions about data origin, bias testing, mitigation, ongoing monitoring, and user feedback.
- Prototyping AI behavior against varied user personas and scenarios helps identify bias and technical flaws early.
- Ethical stress testing simulates difficult scenarios (e.g., autonomous vehicle ethics) to verify that an AI's decisions align with human values.
- AI systems continuously learn from user input and their environment, so ethical iteration is needed to prevent degradation or bias amplification.
- MidJourney's image generation reflects biases in its training data, repeatedly depicting CEOs as white men despite prompt adjustments.
- Leaders who fail to acknowledge AI's risks expose their organizations to reputational harm, as seen in stock impacts like Siemens vs. Nvidia.
- DesignOps leaders can use concrete examples of AI harm to build alliances and push for ethical practices across teams.
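The insights above about bias testing and prototyping against varied personas can be made concrete with a small audit. The sketch below is illustrative only (the persona groups, decisions, and the four-fifths threshold are assumptions, not material from the talk): it compares a prototype model's approval rates across persona groups and computes a disparate-impact ratio to flag the kind of skew that led to Robert's wrongful arrest.

```python
# Hypothetical sketch of a persona-based bias audit; the groups, decisions,
# and threshold are illustrative assumptions, not from the session.
from collections import defaultdict

def selection_rates(decisions):
    """Positive-decision rate for each persona group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of lowest to highest group rate; values well below 0.8
    (the 'four-fifths rule' heuristic) suggest disparate impact."""
    return min(rates.values()) / max(rates.values())

# Simulated prototype output: (persona_group, approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
print(rates)                                  # per-group approval rates
print(round(disparate_impact_ratio(rates), 2))  # 0.33: well under 0.8, flag for review
```

Running this kind of check against every persona during prototyping, rather than after launch, is one way to operationalize the "pointed questions" the session recommends.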
Notable Quotes
"AI tech debt has compounding interest to it — rushing to market can seriously harm your product and brand."
"Robert was arrested because an AI matched his driver's license photo to a burglary suspect, but it was a false positive."
"DataWorks Plus does not formally measure their system for accuracy or bias — that was the root of Robert's wrongful arrest."
"We’re the solution — people like you and me can ensure harmful AI mistakes don’t keep happening."
"As a party planner, your role is to ensure all the right people are invited to the AI development process."
"Machine learning engineers bring AI to life — they’re responsible for making it real."
"MidJourney’s AI showed white men consistently as CEOs and professors, revealing systemic bias in training data."
"Ethical stress testing subjects AI to hard hypothetical scenarios, like autonomous cars weighing risks between passengers and pedestrians."
"Your AI is learning from real-world input — sometimes from untrusted sources — so ethical iteration is essential."
"If you’re ensuring the right people are involved, asking the right questions, and focusing on ethics, you’re doing your part to prevent harms like Robert’s case."
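The quote about AI learning from real-world, sometimes untrusted, input implies that a one-time audit is not enough. A minimal monitoring sketch, assuming hypothetical baseline and current outcome rates (none of these numbers come from the talk), could flag groups whose outcomes have drifted since launch:

```python
# Hypothetical drift monitor for a deployed model; baseline, current rates,
# and the tolerance are illustrative assumptions.
def drift_alerts(baseline, current, tolerance=0.10):
    """Return groups whose outcome rate moved more than `tolerance`
    from the audited baseline."""
    return [g for g in baseline
            if abs(current.get(g, 0.0) - baseline[g]) > tolerance]

baseline = {"group_a": 0.70, "group_b": 0.68}  # rates at the last ethics review
current  = {"group_a": 0.72, "group_b": 0.51}  # rates observed this month
print(drift_alerts(baseline, current))  # ['group_b'] warrants re-review
```

Wiring an alert like this into regular reporting gives the ethical-iteration loop described above a concrete trigger.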