This video is featured in the AI and UX playlist.
Summary
In the fast-approaching reality of an AI-driven world, the responsible and ethical implementation of technology is paramount. In this session, we dive into the crucial role of DesignOps practitioners in driving ethical AI practices. We'll tackle the challenge of ensuring AI systems align with user values, respect privacy, and avoid bias, while still unleashing their potential for innovation.

As a UX strategist and DesignOps practitioner, I understand the significance of integrating ethical considerations into AI development, and I bring a unique perspective on how DesignOps can shape the future of AI by fostering responsible innovation. This session challenges the status quo by highlighting the intersection of DesignOps and ethics, advancing the conversation in our field and sparking thought-provoking discussion.

Attendees will gain valuable insight into the role of DesignOps in navigating the ethical landscape of AI, and will learn practical strategies and best practices for integrating ethical frameworks into their AI development processes. Through real-world examples and case studies, attendees will be inspired to push the boundaries of responsible AI and make a positive impact in their organizations. Join me to chart the course for ethical AI, challenge conventional thinking, and explore the immense potential of DesignOps in driving responsible innovation.
Key Insights
- Rushing AI deployment creates tech debt that compounds faster and causes more brand damage than traditional software issues.
- DataWorks Plus facial recognition software caused a wrongful felony arrest due to untested bias and accuracy problems.
- Multidisciplinary teams including legal, UX, ML engineers, researchers, domain experts, and ethicists are essential for ethical AI development.
- Ethical AI requires asking pointed questions about data origin, bias testing, mitigation, ongoing monitoring, and user feedback.
- Prototyping AI behavior against varied user personas and scenarios helps identify bias and technical flaws early.
- Ethical stress testing simulates difficult scenarios (e.g., autonomous vehicle ethics) to verify AI alignment with values.
- AI systems continuously learn from user input and their environment, so ethical iteration is needed to prevent degradation or bias amplification.
- MidJourney's AI image generation reflects biases in its training data, repeatedly stereotyping CEOs as white men despite prompt adjustments.
- Leaders who fail to acknowledge AI's risks invite organizational and reputational harm, as seen in stock impacts like Siemens vs. Nvidia.
- DesignOps leaders can use concrete examples of AI harm to build alliances and push for ethical practices across teams.
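The bias testing and persona-based prototyping described above can be made concrete even before a model ships. As a minimal sketch (with an entirely hypothetical decision model and persona data, not anything from the talk), one common check is the "four-fifths rule": compare the rate of favorable outcomes across persona groups and flag any group whose rate falls below 80% of the best-served group's.

```python
# Minimal bias-check sketch: hypothetical persona outcomes, stdlib only.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratios(rates):
    """Ratio of each group's rate to the highest group's rate."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical outcomes from prototyping against two persona groups
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(decisions)
ratios = disparate_impact_ratios(rates)
# Four-fifths rule: flag groups served at under 80% of the best rate
flagged = [g for g, ratio in ratios.items() if ratio < 0.8]
```

Running a check like this against every persona set during prototyping surfaces the kind of accuracy and bias problems that DataWorks Plus reportedly never measured.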
Notable Quotes
"AI tech debt has compounding interest to it — rushing to market can seriously harm your product and brand."
"Robert was arrested because an AI matched his driver's license photo to a burglary suspect, but it was a false positive."
"DataWorks Plus does not formally measure their system for accuracy or bias — that was the root of Robert's wrongful arrest."
"We’re the solution — people like you and me can ensure harmful AI mistakes don’t keep happening."
"As a party planner, your role is to ensure all the right people are invited to the AI development process."
"Machine learning engineers bring AI to life — they’re responsible for making it real."
"MidJourney’s AI showed white men consistently as CEOs and professors, revealing systemic bias in training data."
"Ethical stress testing subjects AI to hard hypothetical scenarios, like autonomous cars weighing risks between passengers and pedestrians."
"Your AI is learning from real-world input — sometimes from untrusted sources — so ethical iteration is essential."
"If you’re ensuring the right people are involved, asking the right questions, and focusing on ethics, you’re doing your part to prevent harms like Robert’s case."
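The "ethical stress testing" quoted above can be operationalized as an automated suite: enumerate hard edge cases, declare the value the system must uphold, and assert the AI's policy never violates it. The sketch below is a toy illustration with a hypothetical confidence-threshold policy (not the speaker's method); real suites would cover far richer scenarios.

```python
# Toy ethical stress test: a hypothetical policy that must defer to a human
# whenever its confidence falls below a declared threshold.
SCENARIOS = [
    {"name": "sensor failure", "confidence": 0.2, "expected": "hand_off"},
    {"name": "conflicting signals", "confidence": 0.5, "expected": "hand_off"},
    {"name": "clear reading", "confidence": 0.95, "expected": "proceed"},
]

def policy(confidence, threshold=0.9):
    """Declared value: below the confidence threshold, hand off to a human."""
    return "proceed" if confidence >= threshold else "hand_off"

def stress_test(scenarios):
    """Return the names of scenarios where the policy violates the declared value."""
    return [s["name"] for s in scenarios
            if policy(s["confidence"]) != s["expected"]]

failures = stress_test(SCENARIOS)
```

An empty failure list is the release gate; any named scenario in it is a concrete, shareable example of harm that DesignOps leaders can bring to their alliances.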
Jay Bustamante, Navigating the Ethical Frontier: DesignOps Strategies for Responsible AI Innovation
October 2, 2023