This video is featured in the AI and UX playlist.
Summary
In an increasingly AI-driven world, the responsible and ethical implementation of technology is paramount. In this session, we will dive into the crucial role of DesignOps practitioners in driving ethical AI practices. We'll tackle the challenge of ensuring AI systems align with user values, respect privacy, and avoid bias, while unleashing their potential for innovation.

As a UX strategist and DesignOps practitioner, I understand the significance of integrating ethical considerations into AI development, and I bring a unique perspective on how DesignOps can shape the future of AI by fostering responsible innovation. This session challenges the status quo by highlighting the intersection of DesignOps and ethics, advancing the conversation in our field and sparking thought-provoking discussions.

Attendees will gain valuable insights into the role of DesignOps in navigating the ethical landscape of AI. They will learn practical strategies and best practices for integrating ethical frameworks into their AI development processes. Through real-world examples and case studies, attendees will be inspired to push the boundaries of responsible AI and make a positive impact in their organizations. Join me in this session to chart the course for ethical AI, challenge conventional thinking, and explore the immense potential of DesignOps in driving responsible innovation.
Key Insights
• Rushing AI deployment creates tech debt that compounds faster and causes more brand damage than traditional software issues.
• DataWorks Plus facial recognition software caused a wrongful felony arrest due to untested bias and accuracy problems.
• Multidisciplinary teams including legal, UX, ML engineers, researchers, domain experts, and ethicists are essential for ethical AI development.
• Ethical AI requires asking pointed questions about data origin, bias testing, mitigation, ongoing monitoring, and user feedback.
• Prototyping AI behavior against varied user personas and scenarios helps identify bias and technical flaws early.
• Ethical stress testing simulates difficult scenarios (e.g., autonomous vehicle ethics) to verify AI alignment with values.
• AI systems continuously learn from user input and their environment, so ethical iteration is needed to prevent degradation or bias amplification.
• MidJourney's AI image generation reflects training-data biases: it repeatedly stereotyped CEOs as white men despite prompt adjustments.
• Leaders who fail to acknowledge AI's risks invite organizational and reputational harm, as seen in stock impacts like Siemens vs. Nvidia.
• DesignOps leaders can use concrete examples of AI harm to build alliances and push for ethical practices across teams.
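The bias-testing questions above (data origin, accuracy gaps, ongoing monitoring) can be made concrete with a simple per-group metric. Below is a minimal, hypothetical sketch that compares false-positive rates across demographic groups in a labeled evaluation set, the kind of gap behind the wrongful-arrest example; the function name, group labels, and data are illustrative, not from the talk.

```python
# Hypothetical bias check: compare false-positive rates per group.
# All group names and records below are synthetic illustrations.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_match, actual_match) tuples.

    Returns each group's false-positive rate: the fraction of true
    non-matches that the system incorrectly flagged as matches.
    """
    counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for group, predicted, actual in records:
        if not actual:  # only true non-matches can yield false positives
            counts[group]["negatives"] += 1
            if predicted:
                counts[group]["fp"] += 1
    return {
        group: c["fp"] / c["negatives"]
        for group, c in counts.items()
        if c["negatives"]
    }

# Synthetic evaluation records: (group, predicted_match, actual_match)
records = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_b", False, False),
]
rates = false_positive_rates(records)
print(rates)  # group_a: 0.25, group_b: 0.5
```

A large gap between groups, as in this toy data, is exactly the kind of signal the "pointed questions" insight asks teams to surface before deployment rather than after.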
Notable Quotes
"AI tech debt has compounding interest to it — rushing to market can seriously harm your product and brand."
"Robert was arrested because an AI matched his driver's license photo to a burglary suspect, but it was a false positive."
"DataWorks Plus does not formally measure their system for accuracy or bias — that was the root of Robert's wrongful arrest."
"We’re the solution — people like you and me can ensure harmful AI mistakes don’t keep happening."
"As a party planner, your role is to ensure all the right people are invited to the AI development process."
"Machine learning engineers bring AI to life — they’re responsible for making it real."
"MidJourney’s AI showed white men consistently as CEOs and professors, revealing systemic bias in training data."
"Ethical stress testing subjects AI to hard hypothetical scenarios, like autonomous cars weighing risks between passengers and pedestrians."
"Your AI is learning from real-world input — sometimes from untrusted sources — so ethical iteration is essential."
"If you’re ensuring the right people are involved, asking the right questions, and focusing on ethics, you’re doing your part to prevent harms like Robert’s case."