This video is featured in the AI and UX playlist.
Summary
In the rapidly approaching reality of an AI-driven world, the responsible and ethical implementation of technology is paramount. In this session, we will dive into the crucial role of DesignOps practitioners in driving ethical AI practices. We'll tackle the challenge of ensuring AI systems align with user values, respect privacy, and avoid biases, while unleashing their potential for innovation. As a UX strategist and DesignOps practitioner, I understand the significance of integrating ethical considerations into AI development. I bring a unique perspective on how DesignOps can shape the future of AI by fostering responsible innovation. This session challenges the status quo by highlighting the intersection of DesignOps and ethics, advancing the conversation in our field and sparking thought-provoking discussions. Attendees will gain valuable insights into the role of DesignOps in navigating the ethical landscape of AI. They will learn practical strategies and best practices for integrating ethical frameworks into their AI development processes. By exploring real-world examples and case studies, attendees will be inspired to push the boundaries of responsible AI and make a positive impact in their organizations. Join me in this exciting session to chart the course for ethical AI, challenge conventional thinking, and explore the immense potential of DesignOps in driving responsible innovation.
Key Insights
• Rushing AI deployment creates tech debt that compounds faster and causes more brand damage than traditional software issues.
• DataWorks Plus facial recognition software caused a wrongful felony arrest due to untested bias and accuracy problems.
• Multidisciplinary teams including legal, UX, ML engineers, researchers, domain experts, and ethicists are essential for ethical AI development.
• Ethical AI requires asking pointed questions about data origin, bias testing, mitigation, ongoing monitoring, and user feedback.
• Prototyping AI behavior against varied user personas and scenarios helps identify bias and technical flaws early.
• Ethical stress testing simulates difficult scenarios (e.g., autonomous vehicle ethics) to verify AI alignment with values.
• AI systems continuously learn from user input and their environment, so ethical iteration is needed to prevent degradation or bias amplification.
• MidJourney's AI image generation reflects training-data biases, repeatedly stereotyping CEOs as white men despite prompt adjustments.
• Leaders who fail to acknowledge AI's risks invite organizational and reputational harm, as seen in stock impacts like Siemens vs. Nvidia.
• DesignOps leaders can use concrete examples of AI harm to build alliances and push for ethical practices across teams.
Notable Quotes
"AI tech debt has compounding interest to it — rushing to market can seriously harm your product and brand."
"Robert was arrested because an AI matched his driver's license photo to a burglary suspect, but it was a false positive."
"DataWorks Plus does not formally measure their system for accuracy or bias — that was the root of Robert's wrongful arrest."
"We’re the solution — people like you and me can ensure harmful AI mistakes don’t keep happening."
"As a party planner, your role is to ensure all the right people are invited to the AI development process."
"Machine learning engineers bring AI to life — they’re responsible for making it real."
"MidJourney’s AI showed white men consistently as CEOs and professors, revealing systemic bias in training data."
"Ethical stress testing subjects AI to hard hypothetical scenarios, like autonomous cars weighing risks between passengers and pedestrians."
"Your AI is learning from real-world input — sometimes from untrusted sources — so ethical iteration is essential."
"If you’re ensuring the right people are involved, asking the right questions, and focusing on ethics, you’re doing your part to prevent harms like Robert’s case."