Summary
Join us for a different type of Quant vs. Qual discussion: instead of discussing how data science and quantitative research methods can power UX research and design, we're going to talk about designing enterprise data products and tools that put ML and analytics into the hands of users. Does this call for new, different, or modified approaches to UX research and design? Or do these technologies have nothing to do with how we approach design for data products?

The session's host will be Brian T. O'Neill, who is also the host of the Experiencing Data podcast and founder of Designing for Analytics, an independent consultancy that helps data product leaders use design-driven innovation to deliver better ML and analytics user experiences.

In this session, we'll share some rapid (slides optional) anecdotes and stories from the attendees and then open up the conversation to everyone. We hope to get perspectives both from enterprise data teams doing "internal" analytics or ML/AI solutions development and from software/tech companies that offer data-related platform tools, intelligence/SaaS products, BI/decision support solutions, and the like. Slots are open to experienced UX practitioners as well as data science, analytics, and technical participants who have taken part in design or UX work with colleagues. Please share! If folks are too quiet in the session, you may be subject to a drum or tambourine solo from Brian. Nobody has all of this figured out yet, and experiments and trials are welcome.
Key Insights
- UX researchers working with ML and data science teams often lack domain expertise, requiring strong facilitation and interpretive interview skills.
- Machine learning product users are usually small, specialized expert groups, making statistical rigor and sampling difficult.
- Trust and interpretability in ML models can be more important than peak accuracy for business adoption.
- Collaborative shared spaces or cross-functional teams focused on common goals bridge gaps between design, engineering, and data science.
- Designers help translate data science outputs into actionable, contextual decision support for end users.
- Prototyping ML products requires believable, realistic data and testing boundaries like false positives early to build trust.
- Decision culture (focusing on which decisions to support) is a more useful framing than data culture in enterprise AI/ML products.
- The ecosystem around ML includes not only users and business stakeholders but also data labelers who impact outcomes.
- The integration of business rules with ML models enhances contextual relevance and facilitates user trust.
- No-code and rapid data science tools are emerging to help speed experimentation but do not fully replace model development demands.
Notable Quotes
"When you work with machine learning engineers doing advanced techniques, you’re really out of the realm of your knowledge."
"Most data science and analytics teams do not have designers or user experience people unless they are software native companies."
"If nobody uses this because they don’t trust it, it doesn’t matter. You just rehearsed without a concert."
"We need to think more about decision culture instead of data culture — what decisions are we trying to make?"
"Sometimes, the answer is ignore machine learning here. It is not the right tool for everything."
"Designers partnering with data scientists leads to smarter, more adaptable interfaces that actually get used."
"We’re talking to really small groups — sometimes 8 to 10 ML engineers on a particular domain — so sampling and rigor are tough."
"Creating that shared space where design, engineering, and data science work together is key to success."
"The human algorithms that people use today should be understood and incorporated into ML models where possible."
"You have to test the boundaries of false positives, false negatives, and surprising positives to see if people trust your model."