Summary
Join us for a different type of Quant vs. Qual discussion: instead of discussing how data science and quantitative research methods can power UX research and design, we’re going to talk about designing enterprise data products and tools that put ML and analytics into the hands of users. Does this call for new, different, or modified approaches to UX research and design? Or do these technologies have nothing to do with how we approach design for data products? The session’s host will be Brian T. O’Neill, who is also the host of the Experiencing Data podcast and founder of Designing for Analytics, an independent consultancy that helps data product leaders use design-driven innovation to deliver better ML and analytics user experiences. In this session, we’ll share some rapid [slides-optional] anecdotes and stories from the attendees and then open up the conversation to everyone. We hope to get perspectives both from enterprise data teams building “internal” data analytics or ML/AI solutions and from software/tech companies that offer data-related platform tools, intelligence/SaaS products, BI/decision support solutions, etc. Slots are open to experienced UX practitioners as well as data science, analytics, and other technical participants who may have taken part in design or UX work with colleagues. Please share! If folks are too quiet in the session, you may be subject to a drum or tambourine solo from Brian. Nobody has all of this figured out yet, and experiments and trials are welcome.
Key Insights
• UX researchers working with ML and data science teams often lack domain expertise, requiring strong facilitation and interpretive interview skills.
• Machine learning product users are usually small, specialized expert groups, making statistical rigor and sampling difficult.
• Trust and interpretability in ML models can be more important than peak accuracy for business adoption.
• Collaborative shared spaces or cross-functional teams focused on common goals bridge gaps between design, engineering, and data science.
• Designers help translate data science outputs into actionable, contextual decision support for end users.
• Prototyping ML products requires believable, realistic data and early testing of boundaries like false positives to build trust.
• Decision culture (focusing on which decisions to support) is a more useful framing than data culture in enterprise AI/ML products.
• The ecosystem around ML includes not only users and business stakeholders but also data labelers, who impact outcomes.
• Integrating business rules with ML models enhances contextual relevance and facilitates user trust.
• No-code and rapid data science tools are emerging to help speed experimentation but do not fully replace model development demands.
Notable Quotes
"When you work with machine learning engineers doing advanced techniques, you’re really out of the realm of your knowledge."
"Most data science and analytics teams do not have designers or user experience people unless they are software native companies."
"If nobody uses this because they don’t trust it, it doesn’t matter. You just rehearsed without a concert."
"We need to think more about decision culture instead of data culture — what decisions are we trying to make?"
"Sometimes, the answer is ignore machine learning here. It is not the right tool for everything."
"Designers partnering with data scientists leads to smarter, more adaptable interfaces that actually get used."
"We’re talking to really small groups — sometimes 8 to 10 ML engineers on a particular domain — so sampling and rigor are tough."
"Creating that shared space where design, engineering, and data science work together is key to success."
"The human algorithms that people use today should be understood and incorporated into ML models where possible."
"You have to test the boundaries of false positives, false negatives, and surprising positives to see if people trust your model."