082 - What the 2021 $1M Squirrel AI Award Winner Wants You To Know About Designing Interpretable Machine Learning Solutions w/ Cynthia Rudin

Experiencing Data w/ Brian T. O’Neill (UX for AI Data Products, SAAS Analytics, Data Product Management) - A podcast by Brian T. O’Neill from Designing for Analytics - Tuesdays

Episode Description

As the conversation around AI continues, Professor Cynthia Rudin, computer scientist and director of the Prediction Analysis Lab at Duke University, joins us to discuss interpretable machine learning and her work in this complex and evolving field. She is the most recent (2021) recipient of the $1M Squirrel AI Award for her work on making machine learning more interpretable to users and ultimately more beneficial to humanity. In this episode, we explore the distinction between explainable and interpretable machine learning, and why black boxes aren’t necessarily “better” than more interpretable models. Cynthia offers real-world examples to illustrate her perspective on the roles of humans and AI, and shares takeaways from past work ranging from predicting criminal recidivism to predicting manhole cover explosions in NYC (yes!). I loved this chat with her because, for one, Cynthia has strong, heavily informed opinions from her concentrated work in this area, and secondly, because she is thinking about both the end users of ML applications and the humans who are “out of the loop” but nonetheless impacted by the decisions made by the users of these AI systems.

In this episode, we cover:

- Background on the Squirrel AI Award, and Cynthia unpacks the differences between explainable and interpretable ML. (00:46)
- Using real-world examples, Cynthia demonstrates why black boxes should be replaced. (04:49)
- Cynthia’s work on the New York City power grid project, exploding manhole covers, and why it was the messiest dataset she had ever seen. (08:20)
- A look at the future of machine learning and the value of human interaction as it moves into the next frontier. (15:52)
- Cynthia’s thoughts on collecting end-user feedback and keeping humans in the loop. (21:46)
- The current problems Cynthia and her team are exploring: the Rashomon set, optimal sparse decision trees, sparse linear models, causal inference, and more. (32:33)

Quotes from Today’s Episode

“I’ve been trying to help humanity my whole life with AI, right? But it’s not something I tried to earn because there was no award like this in the field while I was trying to do all of this work. But I was just totally amazed, and honored, and humbled that they chose me.” - Cynthia Rudin on receiving the AAAI Squirrel AI Award (@cynthiarudin) (1:03)

“Instead of trying to replace the black boxes with inherently interpretable models, they were just trying to explain the black box. And when you do this, there’s a whole slew of problems with it. First of all, the explanations are not very accurate—they often mislead you. Then you also have problems where the explanation methods are giving more authority to the black box, rather than telling you to replace them.” - Cynthia Rudin (@cynthiarudin) (03:25)

“Accuracy at all costs assumes that you have a static dataset and you’re just trying to get as high accuracy as you can on that dataset. [...] But that is not the way we do data science. In data science, if you look at a standard knowledge discovery process, [...] after you run your machine learning technique, you’re supposed to interpret the results and use that information to go back and edit your data and your evaluation metric. And you update your whole process and your whole pipeline based on what you learned. So when people say things like, ‘Accuracy at all costs,’ I’m like, ‘Okay. Well, if you want accuracy for your whole pipeline, maybe you would actually be better off designing a model you can understand.’” - Cynthia Rudin (@cynthiarudin) (11:31)

“When people talk about the accuracy-interpretability trade-off, it just makes no sense to me because it’s like, no, it’s actually reversed, right? If you can actually understand what this model is doing, you can troubleshoot it better, and you can get overall better accuracy.” - Cynthia Rudin (@cynthiarudin) (13:59)

“Humans and machines obviously do very different things, right? [...]” - Cynthia Rudin (@cynthiarudin)
