Smart Automation Can Be Transparent

August 17th, 2020

Some people claim that Machine Learning is a black box and that its results can't be trusted because we can't understand them. To some degree they're right. Still, I'd ask: do you really understand how your car works? In this post we go through different ways to make sense of common Machine Learning models, and how to reduce the opacity of the really complex ones. It's a technical post that references some of our more high-level work on the damage that biased Machine Learning can cause.
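As a small taste of what "making sense of a model" can look like in practice, here is a minimal sketch of one common approach: inspecting the coefficients of a standardized linear model. The dataset, the choice of logistic regression, and the scikit-learn pipeline are illustrative assumptions for this sketch, not examples taken from the post itself.

```python
# Minimal sketch (assumes scikit-learn and a bundled example dataset):
# after standardizing the features, the size and sign of each coefficient
# can be read as that feature's relative weight in the model's decision.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# Rank features by the absolute value of their learned coefficient.
coefficients = model.named_steps["logisticregression"].coef_[0]
ranked = sorted(zip(X.columns, coefficients), key=lambda pair: -abs(pair[1]))
for name, weight in ranked[:5]:
    print(f"{name}: {weight:+.2f}")
```

This only scratches the surface: coefficients of a linear model are directly interpretable, while more complex models need indirect techniques, which is exactly the gap the post sets out to explore.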


Pending

Comment below to help prioritize this content.

Learn more: "A User-Driven Knowledge Center"



AUTHOR
I help companies through digital transformations by designing and developing systems that improve human decision making with data, cloud, software, and mathematics (Machine Learning, Operations Research, Statistics). I also enjoy coaching teams and publishing content on Smart Automation.
