Monday, January 08, 2018

Big Data and The Analytics Paradox

Kellogg School Professor Eric Anderson describes an interesting phenomenon that he calls the “Analytics Paradox.”  Anyone grappling with how their firm employs data science as part of their business strategy should become familiar with this concept.  Anderson explains:  

“A young firm starts out making many mistakes. Eager to improve, they collect lots of data and build cool new models,” he says. “Over time, these models allow the young firm to find the best answers and implement these with great precision. The young firm becomes a mature firm that is great at analytics. Then one day the models stop working. Mistakes that fueled the models are now gone and the analytic models are starved.”

Anderson offers an example of the analytics paradox. Imagine a firm that provides two-day delivery service. It considers using data analytics to help make an important decision: whether or not to offer one-day delivery to customers. The firm might be hard-pressed to answer that question using analytics. Why? An effective organization becomes proficient at executing two-day delivery. If the firm can't meet two-day deadlines on a regular basis, processes and systems are changed. Employees who can't meet the two-day schedule get admonished or even dismissed. In short, a firm that is very effective at execution will drive all variability out of its "production" system. Yet without variability, you will find it very difficult to use analytics to drive improved decision making.

What can you do to conquer the analytics paradox? Put simply, you need to inject variability into your system by promoting thoughtful and systematic experimentation. The use of experiments enables you to test different models and systems, and in so doing, generate the type of data that can be analyzed effectively to enhance decision making.
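As a rough illustration of what "injecting variability" might look like in practice, here is a minimal sketch of a randomized pilot: a small share of orders is assigned to a hypothetical one-day delivery option, and repeat-purchase rates are compared between groups. All function names, rates, and numbers below are illustrative assumptions, not data from Anderson's example.

```python
# Sketch of a randomized delivery experiment.
# Assumed (made-up) repeat-purchase rates: 30% baseline with two-day
# delivery, 36% with one-day delivery. Real rates would be unknown --
# that is exactly what the experiment exists to estimate.
import random

random.seed(42)

def simulate_order(treatment: bool) -> bool:
    """Return True if the customer later makes a repeat purchase."""
    rate = 0.36 if treatment else 0.30
    return random.random() < rate

def run_experiment(n_orders: int = 10_000, pilot_share: float = 0.1) -> dict:
    """Randomly route a share of orders into the one-day pilot,
    then report the repeat-purchase rate for each group."""
    results = {"pilot": [], "control": []}
    for _ in range(n_orders):
        treatment = random.random() < pilot_share  # the injected variability
        outcome = simulate_order(treatment)
        results["pilot" if treatment else "control"].append(outcome)
    return {group: sum(v) / len(v) for group, v in results.items()}

rates = run_experiment()
print(f"pilot repeat rate:   {rates['pilot']:.3f}")
print(f"control repeat rate: {rates['control']:.3f}")
```

The key design choice is the random assignment: without it, a mature firm's optimized operations would produce only two-day outcomes, leaving nothing for the analytics to compare.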
