In a review of Serious Play, Dan Bricklin, one of the inventors of VisiCalc (a predecessor to Excel), explains that he likes the book because it makes the point that, in the end, "...the customer's perceived mean-time-to-payback, not the innovator's speed-to-market, effectively determines which innovations will dominate their markets." This is absolutely true for all the software I use, like Matlab (which lets you draw a curve in two lines), Python, etc. In a way, the customer feels empowered to do something "useful", and that is what makes the product a success story.
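To make the "two lines" claim concrete, here is a minimal sketch in Python with numpy and matplotlib; the choice of libraries is my assumption (Matlab's own plot(x, sin(x)) would be the equivalent one-liner the text has in mind):

    import numpy as np
    import matplotlib.pyplot as plt

    # The whole "payback": a curve on screen in essentially two lines.
    x = np.linspace(0, 10, 200)
    plt.plot(x, np.sin(x))
    plt.show()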
What is very funny about this is another study, by Kottemann, Davis, and Remus (see [116]), which shows that a group of MBA students using a what-if tool in the spirit of VisiCalc was likely to do worse than, or at best only as well as, a group that did not have the what-if tool.
" [116] Jeffrey E. Kotteman, Fred D. Davis, and William E. Remus, specialists in business decisions, tested the ability of a group of students to control a simulated production line under conditions of demand with a strong random element. They add more workers with the risk of idle time on one hand, or maintain a smaller workforce with the risk of higher overtime payments on the other. They could restrict output relative to demand, possibly losing sales, if orders jumped, or they could maintain higher output at the cost of maintaining possibly excessive inventory.
[117] The subjects, M.B.A. student volunteers and experienced spreadsheet users, were divided into two groups. One group was shown a screen that prompted only for the number of units to produce and workforce level. The other group could enter anticipated sales for each proposed set of choices and immediately run a simulation program to show the varying results for projected inventory. They could immediately see the costs of changing workforce levels, along with overtime, idle time, and non-optimal inventory costs, all neatly displayed within seconds after entering their data. They had no more real information than the first group, but a much more concrete idea of the consequences of every choice they made.
[118] If all we have read about the power of spreadsheets is right, the "what-if" group should have outperformed the one with more limited information, unable to experiment with a wide variety of data. Actually the what-if subjects incurred somewhat higher costs than those without the same analytical tools, although the difference between the two groups was, as expected, not statistically significant. The most interesting results had less to do with actual performance than with the subjects' confidence. The "non what-ifs" rated their own predictive ability fairly accurately. The rating that subjects in the what-if group assigned to their own performance had no significant relationship to actual results. The correlation was little better than if they had tossed coins.
[119] Even more striking was how the subjects in the what-if group thought of the effects of the decision-making tools. What-if analysis improved cost performance for only 58 percent of the subjects who used it, yet 87 percent of them thought it had helped them, five percent believed there was no difference, and only one percent thought that it had hurt. This last-mentioned subject was actually one of those it had helped. Another experiment had an even more disconcerting result: "decision-makers were indifferent between what-if analysis and a quantitative decision rule which, if used, would have led to tremendous cost savings." In other words, the subjects preferred what-if exploration to a proven technique."
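To get a feel for the task the subjects faced, here is a minimal sketch of that kind of production-line simulation; the demand model, capacity per worker, and all cost figures are made-up assumptions for illustration, not the values used in the study:

    import random

    def simulate_period(workforce, units_to_produce, mean_demand=100.0):
        """One period of a toy production line with strongly random demand."""
        demand = max(0.0, random.gauss(mean_demand, 25.0))
        capacity = workforce * 10.0                         # assume 10 units per worker per period
        overtime = max(0.0, units_to_produce - capacity)    # production beyond regular capacity
        idle = max(0.0, capacity - units_to_produce)        # unused regular capacity
        lost_sales = max(0.0, demand - units_to_produce)    # orders that jumped past output
        inventory = max(0.0, units_to_produce - demand)     # excess output carried as stock
        # Hypothetical unit costs for overtime, idle time, lost sales, and inventory.
        return 3.0 * overtime + 1.0 * idle + 5.0 * lost_sales + 2.0 * inventory

    # A "what-if" user can instantly compare a few workforce/output choices:
    for workforce in (8, 10, 12):
        avg_cost = sum(simulate_period(workforce, 100.0) for _ in range(1000)) / 1000
        print(f"workforce={workforce}: average cost per period ~ {avg_cost:.1f}")

Whether playing with such a screen actually teaches the underlying process is exactly what the study calls into question.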
The most interesting part of that study is how the "what-if" group felt about its job of predicting the outcome of a specific process. Most subjects were so enamored with the fact that they could play with the tool before making a decision that their confidence level was much higher than that of the control group, which did not have access to the what-if tool. The original article even shows that the what-if group was doing worse over time, not better. It would seem to me that one of the reasons the what-if group did worse has a lot to do with the fact that they never learned the process: while they spent more time than the non-what-if group, they did not spend enough time to gather statistics significant enough to yield a higher payback.

This is one of the reasons that, in thermal engineering, we need tools that do better than just giving a maximum or minimum temperature value for design purposes. CRTech, for instance (see their article on advanced multidisciplinary methods for parametric, optimization, and reliability methods in thermal design and fluid flow design), has some built-in tools that let one run a Monte Carlo analysis on the system parameters in order to get an idea of how wide the estimates are. These conclusions are rather annoying: they say that you can build a product that everybody thinks is worthwhile because it saves time and money when in fact it does not; the successful invention only boosts your ego/confidence...
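To illustrate what such a Monte Carlo study looks like, here is a generic sketch on a toy lumped-parameter model; this is not CRTech's actual tool or method, and the model, parameter distributions, and values are all assumptions:

    import random
    import statistics

    def steady_state_temperature(q_watts, h, area_m2, t_ambient_c):
        """Toy lumped model: T = T_ambient + Q / (h * A)."""
        return t_ambient_c + q_watts / (h * area_m2)

    # Sample the uncertain system parameters and collect the resulting temperatures.
    samples = []
    for _ in range(10_000):
        q = random.gauss(50.0, 5.0)        # heat load [W]
        h = random.uniform(8.0, 12.0)      # convective coefficient [W/(m^2 K)]
        t_amb = random.gauss(25.0, 3.0)    # ambient temperature [degC]
        samples.append(steady_state_temperature(q, h, 0.05, t_amb))

    samples.sort()
    lo, hi = samples[int(0.05 * len(samples))], samples[int(0.95 * len(samples))]
    print(f"mean temperature : {statistics.mean(samples):.1f} degC")
    print(f"5th-95th pct band: {lo:.1f} to {hi:.1f} degC (width {hi - lo:.1f} degC)")

The point is the width of that band: a single "max temperature" number hides how sensitive the design is to the parameters you are least sure about.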