Strategic Management Tools: Don’t Peanut Butter
A wonderful little gem of “free” management data exists in Bain & Company’s biennial Management Tools and Trends survey of global corporate executives. The Bain folks have been running it since 1993, yielding a nice time-trend history of management tool stalwarts … and fads.
The survey assesses executives’ “usage of” and “satisfaction with” various tools. Yes, these are squishy measures, and no substitute for bottom-line performance. But I fall into the camp that believes good executives have a decent intuitive, integrative sense of whether tools are helping or hindering them in doing their jobs, so I see these data as valuable in their own way.
Each publication year (the last was 2009; it’s due for an update), the Bain folks provide ranked lists of tools. Evergreens such as strategic planning and mission and vision statements always hover near the top. Reactive tools such as scenario planning follow expected patterns: peaking after 9/11 and the bursting of the internet bubble, then declining, then rising again in 2008, possibly in response to early signs of economic downturn.
One of the most interesting features of the dataset, however, often gets short shrift. Buried in the fine print, the Bain folks sometimes segregate their satisfaction results by companies that engage in “limited” versus “major” implementations of various tools. And here’s the rub: the gap in self-reported satisfaction between limited and major implementations far exceeds the gap between the top- and bottom-ranked tools overall.
In short, it matters less which tool you pick than that you pick a tool and do it right. I say “pick” because the list of available tools (fads) is too long for any company to plausibly devote enough institutional energy, at least at the level of C-suite attention, to cover the waterfront.
The data also repeatedly show a low usage rate of “major” implementations with higher-than-average satisfaction, and a high usage rate of “limited” implementations with lower-than-average satisfaction. In short, average satisfaction is likely driven largely by the “fad” adopters: the larger number of companies that dabble in implementation and come away underwhelmed by the results. (Bain calls the quadrant of higher-than-average usage and lower-than-average satisfaction “blunt instruments,” which reasonably captures the idea.)
In the next post, I’ll share a “Rotten Tomatoes”-style analysis of fads and trends: a mashup of the Bain data with Google Books and Google Insights trends.