Review: “The Failure of Risk Management”, by Douglas Hubbard
My rating: 1 of 5 stars
I had high expectations for this book after reading “How to Measure Anything”, and unfortunately none of them were met. My very short review: were it not for those high expectations, I would have stopped reading about a third of the way in, but based on past performance, I stuck with it to the end. That was a mistake.
The defects in Hubbard’s second book are many. First and foremost, it is simply not pleasant to read. While “How to Measure” adopted the posture of a helpful tutorial, “Failure” attempts to rehash most of the same material, albeit from a posture of criticizing almost every risk analysis method Hubbard has not personally worked on. The tone is shrill, smug, and “low emotional intelligence quotient”. In the book we are treated to several “I won’t name names but you know who you are” diatribes, a personal critique of author Nassim Nicholas Taleb for being too abrasive in delivery (which he is … but Hubbard delivers this assessment with apparently no hint of irony), and ever more stories of how Hubbard publicly shames clients during working meetings into admitting they do not know as much as he does. If that is your corporate approach to change management, it would seem Hubbard is your man. Ironically, all of these things suggest that a sensibility for the actual “people systems” of not just management but implementation is completely lacking – which undermines Hubbard’s credibility as an expert on anything other than analytic techniques. This may be an unfair personal assessment, but Hubbard does little in the book to communicate even rudimentary management sensibilities, and the burden of proof – especially when exploring a topic such as this – should be his.
Hubbard spends an inordinate portion of the book repeatedly – redundantly – making the same self-evident point: that low-fidelity risk analysis methods such as scoring approaches are, well, low-fidelity, and subject to bias. This is tautological. Even for those consumers of the methods who haven’t thought hard about the issue, the point can be made in five pages; it does not need 150. (Note that it is at least that long before solutions begin to be offered.) Even worse, Hubbard’s primary critique, other than that these techniques offend “first principles” sensibilities, is that they have not been proven to have measurable impacts on performance. This might be an interesting line of inquiry had Hubbard actually done any new research on the subject, or joined with management consultants who had. Or, more importantly, had he demonstrated the benefits of using the more rigorous, probabilistic risk assessment techniques he advocates. He does not. (He alludes to this in literally the closing chapters of the book, but never actually tackles the challenge of performance-based assessment.) Simple techniques are bad because they are not as rigorous or unbiased as the techniques he would advocate – therefore they must (or perhaps may?) do more harm than good. Difficult to say, as the issue is delivered rhetorically rather than rigorously.
The biggest failure of “The Failure of Risk Management” is that it mostly declines to tackle actual management. As Hubbard himself seems to realize and admit very late in the book, he has written a text about risk analysis, not risk management. Ultimately, the content – even if it were not largely a rehash of the material from “How to Measure” – is much, much thinner than the title, and the title could have framed a very interesting exploration of modern (or not so modern) management techniques. A further challenge: from Hubbard’s anecdotes, it appears he views even risk management (read: analysis) as something done solely for the purpose of decision support for senior executives. No mention is made of risk management as a tool not just for C-suite executives, but for project managers and the employees who actually have to manage and mitigate risks. This elision allows Hubbard to dismiss all low-fidelity techniques out of hand even more stridently. (Make no mistake – scoring approaches and their efficacy do need hard scrutiny. Unfortunately, Hubbard does not provide it; he simply shouts for others to perform it.)
In summary: if you have read “How to Measure Anything”, you have read 90% of what Hubbard has to say, and probably enjoyed reading it more than you will enjoy this book. If you have an agenda to promote probabilistic risk management within your organization, to the detriment of other approaches, this book will provide you ample rhetoric and theory, but no actual evidence or ROI documentation, and very little in the way of tangible implementation tools or techniques to go forward with. It is an opportunity missed.