Book review notes from “Surviving and thriving in uncertainty: Creating the risk intelligent enterprise” by Frederick Funston and Steven Wagner.
Funston, F., & Wagner, S. (2010). Surviving and thriving in uncertainty: Creating the risk intelligent enterprise. Hoboken, NJ: John Wiley & Sons.
I was led to this book during our environmental scanning for higher education. It has a lot of insights that I want to summarize and share here because I think they are broadly applicable to all of our interests. Many of the readings in this course are concerned with creating grand visions of excellent futures, and many are somewhat less helpful when it comes to implementing a strategy to reach those visions. Regardless of the vision you’re striving for, one thing is certain: you will encounter risk and reward opportunities all along the way and will constantly face judgments about which strategies to resource and which to terminate. One of the consistent failures of strategic planners is a failure to consider implementation and management along the way. That’s why this book on developing risk intelligence, personally and within an organization, is both timely and necessary.
The book comes in three parts. Part one describes the consequences of failure to conduct effective risk management. Part two describes 10 essential risk intelligence skills. Part three describes how to design risk intelligence into your organization.
I’m going to concentrate on part two: the 10 essential risk intelligence skills. The book devotes a full chapter to each of these skills, beginning with the consequences of not having the skill and therefore why it’s important.
Briefly:
- identify your assumptions explicitly
- maintain constant vigilance on the risk boundary
- factor in velocity and momentum
- manage key connections carefully
- anticipate causes of failure
- verify sources and corroborate information
- maintain a margin of safety
- establish your time horizon
- take enough of the right kinds of risks
- develop and sustain operational discipline
And now for a short discussion of each:
- identify your assumptions explicitly: trends, themes, and forecasts change with such regularity that assumptions have a shelf life. I recommend the discipline of assumption-based planning, which examines a plan before it is implemented to identify critical assumptions and their effect on the structural integrity of the plan. It’s called identifying load-bearing members and is something that we use in military planning with great effect. Examining an assumption includes stating why it is necessary to make and what the justification for it is. We should identify what it would take to prove or disprove the assumption. We should consider the cost of turning the assumption into a fact. We also need to know the latest time the information is of value in order to program our intelligence-gathering assets properly. It may be that the cost of validating the assumption is greater than the potential reward for knowing the fact, and we’re better off just making our best assumption and moving on.
- maintain constant vigilance on the risk boundary: this idea resonates with the discipline of high reliability organizations (HROs), which recognizes that no plan can survive contact with reality, and therefore we should position sensors at the boundaries of our plan to detect when we’re beginning to go off plan. This is the equivalent of an early warning system for adaptation.
- factor in velocity and momentum: this acknowledges that trends and momentum develop in human engagements and that, once they begin, they can shift an environment and its outcomes considerably in one direction. If you look at megatrends, there comes a time when they take on a life of their own and dramatically shift our orientation. These may be low-probability events at the time of planning, but we must account for this phenomenon in social arenas.
- manage key connections carefully: this is a central idea of network management and net-centric learning. By identifying the critical networks and the critical nodes within those networks, we can find the most important intersections on which to focus our management and leadership attention. We can’t manage every node as if they were all equal, or we will neglect the ones that contribute the most to overall network performance.
- anticipate causes of failure: this is a common practice in manufacturing and automobile engineering, but it can be found in military strategic and tactical planning as well. We often develop plans based on our most optimistic estimates of what can happen because we want to reach for the brightest possible future. Identifying all the things that can go wrong and then designing a set of responses that are robust and effective will go a long way toward improving our overall success rate. In the military we call this wargaming. One of my graduate students and I are developing some ideas about peace gaming, which is the same idea applied to stability in nation-building operations. The central idea is the same: just as we need creativity in the initial inspiration for a new plan, we also need creativity in identifying the potential fault lines.
- verify sources and corroborate information: in special forces operations, when we’re trying to assess the quality of information, we assign it two measures: the reliability of the source and the probability that the information is true. When you rank each of these on a five-point scale, you begin to get an appreciation of how vulnerable you may be to low-quality or incomplete information. This is a concern for mammals with pattern-matching brains, which tend toward confirmation bias, giving extra weight to things that confirm our beliefs. The process of fact-checking and cross-checking adds robustness to our strategies.
- maintain a margin of safety: I’ve seen this one a lot, especially in my financial management business, when we are studying an investment or trading opportunity: once we have gathered all the facts we can, we decide how much insurance we need to take out on our position for those variables that are beyond our awareness or control. Calculating margins of safety that are effective is one of the most artful and important judgments leaders can make. Just within the financial markets we see a seasonality to risk: in some cases a 10% protective stop is sufficient to guard our capital against extreme events, whereas at other times you need 50%. Risk and volatility in the markets change like the weather, and in broader business climates it is the same.
- establish your time horizon: this is what good engineers do when they’re designing solutions that must perform within standards for a set period of time. Sometimes we think our solutions are going to be timeless, and then we get surprised when they begin to fall apart over time. Establishing your time horizon describes how long your particular solution or strategy must remain good, and it helps you focus on the kinds of threats and opportunities that will arise inside that timeframe.
- take enough of the right kinds of risks: in the financial markets we say that the only sure bet is that if you don’t play, you’re going to lose. If you leave your money under your mattress, you can bet that it will soon become worthless from the grinding losses of inflation. The only way to stay ahead of inflation is to participate in growth, which comes with a set of potential risks based on your implementation strategy. The right kinds of risks are the ones that can be effectively described, measured, managed, and accounted for, and which offer opportunities for greater rewards on an expected basis. The right kinds of risks can be thought of as businessman’s risks rather than gambling. I find this risk management skill to be the absolutely essential skill for equity traders in the capital markets. With it, everything else can be trained; without it, it’s only a matter of time until the trader blows up.
- develop and sustain operational discipline: this skill is concerned with turning these habits of mind into automatic behaviors and routine parts of every operation. It makes a nice segue to part three of the book, which describes how risk intelligence might manifest in the typical organization. Checklists, procedures, failsafes, the two-man rule, redundancy by design, inspection programs, anticipatory maintenance: all of these go into creating a culture of disciplined risk intelligence.
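The assumption bookkeeping described under “identify your assumptions explicitly” could be sketched as a small record type. This is my own hypothetical illustration; the field names and the validation rule are not the book’s or formal military doctrine:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Assumption:
    """One load-bearing assumption in a plan, with its shelf life."""
    statement: str          # the assumption itself
    justification: str      # why it is necessary to make
    disproof_test: str      # what it would take to prove or disprove it
    validation_cost: float  # cost of turning the assumption into a fact
    value_if_known: float   # potential reward for knowing the fact
    expires: date           # latest time the information is of value

    def worth_validating(self, today: date) -> bool:
        # Validate only while the information is still timely and the
        # cost of confirmation is less than the payoff of certainty.
        return (today <= self.expires
                and self.validation_cost < self.value_if_known)
```

An assumption whose validation cost exceeds its payoff, or whose shelf life has passed, is simply carried as a best guess and the plan moves on, exactly as described above.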
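For “manage key connections carefully,” a minimal way to surface the critical nodes is degree centrality: count each node’s connections and rank them. This sketch assumes an undirected network given as a list of edges; real network analysis would add richer centrality measures:

```python
from collections import defaultdict

def rank_by_degree(edges):
    """Rank nodes by connection count; the top nodes are the key
    intersections that deserve the most management attention."""
    degree = defaultdict(int)
    for a, b in edges:          # each undirected edge counts for both ends
        degree[a] += 1
        degree[b] += 1
    return sorted(degree.items(), key=lambda kv: kv[1], reverse=True)
```

Ranking a toy four-node network this way immediately shows which single node touches the most connections, i.e., which one you cannot afford to manage as if it were equal to the rest.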
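The two five-point scales under “verify sources and corroborate information” — reliability of the source and probability that the report is true — can be folded into a rough quality score. The combination rule and the corroboration threshold here are illustrative assumptions of mine, not the actual special-forces grading system:

```python
def information_quality(source_reliability: int, credibility: int) -> float:
    """Fold a 1-5 source-reliability score and a 1-5 credibility score
    into a single 0-1 quality estimate (1.0 = best)."""
    if not (1 <= source_reliability <= 5 and 1 <= credibility <= 5):
        raise ValueError("both scores must be on a 1-5 scale")
    return (source_reliability * credibility) / 25.0

def needs_corroboration(source_reliability: int, credibility: int,
                        threshold: float = 0.5) -> bool:
    """Flag reports below the threshold for cross-checking before they
    influence a decision -- a simple guard against confirmation bias."""
    return information_quality(source_reliability, credibility) < threshold
```

A report from a middling source with middling credibility scores 6/25, well below the flagging threshold, which is the formal version of “don’t act on it until it’s corroborated.”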
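The seasonality of margins of safety — 10% protective stops when markets are calm, up to 50% when they are turbulent — suggests scaling the stop to recent volatility. The multiplier and the 50% cap below are assumed for illustration; they are not figures from the book:

```python
def protective_stop(entry_price: float, recent_volatility: float,
                    multiplier: float = 2.0) -> float:
    """Place a stop-loss a multiple of recent volatility below entry,
    capped at a 50% drawdown: tight stops in calm markets, wide ones
    when risk is 'in season'."""
    stop_fraction = min(multiplier * recent_volatility, 0.5)
    return entry_price * (1.0 - stop_fraction)
```

With 5% volatility the stop sits 10% below entry; with 30% volatility it widens out to the 50% cap, mirroring the range described above.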
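The “money under the mattress” argument in “take enough of the right kinds of risks” is simple compound arithmetic: inflation grinds idle cash down while participation in growth compounds upward. A quick sketch, with hypothetical example rates:

```python
def real_value(cash: float, inflation_rate: float, years: int) -> float:
    """Purchasing power of idle cash after years of compounding inflation."""
    return cash / (1.0 + inflation_rate) ** years

def expected_value(principal: float, expected_return: float,
                   years: int) -> float:
    """Expected outcome of participating in growth at a given return."""
    return principal * (1.0 + expected_return) ** years
```

At 3% inflation, $10,000 under the mattress buys only about $5,500 worth of goods after 20 years, while the same money at a hypothetical 7% expected return grows to roughly $38,700 before inflation; the gap is the price of refusing to take any risk at all.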
This is an excellent book that incorporates a lot of good thinking about risk and reward that is applicable to a wide range of disciplines.