
Black swans, fat tails and risk – So what?

292 Views
Started by David Griffiths on
21 Feb 2013 at 13:24

[Image: Black Swan]

I was recently reading a blog post by Nassim Taleb in which he decried the lack of understanding that surrounded his book, ‘The Black Swan’, when it was originally launched in 2007 – apparently too many people thought it was about predicting outliers (black swans).  I’m not a statistician, and I certainly wouldn’t be called a quants man, but I have been accused of obsessing over the impact of the unpredictable on KM systems (especially soft systems).  As you will have seen from my posts over the past two years, I believe in human agency, and people are inherently unpredictable (we just don’t know enough about the whole person to predict their behaviour with 100% certainty – irritating when you are trying to predict the ROI of a CoP implementation strategy), which means that we are building systems to interact with a highly unstable element.

Anyway, back to unnatural swans.  Shortly after reading Taleb in 2007 I started to hear more about ‘fat tails’ – my wife initially thought this was some sort of code enabling men to pass derogatory comment on the female derriere, but I digress.  So, what about these ‘fat tails’, and how concerned should I be about them?

I’ve attended a number of conference ‘lectures’ over the last four years where the assumption, more often than not, has been that everyone in the room is a quants person and can appreciate the pontificating of the pseudo keynote/lecturer (university lecturers who really want to be high-brow consultants are the worst – looks at himself between 2008 and 2012 and worries).  For the best explanation, see Dave Snowden’s videos on the topic.  We then disperse for the break, and you can tell the non-quants: we are the ones looking unsure of ourselves, masking our uncertainty by extolling the virtues of power laws and Pareto over the Gaussian distribution (we may not always ‘get it’, but boy can we regurgitate it).  I was at one conference in Hungary where I had had enough and asked a group of people who were happily drinking coffee, patting themselves on the back over their ‘regurgitating’ prowess, “yes, but so what?”  People suddenly had to refresh their coffee before the next session… hmmm… had I just been ‘found out’ as an intellectual lightweight, or was it them who had been ‘found out’?!?

The basic premise is that black swan events (high-impact, low-probability events) are happening more often than we once realised (emphasised by a change in observational lens).  Risk alert! Okay, you have my attention – my eyes are darting around the room and I am looking for the nearest exits.  People are talking about Gaussian distribution versus Zipf versus Pareto, the warning being that Pareto produces a ‘fat tail’ demonstrating that high-impact, low-probability events are happening more often.  My initial reaction, remember, I am not a quants person, was that if the events are happening more often then surely that increases the probability of them happening, in which case they aren’t so low probability after all… I also considered that perhaps we are just more connected, and that data availability has therefore shifted our perception of how often these events occur.
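For what it’s worth, here is how I eventually made some sense of the tail for myself – a minimal Python sketch (mine, not anything from the lectures), assuming scipy is available; the tail index of 1.5 is an arbitrary choice and the two scales aren’t matched, so treat the numbers as purely illustrative:

```python
# Comparing how often "extreme" values turn up under a thin-tailed Gaussian
# versus a fat-tailed Pareto distribution. Illustrative parameters only.
from scipy import stats

gaussian = stats.norm(loc=0, scale=1)  # thin tail: extremes vanish very fast
fat_tail = stats.pareto(b=1.5)         # power-law tail: P(X > x) = x**-1.5

for x in (2, 5, 10, 50):
    print(f"P(X > {x:>2}):  Gaussian {gaussian.sf(x):.1e}   Pareto {fat_tail.sf(x):.1e}")
```

At a threshold of 10 the Gaussian says ‘effectively never’ (around 10^-24) while the Pareto says ‘about 3% of the time’ – and that, as far as I can tell, is all the fat tail is really claiming: the extremes are not vanishingly rare.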

Then there is the ‘high impact’ factor; that really had my attention.  I started transferring findings on earthquakes to the business setting – a popular ploy, well crafted by some who attempt to create an illusion of certainty (a mistake from the outset) while actually talking about probability (and it is always a bad idea to mix risk with emotion, as happens when you talk in the realms of probability rather than certainty).  I also found myself wondering about the relevance of the lens (Pareto versus Gaussian distribution).

Here is the problem as I see it.  First, the idea of the lens (Gaussian versus Pareto) – so what?  There is serious debate as to the validity of power laws in complex environments; for example, a strong argument is presented by Stumpf and Porter (‘Critical Truths About Power Laws’, Science, 2012), who claim that power laws arise in infinite systems, while real systems are finite.  That said, the same authors go on to say that “knowledge of whether or not a distribution is heavy tailed is far more important than whether it can be fit using a power law”.  They also say, “The fact that heavy-tailed distributions occur in complex systems is certainly important (because it implies that extreme events occur more frequently than otherwise would be the case).”  So, power laws might not be valid in complex domains, but their fat tails are important.  Don’t you just love how downright dizzying intellectual debate can be at times?
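The nearest I, as a non-quants person, can get to Stumpf’s distinction is this sketch (again mine, with arbitrary parameters): a lognormal distribution is heavy-tailed but is not a power law, yet over a finite range of data its tail can easily pass for one:

```python
# A lognormal is heavy-tailed but NOT a power law; over a finite sample its
# tail can still look like one. Parameters are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(42)
samples = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)

# Empirical survival function P(X > x) at log-spaced thresholds from 1 to 1,000
xs = np.logspace(0, 3, 10)
sf = np.array([(samples > x).mean() for x in xs])

# A true power law would give a constant log-log slope; here the slope only
# drifts slowly, which is why finite data can be "fit" with a power law.
slopes = np.diff(np.log(sf)) / np.diff(np.log(xs))
print(np.round(slopes, 2))
```

The slopes drift from roughly -0.5 towards -1.8 rather than sitting on one constant value, yet the tail is undeniably heavy – which, as I read him, is exactly Stumpf’s point: whether the tail is heavy matters more than the power-law label.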

This isn’t very satisfying. What does this fat tail really mean?  What is the real risk?  Tell me that and I can start to deal with the problem. I need detail.  I want to know the proximate cause but, more importantly, I want to understand the underlying conditions that contribute to it.  Sure, I want to know that a risk exists, but I want the data to make sense at the same time.  I want to relate it to my world, not to earthquakes (we apparently have 200-300 in the UK per year, but none has ever bothered me – there was one when I was living in Iceland once, but that’s another story). I want to relate it to organisations and the real world they transact in.

For me there are three things I want to know when it comes to low (but increasing) probability, high impact events.  Of the three, it could be said that the first two are conditional (and could add noise to judgement), whereas the third could lead to a more considered conclusion (a toy illustration follows the list):

1. Relative risk (what is the risk of occurrence in my sector versus any other sector?)

2. Absolute risk (what is the risk of occurrence over a given period of time?)

3. Natural frequency (risk communicated as a frequency, e.g. 1 in 1,000)
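To make the distinction concrete, here is that toy Python illustration (the numbers are entirely made up, not real sector data) of how the same underlying figures read under each framing:

```python
# Hypothetical figures: 3 events per 1,000 firms in my sector over a decade,
# versus 1 per 1,000 in a baseline sector. Nothing here is real data.
events_mine, firms_mine = 3, 1_000
events_base, firms_base = 1, 1_000

absolute_risk = events_mine / firms_mine                    # 0.003 per decade
relative_risk = absolute_risk / (events_base / firms_base)  # 3.0

print(f"Absolute risk:     {absolute_risk:.1%} over the decade")
print(f"Relative risk:     {relative_risk:.0f}x the baseline sector")
print(f"Natural frequency: {events_mine} in {firms_mine:,} firms per decade")
```

‘Three times the risk’ sounds alarming; ‘3 in 1,000 firms per decade’ keeps everyone’s feet on the ground – which is why my emphasis falls on natural frequency.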

There you have it.  This is what I want to know when I talk about risk to organisations (with particular emphasis on natural frequency).

The problem: we can talk about historic data from past events, but implications and impact will not be transferable.  We can invest hundreds of thousands in Lessons Learned Information Systems but, for these types of high-impact/low-probability events, we either fail to capture what can really be transferred in the short term or, for the big lessons, we don’t know whether they have been ‘learned’ until the next big event occurs – at which point the LLIS will have been archived and nobody will know how to access it.

We can talk about frequency, but we cannot predict the next event or a likely time-frame for that event.  We can raise awareness, but we offer nothing ‘tangible’.  We play on emotion and, as I have said before, when we play on emotion we can often make unsound decisions.

The bottom line, from a non-quants person: there is something distinctly unsatisfying about the fat tail.  I agree with Stumpf in that it certainly seems important, especially when considering complex systems and developing resilience, but, much like the restaurant your friends have all raved about, raising your expectations to unreasonable heights (L’Escargot Bleu in Edinburgh, that means you – twice!), it is just a little unsatisfying.  Then there is Dave Snowden’s concept of Probable – Possible – Plausible, and the world suddenly starts to make more sense:

http://youtu.be/2Hhu0ihG3kY
