The need for collaboration between teams using the same data in different ways

The Data Warehousing Institute

This article is based on conversations that took place recently on the TDWI LinkedIn Group [1].

The title of the discussion thread posted was “Business Intelligence vs. Business Analytics: What’s the Difference?” and the original poster was Jon Dohner from Information Builders. To me the thread topic is something of an old chestnut and takes me back to the heady days of early 2009. Back then, Big Data was maybe a lot more than just a twinkle in Doug Cutting and Mike Cafarella’s eyes, but it had yet to rise to its current level of media ubiquity.

Nostalgia is not going to be enough for me to start quoting from my various articles of the time [2] and neither am I going to comment on the pros and cons of Information Builders’ toolset. Instead I am more interested in a different turn that discussions took based on some comments posted by Peter Birksmith of Insurance Australia Group.

Peter talked about two streams of work being carried out on the same source data. These are Business Intelligence (BI) and Information Analytics (IA). I’ll let Peter explain more himself:

BI only produces reports based on data sources that have been transformed to the requirements of the Business and loaded into a presentation layer. These reports present KPIs and Business Metrics as well as paper-centric layouts for consumption. Analysis is done via Cubes and DQ although this analysis is being replaced by IA.


IA does not produce a traditional report in the BI sense; rather, the reporting is on Trends and predictions based on raw data from the source. The idea in IA is to acquire all data in its raw form and then analyse this data to build the foundation KPIs and Metrics, which are not the actual Business Metrics (if that makes sense). This information is then passed back to BI to transform and generate the KPI Business report.

I was interested in the dual streams that Peter referred to and, given that I have some experience of insurance organisations and how they work, penned the following reply [3]:

Hi Peter,

I think you are suggesting an organisational and technology framework where the source data bifurcates and goes through two parallel processes and two different “departments”. On one side, there is a more traditional, structured, controlled and rules-based transformation; probably as the result of collaborative efforts of a number of people, maybe majoring on the technical side – let’s call it ETL World. On the other a more fluid, analytical (in the original sense – the adjective is much misused) and less controlled (NB I’m not necessarily using this term pejoratively) transformation; probably with greater emphasis on the skills and insights of individuals (though probably as part of a team) who have specific business knowledge and who are familiar with statistical techniques pertinent to the domain – let’s call this ~ETL World, just to be clear :-).

You seem to be talking about the two of these streams constructively interfering with each other (I have been thinking about X-ray Crystallography recently). So insights and transformations (maybe down to either pseudo-code or even code) from ~ETL World influence and may be adopted wholesale by ETL World.

I would equally assume that, if ETL World’s denizens are any good at their job, structures, datasets and master data which they create (perhaps early in the process before things get multidimensional) may make work more productive for the ~ETLers. So it should be a collaborative exercise with both groups focused on the same goal of adding value to the organisation.

If I have this right (an assumption I realise) then it all seems very familiar. Given we both have Insurance experience, this sounds like how a good information-focused IT team would interact with Actuarial or Exposure teams. When I have built successful information architectures in insurance, in parallel with delivering robust, reconciled, easy-to-use information to staff in all departments and all levels, I have also created, maintained and extended databases for the use of these more statistically-focused staff (the ~ETLers).

These databases, which tend to be based on raw data, have become more useful as structures from the main IT stream (ETL World) have been applied to these detailed repositories. This might include joining key tables so that analysts don’t have to repeat this themselves every time, doing some basic data cleansing, or standardising business entities so that different data can be more easily combined. You are of course right that insights from ~ETL World often influence the direction of ETL World as well. Indeed often such insights will need to move to ETL World (and be produced regularly and in a manner consistent with existing information) before they get deployed to the wider field.

[Image: Now where did I put that hairbrush?]

It is sort of like a research team and a development team, but where both “sides” do research and both do development, but in complementary areas (reminiscent of a pair of entangled electrons in a singlet state, each of whose spin is both up and down until they resolve into one up and one down in specific circumstances – sorry again I did say “no more science analogies”). Of course, once more, this only works if there is good collaboration and both ETLers and ~ETLers are focussed on the same corporate objectives.

So I suppose I’m saying that I don’t think – at least in Insurance – that this is a new trend. I can recall working this way as far back as 2000. However, what you describe is not a bad way to work, assuming that the collaboration that I mention is how the teams work.

I am aware that I must have said “collaboration” 20 times – your earlier reference to “silos” does however point to a potential flaw in such arrangements.


PS I talk more about interactions with actuarial teams in: BI and a different type of outsourcing

PPS For another perspective on this area, maybe see comments by @neilraden in his 2012 article What is a Data Scientist and what isn’t?

I think that the perspective of actuaries having been data scientists long before the latter term emerged is a sound one.

[Image: I couldn't find a suitable image from Sesame Street :-o]

Although the genesis of this thread dates to over five years ago (an aeon in terms of information technology), I think that – in the current world where some aspects of the old divide between technically savvy users [4] and IT staff with strong business knowledge [5] have begun to disappear – there is both an opportunity and a threat for businesses. If silos develop and the skills of a range of different people are not combined effectively, then we have a situation where:

| ETL World ∪ ~ETL World | < | ETL World | + | ~ETL World |

If instead collaboration, transparency and teamwork govern interactions between different sets of people then the equation flips to become:

| ETL World ∪ ~ETL World | ≥ | ETL World | + | ~ETL World |

Perhaps the way that Actuarial and IT departments work together in enlightened insurance companies points the way to a general solution for the organisational dynamics of modern information provision. Maybe also the, by now somewhat venerable, concept of a Business Intelligence Competency Centre, a unified team combining the best and brightest from many fields, is an idea whose time has come.

[1] A link to the actual discussion thread is provided here. However, you need to be a member of the TDWI Group to view it.
[2] Anyone interested in ancient history is welcome to take a look at the following articles from a few years back:

  1. Business Analytics vs Business Intelligence
  2. A business intelligence parable
  3. The Dictatorship of the Analysts

[3] I have mildly edited the text from its original form and added some new links and new images to provide context.
[4] Particularly those with a background in quantitative methods – what we now call data scientists.
[5] Many of whom seem equally keen to also call themselves data scientists.



Ten Million Aliens – More musings on BI-ology


[Image: Ten Million Aliens by Simon Barnes]

This article relates to the book Ten Million Aliens – A Journey Through the Entire Animal Kingdom by British journalist and author Simon Barnes, but is not specifically a book review. My actual review of this entertaining and informative work appears on Amazon and is as follows:

Having enjoyed Simon’s sport journalism (particularly his insightful and amusing commentary on Test Match cricket) for many years, I was interested to learn about this new book via his web-site. As an avid consumer of pop-science literature and already being aware of Simon’s considerable abilities as a writer, I was keen to read Ten Million Aliens. To be brief, I would recommend the book to anyone with an enquiring mind, an interest in the natural world and its endless variety, or just an affection for good science writing. My only sadness was that the number of phyla eventually had to come to an end. I laughed in places, I was better informed than before reading a chapter in others and the autobiographical anecdotes and other general commentary on the state of our stewardship of the planet added further dimensions. I look forward to Simon’s next book.

Instead this piece contains some general musings which came to mind while reading Ten Million Aliens and – as is customary – applies some of these to my own fields of professional endeavour.
Some Background

[Image: David Ivon Gower]

Regular readers of this blog will be aware of my affection for Cricket[1] and also my interest in Science[2]. Simon Barnes’s work spans both of these passions. I became familiar with Simon’s journalism when he was Chief Sports Writer for The Times[3], an organ for which he wrote for over 32 years. Given my own sporting interests, I first read his articles specifically about Cricket and sometimes Rugby Union, but began to appreciate his writing in general and to consume his thoughts on many other sports.

There is something about Simon’s writing which I (and no doubt many others) find very engaging. He manages to be both insightful and amusing and displays both elegance of phrase and erudition without ever seeming to show off, or to descend into the overly-florid prose of which I can sometimes (OK often) be guilty. It also helps that we seem to share a favourite cricketer in the shape of David Gower, who appears above and was the most graceful batsman to have played for England in the last forty years. However, it is not Simon’s peerless sports writing that I am going to focus on here. For several years he also penned a wildlife column for The Times and is a patron of a number of wildlife charities. He has written books on, amongst other topics, birds, horses, his safari experiences and conservation in general.

[Image: Green Finch, Great Tit, Lesser Spotted Woodpecker, Tawny Owl, Magpie, Carrion Crow, Eurasian Jay, Jackdaw]

My own interest in science merges into an appreciation of the natural world, perhaps partly also related to the amount of time I have spent in remote and wild places rock-climbing and bouldering. As I started to write this piece, some welcome November Cambridge sun threw shadows of the Green Finches and Great Tits on our feeders across the monitor. Earlier in the day, my wife and I managed to catch a Lesser Spotted Woodpecker, helping itself to our peanuts. Last night we stood on our balcony listening to two Tawny Owls serenading each other. Our favourite Corvidae family are also very common around here and we have had each of the birds appearing in the bottom row of the above image on our balcony at some point. My affection for living dinosaurs also extends to their cousins, the herpetiles, but that is perhaps a topic for another day.

Ten Million Aliens has the modest objectives, revealed by its sub-title, of saying something interesting about each of the (at the last count) thirty-five phyla of the Animal Kingdom[4] and of providing some insights into a few of the thousands of families and species that make these up. Simon’s boundless enthusiasm for the life he sees around him (and indeed the life that is often hidden from all bar the most intrepid of researchers), his ability to bring even what might be viewed as ostensibly dull subject matter[5] to life and a seemingly limitless trove of pertinent personal anecdotes, all combine to ensure not only that he achieves these objectives, but that he does so with some élan.
Classifications and Hierarchies

[Image: Biological Classification]

Well having said that this article wasn’t going to be a book review, I guess it has borne a striking resemblance to one so far. Now to take a different tack; one which relates to three of the words that I referenced and provided links to in the last paragraph of the previous section: phylum, family and species. These are all levels in the general classification of life. At least one version of where these three levels fit into the overall scheme of things appears in the image above[6]. Some readers may even be able to recall a related mnemonic from years gone by: Kings Play Chess on Fine Green Sand[7].

The father of modern taxonomy, Carl Linnaeus, founded his original biological classification – not unreasonably – on the shared characteristics of organisms; things that look similar are probably related. Relations mean that like things can be collected together into groups and that the groups can be further consolidated into super-groups. This approach served science well for a long time. However when researchers began to find more and more examples of convergent evolution[8], Linnaeus’s rule of thumb was seen to not always apply and complementary approaches also began to be adopted.


One of these approaches, called Cladistics, focuses on common ancestors rather than shared physical characteristics. Breakthroughs in understanding the genetic code provided impetus to this technique. The above diagram, referred to as a cladogram, represents one school of thought about the relationship between avian dinosaurs, non-avian dinosaurs and various other reptiles that I mentioned above.

It is at this point that the Business Intelligence professional may begin to detect something somewhat familiar[9]. I am of course talking about both dimensions and organising these into hierarchies. Dimensions are the atoms of Business Intelligence and Data Warehousing[10]. In Biological Classification: H. sapiens is part of Homo, which is part of Hominidae, which is part of Primates, which is part of Mammalia, which is part of Chordata, which then gets us back up to Animalia[11]. In Business Intelligence: Individuals make up Teams, which make up Offices, which make up Countries and Regions.

Above I referenced different approaches to Biological Classification, one based on shared attributes, the other on homology of DNA. This also reminds me of the multiple ways to roll up dimensions. To pick the most obvious, Day rolls up to Month, Quarter, Half-Year and Year; but also in a different manner to Week and then Year. Given that the aforementioned DNA evidence has caused a reappraisal of the connections between many groups of animals, the structures of Biological Classification are not rigid and instead can change over time[12]. Different approaches to grouping living organisms can provide a range of perspectives, each with its own benefits. In a similar way, good BI/DW design practices should account for both dimensions changing and the fact that different insights may well be provided by parallel dimension hierarchies.
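The parallel roll-ups of the Day dimension can be sketched in a few lines of Python. This is a minimal illustration of my own; the function and field names are invented rather than drawn from any particular BI tool:

```python
from datetime import date

# Two parallel roll-up paths for the Day dimension:
#   calendar path: Day -> Month -> Quarter -> Year
#   weekly path:   Day -> ISO Week -> ISO Year

def calendar_rollup(d: date) -> dict:
    """Roll a day up through the calendar hierarchy."""
    quarter = (d.month - 1) // 3 + 1
    return {"day": d.isoformat(),
            "month": f"{d.year}-{d.month:02d}",
            "quarter": f"{d.year}-Q{quarter}",
            "year": d.year}

def weekly_rollup(d: date) -> dict:
    """Roll the same day up through the ISO-week hierarchy."""
    iso_year, iso_week, _ = d.isocalendar()
    return {"day": d.isoformat(),
            "week": f"{iso_year}-W{iso_week:02d}",
            "year": iso_year}

# The same leaf member can roll up to different parents in each
# hierarchy: 29 December 2014 sits in calendar year 2014, but falls
# in ISO week 1 of 2015 via the weekly path.
```

The divergence at the year level is exactly why parallel hierarchies need to be modelled explicitly rather than derived from one another.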

In summary, I suppose what I am saying is that BI/DW practitioners, as well as studying the works of Inmon and Kimball, might want to consider expanding their horizons to include Barnes; to say nothing of Linnaeus[13]. They might find something instructive in these other taxonomical works.


[1] Articles from this blog in which I intertwine Cricket and aspects of business, technology and change include (in chronological order):

[2] Articles on this site which reference either Science or Mathematics are far too numerous to list in full. A short selection of the ones I enjoyed writing most would include (again in chronological order):

[3] Or perhaps The London Times for non-British readers, despite the fact that it was the first newspaper to bear that name.
[4] Here “Animal Kingdom” is used in the taxonomical sense and refers to Animalia.
[5] For an example of the transformation of initially unpromising material, perhaps check out the chapter of Ten Million Aliens devoted to Entoprocta.
[6] With acknowledgment to The Font.
[7] Though this elides both Domains and Johnny-come-latelies like super-families, sub-genuses and hyper-orders [I may have made that last one up of course].
[8] For example the wings of Pterosaurs, Birds and Bats.
[9] No pun intended.
[10] This metaphor becomes rather cumbersome when one tries to extend it to cover measures. It’s tempting to perhaps align these with fundamental forces, and thus bosons as opposed to combinations of fermions, but the analogy breaks down pretty quickly, so let’s conveniently forget that multidimensional data structures have fact tables at their hearts for now.
[11] Here I am going to strive manfully to avoid getting embroiled in discussions about domains, superregnums, superkingdoms, empires, or regios and instead leave the interested reader to explore these areas themselves if they so desire. Ten Million Aliens itself could be one good starting point, as could the following link.
[12] Science is yet to determine whether these slowly changing dimensions are of Type 1, 2, 3 or 4 (it has however been definitively established that they are not Type 6 / Hybrid).
[13] Interesting fact of the day: Linnaeus’s seminal work included an entry for The Kraken, under Cephalopoda.



A Dictionary of the Business Intelligence Language

[Image: Software Advice article]

Michael Koploy of on-line technology consulting company Software Advice recently asked me, together with four other people from the Business Intelligence / Data Warehousing community, to contribute some definitions of commonly-used technology jargon pertinent to our field. The results can be viewed in his article, BI Buzzword Breakdown. Readers may be interested in the differing, but hopefully complementary, definitions that were offered.

In jockeying for space with my industry associates, only one of my definitions (that relating to Data Mining) was used. Here are two others, which were left on the cutting room floor. Maybe they’ll make it to the DVD extras.
[Image: The equivalent of the Unicorn dream sequence in Bladerunner, but imbued with greater dramatic meaning...]

Big Data: Rather than having the entirely obvious meaning, the term has come to be associated with a set of technologies, some of them open source, that emerged from the needs of several of the major on-line businesses (Google, Yahoo, Facebook and Amazon) to analyse the large amounts of data they had relating to how people interact with their web-sites. The area is often linked to Apache Hadoop, a low-cost technology that allows commodity servers to be combined to collectively store large amounts of data, particularly where the structure of these varies considerably and particularly where there is a need to support unpredictably-growing volumes.
Data Warehouse: A collection of data, generally emanating from a number of different systems, which is combined to form a consistent structure suitable for the support of a variety of reporting and analytical needs. Most warehouses will have an element of data stored in a multi-dimensional format; i.e. one that is intended to support pivot-table-like slicing and dicing. This is achieved using specific data structures: fact tables, which hold figures, or measures (like profit, or sales, or growth); and dimension tables, which hold business entities, or dimensions (like countries, weeks, product lines, salespeople etc.). The dimensions are often nested into hierarchies, such as Region => Country => City => Area. Warehouse data is generally leveraged using traditional reports, On-Line Analytical Processing (OLAP) and more advanced analytical approaches, such as data mining.
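The fact/dimension structure and hierarchy roll-up in this definition can be sketched in miniature. This is plain Python with invented data, standing in for real warehouse tables:

```python
# A minimal star-schema sketch: one fact table of sales measures plus a
# geography dimension holding the Region => Country => City hierarchy.
# All table contents are hypothetical.

dim_geography = {
    "LON": {"city": "London",   "country": "UK",     "region": "EMEA"},
    "PAR": {"city": "Paris",    "country": "France", "region": "EMEA"},
    "NYC": {"city": "New York", "country": "USA",    "region": "Americas"},
}

fact_sales = [  # (geography key, sales measure)
    ("LON", 120.0), ("PAR", 80.0), ("NYC", 200.0), ("LON", 30.0),
]

def slice_by(level: str) -> dict:
    """Aggregate the sales measure at a chosen level of the hierarchy."""
    totals: dict = {}
    for geo_key, sales in fact_sales:
        member = dim_geography[geo_key][level]
        totals[member] = totals.get(member, 0.0) + sales
    return totals

# slice_by('region') -> {'EMEA': 230.0, 'Americas': 200.0}
# slice_by('city') would return the same measures at the leaf level.
```

Swapping the `level` argument is the "slicing and dicing" the definition refers to: the fact rows never change, only the dimension level at which they are aggregated.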

[Image: Approximately 5.5 cm isn't THAT big is it?]

The above comments are perhaps most notable for representing my first reference to the latest information hot topic, the rather misleadingly named Big Data. To date I have rather avoided the rampaging herd in this area – maybe through fear of being crushed in the stampede – but it is probably a topic to which I will return once there is less hype and more substance to comment on.

I will be presenting at the IRM European Data Warehouse and Business Intelligence Conference

[Image: IRM UK - European Data Warehousing and Business Intelligence Conference - 2011]

This IRM UK event will be taking place in central London from the 7th to 9th November 2011. It is co-located with the IRM Data Management & Information Quality Conference. Full details may be obtained from the IRM conference web-site here. I am speaking on the morning of the 9th and will be building on themes introduced in my previous article: A Single Version of the Truth?


Using historical data to justify BI investments – Part III

[Image: The earliest recorded surd]

This article completes the three-part series which started with Using historical data to justify BI investments – Part I and continued (somewhat inevitably) with Using historical data to justify BI investments – Part II. Having presented a worked example, which focused on using historical data both to develop a profit-enhancing rule and then to test its efficacy, this final section considers the implications for justifying Business Intelligence / Data Warehouse programmes and touches on some more general issues.
The Business Intelligence angle

In my experience when talking to people about the example I have just shared, there can be an initial “so what?” reaction. It can maybe seem that we have simply adopted the all-too-frequently-employed business ruse of accentuating the good and down-playing the bad. Who has not heard colleagues say “this was a great month excluding the impact of X, Y and Z”? Of course the implication is that when you include X, Y and Z, it would probably be a much less great month; but this is not what we have done.

One goal of business intelligence is to help in estimating what is likely to happen in the future and guiding users in taking decisions today that will influence this. What we have really done in the above example is as follows:

[Image: Look out Morlocks, here I come... [alumni of Imperial College London are so creative aren't they?]]

  1. shift “now” back two years in time
  2. pretend we know nothing about what has happened in these most recent two years
  3. develop a predictive rule based solely on the three years preceding our back-shifted “now”
  4. then use the most recent two years (the ones we have metaphorically been covering with our hand) to see whether our proposed rule would have been efficacious

For the avoidance of doubt, in the previously attached example, the losses incurred in 2009 – 2010 have absolutely no influence on the rule we adopt; this is based solely on 2006 – 2008 losses. All the 2009 – 2010 losses are used for is to validate our rule.
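The four steps above amount to a temporal holdout, and can be sketched as follows. The policy figures and the 0.60 threshold here are invented for illustration; they are not the article's actual numbers:

```python
# A sketch of the back-testing approach: derive a rule from the 2006-2008
# figures alone, then validate it on the held-back 2009-2010 years.
# All policy data below is hypothetical.

policies = [  # (policy id, 2006-08 loss ratio, 2009-10 loss ratio)
    ("P1", 0.30, 0.35), ("P2", 0.95, 0.80), ("P3", 0.40, 0.45),
    ("P4", 1.10, 0.38), ("P5", 0.35, 0.90), ("P6", 0.85, 1.05),
]

THRESHOLD = 0.60  # rule derived by inspecting the 2006-08 column only

def keep(policy) -> bool:
    """The rule: renew only policies that performed well before the
    back-shifted 'now' -- it never looks at the 2009-10 column."""
    _, early_lr, _ = policy
    return early_lr < THRESHOLD

def validation_loss_ratio(book) -> float:
    """Average 2009-10 loss ratio, used ONLY to validate the rule."""
    return sum(late for _, _, late in book) / len(book)

whole_book = validation_loss_ratio(policies)
selected = validation_loss_ratio([p for p in policies if keep(p)])
# Despite counter-examples (P4 improves, P5 deteriorates), the selected
# book beats the whole book on the held-back years.
```

The counter-examples are deliberate, mirroring the point made later in the article: the rule only needs a bias towards persistence of performance, not perfect persistence, to add value.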

We have therefore achieved two things:

  1. Established that better decisions could have been taken historically at the juncture of 2008 and 2009
  2. Devised a rule that would have been more effective and displayed at least some indication that this could work going forward in 2011 and beyond

From a Business Intelligence / Data Warehousing perspective, the general pitch is then something like:

[Image: Eight out of ten cats said that their owners got rid of stubborn stains no other technology could shift with BI - now with added BA]

  1. If we can mechanically take such decisions, based on a very unsophisticated analysis of data, then surely making even simple information available to the humans taking those decisions (i.e. basic BI) will improve the quality of their decision-making
  2. If we go beyond this to provide more sophisticated analyses (e.g. including industry segmentation, analysis of insured attributes, specific products sold etc., i.e. regular BI) then we can – by extrapolation from the example – better shape the evolution of the performance of whole books of business
  3. We can also monitor the decisions taken to determine the relative effectiveness of individuals and teams and compare these to their peers – ideally these comparisons would also be made available to the individuals and teams themselves, allowing them to assess their relative performance (again regular BI)
  4. Finally, we can also use more sophisticated approaches, such as statistical modelling to tease out trends and artefacts that would not be easily apparent when using a standard numeric or graphical approach (i.e. sophisticated BI, though others might use the terms “data mining”, “pattern recognition” or the now ubiquitous marketing term “analytics”)

The example also says something else – although we may already have reporting tools, analysis capabilities and even people dabbling in statistical modelling, it appears that there is room for improvement in our approach. The 2009 – 2010 loss ratio was 54% and it could have been closer to 40%. Thus what we are doing now is demonstrably not as good as it could be and the monetary value of making a stepped change in information capabilities can be estimated.

[Image: The generation of which should be the object of any BI/DW project worth its salt - thinking of which, maybe a mound of salt would also have worked as an illustration]

In the example, we are talking about £1m of biannual premium and £88k of increased profit. What would be the impact of better information on an annual book of £1bn premium? Assuming a linear relationship and using some advanced Mathematics, we might suggest £44m. What is more, these gains would not be one-off, but repeatable every year. Even if we moderate our projected payback to a more conservative figure, our exercise implies that we would not be out of line to suggest, say, an ongoing annual payback of £10m. These are numbers and concepts which are likely to resonate with Executive decision-makers.
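The scaling arithmetic above can be made explicit. The £1m and £88k figures come from the worked example; the £1bn book is the hypothetical used in the text:

```python
# Scaling the worked example's two-year result to a large annual book.
biannual_premium = 1_000_000   # £1m of premium written over two years
extra_profit = 88_000          # £88k of additional profit in the example

# 8.8% of premium over two years, i.e. 4.4% of premium per year
annual_uplift_rate = extra_profit / biannual_premium / 2

large_annual_book = 1_000_000_000  # £1bn of premium written each year
implied_annual_gain = annual_uplift_rate * large_annual_book
# implied_annual_gain is £44m, the figure quoted in the text
```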

To put it even more directly, an increase of £10m a year in profits would quickly swamp the cost of a BI/DW programme, delivering very substantial net benefits. These are payback ratios that most IT managers can only dream of.

As an aside, it may have occurred to readers that the mechanistic rule is actually rather good and – if so – why exactly do we need the underwriters? Taking to one side examples of solely rule-based decision-making going somewhat awry (LTCM anyone?) the human angle is often necessary in messy things like business acquisition and maintaining relationships. Maybe because of this, very few insurance organisations are relying on rules to take all decisions. However it is increasingly common for rules to play some role in their overall approach. This is likely to take the form of triage of some sort. For example:

  1. A rule – maybe not much more sophisticated than the one I describe above – is established and run over policies before renewal.
  2. This is used to score policies as maybe having green, amber or red lights associated with them.
  3. Green policies may be automatically renewed with no intervention from human staff.
  4. Amber policies may be looked at by junior staff, who may either OK the renewal if they satisfy themselves that the issues picked up are minor, or refer it to more senior and experienced colleagues if they remain concerned.
  5. Red policies go straight to the most experienced staff for their close attention.
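The triage flow above might look like this in code. The loss-ratio thresholds are entirely hypothetical; in practice the scoring rule would be calibrated to the book in question:

```python
# A sketch of the renewal triage described above: score each policy
# before renewal and route it by colour. Threshold values are invented.

def triage(loss_ratio: float) -> str:
    """Score a policy green / amber / red from its historical loss ratio."""
    if loss_ratio < 0.50:
        return "green"   # auto-renew, no human intervention
    if loss_ratio < 0.80:
        return "amber"   # junior staff review, escalate if concerned
    return "red"         # straight to the most experienced underwriters

def route(policy_id: str, loss_ratio: float) -> str:
    """Describe the action the triage colour implies for one policy."""
    colour = triage(loss_ratio)
    actions = {
        "green": "renew automatically",
        "amber": "refer to junior staff for review",
        "red": "refer to senior underwriters",
    }
    return f"{policy_id}: {colour} -> {actions[colour]}"
```

This is where the process efficiencies mentioned below come from: the cheap path (green) is fully automated, and expensive expert time is reserved for the red cases.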

In this way process efficiencies are gained. Staff time is only applied where it is necessary and the most expensive resources are applied to those cases that most merit their abilities.


[Image: From the webcomic of the inimitable Randall Munroe - his mouse-over text is a lot better than mine BTW]

Let’s pause for a moment and consider the Insurance example a little more closely. What has actually happened? Well we seem to have established that performance of policies in 2006 – 2008 is at least a reasonable predictor of performance of the same policies in 2009 – 2010. Taking the mutual fund vendors’ constant reminder that past performance does not indicate future performance to one side, what does this actually mean?

What we have done is to establish a loose correlation between 2006 – 2008 and 2009 – 2010 loss ratios. But I also mentioned a while back that I had fabricated the figures, so how does that work? In the same section, I also said that the figures contained an intentional bias. I didn’t adjust my figures to make the year-on-year comparison work out. However, at the policy level, I was guilty of making the numbers look like the type of results that I have seen with real policies (albeit of a specific type). Hopefully I was reasonably realistic about this. If every policy that was bad in 2006 – 2008 continued in exactly the same vein in 2009 – 2010 (and vice versa) then my good segment would have dropped from an overall loss ratio of 54% to considerably less than 40%. The actual distribution of losses is representative of real Insurance portfolios that I have analysed. It is worth noting that only a small bias towards policies that start bad continuing to be bad is enough for our rule to work and profits to be improved. Close scrutiny of the list of policies will reveal that I intentionally introduced several counter-examples to our rule; good business going bad and vice versa. This is just as it would be in a real book of business.

[Image: Not strongly correlated]

Rather than continuing to justify my methodology, I’ll make two statements:

  1. I have carried out the above sort of analysis on multiple books of Insurance business and come up with comparable results; sometimes the implied benefit is greater, sometimes it is less, but it has been there without exception (of course statistics being what it is, if I did the analysis frequently enough I would find just such an exception!).
  2. More mathematically speaking, the actual figure for the correlation between the two sets of years is a less than stellar 0.44. Of course a figure of 1 (or indeed -1) would imply total correlation, and one of 0 would imply a complete lack of correlation, so I am not working with doctored figures. Even a very mild correlation in data sets (one much less than the threshold for establishing statistical dependence) can still yield a significant impact on profit.
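The second point can be illustrated with a small simulation. The data here is entirely fabricated, in the same spirit as the article's own example, and the parameters are chosen so that the early/late correlation lands near the 0.44 mentioned above:

```python
import random

# Simulate policies whose loss ratios have a persistent underlying
# component plus period-specific noise, giving only a loose correlation
# between the earlier and later periods. All figures are invented.

random.seed(42)

def make_policy():
    underlying = random.gauss(0.55, 0.25)                    # persistent quality
    early = max(0.0, underlying + random.gauss(0.0, 0.30))   # noisy early view
    late = max(0.0, underlying + random.gauss(0.0, 0.30))    # noisy later view
    return early, late

book = [make_policy() for _ in range(2000)]

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    sx = (sum((x - mx) ** 2 for x in xs) / len(xs)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / len(ys)) ** 0.5
    return cov / (sx * sy)

early_lrs = [e for e, _ in book]
late_lrs = [l for _, l in book]

corr = pearson(early_lrs, late_lrs)            # loose correlation, roughly 0.4
whole = mean(late_lrs)                         # later loss ratio, whole book
kept = mean([l for e, l in book if e < 0.55])  # rule: keep good early performers
# Even with this weak correlation, the kept segment's later loss ratio
# comes in clearly below the whole book's.
```

The point is not the specific numbers but the shape of the result: a correlation far from 1 is still enough for a crude selection rule to move the later-period loss ratio in the right direction.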

Closing thoughts

[Image: Ground floor: Perfumery, Stationery and leather goods, Wigs and haberdashery, Kitchenware and food…. Going up!]

Having gone into a lot of detail over the course of these three articles, I wanted to step back and assess what we have covered. Although the worked example was drawn from my experience in Insurance, there are some generic learnings to be drawn.

Broadly I hope that I have shown that – at least in Insurance, but I would argue with wider applicability – it is possible to use the past to infer what actions we should take in the future. By a slight tweak of timeframes, we can even take some steps to validate approaches suggested by our information. It is important that we remember that the type of basic analysis I have carried out is not guaranteed to work. The same can be said of the most advanced statistical models; both will give you some indication of what may happen and how likely this is to occur, but neither of them is foolproof. However, either of these approaches has more chance of being valuable than, for example, solely applying instinct, or making decisions at random.

In Patterns, patterns everywhere, I wrote about the dangers associated with making predictions about events that are essentially unpredictable. This is another caveat to be borne in mind. However, to balance this it is worth reiterating that even partial correlation can lead to establishing rules (or more sophisticated models) that can have a very positive impact.

While any approach based on analysis or statistics will have challenges and need careful treatment, I hope that my example shows that the option of doing nothing, of continuing to do things how they have been done before, is often fraught with even more problems. In the case of Insurance at least – and I suspect in many other industries – the risks associated with using historical data to make predictions about the future are, in my opinion, outweighed by the risks of not doing this; on average of course!

But then 1=2 for very large values of 1

Using historical data to justify BI investments – Part II

The earliest recorded surd

This article is the second in what has now expanded from a two-part series to a three-part one. This started with Using historical data to justify BI investments – Part I and finishes with Using historical data to justify BI investments – Part III (once again exhibiting my talent for selecting buzzy blog post titles).
Introduction and some belated acknowledgements

The intent of these three pieces is to present a fairly simple technique by which existing, historical data can be used to provide one element of the justification for a Business Intelligence / Data Warehousing programme. Although the specific example I will cover applies to Insurance (and indeed I spent much of the previous, introductory segment discussing some Insurance-specific concepts which are referred to below), my hope is that readers from other sectors (or whose work crosses multiple sectors) will be able to gain something from what I write. My learnings from this period of my career have certainly informed my subsequent work and I will touch on more general issues in the third and final section.

This second piece will focus on the actual insurance example. The third will relate the example to justifying BI/DW programmes and, as mentioned above, also consider the area more generally.

Before starting on this second instalment in earnest, I wanted to pause and mention a couple of things. At the beginning of the last article, I referenced one reason for me choosing to put fingertip to keyboard now, namely me briefly referring to my work in this area in my interview with Microsoft’s Bruno Aziza (@brunoaziza). There were a couple of other drivers, which I feel rather remiss to have not mentioned earlier.

First, James Taylor (@jamet123) recently published his own series of articles about the use of BI in Insurance. I have browsed these and fully intend to go back and read them more carefully in the near future. I respect James and his thoughts brought some of my own Insurance experiences to the fore of my mind.

Second, I recently posted some reflections on my presentation at the IRM MDM / Data Governance seminar. These focussed on one issue that was highlighted in the post-presentation discussion. The approach to justifying BI/DW investments that I will outline shortly also came up during these conversations and this fact provided additional impetus for me to share my ideas more widely.
Winners and losers

Before him all the nations will be gathered, and he will separate them one from another, as a shepherd separates the sheep from the goats

The main concept that I will look to explain is based on dividing sheep from goats. The idea is to look at a set of policies that make up a book of insurance business and determine whether there is some simple factor that can be used to predict their performance and split them into good and bad segments.

In order to do this, it is necessary to select policies that have the following characteristics:

  1. Having been continuously renewed so that they at least cover a contiguous five-year period (policies that have been “in force” for five years in Insurance parlance).

    The reason for this is that we are going to divide this five-year term into two pieces (the first three and the final two years) and treat these differently.

  2. Ideally with the above mentioned five-year period terminating in the most recent complete year – at the time of writing 2010.

    This is so that the associated loss ratios better reflect current market conditions.

  3. Being short-tail policies.

    I explained this concept last time round. Short-tail policies (or lines of business) are ones in which any claims are highly likely to be reported as soon as they occur (for example property or accident insurance).

    These policies tend to have a low contribution from IBNR (again see the previous piece for a definition). In practice this means that we can use the simplest of the Insurance ratios, paid loss-ratio (i.e. simply Claims divided by Premium), with some confidence that it will capture most of the losses that will be attached to the policy, even if we are talking about say 2010.

    Another way of looking at this is that (borrowing an idea discussed last time round) for this type of policy the Underwriting Year and Calendar Year treatments are closer than in areas where claims may be reported many years after the policy was in force.
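The three criteria above translate into a simple filter. Here is a minimal Python sketch of that selection; the policy record structure (a tail field and a years_in_force collection) is entirely my invention for illustration:

```python
def select_policies(policies, end_year=2010, span=5):
    """Keep short-tail policies in force for the full contiguous period."""
    wanted_years = set(range(end_year - span + 1, end_year + 1))  # 2006-2010
    return [
        p for p in policies
        if p["tail"] == "short" and wanted_years <= set(p["years_in_force"])
    ]

# Invented examples: only policy 1 satisfies all three criteria
policies = [
    {"id": 1, "tail": "short", "years_in_force": range(2005, 2011)},
    {"id": 2, "tail": "long",  "years_in_force": range(2006, 2011)},
    {"id": 3, "tail": "short", "years_in_force": range(2008, 2011)},
]
print([p["id"] for p in select_policies(policies)])  # [1]
```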

Before proceeding further, it perhaps helps to make things more concrete. To achieve this, you can download a spreadsheet containing a sample set of Insurance policies, together with their premiums and losses over a five-year period from 2006 to 2010 by clicking here (this is in Office 97-2003 format – if you would prefer, there is also a PDF version available here). Hopefully you will be able to follow my logic from the text alone, but the figures may help.

A few comments about the spreadsheet. First, these are entirely fabricated policies and are not even loosely based on any data set that I have worked with before. Second, I have also adopted a number of simplifications:

  1. There are only 50 policies, normally many thousand would be examined.
  2. Each policy has the same annual premium – £10,000 (I am British!) – and this premium does not change over the five years being considered. In reality these would vary immensely according to changes in cover and the insurer’s pricing strategy.
  3. I have entirely omitted dates. In practice not every policy will fit neatly into a year and account will normally need to be taken of this fact.
  4. Given that this is a fabricated dataset, the claims activity has not been generated randomly. Instead I have simply selected values (though I did perform a retrospective sense check as to their distribution). While this example is not meant to 100% reflect reality, there is an intentional bias in the figures; one that I will come back to later.

The sheet also calculates the policy paid loss ratio for each year, and figures for the whole portfolio appear at the bottom. While the in-year performance of any particular policy can gyrate considerably, it may be seen from the aggregate figures that the overall performance of this rather small book of business is relatively consistent:

Year Paid Loss Ratio
2006 53%
2007 59%
2008 54%
2009 53%
2010 54%
Total 54%
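The figures above are simply paid claims divided by premium. A small Python sketch of the calculation follows; note that the claims totals are hypothetical stand-ins that happen to reproduce the same rounded ratios, not the actual spreadsheet values:

```python
PREMIUM_PER_POLICY = 10_000
POLICIES = 50
annual_premium = PREMIUM_PER_POLICY * POLICIES  # £500,000 per year

# Hypothetical total paid claims per year (stand-ins for the spreadsheet figures)
claims = {2006: 263_000, 2007: 293_000, 2008: 268_000, 2009: 263_000, 2010: 268_000}

# Paid loss ratio = claims / premium, per year and for the whole period
for year, paid in claims.items():
    print(year, f"{paid / annual_premium:.0%}")

total_ratio = sum(claims.values()) / (annual_premium * len(claims))
print("Total", f"{total_ratio:.0%}")
```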

Above I mentioned looking at the five years in two parts. At least metaphorically we are going to use our right hand to cover the results from years 2009 and 2010 and focus on the first three years on the left. Later – after we have established a hypothesis based on 2006 to 2008 results – we can lift our hand and check how we did against the “real” figures.

For the purposes of this illustration, I want to choose a rather mechanistic way to differentiate business that has performed well and badly. In doing this I have to remember that a policy may have a single major loss one year and then run free of losses for the next 20. If I was simply to say any policy with a large loss is bad, I am potentially drastically and unnecessarily culling my book (and also closing the stable door after the horse has bolted). Instead we need to develop a rule that takes this into account.

In thinking about overall profitability, while we have greatly reduced the impact of both reported but unpaid claims and IBNR by virtue of picking a short-tail business, it might be prudent to make say a 5% allowance for these. If we also assume an expense ratio of 35%, then we have a total of non-underwriting-related outgoings of 40%. This means that we can afford to have a paid loss ratio of up to 60% (100% – 40%) and still turn a profit.

Using this insight, my simple rule is as follows:

A policy will be tagged as “bad” if two things occur:

  1. The overall three-year loss ratio is in excess of 60%

    i.e. it has been unprofitable over this period; and

  2. The loss ratio is in excess of 30% in at least two of the three years

    i.e. there is a sustained element to the poor performance and not just the one-off bad luck that can hit the best underwritten of policies
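Expressed in code, the rule is only a few lines. This Python sketch assumes, as in the example spreadsheet, that each policy’s premium is the same every year, so the overall three-year ratio is just the average of the yearly ones:

```python
def is_bad(yearly_loss_ratios):
    """Tag a policy as 'bad': unprofitable overall (three-year ratio > 60%)
    AND poor (ratio > 30%) in at least two of the three years."""
    overall = sum(yearly_loss_ratios) / len(yearly_loss_ratios)
    poor_years = sum(1 for r in yearly_loss_ratios if r > 0.30)
    return overall > 0.60 and poor_years >= 2

# A sustained poor performer is flagged...
print(is_bad([0.70, 0.45, 0.80]))  # True
# ...but a single catastrophic year alone is not
print(is_bad([0.05, 1.90, 0.10]))  # False
```

The second example shows the point of the two-part test: one large loss makes the policy unprofitable overall, but without a sustained element it is not culled.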

This rule roughly splits the book 75 / 25, with 74% of policies being good. Other choices of parameters may result in other splits and it would be advisable to spend a little time optimising things. Perhaps 26% of policies being flagged as bad is too aggressive for example (though this rather depends on what you do about them – see below). However, in the simpler world of this example, I’ll press on to the next stage with my first pick.

The ultimate sense of perspective

Well all we have done so far is to tag policies that have performed badly – in the parlance of Analytics zealots we are being backward-looking. Now it is time to lift our hand on 2009 to 2010 and try to be forward-looking. While these figures are obviously also backward looking (the day that someone comes up with future data I will eat my hat), from the frame of reference of our experimental perspective (sitting at the close of 2008), they can be thought of as “the future back then”. We will use the actual performance of the policies in 2009 – 2010 to validate our choice of good and bad that was based on 2006 – 2008 results.

Overall the 50 policies had a loss ratio of 54% in 2009 – 2010. However those flagged as bad in our above exercise had a subsequent loss ratio of 92%. Those flagged as good had a subsequent loss ratio of 40%. The latter is a 14 point improvement on the overall performance of the book.
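The validation itself is nothing more exotic than re-computing paid loss ratios for each segment over the held-out years. A Python sketch, with hypothetical segment totals chosen to match the ratios quoted above:

```python
# Hypothetical 2009-2010 premium and paid claims per segment (illustrative)
segments = {
    "bad":  {"premium": 260_000, "claims": 239_200},  # 26% of the book
    "good": {"premium": 740_000, "claims": 296_000},  # 74% of the book
}

# Held-out paid loss ratio per segment
for name, s in segments.items():
    print(name, f"{s['claims'] / s['premium']:.0%}")

# Whole-book ratio over the same held-out period
overall = (sum(s["claims"] for s in segments.values())
           / sum(s["premium"] for s in segments.values()))
print("overall", f"{overall:.0%}")
```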

So we can say with some certainty that our rule, though simplistic, has produced some interesting results. The third part of this series will focus more closely on why this has worked. For now, let’s consider what actions the split we have established could drive.
What to do with the bad?

You shall be taken to the place from whence you came...

We were running a 54% paid ratio in 2009-2010. Using the same assumptions as above, this might have equated to a 94% combined ratio. Our book of business had an annual premium of £0.5m so we received £1m over the two years. The 94% combined would have implied making a £60k profit if we had done nothing different. So what might have happened if we had done something?

There are a number of options. The most radical of these would have been to not renew any of the bad policies; to have carried out a cull. Let us consider what would have been the impact of such an approach. Well, our book of business would have shrunk to £740k over the two years at a combined ratio of 40% (the loss ratio of the good book) + 40% (other outgoings) = 80%, which implies a profit of £148k, up £88k. However there are reasons why we might not have wanted to shrink our business so drastically. A smaller pot of money for investment purposes might have been one. Also we might have had customers with policies in both the good and bad segments and it might have been tricky to cancel the bad while retaining the good. And so on…
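The arithmetic behind the do-nothing and cull scenarios is worth setting out explicitly. In this Python sketch the 40% of other outgoings is the 35% expense ratio plus the 5% allowance for unpaid claims and IBNR discussed earlier:

```python
premium_two_years = 1_000_000  # £0.5m per year over 2009-2010
other_outgoings = 0.40         # 35% expenses + 5% unpaid-claims/IBNR allowance

# Do nothing: the whole book ran at a 54% paid loss ratio (94% combined)
profit_as_is = premium_two_years * (1 - (0.54 + other_outgoings))
print(f"£{profit_as_is:,.0f}")  # £60,000

# Cull the bad 26%: a smaller book, but at the good segment's 40% loss ratio
premium_after_cull = premium_two_years * 0.74
profit_after_cull = premium_after_cull * (1 - (0.40 + other_outgoings))
print(f"£{profit_after_cull:,.0f}")  # £148,000
```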

Another option would have been to have refined our rule to catch fewer policies. Inevitably, however, this would have reduced the positive impact on profits.

At the other extreme, we might have chosen to take less drastic action relating to the bad policies. This could have included increasing the premium we charged (which of course could also have resulted in us losing the business, but via the insured’s choice), raising the deductible payable on any losses, or looking to work with insureds to put in place better risk management processes. Let’s be conservative and say that if the bad book was running at 92% and the overall book at 54%, then perhaps it would have been feasible to improve the bad book’s performance to a neutral figure of say 60% (implying a break-even combined ratio of 100%). This would have enabled the insurance organisation to maintain its investment base, to have not lost good business as a result of culling related bad policies, and to have preserved the profit increase that a cull would have generated.

In practice of course it is likely that some sort of mixed approach would have been taken. The general point is that we have been able to come up with a simple strategy to separate good and bad business and then been able to validate how accurate our choices were. If, in the future, we possessed similar information, then there is ample scope for better decisions to be taken, with potentially positive impact on profits.
Next time…

In the final part of what is now a trilogy, I will look more deeply at what we have learnt from the above example, tie these learnings into how to pitch a BI/DW programme in Insurance and make some more general observations.

Using historical data to justify BI investments – Part I

The earliest recorded surd

This is the first of what was originally a two part piece that has now expanded into three. In the initial chapter, I provide some background on Insurance industry concepts and practices. These are built on in the second chapter (Using historical data to justify BI investments – Part II), in which I offer an Insurance-based worked example. In the final piece, which is cunningly named Part III, I will explain how such an approach to analysing historical data can be used to justify BI investments.

Readers who are already au fait with insurance may choose to wait for the next instalment.

Quite some time ago, when I wrote Measuring the Benefits of Business Intelligence, I mentioned that, in some circumstances, I had been able to leverage historical data (is there any other kind?) to justify Business Intelligence investments. I briefly touched on this area in my recent interview with Microsoft’s Bruno Aziza (@brunoaziza) and thought that it was well past time that I wrote more fully on the topic.

My general approach applies where there are periodic decisions to be made about a business relationship and where how that relationship has performed in the past informs these decisions. These criteria particularly pertain to the industry in which I ran my first BI / DW project; commercial property and casualty insurance. While I hope that users from other sectors may be able to extrapolate my example to apply to them, it is to insurance that I will turn to explain what I did.
An insurance primer

I have always wanted to launch a '[...] for Pacifiers' series in the US

My previous article, The Specific Benefits of Business Intelligence in Insurance, starts with a widely used and pig-related (no typo) explanation of how insurance works, both for the insurer and the insured. I won’t repeat this here, but if you are unfamiliar with the area I recommend taking a look at it first.

Although of course there are exceptions (event related insurance for example), many commercial insurance policies – just like those that most of us purchase in our personal lives to cover cars and property – have an annual term after which either party can decide whether or not to renew the cover. At renewal, as in the pig example, the insurer will first of all want to assess whether or not they have received more money than they have paid out over the past year. However, the entire point of insurance is that sometimes an event occurs which requires the insurer to give the insured a sum in excess of the premium that they have paid in a given year (or indeed over many years). The insurer is therefore less interested in whether a particular year has been bad – from their perspective – than whether the overall relationship has been, or will become, bad. Perhaps I am over simplifying, but if in most years the insurer pays out less in settling claims than they receive in premium (or ideally there are no claims at all) and if one bad year’s claims are unlikely to negate the benefits accrued in the normal years, then this is good business for the insurer.
Some rational comments

The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift

I have bandied about a number of rather woolly concepts in the previous section, which include: how much money the insurer has paid out and how much they have taken in. Of course these things tend to be more complicated. On the simpler side of the equation, broadly speaking, money coming in is from the insurance premiums paid by customers (but see also the box appearing below).

Investment income

Some insurers are actually relatively relaxed about paying out more in claims than they receive in premium over the life of a policy. This is because of timing differences. So long as the claims are settled some time after the premium is received and so long as there are relatively lucrative investment opportunities (remember those?), it may be that the investment income that the insurer can generate while it has use of the insured’s premium will more than compensate for what might be termed an operating loss on the policy. Equally, some insurers will have the business goal of – at least in aggregate – always having premiums exceed claims and thus making a profit on their core underwriting activities. In this case any investment income is added to the underwriting-related profits, rather than compensating for underwriting-related losses. I won’t complicate this article any further by including investment income, but it is a factor in the profitability of insurance companies.

Equally broadly speaking, money going out is normally in six categories:

  1. settlement of claims – often referred to as case payments
  2. claims adjusters’ estimates for the settlement of specific claims that have been notified to the insurer, but not as yet paid – often referred to as case reserves
  3. actuarial estimates of insurance events that have occurred, but which have not yet been reported to the insurer – generally known as incurred but not reported losses, or IBNR (more on this later)
  4. fees paid to insurance intermediaries for placing their clients’ business with the carrier – commission
  5. premiums paid to other organisations to transfer some of the risk associated with specific policies, or baskets of types of policies – facultative or treaty reinsurance
  6. the general expense of being in business (staff, premises, consumables, equipment, IT, advertising, uncollectable premiums etc.)

In the cause of clarity, I will lump commission, reinsurance and the general expense of being in business into Other Expenses for what follows. However please bear in mind that, as is often the case in life, things are not as simple as I will make them out to be.

Rather than dealing in monetary units, insurance companies like percentages; though they then insist on referring to these as ratios. Taking the above categories of money flowing in and out of an insurance company, the main ratios that they consider are then:

Insurance Ratios
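Paraphrasing the definitions used in this series (and keeping my earlier simplification of lumping commission, reinsurance and general expenses together as Other Expenses), the key ratios for what follows can be sketched in Python as:

```python
def paid_loss_ratio(paid_claims, premium):
    """Paid claims divided by premium - the simplest Insurance ratio."""
    return paid_claims / premium

def expense_ratio(other_expenses, premium):
    """Other Expenses (commission, reinsurance, general expenses) / premium."""
    return other_expenses / premium

def combined_ratio(paid_claims, other_expenses, premium):
    """Loss ratio plus expense ratio; below 100% implies an underwriting
    profit (ignoring investment income)."""
    return (paid_loss_ratio(paid_claims, premium)
            + expense_ratio(other_expenses, premium))

# e.g. a 54% loss ratio and 40% of other outgoings give a 94% combined ratio
print(f"{combined_ratio(540, 400, 1_000):.0%}")  # 94%
```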
Incurred but not reported

Not sure whether the Nixon administration set up any Watergate-related reserves

This concept requires a short diversion as later on I will exclude it from our discussions and will need to explain why. There are some interesting time lags in insurance. Take the sad case of asbestosis (also mentioned in my previous article). Here those unfortunately exposed developed symptoms of the disease in some cases many years later. However if their exposure was in say 1972, they would be covered by whatever Employers Liability policy their organisation held or whatever personal policy they held in the case of the self-employed. An asbestosis sufferer may have changed insurance company ten times since their exposure, but it is the insurance company who provided cover at the time who is liable for any claims.

Rather than waiting for such claims to emerge, insurance companies follow the best practice of recognising liabilities at the earliest point. Because of this, they set up estimated reserves for claims that they may receive in future years (or decades) and apply these to the year in which the policy was in force. Of course in some lines of business, say Property cover, most claims are reported as soon as they occur and so IBNR reserves are low. However in others, say Directors and Officers Liability, or the Employers Liability mentioned above, claims may arise many years hence and IBNR can be a big factor in results.

It should be stressed that IBNR is seldom calculated for a single policy (though it is conceivable that this would happen on a very large risk). Instead it is estimated for classes of policies, often grouped into lines of business, and the same “rate” of IBNR is applied across the board. Of course IBNR is calculated based on the experience of losses in the same baskets of policies in previous years, adjusted to take account of current differences (e.g. more or less favourable economic conditions for Directors and Officers Liability, or maybe rising or falling property indices for Property).
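Mechanically, applying a class-level IBNR “rate” across the board might look like the following Python sketch; the rates and the line-of-business groupings are purely illustrative inventions, not actuarial figures:

```python
# Illustrative IBNR rates (as a proportion of premium) per line of business
ibnr_rates = {
    "Property": 0.05,             # short-tail: low IBNR
    "Employers Liability": 0.35,  # long-tail: high IBNR
    "D&O": 0.40,                  # long-tail: high IBNR
}

def ibnr_reserve(line_of_business, premium):
    """Estimated IBNR for a policy: the class rate applied to its premium."""
    return ibnr_rates[line_of_business] * premium

print(ibnr_reserve("Property", 10_000))
print(ibnr_reserve("Employers Liability", 10_000))
```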

For reasons that are probably obvious, lines of business where most claims are promptly reported (i.e. low IBNR) are called short-tail lines. Those where claims may emerge some time after the period covered by the policy (i.e. high IBNR) are called long-tail lines. Later on I will be focussing just on short-tail business.

[Incidentally, improving this process of estimation is one of the specific benefits of Business Intelligence in insurance that I highlighted in my previous article.]
Underwriting Year

Fundamental particles of the Underwriting Year

Something else may have occurred to readers when considering the time lags that I reference in the previous section, namely that while a policy may last from say 1st January 2006 to 31st December 2006, claims against this may occur either during this period, or after it. The financial statements of an insurance company will place claims in the period that they are notified or settled. So in the above example, a claim paid on 23rd April 2008 (assuming the financial and calendar years coincide) will be reflected in the 2008 report and accounts.

However it is often useful for analysis purposes to lump together all of the claims relating to a policy and associate these with the year in which it was written. Again in our example this would mean our 23rd April 2008 claim would be recorded in the Underwriting Year of 2006. So an Underwriting Year report comparing 2006 and 2007 say would have the premium for all policies written in 2006 and all the claims against these policies – regardless of when they occur – compared to the premium for 2007 and all the claims against these policies, whenever they occur.

Because of this, Underwriting Year reports provide a good measure of the performance of policies (or books of business) over time, regardless of how associated losses are dispersed. By contrast Calendar Year (i.e. financial) reports will often have premium from policies written in say 2010 combined with losses from policies written in say 2000 – 2010.
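The two views can be made concrete with a toy aggregation: each claim carries both the year of the policy it attaches to and the year in which it was paid, and the two reports simply group by different keys (all figures invented):

```python
from collections import defaultdict

# (underwriting_year, calendar_year_paid, amount) - invented claims
claims = [
    (2006, 2006, 1_000),
    (2006, 2008, 4_000),  # the 23rd April 2008 claim from the text
    (2007, 2007, 2_500),
    (2007, 2009, 1_500),
]

by_uw_year, by_cal_year = defaultdict(int), defaultdict(int)
for uw_year, cal_year, amount in claims:
    by_uw_year[uw_year] += amount    # Underwriting Year view
    by_cal_year[cal_year] += amount  # Calendar Year (financial) view

print(dict(by_uw_year))   # {2006: 5000, 2007: 4000}
print(dict(by_cal_year))  # {2006: 1000, 2008: 4000, 2007: 2500, 2009: 1500}
```

Note how the 2008 claim is attributed to 2006 in the Underwriting Year view, because that is when the policy it relates to was written.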
Tune in next time…

BBC ANNOUNCER: Tune in to the next exciting instalment of... CAST: Dick Barton, Special Agent!

Having laid some foundations, in the next article I will draw on the various concepts that I have introduced above to offer a worked example. In the closing chapter, I will explain how I used such an example to justify a major, multi-year Business Intelligence / Data Warehousing programme within the insurance industry.

An informed decision

Caterham 7 vs Data Warehouse appliance - spot the difference

A friend and fellow information professional is currently responsible for both building a new data warehouse and supporting its predecessor, which is based on a different technology platform. In these times of ever-increasing focus on costs, she had been asked to port the old warehouse to the new platform, thereby avoiding some licensing payments. She asked me what I thought about this idea and we chatted for a while. For some reason, our conversation went off at a bit of a tangent and I started to tell her the story of an acquaintance of mine and his recent sad experiences.


My acquaintance, let’s call him Jim to avoid causing any embarrassment, had always been interested in cars; driving them, maintaining them, souping them up, endlessly reading car magazines and so on. His dream had always been to build his own car and his eye had always been on a Caterham kit. I suppose for him the pleasure of making a car was at least as great as, if not greater than, the pleasure of driving one.

It's just like Lego

Jim saved his pennies and eventually got together enough cash to embark on his dream project. Having invested his money, he started to also invest his time and effort. However, after a few weeks of toil, he hit a snag. It was nothing to do with his slowly emerging Caterham, but to do with the more quotidian car he used for his daily commute to work. Its engine had developed a couple of niggles that had been resistant to his own attempts to fix them and he had reluctantly decided that it was in need of some new parts and quite expensive ones at that. Jim had already spent quite a bit of cash on the Caterham and more on some new tools that he needed to assemble it. The last thing he wanted to do now was to have a major outlay on his old car; particularly because, once the Caterham was finished, he had planned to trade it for its scrap-metal worth.

But now things got worse: Jim’s current car failed its MOT (vehicle safety inspection for any non-UK readers) because the faulty engine did not meet emission standards. However, one of his friends came up with a potential solution. He said, “As you have already assembled the Caterham engine, why not put this into your current car and use this instead? You can then swap it out into the Caterham chassis and body when you have built this.”

Headless Jim - with cropped face to protect his anonymity

This sounded like a great idea to Jim, but there were some issues with it. His Caterham was supplied with a Cosworth-developed 2.3-litre Ford Duratec engine. This four-cylinder twin-cam unit was the wrong size and shape to fit into the cavity left by removing the worn-out engine from his commuting car. Well, as I had mentioned at the start, Jim was a pretty competent amateur mechanic and he thought that he had a good chance of rising to the challenge. He was motivated by the thought of not having to shell out extra cash and in any case he loved tinkering with cars.

So he put in some new brackets to hold the Caterham engine. He then had to grind down a couple of protruding pieces of the Duratec block to gain the extra 5 mm necessary to squeeze it in. The fuel feeds were in the wrong place, but a bit of plumbing and that was also sorted. Perhaps this might cause an issue with the efficiency of the engine’s burn cycle, but Jim figured that it would probably be OK. Next, the vibration dampers were not really up to the job of dealing with the more powerful engine and neither was the exhaust system. No worries, thought Jim: a tap of a hammer here, a bend of a pipe there, and he could also add in a couple of components that had been sitting at the back of his garage rusting for years as well. Eventually everything seemed fine.

Jim ventured out of his garage in his old car, with its new engine. He was initially a bit trepidatious, but his work seemed to be hanging together. Sure, the car was making a bit of a noise, shaking a bit and the oil temperature seemed a bit high, but Jim felt that these were only minor problems. He told himself that all his handiwork had to do was to hang together for a few more months until he finished the rest of the Caterham.

Angular momentum = Sum over i : Ri x mi x Vi

With these nice thoughts in mind, Jim approached a bend. The car flew off the road at a tangent as he realised – too late – that he had been travelling at Caterham speeds into the corner and didn’t have a Caterham chassis, a Caterham suspension, or Caterham brakes. His old car was not up to dealing with the forces created in the turn. His tyres failed to grip and, after what seemed like an eternity of slow-motion spinning and screeching and panic, he found himself in a ditch; healthy, but with a wheel sheared off and smoke coming out of the front of the car. A later inspection confirmed that his commuting car was a write-off, and his insurance policy didn’t fully cover the cost of a new vehicle.

Jim ended up having to buy another day-to-day car, which delayed him from spending the additional money necessary to get the Caterham on the road for quite some time. However, after scrimping and saving for a while, he eventually got back to his dream project, only to find that the combination of the modifications he had made to the Duratec engine and the after-effects of the crash meant that it was now useless and he needed to purchase a replacement.

So, because Jim didn’t want to run to the expense of maintaining his old car while he built his new one, he instead had to buy a new temporary car plus a new engine for the Caterham. Jim was just as far off from finishing the Caterham as when he had started, despite wasting a lot of time and money along the way. A very sad story.


Suddenly I realised that I had been wittering on about a wholly unrelated subject to my friend’s data warehousing problem. I apologised and turned the conversation back to this. To my astonishment, she told me that she had already made up her mind. I suppose she had taken advantage of the length of time I had spent telling Jim’s story to more profitably weigh the pros and cons of different approaches in her mind and thereby had reached her decision. Anyway, she thanked me for my help, I protested that I hadn’t really offered her any and we each went our separate ways.

I found out later she had decided to pay the maintenance costs on the old data warehouse.

I would like to apologise in advance if anyone at Caterham, Cosworth, Ford, or indeed Peugeot, takes offence to any of the content of the above story or its illustrations. I’m sure that you make very fine products and this article isn’t really about any of them.

Some thoughts on the IRM(UK) DW/BI conference

As previously advertised, I presented at the recent IRM(UK) DW/BI seminar in London. As a speaker I was entitled to attend the full three days but, as is typically the case, other work commitments meant that I only went along on the day of my session, 4th November. A mixture of running into business acquaintances, making sure that audio/visual facilities worked and last-minute run-throughs of my slides all conspired to ensure that I was able to listen to fewer talks than I would have liked. In comparing notes with other speakers, it was generally the same for them. Maybe I should consider attending a seminar as a delegate sometime!

Nevertheless, I did get along to some presentations and also managed to finally meet Dylan Jones (@DataQualityPro) in person after running into each other virtually for years. Unfortunately, I also managed to fail to connect with a number of tweeps of my acquaintance including: Loretta Mahon Smith (@silverdata) – who even attended my talk without us bumping into each other – and Scott Davis (@scottatlyzasoft); I guess that is just how it goes with seminars sometimes.
Story-telling and Information Quality

Ma mère l'oye by Gustave Doré (for the avoidance of doubt, I'm not saying that Lori is Mother Goose)

At face value these may seem odd bedfellows. However, Lori Silverman of Partners for Progress managed to intertwine the two effectively. This was despite being handicapped by an attack of laryngitis that meant that her already somewhat nasal tones from time to time morphed into a shriek. Sitting as I was directly beside a loudspeaker, I felt some initial discomfort and even considered departing for a less auricularly challenged part of the conference centre. However, I was glad that I decided to tough it out because Lori turned out to be a very entertaining, engaging and insightful speaker. I won’t steal her thunder by revealing her main thesis and instead suggest that you try to catch her speaking at some future point; she is well worth listening to in my opinion.
Open Source BI makes headway in the Irish Government sector

Jaspersoft and System Dynamics

I next attended a presentation by leading open source BI company Jaspersoft. This was kicked off by their CEO Brian Gentile, who then introduced a case study about an Irish Government department rolling out the company’s products. The implementer was System Dynamics, Ireland’s largest indigenous IT business solutions company*.

System Dynamics CEO Tony McGuire and BI Team Lead Emmet Burke both spoke about this recent project, which covered 500+ users. Open source has traditionally had something of a challenge establishing a foothold in the public sector. The assertion made in this session was that the current fiscal challenges faced by the Irish Republic meant that it was becoming an option they were giving greater credence to. I guess, as with many areas of open source applications, it is probably a case of waiting to see whether a trend establishes itself.

John Taylor of Information Builders was speaking in the room that would next host my session and so I was able to catch the last 15 minutes of his presentation on Information Management, which seemed to have been well-attended and well-received.
Measuring the benefits of BI

My presentation occupied the graveyard slot of 4:30pm and I led by saying that I fully realised that all that stood between delegates and the drinks reception was my talk. Given the lateness of the hour, I had been a little concerned about attendance, but I guess that there were at least 50 or so people present. All of them stuck it out to the bitter end, which was gratifying.

There is always a moment of frisson in public speaking when, at the end of the talk, you ask whether there are any questions, with an image of tumbleweed spinning across the prairie in your mind (something that happened to me on one previous occasion a long time ago). Thankfully the audience asked a number of interesting and insightful questions, which I answered to the best of my ability. Indeed I was locked in discussions with a couple of delegates long after the meeting had officially broken up.

Measuring the success of BI - Agenda

In my introduction, I began by issuing my customary caveat about the danger of too blindly following any recipe for success. I then provided some background about my first major achievement in data warehousing and went on to present the general framework for success in BI/DW programmes that I developed as a result of this. In concluding the first part of the speech, I attempted to delineate the main benefits of BI and also touched on some of its limitations.

Having laid these hopefully substantial foundations, the meat of the presentation expanded on ideas I briefly touched on in my earlier article Measuring the Benefits of Business Intelligence. This included highlighting some of the reasons why measuring the impact of BI on, say, profitability can be a challenge, but stressing that this was still often an objective that it was possible to achieve. I also spent some time examining in detail different techniques for quantifying the different tangible and intangible impacts of BI (most of which are covered in the above referenced article).

A sporting analogy by the back-door - England's victory in the 2003 Rugby World Cup, which was clearly inspired by the successful launch of the first phase of the EMIR BI/DW system at Chubb Insurance earlier in the year

My closing thought was that, in situations where it is difficult to precisely assess the monetary impact of BI, the wholehearted endorsement of your business customers can be the best indirect measurement of the success (or otherwise) of your work. I would recommend that fellow BI professionals pay close attention to this important indicator at all stages of their projects.

You can view some of the tweets about IRM(UK) DW/BI here, or here.
Disclosure: At the time of writing, System Dynamics is a business partner, but not in the field of business intelligence.

Another social media-inspired meeting

Lights, camera, action!

Back in June 2009, I wrote an article entitled A first for me. In this I described meeting up with Seth Grimes (@SethGrimes), an acknowledged expert in analytics and someone I had initially “met” via Twitter.

I have vastly expanded my network of international contacts through social media interactions such as these. Indeed I am slated to meet up with a few other people during November; a month in which I have a couple of slots speaking at BI/DW conferences (IRM later this week and Obis Omni towards the end of the month).

Another person that I became a virtual acquaintance of via social media is Bruno Aziza (@brunoaziza), Worldwide Strategy Lead for Business Intelligence at Microsoft. I originally “met” Bruno via Twitter and we later also connected on LinkedIn. Bruno then asked me for my thoughts on his article, Use Business Intelligence To Compete More Effectively, and I turned these into a blog post called BI and competition.

We have kept in touch since then, and last week Bruno asked me to be interviewed on the channel that he is setting up. It was good to meet in person and I thought that we had some interesting discussions. Though I have done video and audio interviews before with organisations like IBM Cognos, Informatica, Computing Magazine and SmartDataCollective (see the foot of this article for links), these were mostly a while back, so it was interesting to be in front of a camera again.

The format seems to be an interesting one, with key points in BI discussed in a focussed and punchy manner (not an approach that I am generally associated with) and a target audience of busy senior IT managers. As I have remarked elsewhere, it is also notable that the more foresighted corporations are now taking social media seriously and getting quite good at engaging without any trace of hard selling; something that perhaps compromised the earlier efforts of some organisations in this area (for the avoidance of doubt, this is a general comment and not one levelled at Microsoft).

Bruno and I touched on a number of areas, including driving improvements in data quality, measuring the value of BI programmes, using historical data to justify BI investments (something that I am overdue writing about – UPDATE: now remedied here) and the cultural change aspect of BI. I am looking forward to seeing the results. Watch this space and, in the meantime, take a look at some of the earlier interviews that Bruno has conducted.


Other video and audio interviews that I have recorded: