Toast

Acrylamide [borrowed from Wikipedia]

Foreword

This blog touches on a wide range of topics, including social media, cultural transformation, general technology and – last but not least – sporting analogies. However, its primary focus has always been on data and information-centric matters in a business context. Having said this, all but the more cursory of readers will have noted the prevalence of pieces with a Mathematical or Scientific bent. To some extent this is a simple reflection of the author’s interests and experience, but a stronger motivation is often to apply learnings from different fields to the business data arena. This article is probably more scientific in subject matter than most, but I will also look to highlight some points pertinent to commerce towards the end.
 
 
Introduction

In Science We Trust?

The topic I want to turn my attention to in this article is public trust in science. This is a subject that has consumed many column inches in recent years. One particular area of focus has been climate science, which, for fairly obvious political reasons, has come in for even more attention than other scientific disciplines of late. It would be distracting to get into the arguments about climate change and humanity’s role in it here [1] and in a sense this is just the latest in a long line of controversies that have somehow become attached to science. An obvious second example here is the misinformation circulating about both the efficacy and side effects of vaccinations [2]. In both of these cases, it seems that at least a sizeable minority of people are willing to query well-supported scientific findings. In some ways, this is perhaps linked to the general mistrust of “experts” and “elites” [3] that was explicitly to the fore in the UK’s European Union Referendum debate [4].

“People in this country have had enough of experts”

– Michael Gove [5], at this point UK Justice Secretary and one of the main proponents of the Leave campaign, speaking on Sky News, June 2016.

Mr Gove was talking about economists who held a different point of view to his own. However, his statement has wider resonance and cannot be simply dismissed as the misleading sound-bite of an experienced politician seeking to press his own case. It does indeed appear that in many places around the world experts are trusted much less than they used to be and that includes scientists.

“Many political upheavals of recent years, such as the rise of populist parties in Europe, Donald Trump’s nomination for the American presidency and Britain’s vote to leave the EU, have been attributed to a revolt against existing elites.”

– The Buttonwood column, The Economist, September 2016.

Why has this come to be?
 
 
A Brief [6] History of the Public Perception of Science

Public Perception

Note: This section is focussed on historical developments in the public’s trust in science. If the reader would like to skip on to more toast-centric content, then please click here.

Answering questions about the erosion of trust in politicians and the media is beyond the scope of this humble blog. Wondering what has happened to trust in science is firmly in its crosshairs. One part of the answer is that – for some time – scientists were held in too much esteem and the pendulum was inevitably going to swing back the other way. For a while the pace of scientific progress and the miracles of technology which this unleashed placed science on a pedestal from which there was only one direction of travel. During this period in which science was – in general – uncritically held in great regard, the messy reality of actual science was never really highlighted. The very phrase “scientific facts” is actually something of an oxymoron. What we have instead are scientific theories. Useful theories are consistent with existing observations and predict new phenomena. However – as I explained in Patterns patterns everywhere – a theory is only as good as the latest set of evidence and some cherished scientific theories have been shown to be inaccurate; either in general, or in some specific circumstances [7]. Yet saying “we have a good model that helps us explain many aspects of a phenomenon and predict more, but it doesn’t cover everything and there are some uncertainties” is a little more of a mouthful than “we have discovered that…”.

There have been some obvious landmarks along the way to science’s current predicament. The unprecedented destruction unleashed by the team working on the Manhattan Project at first made the scientists involved appear God-like. It also seemed to suggest that the path to Great Power status was through growing or acquiring the best Physicists. However, as the prolonged misery caused in Japan by the twin nuclear strikes became more apparent and as the Cold War led to generations living under the threat of mutually assured destruction, the standing attached by the general public to Physicists began to wane; the God-like mantle began to slip. While much of our modern world and its technology was created off the back of now fairly old theories like Quantum Mechanics and – most famously – Special and General Relativity, the actual science involved became less and less accessible to the man or woman in the street. For all the (entirely justified) furore about the detection of the Higgs Boson, few people would be able to explain much about what it is and how it fits into the Standard Model of particle physics.

In the area of medicine and pharmacology, the Thalidomide tragedy, where a drug prescribed to help pregnant women suffering from morning sickness instead led to terrible birth defects in more than 10,000 babies, may have led to more stringent clinical trials, but also punctured the air of certainty that had surrounded the development of the latest miracle drug. While medical science and related disciplines have vastly improved the health of much of the globe, the glacial progress in areas such as oncology has served as a reminder of the fallibility of some scientific endeavours. In a small way, the technical achievements of that apogee of engineering, NASA, were undermined by the loss of craft and astronauts. Most notably, the Challenger and Columbia fatalities served to further remove the glossy veneer that science had acquired in the 1940s to 1960s.

Lest it be thought at this point that I am decrying science, or even being anti-scientific, nothing could be further from the truth. I firmly believe that the ever growing body of scientific knowledge is one of humankind’s greatest achievements, if not its greatest. From our unpromising vantage point on an unremarkable little planet in our equally common-or-garden galaxy we have been able to grasp many of the essential truths about the whole Universe from the incomprehensibly gigantic to the most infinitesimal constituent of a sub-atomic particle. However, it seems that many people do not fully embrace the grandeur of our achievements, or indeed in many cases the unexpected beauty and harmony that they have revealed [8]. It is to the task of understanding this viewpoint that I am addressing my thoughts.

More recently, the austerity that has enveloped much of the developed world since the 2008 Financial Crisis has had two reinforcing impacts on science in many countries. First, funding has often been cut, leading to pressure on research programmes and scientists increasingly having to make an economic case for their activities; a far cry from the 1950s. Second, income has been effectively stagnant for the vast majority of people, which means that scientific expenditure can seem something of a luxury and also fuels the anti-elite feelings cited by The Economist earlier in this article.

Anita Makri

Into this seeming morass steps Anita Makri, “editor/writer/producer and former research scientist”. In a recent Nature article she argues that the form of science communicated in popular media leaves the public vulnerable to false certainty. I reproduce some of her comments here:

“Much of the science that the public knows about and admires imparts a sense of wonder and fun about the world, or answers big existential questions. It’s in the popularization of physics through the television programmes of physicist Brian Cox and in articles about new fossils and quirky animal behaviour on the websites of newspapers. It is sellable and familiar science: rooted in hypothesis testing, experiments and discovery.

Although this science has its place, it leaves the public […] with a different, outdated view to that of scientists of what constitutes science. People expect science to offer authoritative conclusions that correspond to the deterministic model. When there’s incomplete information, imperfect knowledge or changing advice — all part and parcel of science — its authority seems to be undermined. […] A popular conclusion of that shifting scientific ground is that experts don’t know what they’re talking about.”

– Anita Makri, Give the public the tools to trust scientists, Nature, January 2017.

I’ll come back to Anita’s article again later.
 
 
Food Safety – The Dangers Lurking in Toast

Food Safety

After my speculations about the reasons why science is held in less esteem than once was the case, I’ll return to more prosaic matters; namely food and specifically that humble staple of many a breakfast table, toast. Food science has often fared no better than its brother disciplines. The scientific guidance issued to people wanting to eat healthily can sometimes seem to gyrate wildly. For many years fat was the source of all evil, more recently sugar has become public enemy number one. Red wine was meant to have beneficial effects on heart health, then it was meant to be injurious; I’m not quite sure what the current advice consists of. As Makri states above, when advice changes as dramatically as it can do in food science, people must begin to wonder whether the scientists really know anything at all.

So where does toast fit in? Well the governmental body charged with providing advice about food in the UK is called the Food Standards Agency. They describe their job as “using our expertise and influence so that people can trust that the food they buy and eat is safe and honest.” While the FSA do sterling work in areas such as publicly providing ratings of food hygiene for restaurants and the like, their most recent campaign is one which seems at best ill-advised and at worst another nail in the coffin of the public perception of the reliability of scientific advice. Such things matter because they contribute to the way that people view science in general. If scientific advice about food is seen as unsound, surely there must be questions around scientific advice about climate change, or vaccinations.

Before I am accused of belittling the FSA’s efforts, let’s consider the campaign in question, which is called Go for Gold and encourages people to consume less acrylamide. Here is some of what the FSA has to say about the matter:

“Today, the Food Standards Agency (FSA) is launching a campaign to ‘Go for Gold’, helping people understand how to minimise exposure to a possible carcinogen called acrylamide when cooking at home.

Acrylamide is a chemical that is created when many foods, particularly starchy foods like potatoes and bread, are cooked for long periods at high temperatures, such as when baking, frying, grilling, toasting and roasting. The scientific consensus is that acrylamide has the potential to cause cancer in humans.

[…]

as a general rule of thumb, aim for a golden yellow colour or lighter when frying, baking, toasting or roasting starchy foods like potatoes, root vegetables and bread.”

– Food Standards Agency, Families urged to ‘Go for Gold’ to reduce acrylamide consumption, January 2017.

The Go for Gold campaign was picked up by various media outlets in the UK. For example the BBC posted an article on its web-site which opened by saying:

Dangerous Toast [borrowed from the BBC]

“Bread, chips and potatoes should be cooked to a golden yellow colour, rather than brown, to reduce our intake of a chemical which could cause cancer, government food scientists are warning.”

– BBC, Browned toast and potatoes are ‘potential cancer risk’, say food scientists, January 2017.

The BBC has been obsessed with neutrality on all subjects recently [9], but in this case they did insert the reasonable counterpoint that:

“However, Cancer Research UK [10] said the link was not proven in humans.”

Acrylamide is certainly a nasty chemical. Amongst other things, it is used in polyacrylamide gel electrophoresis, a technique used in biochemistry. If biochemists mix and pour their own gels, they have to monitor their exposure and there are time-based and lifetime limits as to how often they can do such procedures [11]. Acrylamide has also been shown to lead to cancer in mice. So what could be more reasonable than the FSA’s advice?
 
 
Food Safety – A Statistical / Risk Based Approach

David Spiegelhalter

Earlier I introduced Anita Makri; it is time to meet our second protagonist, David Spiegelhalter, Winton Professor of the Public Understanding of Risk in the Statistical Laboratory, Centre for Mathematical Sciences, University of Cambridge [12]. Professor Spiegelhalter has penned a response to the FSA’s Go for Gold campaign. I feel that this merits reading in its entirety, but here are some highlights:

“Very high doses [of Acrylamide] have been shown to increase the risk of mice getting cancer. The IARC (International Agency for Research on Cancer) considers it a ‘probable human carcinogen’, putting it in the same category as many chemicals, red meat, being a hairdresser and shift-work.

However, there is no good evidence of harm from humans consuming acrylamide in their diet: Cancer Research UK say that ‘At the moment, there is no strong evidence linking acrylamide and cancer.’

This is not for want of trying. A massive report from the European Food Safety Authority (EFSA) lists 16 studies and 36 publications, but concludes

  ‘In the epidemiological studies available to date, AA intake was not associated with an increased risk of most common cancers, including those of the GI or respiratory tract, breast, prostate and bladder. A few studies suggested an increased risk for renal cell, and endometrial (in particular in never-smokers) and ovarian cancer, but the evidence is limited and inconsistent. Moreover, one study suggested a lower survival in non-smoking women with breast cancer with a high pre-diagnostic exposure to AA but more studies are necessary to confirm this result. (p185)’

[…]

[Based on the EFSA study] adults with the highest consumption of acrylamide could consume 160 times as much and still only be at a level that toxicologists think unlikely to cause increased tumours in mice.

[…]

This all seems rather reassuring, and may explain why it’s been so difficult to observe any effect of acrylamide in diet.”

– David Spiegelhalter, Opinion: How dangerous is burnt toast?, University of Cambridge, January 2017.

Indeed, Professor Spiegelhalter, an esteemed statistician, also points out that most studies will adopt the standard criteria for statistical significance. Given that such significance levels are often set at 5%, this means that:

“[As] each study is testing an association with a long list of cancers […], we would expect 1 in 20 of these associations to be positive by chance alone.”
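To make the multiple-comparisons point concrete, here is a minimal simulation of it. To be clear, this is not Professor Spiegelhalter’s code, just an illustrative sketch in Python (assuming only numpy): run 20 truly null tests per study at the 5% level and see how often something “significant” emerges.

```python
import numpy as np

rng = np.random.default_rng(42)

n_studies = 10_000   # number of simulated studies
n_tests = 20         # associations examined per study, all truly null
alpha = 0.05         # conventional significance threshold

# Under a true null hypothesis, p-values are uniformly distributed on [0, 1]
p_values = rng.uniform(size=(n_studies, n_tests))

# A study produces a false positive if any of its 20 tests falls below alpha
any_hit = (p_values < alpha).any(axis=1)

print(f"Expected 'significant' associations per study: {n_tests * alpha:.1f}")
print(f"Studies with at least one 'significant' result: {any_hit.mean():.1%}")
# Theory agrees: 1 - 0.95 ** 20 is roughly 64%
```

So even when nothing whatsoever is going on, roughly two studies in three would have a headline-worthy association to report.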

He closes his article by stating – not unreasonably – that the FSA’s time and attention might be better spent on areas where causality between an agent and morbidity is well-established, for example obesity. My assumption is that the FSA has a limited budget and has to pick and choose what food issues to weigh in on. Even if we accept for the moment that there is some slight chance of a causal link between the consumption of low levels of acrylamide and cancer, there are plenty of other areas in which causality is firmly established; obesity as mentioned by Professor Spiegelhalter, excessive use of alcohol, even basic kitchen hygiene. It is hard to understand why the FSA did not put more effort into these and instead focussed on an area where the balance of scientific judgement is that there is unlikely to be an issue.

Having a mathematical background perhaps biases me, but I tend to side with Professor Spiegelhalter’s point of view. I don’t want to lay the entire blame for the poor view that some people have of science at the FSA’s door, but I don’t think campaigns like Go for Gold help very much either. The proverbial rational man or woman will probably deduce that there is not an epidemic of acrylamide poisoning in progress. This means that they may question what the experts at the FSA are going on about. In turn this reduces respect for other – perhaps more urgent – warnings about food and drink. Such a reaction is also likely to colour how the same rational person thinks about “expert” advice in general. All of this can contribute to further cracks appearing in the public edifice of science, an outcome I find very unfortunate.

So what is to be done?
 
 
A Call for a New and More Honest Approach to Science Communications

Honesty is the Best Policy

As promised I’ll return to Anita Makri’s thoughts in the same article referenced above:

“It’s more difficult to talk about science that’s inconclusive, ambivalent, incremental and even political — it requires a shift in thinking and it does carry risks. If not communicated carefully, the idea that scientists sometimes ‘don’t know’ can open the door to those who want to contest evidence.

[…]

Scientists can influence what’s being presented by articulating how this kind of science works when they talk to journalists, or when they advise on policy and communication projects. It’s difficult to do, because it challenges the position of science as a singular guide to decision making, and because it involves owning up to not having all of the answers all the time while still maintaining a sense of authority. But done carefully, transparency will help more than harm. It will aid the restoration of trust, and clarify the role of science as a guide.”

The scientific method is meant to be about honesty. You record what you see, not what you want to see. If the data don’t support your hypothesis, you discard or amend your hypothesis. The peer-review process is meant to hold scientists to the highest levels of integrity. What Makri seems to be suggesting is for scientists to turn their lenses on themselves and how they communicate their work. Being honest where there is doubt may be scary, but not as scary as being caught out pushing certainty where no certainty is currently to be had.
 


 
Epilogue

At the beginning of this article, I promised that I would bring things back to a business context. With lots of people with PhDs in numerate sciences now plying their trade as data scientists and the like, there is an attempt to make commerce more scientific [13]. Understandably, the average member of a company will have less of an appreciation of statistics and statistical methods than their data scientists do. This can lead to data science seeming like magic; the philosopher’s stone [14]. There are obvious parallels here with how Physicists were seen in the period immediately after the Second World War.

Earlier in the text, I mused about what factors may have led to a deterioration in how the public views science and scientists. I think that there is much to be learnt from the issues I have covered in this article. If data scientists begin to try to peddle absolute truth and perfect insight (both of which, it is fair to add, are often expected from them by non-experts), as opposed to ranges of outcomes and probabilities, then the same decline in reputation probably awaits them. Instead it would be better if data scientists heeded Anita Makri’s words and tried to always be honest about what they don’t know as well as what they do.
 


 
Notes

 
[1]
 
Save to note that there really is no argument in scientific circles.

As ever Randall Munroe makes the point pithily in his Earth Temperature Timeline – https://xkcd.com/1732/.

For a primer on the area, you could do worse than watching The Royal Society’s video:

 
[2]
 
For the record, my daughter has had every vaccine known to the UK and US health systems and I’ve had a bunch of them recently as well.
 
[3]
 
Most scientists I know would be astonished that they are considered part of the amorphous, ill-defined and obviously malevolent global “elite”. Then again, “elite” is just one more proxy for “the other”, something which it is not popular to be in various places in the world at present.
 
[4]
 
Or what passed for debate in these post-truth times.
 
[5]
 
Mr Gove studied English at Lady Margaret Hall, Oxford, where he was also President of the Oxford Union. Clearly Oxford produces fewer experts than it used to in previous eras.
 
[6]
 
One that is also probably wildly inaccurate and certainly incomplete.
 
[7]
 
So Newton’s celebrated theory of gravitation is “wrong” but actually works perfectly well in most circumstances. The Rutherford–Bohr model, in which atoms are little Solar Systems, with the nucleus circled by electrons much as the planets circle the Sun, is “wrong”, but does serve to explain a number of things; if sadly not the orbital angular momentum of electrons.
 
[8]
 
Someone should really write a book about that – watch this space!
 
[9]
 
Not least in the aforementioned EU Referendum where it felt the need to follow the views of the vast majority of economists with those of the tiny minority, implying that the same weight be attached to both points of view. For example, 99.9999% of people believe the world to be round, but in the interests of balance my mate Jim reckons it is flat.
 
[10]
 
According to their web-site: “the world’s leading charity dedicated to beating cancer through research”.
 
[11]
 
As attested to personally by the only proper scientist in our family.
 
[12]
 
Unlike Oxford (according to Mr Gove anyway), Cambridge clearly still aspires to creating experts.
 
[13]
 
By this I mean proper science and not pseudo-science like management theory and the like.
 
[14]
 
In the original, non-J.K. Rowling sense of the phrase.

 

 

Metamorphosis


No, neither my observations on the work of Kafka, nor on that of Escher [1]. Instead, some musings on how to transform a bare-bones and unengaging chart into something that both captures the attention of the reader and better informs them of the message that the data displayed is relaying. Let’s consider an example:

Before:

Before

After:

After

The two images above are both renderings of the same dataset, which tracks the degree of fragmentation of the Israeli parliament – the Knesset – over time [2]. They are clearly rather different and – I would argue – the latter makes it a lot easier to absorb information and thus to draw inferences.

Boris Gorelik

Both are the work of Boris Gorelik, a data scientist at Automattic, the company best known for creating the freemium SaaS blogging platform WordPress.com and the open source blogging software WordPress [3].

Data for breakfast

I have been a contented WordPress.com user since the inception of this blog back in November 2008, so it was with interest that I learnt that Automattic have their own data-focussed blog, Data for Breakfast, unsurprisingly hosted on WordPress.com. It was on Data for Breakfast that I found Boris’s article, Evolution of a Plot: Better Data Visualization, One Step at a Time. In this he takes the reader step by step through what he did to transform his data visualisation from the ugly duckling “before” exhibit to the beautiful swan “after” exhibit.
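To give a flavour of the kind of incremental steps involved – and to be clear, this is not Boris’s actual code, and the figures below are invented stand-ins rather than his Knesset dataset – here is a brief matplotlib sketch applying a few typical touches: stripping chart junk, muting the gridlines and giving the chart a proper headline.

```python
import matplotlib.pyplot as plt

# Invented illustrative data: election years and a made-up fragmentation metric
years = [1992, 1996, 1999, 2003, 2006, 2009, 2013, 2015]
fragmentation = [4.4, 5.6, 8.7, 6.2, 7.8, 6.8, 7.3, 6.9]

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(years, fragmentation, marker="o", color="#2c7fb8", linewidth=2)

# Remove the box around the plot area - classic "chart junk" removal
for spine in ("top", "right"):
    ax.spines[spine].set_visible(False)

ax.grid(axis="y", alpha=0.3)            # light horizontal guides only
ax.set_xlabel("Election year")
ax.set_ylabel("Fragmentation metric")
ax.set_title("Parliamentary fragmentation over time", loc="left", fontweight="bold")

fig.tight_layout()
plt.show()
```

Each individual change is trivial; as Boris’s article demonstrates, it is the accumulation of a dozen such decisions that separates the ugly duckling from the swan.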

Boris is using Python and various related libraries to do his data visualisation work. Given that I stopped commercially programming sometime around 2009 (admittedly with a few lapses since), I typically use the much more quotidian Excel for most of the charts that appear on peterjamesthomas.com [4]. Sometimes, where warranted, I enhance these using Visio and / or PaintShop Pro.

For example, the three [5] visualisations featured in A Tale of Two [Brexit] Data Visualisations were produced this way. Despite the use of Calibri, which is probably something of a giveaway, I hope that none of these resembles a straight-out-of-the-box Excel graph [6].
 

Brexit Bar
UK Referendum on EU Membership – Percentage voting by age bracket (see notes)

 
Brexit Bar 2
UK Referendum on EU Membership – Numbers voting by age bracket (see notes)

 
Brexit Flag
UK Referendum on EU Membership – Number voting by age bracket (see notes)

 
While, in the above, I have not gone to the lengths that Boris has in transforming his initial and raw chart into something much more readable, I do my best to make my Excel charts look at least semi-professional. My reasoning is that, when the author of a chart has clearly put some effort into what their chart looks like and has at least attempted to consider how it will be read by people, this is a strong signal that the subject matter merits some closer consideration.

Next time I develop a chart for posting on these pages, I may take Boris’s lead and also publish how I went about creating it.
 


Notes

 
[1]
 
Though the latter’s work has adorned these pages on several occasions and indeed appears in my seminar decks.
 
[2]
 
Boris has charted a metric derived from how many parties there have been and how many representatives of each. See his article itself for further background.
 
[3]
 
You can learn more about the latter at WordPress.org.
 
[4]
 
Though I have also used GraphPad Prism for producing more scientific charts such as the main one featured in Data Visualisation – A Scientific Treatment.
 
[5]
 
Yes I can count. I have certificates which prove this.
 
[6]
 
Indeed the final one was designed to resemble a fractured British flag. I’ll leave readers to draw their own conclusions here.

 

 

Curiouser and Curiouser – The Limits of Brexit Voting Analysis

An original illustration from Charles Lutwidge Dodgson's seminal work would have been better, but sadly none such seems to be extant
 
Down the Rabbit-hole

When I posted my Brexit infographic reflecting the age of voters an obvious extension was to add an indication of the number of people in each age bracket who did not vote as well as those who did. This seemed a relatively straightforward task, but actually proved to be rather troublesome (this may be an example of British understatement). Maybe the caution I gave about statistical methods having a large impact on statistical outcomes in An Inconvenient Truth should have led me to expect such issues. In any case, I thought that it would be instructive to talk about the problems I stumbled across and to – once again – emphasise the perils of over-extending statistical models.

Brexit ages infographic
Click to download a larger PDF version in a new window.

Regular readers will recall that my Brexit Infographic (reproduced above) leveraged data from an earlier article, A Tale of two [Brexit] Data Visualisations. As cited in that article, the numbers used were from two sources:

  1. The UK Electoral Commission – I got the overall voting numbers from here.
  2. Lord Ashcroft’s Polling organisation – I got the estimated distribution of votes by age group from here.

In the notes section of A Tale of two [Brexit] Data Visualisations I [prophetically] stated that the breakdown of voting by age group was just an estimate. Based on what I have discovered since, I’m rather glad that I made this caveat explicit.
 
 
The Pool of Tears

In order to work out the number of people in each age bracket who did not vote, an obvious starting point would be the overall electorate, which the UK Electoral Commission stated as being 46,500,001. As we know that 33,551,983 people voted (an actual figure rather than an estimate), this is where the turnout percentage of 72.2% (actually 72.1548%) came from (33,551,983 / 46,500,001).

A clarifying note: the electorate figures above refer to people who are eligible to vote. Specifically, in order to vote in the UK Referendum, people had to meet the following eligibility criteria (again drawn from the UK Electoral Commission):

To be eligible to vote in the EU Referendum, you must be:

  • A British or Irish citizen living in the UK, or
  • A Commonwealth citizen living in the UK who has leave to remain in the UK or who does not require leave to remain in the UK, or
  • A British citizen living overseas who has been registered to vote in the UK in the last 15 years, or
  • An Irish citizen living overseas who was born in Northern Ireland and who has been registered to vote in Northern Ireland in the last 15 years.

EU citizens are not eligible to vote in the EU Referendum unless they also meet the eligibility criteria above.

So far, so simple. The next thing I needed to know was how the electorate was split by age. This is where we begin to run into problems. One place to start is the actual population of the UK as at the last census (2011). This is as follows:
 

Ages (years) Population % of total
0–4 3,914,000 6.2
5–9 3,517,000 5.6
10–14 3,670,000 5.8
15–19 3,997,000 6.3
20–24 4,297,000 6.8
25–29 4,307,000 6.8
30–34 4,126,000 6.5
35–39 4,194,000 6.6
40–44 4,626,000 7.3
45–49 4,643,000 7.3
50–54 4,095,000 6.5
55–59 3,614,000 5.7
60–64 3,807,000 6.0
65–69 3,017,000 4.8
70–74 2,463,000 3.9
75–79 2,006,000 3.2
80–84 1,496,000 2.4
85–89 918,000 1.5
90+ 476,000 0.8
Total 63,183,000 100.0

 
If I roll up the above figures to create the same age groups as in the Ashcroft analysis (something that requires splitting the 15-19 range, which I have assumed can be done uniformly), I get:
 

Ages (years) Population % of total
0-17 13,499,200 21.4
18-24 5,895,800 9.3
25-34 8,433,000 13.3
35-44 8,820,000 14.0
45-54 8,738,000 13.8
55-64 7,421,000 11.7
65+ 10,376,000 16.4
Total 63,183,000 100.0

 
The UK Government isn’t interested in the views of people under 18[citation needed], so eliminating this row we get:
 

Ages (years) Population % of total
18-24 5,895,800 11.9
25-34 8,433,000 17.0
35-44 8,820,000 17.8
45-54 8,738,000 17.6
55-64 7,421,000 14.9
65+ 10,376,000 20.9
Total 49,683,800 100.0

 
As mentioned, the above figures are from 2011 and the UK population has grown since then. Web-site WorldOMeters offers an extrapolated population of 65,124,383 for the UK in 2016 (this is as at 12th July 2016; if extrapolation and estimates make you queasy, I’d suggest closing this article now!). I’m going to use a rounder figure of 65,125,000 people; there is no point pretending that precision exists where it clearly doesn’t. Making the assumption that such growth is uniform across all age groups (please refer to my previous bracketed comment!), then the above exhibit can also be extrapolated to give us:
 

Ages (years) Population % of total
18-24 6,077,014 11.9
25-34 8,692,198 17.0
35-44 9,091,093 17.8
45-54 9,006,572 17.6
55-64 7,649,093 14.9
65+ 10,694,918 20.9
Total 51,210,887 100.0
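For anyone who would like to retrace the journey from the census table to the figure of 51,210,887, here is a minimal Python sketch of both steps. The uniform split of the 15-19 band and the uniform growth scaling are, as flagged above, assumptions rather than facts.

```python
# 2011 census population by five-year band (from the first table above;
# bands below age 15 are omitted as they never reach the 18+ roll-up)
census_2011 = {
    "15-19": 3_997_000, "20-24": 4_297_000, "25-29": 4_307_000,
    "30-34": 4_126_000, "35-39": 4_194_000, "40-44": 4_626_000,
    "45-49": 4_643_000, "50-54": 4_095_000, "55-59": 3_614_000,
    "60-64": 3_807_000, "65-69": 3_017_000, "70-74": 2_463_000,
    "75-79": 2_006_000, "80-84": 1_496_000, "85-89": 918_000, "90+": 476_000,
}

# Step 1: roll up to the Ashcroft age groups, assuming ages are spread
# uniformly within 15-19, so that 18-19 is two fifths of that band
bands_2011 = {
    "18-24": census_2011["15-19"] * 2 / 5 + census_2011["20-24"],
    "25-34": census_2011["25-29"] + census_2011["30-34"],
    "35-44": census_2011["35-39"] + census_2011["40-44"],
    "45-54": census_2011["45-49"] + census_2011["50-54"],
    "55-64": census_2011["55-59"] + census_2011["60-64"],
    "65+": sum(census_2011[b]
               for b in ("65-69", "70-74", "75-79", "80-84", "85-89", "90+")),
}

# Step 2: scale uniformly from the 2011 census to the extrapolated 2016 population
growth = 65_125_000 / 63_183_000   # roughly 1.0307

bands_2016 = {band: pop * growth for band, pop in bands_2011.items()}
total = sum(bands_2016.values())
for band, pop in bands_2016.items():
    print(f"{band:>6}: {pop:>12,.0f}  ({pop / total:.1%})")
print(f" Total: {total:>12,.0f}")   # roughly 51,210,887
```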

 
 
Looking Glass House

So our – somewhat fabricated – figure for the 18+ UK population in 2016 is 51,210,887, let’s just call this 51,200,000. As noted at the beginning of this article, the electorate for the 2016 UK Referendum was 46,500,000 (dropping off the 1 person with apologies to him or her). The difference is explicable based on the eligibility criteria quoted above. I now have a rough age group breakdown of the 51.2 million population, how best to apply this to the 46.5 million electorate?

I’ll park this question for the moment and instead look to calculate a different figure. Based on the Ashcroft model, what percentage of the UK population (i.e. the 51.2 million) voted in each age group? We can work this one out without many complications as follows:
 

Ages (years) Population (A) Voted (B) Turnout % (B/A)
18-24 6,077,014 1,701,067 28.0
25-34 8,692,198 4,319,136 49.7
35-44 9,091,093 5,656,658 62.2
45-54 9,006,572 6,535,678 72.6
55-64 7,649,093 7,251,916 94.8
65+ 10,694,918 8,087,528 75.6
Total 51,210,887 33,551,983 65.5

(B) = Size of each age group in the Ashcroft sample as a percentage multiplied by the total number of people voting (see A Tale of two [Brexit] Data Visualisations).
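As a cross-check, the turnout column can be reproduced with a few lines of Python. The population figures come from the extrapolated table above; the voted figures are the estimates from note (B), not official counts.

```python
# Estimated 2016 population and votes cast by age group (from the table above)
figures = {
    # band:  (population, voted)
    "18-24": (6_077_014, 1_701_067),
    "25-34": (8_692_198, 4_319_136),
    "35-44": (9_091_093, 5_656_658),
    "45-54": (9_006_572, 6_535_678),
    "55-64": (7_649_093, 7_251_916),
    "65+":   (10_694_918, 8_087_528),
}

for band, (population, voted) in figures.items():
    print(f"{band:>6}: {voted / population:5.1%} of the population voted")

total_voted = sum(v for _, v in figures.values())
total_pop = sum(p for p, _ in figures.values())
print(f" Total: {total_voted / total_pop:5.1%}")
# 65.5% of population, versus 72.2% of the (smaller) electorate
```

The anomalous 94.8% for 55-64 year olds drops straight out of this simple division, which prompts the sense-check that follows.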
 
Remember here that actual turnout figures have electorate as the denominator, not population. As the electorate is less than the population, this means that all of the turnout percentages should actually be higher than the ones calculated (e.g. the overall turnout with respect to electorate is 72.2% whereas my calculated turnout with respect to population is 65.5%). So given this, how to explain the 94.8% turnout of 55-64 year olds? To be sure this group does reliably turn out to vote, but did essentially all of them (remembering that the figures in the above table are too low) really vote in the referendum? This seems less than credible.

The turnout for 55-64 year olds in the 2015 General Election has been estimated at 77%, based on an overall turnout of 66.1% (web-site UK Political Info; once more these figures will have been created based on techniques similar to the ones I am using here). If we assume a uniform uplift across age ranges (that “assume” word again!) then one might deduce that an increase in overall turnout from 66.1% to 72.2% might lead to the turnout in the 55-64 age bracket increasing from 77% to 84%. A turnout of 84% is still very high, but it is at least feasible; close to 100% turnout from this age group seems beyond the realms of likelihood.
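The uplift arithmetic is simple enough to show in full; a sketch under the stated uniform-uplift assumption:

```python
# Scale 2015 General Election turnout estimates by the overall increase at the
# Referendum, assuming the uplift applies uniformly across age brackets
ge_overall, ref_overall = 66.1, 72.2   # overall turnout percentages
ge_55_64 = 77.0                        # estimated 55-64 turnout at the 2015 GE
ge_18_24 = 43.0                        # estimated 18-24 turnout at the 2015 GE

uplift = ref_overall / ge_overall      # roughly 1.09

print(f"Implied 55-64 Referendum turnout: {ge_55_64 * uplift:.0f}%")   # 84%
print(f"Implied 18-24 Referendum turnout: {ge_18_24 * uplift:.0f}%")   # 47%
```

The second figure is the 47% referred to below when comparing the Ashcroft-derived picture with the Omnium one.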

So what has gone wrong? Well so far the only culprit I can think of is the distribution of voting by age group in the Ashcroft poll. To be clear here, I’m not accusing Lord Ashcroft and his team of sloppy work. Instead I’m calling out that the way that I have extrapolated their figures may not be sustainable. Indeed, if my extrapolation is valid, this would imply that the Ashcroft model overestimated the proportion of 55-64 year olds voting. Thus it must have underestimated the proportion of voters in some other age group. Putting aside the fact that I have probably used their figures in an unintended manner, could it be that the much-maligned turnout of younger people has been misrepresented?

To test the validity of this hypothesis, I turned to a later poll by Omnium. To be sure this was based on a sample size of around 2,000 as opposed to Ashcroft’s 12,000, but it does paint a significantly different picture. Their distribution of voter turnout by age group was as follows:
 

Ages (years) Turnout %
18-24 64
25-39 65
40-54 66
55-64 74
65+ 90

 
I have to say that the Omnium age groups are a bit idiosyncratic, so I have taken advantage of the fact that the figures for 25-54 are essentially the same to create a schedule that matches the Ashcroft groups as follows:
 

Ages (years) Turnout %
18-24 64
25-34 65
35-44 65
45-54 65
55-64 74
65+ 90

 
The Omnium model suggests that younger voters may have turned out in greater numbers than might be thought based on the Ashcroft data. In turn this would suggest that a much greater percentage of 18-24 year olds turned out for the Referendum (64%) than for the last General Election (43%); contrast this with an estimated 18-24 turnout figure of 47% based on just the increase in turnout between the General Election and the Referendum. The Omnium estimates do still, however, recognise that turnout was greater in the 55+ brackets, which supports the pattern seen in other elections.
 
 
Humpty Dumpty

While it may well be that the Leave / Remain splits based on the Ashcroft figures are reasonable, I’m less convinced that extrapolating these same figures to make claims about actual voting numbers by age group (as I have done) is tenable. Perhaps it would be better to view each age cohort as a mini sample to be treated independently. Based on the analysis above, I doubt that the turnout figures I have extrapolated from the Ashcroft breakdown by age group are robust. However, that is not the same as saying that the Ashcroft data is flawed, or that the Omnium figures are correct. Indeed the Omnium data (at least those elements published on their web-site) don’t include an analysis of whether the people in their sample voted Leave or Remain, so direct comparison is not going to be possible. Performing calculation gymnastics such as using the Omnium turnout for each age group in combination with the Ashcroft voting splits for Leave and Remain for the same age groups actually leads to a rather different Referendum result, so I’m not going to plunge further down this particular rabbit hole.

In summary, my supposedly simple trip to the destination of an enhanced Brexit Infographic has proved unexpectedly arduous, winding and beset by troubles. These challenges have proved so great that I’ve abandoned the journey and will instead be heading for home.
 
 
Which dreamed it?

Based on my work so far, I have severe doubts about the accuracy of some of the age-based exhibits I have published (versions of which have also appeared on many web-sites, the BBC to offer just one example, scroll down to “How different age groups voted” and note that the percentages cited reconcile to mine). I believe that my logic and calculations are sound, but it seems that I am making too many assumptions about how I can leverage the Ashcroft data. After posting this article, I will accordingly go back and annotate each of my previous posts and link them to these later findings.

I think the broader lesson to be learnt is that estimates are just that, attempts (normally well-intentioned of course) to come up with figures where the actual numbers are not accessible. Sometimes this is a very useful – indeed indispensable – approach, sometimes it is less helpful. In either case estimation should always be approached with caution and the findings ideally sense-checked in the way that I have tried to do above.

Occam’s razor would suggest that when the stats tell you something that seems incredible, then 99 times out of 100 there is an error or inaccurate assumption buried somewhere in the model. This applies when you are creating the model yourself and doubly so where you are relying upon figures calculated by other people. In the latter case not only is there the risk of their figures being inaccurate, there is the incremental risk that you interpret them wrongly, or stretch their broader application to breaking point. I was probably guilty of one or more of the above sins in my earlier articles. I’d like my probable misstep to serve as a warning to other people when they too look to leverage statistics in new ways.

A further point is that the most advanced concepts I have applied in my calculations above are addition, subtraction, multiplication and division. If these basic operations – even in the hands of someone like me who is relatively familiar with them – can lead to the issues described above, just imagine what could result from the more complex mathematical techniques (e.g. ambition, distraction, uglification and derision) used by even entry-level data scientists. This perhaps suggests an apt aphorism: Caveat calculator!

Beware the Jabberwock, my son! // The jaws that bite, the claws that catch! // Beware the Jubjub bird, and shun // The frumious Bandersnatch!

 

5 More Themes from a Chief Data Officer Forum

A rather famous theme

This article is the second of two pieces reflecting on the emerging role of the Chief Data Officer. Each article covers 5 themes. You can read the first five themes here.

As with the first article, I would like to thank both Peter Aiken, who reviewed a first draft of this piece and provided useful clarifications and additional insights, and several of my fellow delegates, who also made helpful suggestions around the text. Again any errors of course remain my responsibility.
 
 
Introduction Redux

After reviewing a draft of the first article in this series and also scanning an outline of this piece, one of the other attendees at the inaugural IRM(UK) / DAMA CDO Executive Forum rightly highlighted that I had not really emphasised the strategic aspects of the CDO’s work; both data / information strategy and the close linkage to business strategy. I think the reason for this is that I spend so much of my time on strategic work that I’ve internalised the area. However, I’ve come to the not unreasonable conclusion that internalisation doesn’t work so well on a blog, so I will call out this area up-front (as well as touching on it again in Theme 10 below).

For more of my views on strategy formation in the data / information space please see my trilogy of articles starting with: Forming an Information Strategy: Part I – General Strategy.

With that said, I’ll pick up where we left off with the themes that arose in the meeting: 
 
Theme 6 – While some CDO roles have their genesis in risk mitigation, most are focussed on growth

Epidermal growth factor receptor

This theme gets to the CDO / CAO debate (which I will be writing about soon). It is true that the often poor state of data governance in organisations is one reason why the CDO role has emerged and also that a lot of CDO focus is inevitably on this area. The regulatory hurdles faced by many industries (e.g. Solvency II in my current area of Insurance) also bring a significant focus on compliance to the CDO role. However, in the unanimous view of the delegates, while cleaning the Augean Stables is important and equally organisations which fail to comply with regulatory requirements tend to have poor prospects, most CDOs have a growth-focussed agenda. Their primary objective is to leverage data (or to facilitate its leverage) to drive growth and open up new opportunities. Of course good data management is a prerequisite for achieving this objective in a sustainable manner, but it is not an end in itself. Any CDO who allows themself to be overwhelmed by what should just be part of their role is probably heading in the same direction as a non-compliant company.
 
 
Theme 7 – New paradigms are data / analytics-centric not application-centric

Applications & Data

Historically, technology landscapes used to be application-centric. Often there would be a cluster of systems in the centre (ideally integrated with each other in some way) and each with their own analytics capabilities; a CRM system with customer analytics “out-of-the-box” (whatever that really means in practice), an ERP system with finance analytics and maybe supply-chain analytics, digital estates with web analytics and so on. Even if there was a single-central system (those of us old enough will still remember the ERP vision), then this would tend to have various analytical repositories around it used by different parts of the organisation for different purposes. Equally some of the enterprise data warehouses I have built have included specialist analytical repositories, e.g. to support pricing, or risk, or other areas.

Today a new paradigm is emerging. Under this, rather than being at the periphery, data and analytics are in the centre, operating in a more joined-up manner. Many companies have already banked the automation and standardisation benefits of technology and are now looking instead to exploit the (often considerably larger) information and insight benefits [1]. This places information and insight assets at the centre of the landscape. It also means that finally information needs can start to drive system design and selection, not the other way round.
 
 
Theme 8 – Data and Information need to be managed together

Data and Information in harness

We see a further parallel with the CAO vs CDO debate here [2]. After 27 years with at least one foot in IT (though often in hybrid roles with dual business / IT reporting) and 15 explicitly in the data and information space, I really fail to see how data and information are anything other than two sides of the same coin.

To people who say that the CAO is the one who really understands the business and the CDO worries instead about back-end data governance, I would reply that an engine is only as good as the fuel that you put into it. I’d over-extend the analogy (as is my wont [3]) by saying that the best engineers will have a thorough understanding of:

  1. what purpose the engine will be applied to – racing car, or lorry (truck)
  2. the parameters within which it is required to perform
  3. the actual performance requirements
  4. what that means in terms of designing the engine
  5. what inputs the engine will have: petrol/diesel/bio-fuel/electricity
  6. what outputs it will produce (with no reference to poor old Volkswagen intended)

It may be that the engineering team has experts in various areas from metallurgy, to electronics, to chemistry, to machining, to quality control, to noise and vibration suppression, to safety, to general materials science and that these are required to work together. But whoever is in charge of overall design, and indeed overall production, would need to have knowledge spanning all these areas and would in addition need to ensure that specialists under their supervision worked harmoniously together to get the best result.

Data is the basic building block of information. Information is the embodiment of things that people want or need to know. You cannot generate information (let alone insight) without a very strong understanding of data. You can neither govern, nor exploit, data in any useful way without knowledge of the uses to which it will be put. As with the chief product engineer, there is a need for someone who understands all of the elements and all of the experts working on these, and who can bring them together just as harmoniously [4].
 
 
Theme 9 – Data Science is not enough

If you don't understand  the notation, you've failed in your application to be a  Data Scientist

In Part One of this article I repeated an assertion about the typical productivity of data scientists:

“Data Scientists are only 10-20% productive; if you start a week-long piece of work on Monday, the actual statistical analysis will commence on Friday afternoon; the rest of the time is battling with the data”

While the many data scientists I know would attest to the truth of this, there is a broader point to be made. That is the need for what can be described as Data Interpreters. This role is complementary to the data science community, acting as an interface between those with PhDs in statistics and the rest of the world. At IRM(UK) ED&BI one speaker even went so far as to present a photograph of two ladies who filled these yin and yang roles at a European organisation.

More broadly, the advent of data science, while welcome, has not obviated the need to pass from data through information to get to insight for most of an organisation’s normal measurements. Of course an ability to go straight from data to insight is also a valuable tool, but it is not suitable for all situations. There are also a number of things to be aware of before uncritically placing full reliance on statistical models [5].
 
 
Theme 10 – Information is often a missing link between Business and IT strategies

Business => Information => IT

This was one of the most interesting topics of discussion at the forum and we devoted substantial time to exploring issues and opportunities in this area. The general sense was that – as all agreed – IT strategy needs to be aligned with business strategy [6]. However, there was also agreement that this can be hard and in many ways is getting harder. With IT leaders nowadays often consumed by the need to stay abreast of both technology opportunities (e.g. cloud computing) and technology threats (e.g. cyber crime) as well as inevitably having both extensive business as usual responsibilities and significant technology transformation programmes to run, it could be argued that some IT departments are drifting away from their business partners; not through any desire to do so, but just because of the nature (and volume) of current work. Equally with the increasing pace of business change, few non-IT executives can spend as much time understanding the role of technology as was once perhaps the case.

Given that successful information work must have a foot in both the business and technology camps (“what do we want to do with our data?” and “what data do we have available to work with?” being just two pertinent questions), the argument here was that an information strategy can help to build a bridge between these two increasingly different worlds. Of course this chimes with the feedback on the primacy of strategy that I got on my earlier article from another delegate, which I reference at the beginning of this piece. It is also consistent with my own view that the data → information → insight → action journey is becoming an increasingly business-focused one.

A couple of CDO Forum delegates had already been thinking about this area and went so far as to present models pertaining to a potential linkage, which they had either created or adapted from academic journals. These placed information between business and IT pillars not just with respect to strategy but also architecture and implementation. This is a very interesting area and one which I hope to return to in coming weeks.
 
 
Concluding thoughts

As I mentioned in Part One, the CDO Forum was an extremely useful and thought-provoking event. One thing which was of note is that – despite the delegates coming from many different backgrounds, something which one might assume would be a barrier to effective communication – they shared a common language, many values and comparable views on how to take the areas of data management and data exploitation forward. While of course delegates at such an eponymous Forum might be expected to emphasise the importance of their position, it was illuminating to learn just how seriously a variety of organisations were taking the CDO role and that CDOs were increasingly becoming agents of growth rather than just risk and compliance tsars.

Amongst the many other themes captured in this piece and its predecessor, perhaps a stand-out was how many organisations view the CDO as a firmly commercial / strategic role. This can only be a positive development and my hope is that CDOs can begin to help organisations to better understand the asset that their data represents and then start the process of leveraging this to unlock its substantial, but often latent, business value.
 


 
Notes

 
[1]
 
See Measuring the benefits of Business Intelligence
 
[2]
 
Someone really ought to write an article about that!

UPDATE: They now have, in: The Chief Data Officer “Sweet Spot” and Alphabet Soup

 
[3]
 
See Analogies for some further examples as well as some of the pitfalls inherent in such an approach.
 
[4]
 
I cover this duality in many places in this blog, for the reader who would like to learn more about my perspectives on the area, A bad workman blames his [Business Intelligence] tools is probably a good place to start; this links to various other resources on this site.
 
[5]
 
I cover some of these here, including (in reverse chronological order):

 
[6]
 
I tend to be allergic to the IT / Business schism as per: Business is from Mars and IT is from Venus (incidentally the first substantive article I wrote for this site), but at least it serves some purpose in this discussion, rather than leading to the unproductive “them and us” syndrome that is sadly all too often the outcome.

 

 

An Inconvenient Truth

Frequentists vs. Bayesians - © xkcd.com
© xkcd.com (adapted from the original to fit the dimensions of this page)

No, not a polemic about climate change, but instead some observations on the influence of statistical methods on statistical findings. It is clearly a truism to state that there are multiple ways to skin a cat; what is perhaps less well-understood is that not all methods of flaying will end up with a cutaneously-challenged feline and some may result in something altogether different.

That was a somewhat opaque introduction, so let me try to shed some light instead. While the points I am going to make here are ones that any statistical practitioner would (or certainly should) know well, they are perhaps less widely appreciated by a general audience. I returned to thinking about this area based on an article by Raphael Silberzahn and Eric Uhlmann in Nature [1], but one which I have to admit first came to my attention via The Economist [2].

Messrs Silberzahn and Uhlmann were propounding a crowd-sourced approach to statistical analysis in science, in particular the exchange of ideas about a given analysis between (potentially rival) groups before conclusions are reached and long before the customary pre- and post-publication reviews. While this idea may well have a lot of merit, I’m instead going to focus on the experiment that the authors performed, some of its results and their implications for more business-focussed analysis teams and individuals.

The interesting idea here was that Silberzahn and Uhlmann provided 29 different teams of researchers with the same data set and asked them to investigate the same question. The data set was a sporting one covering the number of times that footballers (association in this case, not American) were dismissed from the field of play by an official. The data set included many attributes, from the role of the player, to when the same player and official encountered each other, to the demographics of the players themselves. The question was – do players with darker skins get dismissed more often than their fairer teammates?

Leaving aside the socio-political aspects that this problem brings to mind, the question is one that, at least on first glance, looks as if it should be readily susceptible to statistical analysis and indeed the various researchers began to develop their models and tests. A variety of methodologies was employed, “everything from Bayesian clustering to logistic regression and linear modelling” (the authors catalogued the approaches as well as the results) and clearly each team took decisions as to which data attributes were the most significant and how their analyses would be parameterised. Silberzahn and Uhlmann then compared the results.

Below I’ll simply repeat part of their comments (with my highlighting):

Of the 29 teams, 20 found a statistically significant correlation between skin colour and red cards […]. The median result was that dark-skinned players were 1.3 times more likely than light-skinned players to receive red cards. But findings varied enormously, from a slight (and non-significant) tendency for referees to give more red cards to light-skinned players to a strong trend of giving more red cards to dark-skinned players.

This diversity in findings is neatly summarised in the following graph (please click to view the original on Nature’s site):

Nature Graph

© NPG. Used under license 3741390447060 Copyright Clearance Center

To be clear here, the unanimity of findings that one might have expected from analysing a pretty robust and conceptually simple data set was essentially absent. What does this mean, aside from potentially explaining some of the issues with repeatability that have plagued some parts of science in recent years?

Well the central observation is that precisely the same data set can lead to wildly different insights depending on how it is analysed. It is not necessarily the case that one method is right and the others wrong; indeed, in reviewing the experiment, the various research teams agreed that the approaches taken by others were also valid. Instead it is extremely difficult to disentangle results from the algorithms employed to derive them. In this case methodology had a bigger impact on findings than any message lying hidden in the data.
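To illustrate the mechanism – with invented numbers, not the actual footballing data – here is a minimal Python sketch of how two perfectly defensible analytical choices can return materially different answers from identical records. The set-up has two hypothetical leagues whose referees differ in strictness and whose player populations differ in the attribute being studied; by construction there is no causal effect at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical confounded world: league 1 has stricter referees AND a
# different mix of the attribute under study; the attribute itself has
# no effect on dismissals whatsoever
league = rng.integers(0, 2, size=n)       # 0 = lenient league, 1 = strict league
attribute = rng.random(n) < np.where(league == 1, 0.6, 0.2)
dismissed = rng.random(n) < np.where(league == 1, 0.06, 0.02)

# Analyst A: crude comparison across the whole data set, ignoring league
crude = dismissed[attribute].mean() / dismissed[~attribute].mean()

# Analyst B: compare within each league, then inspect the stratified ratios
within = [dismissed[attribute & (league == g)].mean()
          / dismissed[~attribute & (league == g)].mean() for g in (0, 1)]

print(f"Crude rate ratio:     {crude:.2f}")                       # about 1.5
print(f"Within-league ratios: {within[0]:.2f}, {within[1]:.2f}")  # both about 1.0
```

Both analysts have performed a legitimate calculation on exactly the same records, yet one reports a 50% elevated rate and the other essentially none; the difference lies entirely in a modelling decision about what to adjust for.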

Here we are talking about leading scientific researchers, whose prowess in statistics is a core competency. Let’s now return to the more quotidian world of the humble data scientist engaged in helping an organisation to take better decisions through statistical modelling. Well the same observations apply. In many cases, insight will be strongly correlated with how the analysis is performed and the choices that the analyst has made. Also, it may not be that there is some objective truth hidden in a dataset, instead only a variety of interpretations of this.

Now this sounds like a call to abandon all statistical models. Nothing could be further from my point of view [3]. However caution is required. In particular those senior business people who place reliance on the output of models, but who maybe do not have a background in statistics, should perhaps ask themselves whether what their organisation’s models tell them is absolute truth, or instead simply more of an indication. They should also ask whether a different analysis methodology might have yielded a different result and thus dictated different business action.

At the risk of coming over all Marvel, the great power of statistical modelling comes with great responsibility.

In 27 years in general IT and 15 in the data/information space (to say nothing of my earlier Mathematical background) I have not yet come across a silver bullet. My strong suspicion is that they don’t exist. However, I’d need to carry out some further analysis to reach a definitive conclusion; now what methodology to employ…?
 


 
Notes

 
[1]
 
Crowdsourced research: Many hands make tight work. Raphael Silberzahn & Eric L. Uhlmann. Nature. 07 October 2015.
 
[2]
 
On the other hands – Honest disagreement about methods may explain irreproducible results. The Economist. 10th October 2015.
 
[3]
 
See the final part of my trilogy on using historical data to justify BI investments for a better representation of my actual views.

Wanted – Chief Data Officer

Your organisation's data wants you

My updates here have been notable mostly for their infrequency in recent years. Indeed, in the period since my last substantive piece (Forming an Information Strategy: Part III – Completing the Strategy), I have had time to become a father again: a very welcome event, but undeniably not one conducive to blogging.

Readers who may recall a more prolific period in my writing on this site will also probably remember that I have had a long association with the information-centric seminars run by IRM(UK). They have been kind enough to ask me to present three times at their Data Warehousing / Business Intelligence (DW/BI) events and once at their Master Data Management / Data Governance (MDM/DG) one.

Enterprise Data & BI 2015

In a sign of the times, IRM DW/BI has now morphed into the IRM Enterprise Data / Business Intelligence (ED/BI) seminar. I will be returning this week, not to present, but to form part of a panel discussing “Beyond Big Data, Delivering Real Time Actionable Business Intelligence to Your Organisation”. This panel will be chaired by Mike Simons, associate editor of a number of IDG organs such as CIO.com and ComputerWorldUK.com.

However, plugging this seminar is not my main reason for putting fingertip to keyboard today. The last few years have seen the rise of a new member of the CxO pantheon, the Chief Data Officer (or CDO). It is a toss-up whether this role, or that of Data Scientist (“the sexiest job of the 21st century” according to the less sober than usual Harvard Business Review), has had more column inches devoted to it in recent times. Perhaps in reflection of this, IRM have also asked me to attend the co-located CDO Executive Forum this week. While it can be argued that elements of what a CDO does have been done by people with other titles for many years (I have been one of them), the profile of this role is indisputably a new development and one worth commenting on.

In a way, the use of “data” in this title is somewhat misleading. In my experience, CDOs don’t focus exclusively on data (the atomic level), but on the process of turning this into information (basic molecules created from atoms), from which can be drawn insight (more complex molecules containing many sub-units) and which – if the process is to have any value at all – has finally to lead to some form of action [1]. Of course part of the idea of Data Scientists is to go straight from data to insight, but this is less straightforward than might be thought and clearly doesn’t obviate the need for a complementary and more structured approach [2].
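As a deliberately toy sketch of that journey (in Python, with every name and threshold below being hypothetical), the chain might look like this:

```python
# A toy sketch of the data -> information -> insight -> action journey.
# Every name and threshold here is hypothetical.

# Data: raw, atomic transaction records.
transactions = [
    {"customer": "A", "amount": 120.0},
    {"customer": "A", "amount": -40.0},
    {"customer": "B", "amount": 15.0},
    {"customer": "B", "amount": 10.0},
]

# Information: atoms combined into simple molecules - net spend per customer.
net_spend = {}
for t in transactions:
    net_spend[t["customer"]] = net_spend.get(t["customer"], 0.0) + t["amount"]

# Insight: a more complex structure - customers whose net spend suggests
# they may be at risk of lapsing.
at_risk = [c for c, spend in net_spend.items() if spend < 50.0]

# Action: the step without which the rest of the journey has no value.
for customer in at_risk:
    print(f"Trigger retention review for customer {customer}")
```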

Further food for thought for me has been some interesting observations on James Taylor’s blog [3] about the relationship between CDOs and Chief Analytics Officers (the latter perhaps echoing my former ideas around the role of Chief Business Intelligence Officer). He covers whether these should be separate roles, or combined into one, drawing the conclusion that it may depend on the maturity of the organisation.

Looking around the market, it seems that CDOs are a varied bunch and come from a number of different backgrounds. I began to think about what might be the core requirements for success in such a role. This led into what can be viewed as a rough and ready recruitment advert. I present my initial ideas below and would welcome any suggestions for change or refinement.

 
Requirements for a CDO:

  1. A desire to do the job full time and not as an add on to existing responsibilities
  2. A background steeped in the journey from data → information → insight → action
  3. A firm grasp of the strategy development process
  4. A thought leader with respect to data and information
  5. Strong leadership credentials
  6. An excellent communicator
  7. A structured approach
  8. The ability to influence as well as to set direction
  9. Highly numerate (likely with a postgraduate degree in the Physical Sciences or Mathematics) and thus able to commune with analytical staff
  10. Equally a strong understanding of technology and its role in driving success
  11. Experience of implementing Data Governance and improving Data Quality
  12. Experience of delivering and embedding enhanced Information Capabilities

A background in one or more of the following and exposure to / understanding of the majority:

  1. Strategy
  2. Marketing
  3. Commercial
  4. Analytical business disciplines (e.g. Actuarial Science in Insurance, Customer Insight in Retail)
  5. Accounting – not least from a reconciliation point of view
  6. Statistical Analysis
  7. Technology (specifically Information Management)

 

Of the above, the desire to be a full-time CDO is crucial. The only point in having a CDO is if an organisation regards its data, and the information it can generate, as strategic assets which require senior stewardship. If they are such assets, then these areas need the whole attention of an executive who is both accountable and has the authority to move things forward. Simply adding data to the plate of an already busy executive in some other area (the CFO, CMO or CIO for example) is highly unlikely to drive a step change in business decision-making.

Of course while the above list is necessary background / expertise for a CDO, ticking these boxes will not in and of itself guarantee success. Instead – at least in my opinion – success is likely to be predicated on some rather less novel approaches to driving business change. It is my aspiration to be a bit more regular in my publications and so I plan to cover some of these (as well as talking more about specifics in the data → information → insight → action journey) in coming weeks and months.
 


 
Notes

 
[1]
 
Perhaps equating this to the tertiary structure of macro-molecules might be stretching the point here, but when has that ever stopped me getting the last drop out of an analogy?
 
[2]
 
I covered some similar ground some time ago in Data – Information – Knowledge – Wisdom.
 
[3]
 
James Taylor on EDM – Chief Analytics Officer Summit Opening Keynotes.

The need for collaboration between teams using the same data in different ways

The Data Warehousing Institute

This article is based on conversations that took place recently on the TDWI LinkedIn Group [1].

The title of the discussion thread posted was “Business Intelligence vs. Business Analytics: What’s the Difference?” and the original poster was Jon Dohner from Information Builders. To me the thread topic is something of an old chestnut and takes me back to the heady days of early 2009. Back then, Big Data was maybe a lot more than just a twinkle in Doug Cutting and Mike Cafarella’s eyes, but it had yet to rise to its current level of media ubiquity.

Nostalgia is not going to be enough for me to start quoting from my various articles of the time [2] and neither am I going to comment on the pros and cons of Information Builders’ toolset. Instead I am more interested in a different turn that discussions took based on some comments posted by Peter Birksmith of Insurance Australia Group.

Peter talked about two streams of work being carried out on the same source data. These are Business Intelligence (BI) and Information Analytics (IA). I’ll let Peter explain more himself:

BI only produces reports based on data sources that have been transformed to the requirements of the Business and loaded into a presentation layer. These reports present KPI’s and Business Metrics as well as paper-centric layouts for consumption. Analysis is done via Cubes and DQ although this analysis is being replaced by IA.

[…]

IA does not produce a traditional report in the BI sense, rather, the reporting is on Trends and predictions based on raw data from the source. The idea in IA is to acquire all data in its raw form and then analyse this data to build the foundation KPI and Metrics but are not the actual Business Metrics (If that makes sense). This information is then passed back to BI to transform and generate the KPI Business report.

I was interested in the dual streams that Peter referred to and, given that I have some experience of insurance organisations and how they work, penned the following reply [3]:

Hi Peter,

I think you are suggesting an organisational and technology framework where the source data bifurcates and goes through two parallel processes and two different “departments”. On one side, there is a more traditional, structured, controlled and rules-based transformation; probably as the result of collaborative efforts of a number of people, maybe majoring on the technical side – let’s call it ETL World. On the other a more fluid, analytical (in the original sense – the adjective is much misused) and less controlled (NB I’m not necessarily using this term pejoratively) transformation; probably with greater emphasis on the skills and insights of individuals (though probably as part of a team) who have specific business knowledge and who are familiar with statistical techniques pertinent to the domain – let’s call this ~ETL World, just to be clear :-).

You seem to be talking about these two streams constructively interfering with each other (I have been thinking about X-ray Crystallography recently). So insights and transformations (maybe down to either pseudo-code or even code) from ~ETL World influence and may be adopted wholesale by ETL World.

I would equally assume that, if ETL World‘s denizens are any good at their job, structures, datasets and master data which they create (perhaps early in the process before things get multidimensional) may make work more productive for the ~ETLers. So it should be a collaborative exercise with both groups focused on the same goal of adding value to the organisation.

If I have this right (an assumption I realise) then it all seems very familiar. Given we both have Insurance experience, this sounds like how a good information-focused IT team would interact with Actuarial or Exposure teams. When I have built successful information architectures in insurance, in parallel with delivering robust, reconciled, easy-to-use information to staff in all departments and all levels, I have also created, maintained and extended databases for the use of these more statistically-focused staff (the ~ETLers).

These databases, which tend to be based on raw data, have become more useful as structures from the main IT stream (ETL World) have been applied to these detailed repositories. This might include joining key tables so that analysts don’t have to repeat this themselves every time, doing some basic data cleansing, or standardising business entities so that different data can be more easily combined. You are of course right that insights from ~ETL World often influence the direction of ETL World as well. Indeed often such insights will need to move to ETL World (and be produced regularly and in a manner consistent with existing information) before they get deployed to the wider field.

Now where did I put that hairbrush?

It is sort of like a research team and a development team, but where both “sides” do research and both do development, but in complementary areas (reminiscent of a pair of entangled electrons in a singlet state, each of whose spin is both up and down until they resolve into one up and one down in specific circumstances – sorry again I did say “no more science analogies”). Of course, once more, this only works if there is good collaboration and both ETLers and ~ETLers are focussed on the same corporate objectives.

So I suppose I’m saying that I don’t think – at least in Insurance – that this is a new trend. I can recall working this way as far back as 2000. However, what you describe is not a bad way to work, assuming that the collaboration that I mention is how the teams work.

I am aware that I must have said “collaboration” 20 times – your earlier reference to “silos” does however point to a potential flaw in such arrangements.

Peter

PS I talk more about interactions with actuarial teams in: BI and a different type of outsourcing

PPS For another perspective on this area, maybe see comments by @neilraden in his 2012 article What is a Data Scientist and what isn’t?

I think that the perspective of actuaries having been data scientists long before the latter term emerged is a sound one.

I couldn't find a suitable image from Sesame Street :-o

Although the genesis of this thread dates to over five years ago (an aeon in terms of information technology), I think that – in the current world, where some aspects of the old divide between technically savvy users [4] and IT staff with strong business knowledge [5] have begun to disappear – there is both an opportunity and a threat for businesses. If silos develop and the skills of a range of different people are not combined effectively, then we have a situation where:

| ETL World | + | ~ETL World | ≥ | ETL World ∪ ~ETL World |

If instead collaboration, transparency and teamwork govern interactions between different sets of people, then the inequality flips to become:

| ETL World | + | ~ETL World | < | ETL World ∪ ~ETL World |

Perhaps the way that Actuarial and IT departments work together in enlightened insurance companies points the way to a general solution for the organisational dynamics of modern information provision. Maybe also the, by now somewhat venerable, concept of a Business Intelligence Competency Centre, a unified team combining the best and brightest from many fields, is an idea whose time has come.
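By way of a concrete, if highly simplified, sketch of this division of labour (in Python/pandas, with all tables, columns and values being hypothetical), consider how ETL World’s conformed structures can make ~ETL World’s raw data more tractable:

```python
# A simplified sketch of ETL World artefacts helping ~ETL World analysts.
# All tables, columns and values are hypothetical.
import pandas as pd

# Raw source data, as ~ETL World likes it: untransformed claim records.
raw_claims = pd.DataFrame({
    "policy_id": [101, 102, 101, 103],
    "claim_amount": [1200.0, 350.0, 99.0, 4100.0],
    "line_code": ["MOT", "HH", "mot", "HH "],   # inconsistently coded
})

# ETL World contributions: basic cleansing rules and a conformed
# dimension, built once and reused rather than re-derived per analyst.
raw_claims["line_code"] = raw_claims["line_code"].str.strip().str.upper()
line_of_business = pd.DataFrame({
    "line_code": ["MOT", "HH"],
    "line_name": ["Motor", "Household"],
})

# ~ETL World: free-form analysis on raw data, but with the keys already
# joined and the business entities already standardised.
claims = raw_claims.merge(line_of_business, on="line_code", how="left")
print(claims.groupby("line_name")["claim_amount"].agg(["count", "mean"]))
```

The point of the sketch is simply that the cleansing rules and the conformed dimension come from the structured stream and are then reused by the analytical one, which is the collaboration I keep banging on about.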
 
 
Notes

 
[1]
 
A link to the actual discussion thread is provided here. However, you need to be a member of the TDWI Group to view it.
 
[2]
 
Anyone interested in ancient history is welcome to take a look at the following articles from a few years back:

  1. Business Analytics vs Business Intelligence
  2. A business intelligence parable
  3. The Dictatorship of the Analysts
 
[3]
 
I have mildly edited the text from its original form and added some new links and new images to provide context.
 
[4]
 
Particularly those with a background in quantitative methods – what we now call data scientists.
 
[5]
 
Many of whom seem equally keen to also call themselves data scientists.