An in-depth Interview with Allan Engelhardt about Analytics

Cybaea

Allan Engelhardt

PJT Today’s interview is with Allan Engelhardt, co-founder and principal of insights and analytics consultancy Cybaea. Allan and I know each other from when we both worked at Bupa. I was interested to understand the directions that he has been pursuing in recent years.
PJT Allan, we know each other well, but could you provide a pen picture of your career to date and the types of work that you have been engaged in?
AE I started out in experimental physics working on (very) big data from CERN, the large research lab near Geneva, and worked there after getting my degree. Then, like many other physicists, I was recruited into financial services, in my case to do risk management. From there I moved to a consultancy helping businesses make use of bleeding-edge technology, and then on to CRM and customer loyalty. This last move was important for me, allowing me to go beyond the technology and engage as much with commercial business strategy and operations.

In 2002 a couple of us left the consultancy to help customers move beyond transactional infrastructure, which is really what ‘CRM’ was about at the time, and to create high-value solutions on top of it, along with the organizational and commercial ownership of the customer needed to consistently drive value from data. In doing so we invented the concept of Customer Value Management, which is now universally implemented by telcos across the world and increasingly adopted by other industries.

PJT There is no ISO definition of either insight or analytics. As an expert in these fields, can I ask you to offer your take on the meaning of these terms?
AE To me analytics is about finding meaning from information and data, while insights is about understanding the business opportunities in that meaning. But different people use the terms differently.
PJT I must give you an opportunity to both explain what Cybaea does and how the name came about.
AE At Cybaea we are passionate about value creation and commercial results. We have been called ‘Management consultants with a black belt in data’ and we help organizations identify and act upon data-driven opportunities in the areas of:

Cybaea offering

  1. Customer Value Management (CVM), including acquisition, churn, cross-sell, segmentation, and more, across online and offline channels and industries, both B2C and B2B.
  2. Customer Experience and Advocacy, including Net Promoter System and Net Promoter Economics, customer journey optimization, and customer experience.
  3. Innovation and Growth, including data-driven product and proposition development, data monetisation, and distribution and sales strategy.

For our customers, CVM projects typically deliver an additional 5% of EBITDA growth annually, which you can measure very robustly because much of it is direct marketing. Experience and Advocacy projects typically deliver in the region of a 20% EBITDA improvement to our clients, but this is harder to measure accurately because you must go above the line for this level of impact. And for Innovation and Growth, the sky is the limit.

As for the name, we founded the company in 2002 and wanted a short domain name that was a real word. It turned out to be difficult to find an available, short ‘.com’ at the peak of the dot-bomb era! We settled on ‘cybaea’ which my Latin dictionary translated as ‘trading vessel’; historically, it was a type of merchant ship of Greek origin, common in the Mediterranean, which Cicero describes as “most beautiful and richly adorned”. We always say we want to change the name, but it never happens; I guess if it was good enough for Cicero, then it is good enough for us.

PJT While at Bupa you led work that was very beneficial to the organisation and which is now the subject of a public Cybaea case study. Can you tell readers a bit more about this?
AE Certainly, and the case study is available from Cybaea for anyone who wants to read more.

This was working with Bupa Global, a Bupa business unit that primarily provides international private medical insurance for 2 million customers living in over 195 different countries. Towards the end of 2013, Bupa Global set out on a strategic journey to deliver sustained growth. A key element of this was the design and launch of a completely new set of products and propositions, replacing the existing portfolio, with the objective of attracting and servicing new customer segments, complying with changing regulation and meeting customer expectations.

The strategic driver was therefore very much in the Innovation and Growth space we outlined above, and I joined Bupa’s global Leadership Team to create and lead the commercial insights function that would support this change with a deep understanding of the target customers and the markets in which they live. Additionally, Bupa had very high ambitions for its Net Promoter programme (Experience and Advocacy), where we delivered the most advanced installation across the global business, and for Customer Value Management we demonstrated a nearly 2% reduction in the Claims line (EBITDA) from one single project.

For the new propositions, we initially interviewed over 3,000 individuals on five continents to understand value and purchase drivers, researched 195 markets to size demand across all customer segments, and further deep-dived into key markets to understand the competitors and their products, features, and prices, as well as the regulatory environment and distribution options. This was supported by a very practical Customer Lifetime Value model, which we developed.

Suffice to say that in two years we had designed and implemented a completely new set of propositions and taken them live in more than twenty priority markets where they replaced the old products.

The strategic and commercial results were clearly delivered. But when I asked our CEO what he thought was the main contribution of the team and the new insights function, he focused on trust: “Every major strategic decision we made was backed by robust data and deep insights in which the executive team had full confidence.”

In a period of change, trust is perhaps the key currency. Trust that you are doing the right things for the right reasons, and the ability to explain why that is. This is key to getting everybody behind the changes that need to happen. This is what the scientific method applied to data, analytics, and insights can bring to a commercial organization, and it inspires me to continue what we are doing.

PJT We have both been engaged in what is now generally called the Data arena for many years, and some aspects of the technology employed have changed a lot during this time. What do you think modern technology enables today that was harder to achieve in the past, and are there any areas where things are much the same as they were a decade or more ago?
AE Ever since the launch of the Amazon EC2 cloud computing service in late 2006 [1], data storage and processing infrastructure has been easily and cheaply available to everybody for most practical workloads. So, for ten years you have not had any excuse for not getting your data in order and doing serious analysis.

The main trend that excites me now is the breakthroughs happening in Deep Learning and Natural Language Processing, expanding the impact of data into completely new areas. This is great for consumers and for those companies that are at the leading edge of analytics and insights. For other organizations, however, who are struggling to deliver value from data, it means that the gap between where they are and best practice is widening exponentially, which is a big worry.

PJT Taking technology to one side, what do you think are the main factors in successfully generating insight and developing analytical capabilities that are tightly coupled with value generation?
AE Two things are always at the forefront of my mind. The first is kind of obvious, namely to start with the business value you are trying to create and work backwards from that. Too often we see people start with the data (‘I’ve got to clean all the data in my warehouse first!’), the technology (‘We need some Big Data infrastructure!’), or the analytics (‘We need a predictive churn model!’). That is putting the cart before the horse. Not that these things are unimportant; rather, there are almost certainly a lot of opportunities you could execute right now to generate real and measurable business value and drive a faster return on your investments.

The second is not to underestimate the business change that is needed to exploit the insights. Analytical leaders have an appetite for change, and they plan and resource accordingly. They are really clear that data and models are only part of the project to deliver the value.

PJT Looking at the other side of the coin, what are the pitfalls to look out for and do you have any recommendations for avoiding them?
AE The flip-sides of the two points previously mentioned are obvious pitfalls: not starting from the business change and value you are trying to create. And it is not easy: great data scientists are not always great commercially-minded business people, and so you need the right kind of skills to bridge that gap. McKinsey talks of ‘business translators who combine data savvy with industry and functional expertise’, which is a helpful summary [2]. Less helpfully, they also note that these people are nearly impossible to find, so you may need to grow them internally.

Which brings us to a second pitfall. When thinking about generating value from data, many want to do it all themselves. And I understand why: after all, data may well be a strategic asset for your organization.

But when you recruit, you should be clear in your mind whether you are recruiting to deliver the change of creating the first models and changed business processes, or to sustain the change by keeping the models current and incrementally improving the insights and processes. These two outcomes require people with quite different skills and vastly different temperaments.

We call them Explorers versus Farmers.

For the first, you want commercially-focused business people who can drive change in the organization; who can make things work quickly, whether that is data, analytics, or business processes, to demonstrate value; and who are supremely comfortable with uncertainties and unknowns.

For the second, you want people who are technically skilled to deliver and maintain the optimal stable platform and who love doing incremental improvements to technology, data, and business processes.

Explorers versus Farmers. Call them what you will, but note that they are different.

PJT Many companies are struggling with how to build analytical teams. Do they grow their own talent, do they hire numerate graduates or post graduates, do they seek to employ highly skilled and experienced individuals, do they form partnerships with external parties, or is a mixture of all of these approaches sensible? What approaches do you see at Cybaea clients adopting?
AE We are mostly seeing one of two approaches: one is to do nothing and soldier on as always, relying on traditional business intelligence, while the other is to hire (usually highly technical) people to build an internal team. Neither is optimal in getting to the value.

The do-nothing approach can make sense. Not, however, when it is adopted because management fears change (change will happen, regardless) or because they feel they don’t understand data (everybody understands data if it is communicated well). Those companies are just leaving money on the table: every organization has quick wins that can deliver value in weeks.

But it may be that you have no capacity for change and have made the informed decision that data and analytics must wait, reflecting the commercial reality. The key here is ‘informed’, and the follow-on question is whether there are other ways that the company can realise some of the value from data right now.

The second approach at least recognises the value potential of data and aims to move the organization towards realising that value. But it is back to those ‘business translator’ roles we discussed before: making sure you have them, as well as making sure the business is aligned around the change that will be needed. Making money from data is a business function, not a technical one, and the function that drives the change must sit within the commercial business, not in IT or some other department that is still an arm’s-length support function.

We see the best organizations, the analytical leaders, employing flexible approaches. They focus on the outcomes and they have a sense of urgency driven from the top. They make it work.

PJT I know that a concept you are very interested in is Analytics as a Service (AaaS). Can you tell readers some more about what this means and also the work that Cybaea is doing in this area?
AE There is a war for analytical talent, and a ‘winner takes all’ dynamic is emerging, with medium-sized enterprises especially losing out. Good people want to work with good people, which generates a strong network effect giving the advantage to large organizations with larger analytical teams and more variety of applications. Leading firms have depth of analytical talent and can recruit, trial, and filter more candidates, leaving them with the best talent.

Our analytics-as-a-service offering is for organizations of any size who want to realise value from data and insights right now, but who are not yet ready to build their own internal teams. We partner with the commercial teams to be their (commercial) insights function and deliver not just reports but real business change. Customers can pay monthly, pay for results, or we can do a build-operate-transfer model.

One of our first projects was with a small telco. They were too small to maintain a strong analytical team in-house, purely because of scale. We set up a monthly workshop with the commercial Marketing team. We analysed their data offline and used the time for a structured conversation about the new campaigns and the changes to the web site they should implement that month. We would point them to our reports and dashboards, which had models, graphs, t-tests, and p-values in abundance, but we would focus the conversation on moving the business forward.

The following month we would repeat and identify new campaigns and new changes. After six months, they had more than 20 highly effective and precisely targeted campaigns running, and we handed over the maintenance (‘farming’) of the models to their IT teams. It is a model that works well across industries.
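
By way of illustration, a minimal sketch of the kind of targeted-versus-control comparison described above, using invented numbers rather than any real Cybaea or client data, might look something like this:

```python
# Hypothetical illustration only: comparing spend in a targeted campaign
# group against a held-out control group with Welch's two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented monthly spend per customer for the two groups
target = rng.normal(loc=54.0, scale=12.0, size=2_000)   # received the campaign
control = rng.normal(loc=50.0, scale=12.0, size=2_000)  # held out

t_stat, p_value = stats.ttest_ind(target, control, equal_var=False)

uplift = target.mean() - control.mean()
print(f"Estimated uplift per customer: {uplift:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```

The p-value indicates whether the observed uplift is plausibly due to chance; the business conversation then centres on whether the uplift justifies rolling the campaign out more widely.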

PJT Do you have a view on how the insights and analytics field is likely to change in coming years? Are there any emerging areas which you think readers should keep an eye on?
AE Many people are focused on the data explosion that is often called the ‘Internet of Things’, but which more broadly means that more data gets generated and we consume more data for our analytics. I do think this opens up tremendous opportunities for many businesses, and technically I am excited to get back to processing live event streams as they happen.

But practically, we are seeing more success from deep learning. We have found that once an organization successfully implements one solution, whether artificial intelligence or complex natural language processing, then they want more. It is that powerful and that transformational, and breakthroughs in these fields are further expanding the impact into completely new areas. My advice is that most organizations should at least trial what these approaches can do for them, and we have set up a sister organization to develop and deliver solutions here.

PJT What are your plans for Cybaea in coming months?
AE I have two main priorities. First, I have our long-standing partner from India in London for a couple of months to figure out how we scale in the UK. This is for the analytics-as-a-service offering, but also for fast projects to deliver insights or analytical tools and applications.

Second, I am looking to identify the right partners and associates for Cybaea here in the UK to allow us to grow the business. We have great assets in our methodologies, clients, and people, and a tremendous opportunity for delivering commercial value from data, so I am very excited for the future.

PJT Allan, I would like to thank you for sharing with us the benefit of your experience and expertise in data matters, both of which have been very illuminating.

Allan Engelhardt can be reached at Allan.Engelhardt@cybaea.net. Cybaea’s website is www.cybaea.net and they have a social media presence on LinkedIn and Google+.
 


 
Disclosure: Neither peterjamesthomas.com Ltd. nor any of its directors have any direct financial interest in either Cybaea or any of the other organisations mentioned in this article.
 
 
Notes

 
[1] https://aws.amazon.com/about-aws/whats-new/2006/08/24/announcing-amazon-elastic-compute-cloud-amazon-ec2—beta/

[2] McKinsey report The Age of Analytics, dated December 2016, http://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/the-age-of-analytics-competing-in-a-data-driven-world


 

 

Knowing what you do not Know

Measure twice, cut once

As readers will have noticed, my wife and I have spent a lot of time talking to medical practitioners in recent months. The same readers will also know that my wife is a Structural Biologist, whose work I have featured before in Data Visualisation – A Scientific Treatment [1]. Some of our previous medical interactions had led to me thinking about the nexus between medical science and statistics [2]. More recently, my wife had a discussion with a doctor which brought to mind some of her own previous scientific work. Her observations about the connections between these two areas have formed the genesis of this article. While the origins of this piece are in science and medicine, I think that the learnings have broader applicability.


So the general context is a medical test, the result of which was my wife being told that all was well [3]. Given that humans are complicated systems (to say the very least), my wife was less than convinced that just because reading X was OK it meant that everything else was also necessarily OK. She contrasted the approach of the physician with something from her own experience and in particular one of the experiments that formed part of her PhD thesis. I’m going to try to share the central point she was making with you without going into all of the scientific details [4]. However to do this I need to provide at least some high-level background.

Structural Biology is broadly the study of the structure of large biological molecules, which mostly means proteins and protein assemblies. What is important is not the chemical make-up of these molecules (how many carbon, hydrogen, oxygen, nitrogen and other atoms they consist of), but how these atoms are arranged to create three-dimensional structures. An example of this appears below:

The 3D structure of a bacterial Ribosome

This image is of a bacterial Ribosome. Ribosomes are miniature machines which assemble amino acids into proteins as part of the chain which converts information held in DNA into useful molecules [5]. Ribosomes are themselves made up of a number of different proteins as well as RNA.

In order to determine the structure of a given protein, it is necessary to first isolate it in sufficient quantity (i.e. to purify it) and then subject it to some form of analysis, for example X-ray crystallography, electron microscopy or a variety of other biophysical techniques. Depending on the analytical procedure adopted, further work may be required, such as growing crystals of the protein. Something that is generally very important in this process is to increase the stability of the protein that is being investigated [6]. The type of protein that my wife was studying [7] is particularly unstable, as its natural home is in the membranes of cells – removed from this supporting structure, these types of proteins quickly degrade.

So one of my wife’s tasks was to better stabilise her target protein. This can be done in a number of ways [8] and I won’t get into the technicalities. After one such attempt, my wife looked to see whether her work had been successful. In her case the relative stability of her protein before and after modification was determined by a test called a Thermostability Assay.

Sigmoidal Dose Response Curve A
© University of Cambridge – reproduced under a Creative Commons 2.0 licence

In the image above, you can see the combined results of several such assays carried out on both the unmodified and modified protein. Results for the unmodified protein are shown as a green line [9] and those for the modified protein as a blue line [10]. The fact that the blue line (and more particularly the section which rapidly slopes down from the higher values to the lower ones) is to the right of the green one indicates that the modification has been successful in increasing thermostability.
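
For the technically curious, the kind of comparison being made here can be sketched by fitting a sigmoidal curve to each data set and comparing the midpoints (the apparent melting temperature, Tm). The figures below are invented for illustration and are not the actual assay data:

```python
# Minimal sketch: fit a sigmoidal (dose-response style) curve to invented
# thermostability readings for the unmodified and modified protein, then
# compare the midpoints of the two fitted curves.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(temp, top, bottom, tm, slope):
    """Signal falls from `top` to `bottom` as temperature rises, midpoint `tm`."""
    return bottom + (top - bottom) / (1.0 + np.exp((temp - tm) / slope))

temps = np.array([20, 25, 30, 35, 40, 45, 50, 55, 60], dtype=float)
unmodified = np.array([0.98, 0.97, 0.90, 0.70, 0.40, 0.15, 0.05, 0.02, 0.01])
modified   = np.array([0.99, 0.98, 0.96, 0.90, 0.75, 0.45, 0.18, 0.06, 0.02])

p0 = [1.0, 0.0, 40.0, 3.0]  # rough starting guesses for the fit
popt_unmod, _ = curve_fit(sigmoid, temps, unmodified, p0=p0)
popt_mod, _ = curve_fit(sigmoid, temps, modified, p0=p0)

print(f"Apparent Tm: unmodified {popt_unmod[2]:.1f}, modified {popt_mod[2]:.1f}")
# A higher midpoint for the modified protein corresponds to the blue curve
# sitting to the right of the green one, i.e. increased thermostability,
# subject to the caveats discussed in the text.
```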

So my wife had done a great job – right? Well things were not so simple as they might first seem. There are two different protocols relating to how to carry out this thermostability assay. These basically involve doing some of the required steps in a different order. So if the steps are A, B, C and D, then protocol #1 consists of A ↦ B ↦ C ↦ D and protocol #2 consists of A ↦ C ↦ B ↦ D. My wife was thorough enough to also use this second protocol with the results shown below:

Sigmoidal Dose Response Curve B
© University of Cambridge – reproduced under a Creative Commons 2.0 licence

Here we have the opposite finding: the same modification to the protein seems to have now decreased its stability. There are some good reasons why this type of discrepancy might have occurred [11], but overall my wife could not conclude that this attempt to increase stability had been successful. This sort of thing happens all the time and she moved on to the next idea. This is all part of the rather messy process of conducting science [12].

I’ll let my wife explain her perspective on these results in her own words:

In general you can’t explain everything about a complex biological system with one set of data or the results of one test. It will seldom be the whole picture. Protocol #1 for the thermostability assay was the gold standard in my lab before the results I obtained above. Now protocol #1 is used in combination with another type of assay whose efficacy I also explored. Together these give us an even better picture of stability. The gold standard shifted. However, not even this bipartite test tells you everything. In any complex system (be that Biological or a complicated dataset) there are always going to be unknowns. What I think is important is knowing what you can and can’t account for. In my experience in science, there is generally much much more that can’t be explained than can.

Belt and Braces [or suspenders if you are from the US, which has quite a different connotation in the UK!]

As ever, translating all of this to a business context is instructive. Conscientious Data Scientists or business-focussed Statisticians who come across something interesting in a model or analysis will always try (where feasible) to corroborate this by other means; they will try to perform a second “experiment” to verify their initial findings. They will also realise that even two supporting results obtained in different ways will not in general be 100% conclusive. However the highest levels of conscientiousness may be more honoured in the breach than the observance [13]. Also there may not be an alternative “experiment” that can be easily run. Whatever the motivations or circumstances, it is not beyond the realm of possibility that some Data Science findings are true only in the same way that my wife thought she had successfully stabilised her protein before carrying out the second assay.

I would argue that business will often have much to learn from the levels of rigour customary in most scientific research [14]. It would be nice to think that the same rigour is always applied in commercial matters as in academic ones. Unfortunately experience would tend to suggest the contrary is sometimes the case. However, it would also be beneficial if people working on statistical models in industry went out of their way to stress not only what phenomena these models can explain, but also what they are unable to explain. Knowing what you don’t know is the first step towards further enlightenment.
 


 
Notes

 
[1] Indeed this previous article had a sub-section titled Rigour and Scrutiny, echoing some of the themes in this piece.

[2] See More Statistics and Medicine.

[3] As in the earlier article, apologies for the circumlocution. I’m both looking to preserve some privacy and save the reader from boredom.

[4] Anyone interested in more information is welcome to read her thesis which is in any case in the public domain. It is 188 pages long, which is reasonably lengthy even by my standards.

[5] They carry out translation which refers to synthesising proteins based on information carried by messenger RNA, mRNA.

[6] Some proteins are naturally stable, but many are not and will not survive purification or later steps in their native state.

[7] G Protein-coupled Receptors or GPCRs.

[8] Chopping off flexible sections, adding other small proteins which act as scaffolding, getting antibodies or other biological molecules to bind to the protein and so on.

[9] Actually a sigmoidal dose-response curve.

[10] For anyone with colour perception problems, the green line has markers which are diamonds and the blue line has markers which are triangles.

[11] As my wife writes [with my annotations]:

A possible explanation for this effect was that while T4L [the protein she added to try to increase stability – T4 Lysozyme] stabilised the binding pocket, the other domains of the receptor were destabilised. Another possibility was that the introduction of T4L caused an increase in the flexibility of CL3, thus destabilising the receptor. A method for determining whether this was happening would be to introduce rigid linkers at the AT1R-T4L junction [AT1R was the protein she was studying, angiotensin II type 1 receptor], or other placements of T4L. Finally AT1R might exist as a dimer and the addition of T4L might inhibit the formation of dimers, which could also destabilise the receptor.

© University of Cambridge – reproduced under a Creative Commons 2.0 licence

[12] See also Toast.

[13] Though to be fair, the way that this phrase is normally used today is probably not what either Hamlet or Shakespeare intended by it back around 1600.

[14] Of course there are sadly examples of specific scientists falling short of the ideals I have described here.

 

 

Elephants’ Graveyard?

Elephants' Graveyard
 
Introduction

My young daughter is very fond of elephants [1], as indeed am I, so I need to tread delicately here. In recent years, the world has been consumed with Big Data Fever [2] and this has been intimately entwined with Hadoop of yellow elephant fame. Clearly there are very many other products such as Apache [insert random word here] [3] which are part of the Big Data ecosystem, but it is Hadoop that has become synonymous with Big Data and indeed conflated with many of the other Big Data technologies.

Hadoop the Elephant

I have seen some successful and innovative Big Data projects and there are clearly many benefits associated with the cluster of technologies that this term is used to describe. There are also any number of paeans to this new paradigm a mouse click, or finger touch, away [4]; indeed I have featured some myself in these pages [5]. However, what has struck me of late is that a few less positive articles have been appearing. I come to neither bury, nor praise Hadoop [6], but merely to reflect on this development. I will also touch on recent rumours that one of the Apache tribe [7], specifically Spark, may be seeking an amicable divorce from Hadoop proper [8].

In doing this, I am going to draw on two articles in particular. The first is Hadoop Is Falling by George Hill (@IE_George) on The Innovation Enterprise. The second is The Hadoop Honeymoon is Over [9] by Martyn Richard Jones (@GoodStratTweet) on LinkedIn.

However, before I leap into analysing other people’s thoughts I will present some of my own [very basic] research, care of Google Trends.
 
 
Eine Kleine Nachtgoogling

Below I display two charts (larger versions are but a click away) tracking the volume of queries in the 2014-16 period for two terms: “hadoop” and “apache spark” [10]. On the assumption that California tends to lead trends more than it follows, I have focussed in on this part of the US.

Hadoop searches

Spark searches

Note on axes: On this blog I have occasionally spoken about the ability of images to conceal information as well as to reveal it [11]. Lest I am accused of making the same mistake, normalising both sets of data in the above graphs could give the misleading impression that the peak volume of queries for “hadoop” and “apache spark” are equivalent. This is not so. The maximum number of weekly queries for “apache spark” in the three years examined is just under a fifth of the maximum number of queries for “hadoop” [12]. So, applying a rather broad rule of thumb, people searched for “hadoop” around five times more often. However, it was not the absolute number of queries that I was interested in, but how these change over time, so I think the approach I have taken is justified. If I had not normalised, it would have been difficult to pick out the “apache spark” trend in a combined graph.
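
To make the normalisation point concrete, here is a minimal sketch with invented figures (not the actual Google Trends data) showing how scaling each series to its own maximum hides the difference in absolute volumes:

```python
# Hypothetical weekly query counts, normalised independently per term,
# mimicking the way Google Trends scales each series to a 0-100 range.
import pandas as pd

weeks = pd.date_range("2014-01-05", periods=5, freq="W")
raw = pd.DataFrame(
    {
        "hadoop":       [500, 480, 450, 420, 400],   # invented absolute volumes
        "apache spark": [ 60,  70,  80,  90, 100],
    },
    index=weeks,
)

# Normalise each term independently to a 0-100 scale
normalised = raw / raw.max() * 100

print(normalised.round(1))
# Both columns now peak at 100, which makes the trends easy to compare,
# but the roughly five-fold difference in absolute volume is no longer visible.
```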

The obvious inference to be drawn is that searches for Hadoop (in California at least) are declining and those for Spark are increasing; though maybe with a bit of a fall-off in volume recently. Making a cast-iron connection between trends in search and trends in industry is probably a mistake [13], but the discrepancies in the two trends are at least suggestive. In the Application Development Trends article I reference (note [8]) the author states:

The Spark momentum is so great that the technology — originally positioned as a replacement for MapReduce with added real-time capabilities and in-memory processing — could break free from the reins of the Hadoop universe and become its own independent tool.

This chimes with the AtScale findings I also reported here (note [5]), which included the observation that:

Organizations who have deployed Spark in production are 85% more likely to achieve value.

One conclusion (albeit a rather tentative one) could be that while Spark is on an upward trajectory and perhaps likely to step out of the Hadoop shadow, interest in Hadoop itself is at best plateauing and possibly declining. It is against this backdrop that I’ll now consider the two articles I introduced earlier.
 
 
Trouble with Trunks

Bad Elephant!

In his article, George Hill begins by noting that:

[Hadoop] adoption appears to have more or less stagnated, leading even James Kobielus [@jameskobielus], Big Data Evangelist at IBM Analytics [14], to claim that “Hadoop declined more rapidly in 2016 from the big-data landscape than I expected” [15]

In searching for reasons behind this apparent stagnation, he hypothesises that:

[A] cause for concern is simply that one man’s big data is another man’s small data. Hadoop is designed for huge amounts of data, and as Kashif Saiyed [@rizkashif] wrote on KD Nuggets [16] “You don’t need Hadoop if you don’t really have a problem of huge data volumes in your enterprise, so hundreds of enterprises were hugely disappointed by their useless 2 to 10TB Hadoop clusters – Hadoop technology just doesn’t shine at this scale.”

Most companies do not currently have enough data to warrant a Hadoop rollout, but did so anyway because they felt they needed to keep up with the Joneses. After a few years of experimentation and working alongside genuine data scientists, they soon realize that their data works better in other technologies.

Martyn Richard Jones weighs in on this issue in more provocative style when he says:

Hadoop has grown, feature by feature, as a response to specific technical challenges in specific and somewhat peculiar businesses. When it all kicked off, the developers weren’t thinking about creating a new generic data management architecture, one for handling massive amounts of data. They were thinking of how to solve specific problems. Then it rather got out of hand, and the piecemeal scope grew like topsy as did the multifarious ways to address the product backlog.

and aligns himself with Kashif Saiyed’s comments by adding:

It also turns out that, in spite of the babbling of the usual suspects, Big Data is not for everyone, not everyone needs it, and even if some businesses benefit from analysing their data, they can do smaller Big Data using conventional rock-solid, high-performance and proven database technologies, well-architected and packaged technologies that are in wide use.

I have been around the data space long enough to have seen a number of technologies emerge, each of which was touted as solving all known problems. These included Executive Information Systems, Relational Databases, Enterprise Resource Planning, Data Warehouses, OLAP, Business Intelligence Suites and Customer Relationship Management systems. All are useful tools, I have successfully employed each of them, but at the end of the day, they are all technologies and technologies don’t sort out problems, people do [17]. Big Data enables us to address some new problems (and revisit some old ones) in novel ways and lets us do things we could not do before. However, it is no more a universal panacea than anything that has preceded it.

Gartner Hype Cycle

Big Data seems to have disappeared from the Gartner hype cycle in 2016, perhaps because it is now viewed as having become mainstream. However, back in August 2015, it was heading downhill fast towards the rather cataclysmically named Trough of Disillusionment [18]. This reflects the unavoidable fact that no technology ever lives up to its initial hype. Instead, after a period of being over-sold and an inevitable reaction to this, technologies settle down and begin to be actually useful. It seems that Gartner believes that Big Data has already gone through this rite of passage; they may well be correct in this assertion.

Hill references this himself in one of his closing comments, while ending on a more positive note:

[…] it is not the platform in itself that has caused the current issues. Instead it is perhaps the hype and association of Big Data that has done the real damage. Companies have adopted the platform without understanding it and then failed to get the right people or data to make it work properly, which has led to disillusionment and its apparent stagnation. There is still a huge amount of life in Hadoop, but people just need to understand it better.

For me there are loud and clear echoes of other technologies “failing” in the past in what Hill says [19]. My experience in these other cases is that, while technologies may not have lived up to implausible initial claims, when they do genuinely fail, it is often for reasons that are all too human [20].
 
 
Summary

A racquet is a tool, right?

I had considered creating more balance in this article by adding a section making the case for the defence. I then realised that this was actually a pretty pointless exercise. Not because Hadoop is in terminal decline and denial of this would be indefensible. Not because it must be admitted that Big Data is over-hyped and under-delivers. Cases could be made that both of those statements are either false, or at least do not tell the whole story. However I think that arguments like these are the wrong things to focus on. Let me try to explain why.

Back in 2009 I wrote an article with the title A bad workman blames his [Business Intelligence] tools. This considered the all-too-prevalent practice in rock climbing and bouldering circles of buying the latest and greatest kit and assuming that performance gains would follow from this, as opposed to doing the hard work of training and practice (the same phenomenon occurs in other sports of course). I compared this to BI practitioners relying on technology as a crutch rather than focussing on four much more important things:

  1. Determining what information is necessary to drive key business decisions.
  2. Understanding the various data sources that are available and how they relate to each other.
  3. Transforming the data to meet the information needs.
  4. Managing the embedding of BI in the corporate culture.

I am often asked how relevant my heritage articles are to today’s world of analytics, data management, machine learning and AI. My reply is generally that what has changed is technology and little else [21]. This means that what was relevant back in 2009 remains relevant today; sometimes more so. The only area with a strong technological element in the list of four I cite above is number 3. I would agree that a lot has happened in the intervening years around how this piece can be effected. However, nothing has really changed in the other areas. We may call business questions use cases or user stories today, but they are the same thing. You still can’t really leverage data without attempting to understand it first. The need for good communication about data projects, high-quality education and strong follow-up is just as essential as it ever was.

Below I have taken the liberty of editing my own text, replacing the terms that were prevalent in data and information circles then, with the current ones.

Well if you want people to actually use analytics capabilities, it helps if the way that the technology operates is not a hindrance to this. Ideally the ease-of-use and intuitiveness of the analytical platform deployed should be a plus point for you. However, if you have the ultimate in data technology, but your analytics do not highlight areas that business people are interested in, do not provide information that influences actual decision-making, or contain numbers that are inaccurate, out-of-date, or unreconciled, then they will not be used.

I stand by these sentiments seven or eight years later. Over time the technology and terminology we use both change. I would argue that the essentials that determine success or failure seldom do.

Let’s take the undeniable hype cycle effect to one side. Let’s also discount overreaching claims that Hadoop and its related technologies are Swiss Army Knives, capable of dealing with any data situation. Let’s also set aside the string of technical objections that Martyn Richard Jones raises. My strong opinion is that when Hadoop (or Spark or the next great thing) fails, it will again most likely be a case of bad workmen blaming their tools; just as they did back in 2009.
 


 
Notes

 
[1] As was Doug Cutting’s son back in 2006. Rather than being yellow, my daughter’s favourite pachyderm is blue and called “Dee”; my wife and I have no idea why.

[2] WHO have described the Big Data Fever situation as follows:

Phase 6, the pandemic phase, is characterized by community level outbreaks in at least one other country in a different WHO region in addition to the criteria defined in Phase 5. Designation of this phase will indicate that a global pandemic is under way.

[3] Pick any one of: Cassandra, Flink, Flume, HBase, Hive, Impala, Kafka, Oozie, Phoenix, Pig, Spark, Sqoop, Storm and ZooKeeper.

[4] You could start with the LinkedIn Big Data Channel.

[5] Do any technologies grow up or do they only come of age?

[6] The evil that open-source frameworks do lives after them; The good is oft interred with their source code; So let it be with Hadoop.

[7] Perhaps not very respectful to Native American sensibilities, but hard to resist. No offence is intended.

[8] Spark Poised To Break from Hadoop, Move to Cloud, Survey Says, Application Development Trends.

[9] While functioning at the point that this article was originally written, it now appears that Martyn Richard Jones’s LinkedIn account has been suspended and the article I refer to is no longer available. The original URL was https://www.linkedin.com/pulse/hadoop-honeymoon-over-martyn-jones. I’m not sure what the issue is and whether or not the article may reappear at some later point.

[10] A couple of points here. As “spark” is a word in common usage, the qualifier of “apache” is necessary. By contrast, “hadoop” is not a name that is used for much beyond yellow elephants and so no qualifier is required. I could have used “apache hadoop” as the comparator, but instances of this are less frequent than for just “hadoop”. For what it is worth, although the number of queries for “apache hadoop” is fewer, the trend over time is pretty much the same as for just “hadoop”.

[11] For example:

[12] 18% to be precise.

[13] Though quite a few people make a nice living doing just that.

[14] “IBM Software” in the original article, corrected to “IBM Analytics” here.

[15] Big Data: Main Developments in 2016 and Key Trends in 2017, KD Nuggets.

[16] Why Not So Hadoop?, KD Nuggets.

[17] Though admittedly nowadays people sometimes sort problems by writing algorithms for machines to run, which then come up with the answer.

[18] Which has always felt to me as though it should appear on a papyrus map next to a “here be dragons” legend.

[19] For example as in “Why Business Intelligence projects fail”.

[20] It’s worth counting how many of the risks I enumerate in 20 Risks that Beset Data Programmes are human-centric (hint: it’s a multiple of ten bigger than 15 and smaller than 25).

[21] I might be tempted to answer a little differently when it comes to Artificial Intelligence.

 

 

Bigger and Better (Data)?

Is bigger really better

I was browsing Data Science Central [1] recently and came across an article by Bill Vorhies, President & Chief Data Scientist of Data-Magnum. The piece was entitled 7 Cases Where Big Data Isn’t Better and is worth a read in full. Here I wanted to pick up on just a couple of Bill’s points.

In his preamble, he states:

Following the literature and the technology you would think there is universal agreement that more data means better models. […] However […] it’s always a good idea to step back and examine the premise. Is it universally true that our models will be more accurate if we use more data? As a data scientist you will want to question this assumption and not automatically reach for that brand new high-performance in-memory modeling array before examining some of these issues.

Bill goes on to make several pertinent points including: that if your data is bad, having more of it is not necessarily a solution; that attempting to create a gigantic and all-purpose model may well be inferior to multiple, more targeted models on smaller sub-sets of data; and that there exist specific instances where a smaller data set yields greater accuracy [2]. However I wanted to pick up directly on Bill’s point 6 of 7, in which he also references Larry Greenemeier (@lggreenemeier) of Scientific American.

Bill Vorhies and Larry Greenemeier

6. Sometimes We Get Hypnotized By the Overwhelming Volume of the Data and Forget About Data Provenance and Good Project Design

A few months back I reviewed an article by Larry Greenemeier [3] about the failure of Google Flu Trend analysis to predict the timing and severity of flu outbreaks based on social media scraping. It was widely believed that this Big Data volume of data would accurately predict the incidence of flu but the study failed miserably missing timing and severity by a wide margin.

Says Greenemeier, “Big data hubris is the often implicit assumption that big data are a substitute for, rather than a supplement to, traditional data collection and analysis. The mistake of many big data projects, the researchers note, is that they are not based on technology designed to produce valid and reliable data amenable for scientific analysis. The data comes from sources such as smartphones, search results and social networks rather than carefully vetted participants and scientific instruments”.

Perhaps more pertinent to a business environment, Greenemeier’s article also states:

Context is often lacking when info is pulled from disparate sources, leading to questionable conclusions.

Ruler

Neither of these authors is saying that having greater volumes of data is a definitively bad thing; indeed Vorhies states:

In general would I still prefer to have more data than less? Yes, of course.

They are however both pointing out that, in some instances, more traditional statistical methods, applied to smaller data sets, yield superior results. This is particularly the case where data are repurposed and the use to which they are put is different to the considerations in play when they were collected; something which is arguably more likely to happen where general-purpose Big Data sets are leveraged without reference to other information.

Also, when large data sets are collated from many places, the data from each place can have different characteristics. If this variation is not controlled for in models, it may well lead to erroneous findings.
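
As a small synthetic illustration of this point, consider two data sources with the same underlying (negative) relationship but different baseline levels; pooling them naively reverses the apparent sign of the effect:

```python
# Synthetic, Simpson's-paradox-style example: the relationship within each
# source is negative, but pooling the sources without controlling for the
# source suggests a positive relationship.
import numpy as np

rng = np.random.default_rng(1)

x_a = rng.uniform(0, 1, 200)
y_a = 2.0 - 1.0 * x_a + rng.normal(0, 0.1, 200)   # negative slope within source A
x_b = rng.uniform(2, 3, 200)
y_b = 5.0 - 1.0 * x_b + rng.normal(0, 0.1, 200)   # negative slope within source B

slope_a = np.polyfit(x_a, y_a, 1)[0]
slope_b = np.polyfit(x_b, y_b, 1)[0]
pooled = np.polyfit(np.concatenate([x_a, x_b]), np.concatenate([y_a, y_b]), 1)[0]

print(f"Within A: {slope_a:+.2f}, within B: {slope_b:+.2f}, pooled: {pooled:+.2f}")
# Within each source the slope is about -1, yet the pooled fit is positive,
# purely because the two sources sit at different levels.
```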

Statistical Methods

Their final observation is that sound statistical methodology needs to be applied to big data sets just as much as to more regular ones. The hope that design flaws will simply evaporate when data sets get large enough may be seductive, but it is also dangerously wrong.

Vorhies and Greenemeier are not suggesting that Big Data has no value. However they state that one of its most potent uses may well be as a supplement to existing methods, perhaps extending them, or bringing greater granularity to results. I view such introspection in Data Science circles as positive, likely to lead to improved methods and an indication of growing maturity in the field. It is however worth noting that, in some cases, leverage of Small-but-Well-Designed Data [4] is not only effective, but actually a superior approach. This is certainly something that Data Scientists should bear in mind.
 


 
Notes

 
[1] I’d recommend taking a look at this site regularly. There is a high volume of articles and the quality is variable, but often there are some stand-out pieces.

[2] See the original article for the details.

[3] The article was in Scientific American and entitled Why Big Data Isn’t Necessarily Better Data.

[4] I may have to copyright this term and of course the very elegant abridgement, SBWDD.

 

 

A Sweeter Spot for the CDO?

Home run

I recently commented on an article by Bruno Aziza (@brunoaziza) from AtScale [1]. As mentioned in this earlier piece, Bruno and I have known each other for a while. After I published my article – and noting my interest in all things CDO [2] – he dropped me a line, drawing my attention to a further piece he had penned: CDOs: They Are Not Who You Think They Are. As with most things Bruno writes, I’d suggest it merits taking a look. Here I’m going to pick up on just a few pieces.

First of all, Bruno cites Gartner saying that:

[…] they found that there were about 950 CDOs in the world already.

In one way that’s a big figure; in another, it is a small fraction of the number of at least medium-sized companies out there. So it seems that penetration of the CDO role still has some way to go.

Bruno goes on to list a few things which he believes a CDO is not (e.g. a compliance officer, a finance expert etc.) and suggests that the CDO role works best when reporting to the CEO [3], noting that:

[…] every CEO that’s not analytically driven will have a hard time gearing its company to success these days.

He closes by presenting the image I reproduce below:

CDO Venn Diagram [borrowed from AtScale]

and adding the explanatory note:

  • The CDO is at the intersection of Innovation, Compliance and Data Expertise. When all he/she just does is compliance, it’s danger. They will find resistance at first and employees will question the value the CDO office adds to the company’s bottom line.

First of all kudos for a correct use of the term Venn Diagram [4]. Second I agree that the role of CDO is one which touches on many different areas. In each of these, while as Bruno says, the CDO may not need to be an expert, a working knowledge would be advantageous [5]. Third I wholeheartedly support the assertion that a CDO who focusses primarily on compliance (important as that may well be) will fail to get traction. It is only by blending compliance work with the leveraging of data for commercial advantage that organisations will see value in what a CDO does.

Finally, Bruno’s diagram put me in mind of the one I introduced in The Chief Data Officer “Sweet Spot”. In this article, the image I presented touched each of the principal points of a compass (North, South, East and West). My assertion was that the CDO needed to sit at the sweet spot between respectively Data Synthesis / Data Compliance and Business Expertise / Technical Expertise. At the end of this piece, I suggested that in reality the intervening compass points (North West, South East, North East and South West) should also appear, reflecting other spectrums that the CDO needs to straddle. Below I have extended my earlier picture to include these other points and labelled the additional extremities between which I think any successful CDO must sit. Hopefully I have done this in a way that is consistent with Bruno’s Venn diagram.

Expanded CDO Sweet Spot

The North East / South West axis is one I mentioned in passing in my earlier text. While in my experience business is seldom anything but unusual, BAU has slipped into the lexicon and it’s pointless to pretend that it hasn’t. Equally Change has come to mean big and long-duration change, rather than the hundreds of small changes that tend to make up BAU. In any case, regardless of the misleading terminology, the CDO must be au fait with both types of activity. The North West / South East axis is new and inspired by Bruno’s diagram. In today’s business climate, I believe that the successful CDO must both be innovative and have the ability to deliver on the ideas that he or she generates.

As I have mentioned before, finding someone who sits at the nexus of either Bruno’s diagram or mine is not a trivial exercise. Equally, being a CDO is not a simple job; then again, very few worthwhile things are easy to achieve in my experience.
 


 
Notes

 
[1] Do any technologies grow up or do they only come of age?

[2] A selection of CDO-centric articles, in chronological order:

* At least that’s the term I was using to describe what is now called a Chief Data Officer back in 2009.

[3] Theme #1 in 5 Themes from a Chief Data Officer Forum

[4] I have got this wrong myself in these very pages, e.g. in A Single Version of the Truth?, in the section titled Ordo ab Chao. I really, really ought to know better!

[5] I covered some of what I see as being requirements of the job in Wanted – Chief Data Officer.

 

 

Predictions about Prediction

2017 the Road Ahead [Borrowed from Eckerson Group]

   
“Prediction and explanation are exactly symmetrical. Explanations are, in effect, predictions about what has happened; predictions are explanations about what’s going to happen.”

– John Rogers Searle

 

The above image is from Eckerson Group‘s article Predictions for 2017. Eckerson Group’s Founder and Principal Consultant, Wayne Eckerson (@weckerson), is someone whose ideas I have followed on-line for several years; indeed I’m rather surprised I have not posted about his work here before today.

As was possibly said by a variety of people, “prediction is very difficult, especially about the future” [1]. I did turn my hand to crystal ball gazing back in 2009 [2], but the Eckerson Group’s attempt at futurology is obviously much more up-to-date. As per my review of Bruno Aziza’s thoughts on the AtScale blog, I’m not going to cut and paste the text that Wayne and his associates have penned wholesale; instead I’d recommend reading the original article.

Here though are a number of points that caught my eye, together with some commentary of my own (the latter appears in italics below). I’ll split these into the same groups that Wayne & Co. use and also stick to their indexing, hence the occasional gaps in numbering. Where I have elided text, I trust that I have not changed the intended meaning:
 
 
Data Management

Data Management

1. The enterprise data marketplace becomes a priority. As companies begin to recognize the undesirable side effects of self-service they are looking for ways to reap self-service benefits without suffering the downside. […] The enterprise data marketplace returns us to the single-source vision that was once touted as the real benefit of Enterprise Data Warehouses.
  I’ve always thought of self-service as something of a cop-out. It tends to let data teams avoid doing anything as arduous (and in some cases outside their comfort zone) as understanding what makes a business tick and getting to grips with the key questions that an organisation needs to answer in order to be successful [3]. With this messy and human-centric stuff out of the way, the data team can retreat into the comfort of nice orderly technological matters or friendly statistical models.

However, what Eckerson Group describe here is “an Amazon-like data marketplace”, which it seems to me has more of a chance of being successful. However, such a marketplace will only function if it embodies the same focus on key business questions and how they are answered. The paradigm within which such questions are framed may be different, more community based and more federated for example, but the questions will still be of paramount importance.

 
3. New kinds of data governance organizations and practices emerge. Long-standing, command-and-control data governance practices fail to meet the challenges of big data and of data democratization. […]
  I think that this is overdue. To date Data Governance, where it is implemented at all, tends to be too police-like. I entirely agree that there are circumstances in which a Data Governance team or body needs to be able to put its foot down [4], but if all that Data Governance does is police-work, then it will ultimately fail. Instead good Data Governance needs to recognise that it is part of a much more fluid set of processes [5], whose aim is to add business value; to facilitate things being done as well as sometimes to stop the wrong path being taken.

 
Data Science

Data Science

1. Self-service and automated predictive analytics tools will cause some embarrassing mistakes. Business users now have the opportunity to use predictive models but they may not recognize the limits of the models themselves. […]
  I think this is a very valid point. As well as not understanding the limitations of some models [6], there is not widespread understanding of statistics in many areas of business. The concept of a central prediction surrounded by different outcomes with different probabilities is seldom seen in commercial circles [7]. In addition there seems to be a lack of appreciation of how big an impact the statistical methodology employed can have on what a model tells you [8].
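
To illustrate the point about central predictions and surrounding outcomes, here is a minimal sketch (with invented figures) that produces a central forecast together with an 80% band, rather than a single number:

```python
# Illustrative only: fit a simple trend to invented sales history, then
# simulate plausible future outcomes and report a central prediction with
# an 80% band, echoing the fan-chart idea mentioned in the notes.
import numpy as np

rng = np.random.default_rng(42)

months = np.arange(36)
sales = 100 + 2.5 * months + rng.normal(0, 8, size=36)   # hypothetical history

slope, intercept = np.polyfit(months, sales, 1)
resid_sd = np.std(sales - (intercept + slope * months), ddof=2)

future = np.arange(36, 48)
sims = intercept + slope * future + rng.normal(0, resid_sd, size=(10_000, 12))

central = np.percentile(sims, 50, axis=0)
lower, upper = np.percentile(sims, [10, 90], axis=0)

print(f"Month 48 central forecast: {central[-1]:.0f} "
      f"(80% band: {lower[-1]:.0f} to {upper[-1]:.0f})")
```

A fuller treatment would also account for uncertainty in the fitted trend itself, not just the residual noise, which would widen the band further out; the point is simply that a range with probabilities attached tells a decision-maker far more than a single figure.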

 
Business Analytics

Business Analytics

1. Modern analytic platforms dominate BI. Business intelligence (BI) has evolved from purpose-built tools in the 1990s to BI suites in the 2000s to self-service visualization tools in the 2010s. Going forward, organizations will replace tools and suites with modern analytics platforms that support all modes of BI and all types of users […]
  Again, if it comes to fruition, such consolidation is overdue. Ideally the tools and technologies will blend into the background; good data-centric work is never about the technology and always about the content and the efforts involved in ensuring that it is relevant, accurate, consistent and timely [9]. Also information is often of most use when it is made available to people taking decisions at the precise point that they need it. This observation highlights the need for data to be integrated into systems and digital estates instead of simply being bound to an analytical hub.

 
So some food for thought from Wayne and his associates. The points they make (including those which I haven’t featured in this article) are serious and well-thought-out ones. It will be interesting to see how things have moved on by the beginning of 2018.
 


 
Notes

 
[1]
 
According to WikiQuotes, this has most famously been attributed to the Danish theoretical physicist and father of Quantum Mechanics, Niels Bohr (in Teaching and Learning Elementary Social Studies (1970) by Arthur K. Ellis, p. 431). However, it has also been ascribed to various humourists, to the Danish poet Piet Hein (“det er svært at spå – især om fremtiden”, that is, “it is difficult to make predictions, especially about the future”) and to the Danish cartoonist Storm P (Robert Storm Petersen). Perhaps it is best to say that a Dane made the comment and leave it at that.

Of course, similar words are also said to have originated with Yogi Berra, but then that goes for most malapropisms you could care to mention. As Mr Berra himself says, “I really didn’t say everything I said”.

 
[2]
 
See Trends in Business Intelligence. I have to say that several of these have come to pass, albeit sometimes in different ways to the ones I envisaged back then.
 
[3]
 
For a brief review of what is necessary see What should companies consider before investing in a Business Intelligence solution?
 
[4]
 
I wrote about the unpleasant side effects of Change Programmes unfettered by appropriate Data Governance in Bumps in the Road, for example.
 
[5]
 
I describe such a set of processes in Data Management as part of the Data to Action Journey.
 
[6]
 
I explore some similar territory to that presented by Eckerson Group in Data Visualisation – A Scientific Treatment.
 
[7]
 
My favourite counterexample is provided by The Bank of England.

An inflation prediction fan chart from The Bank of England (captioned “The Old Lady of Threadneedle Street is clearly not a witch”), illustrating the fairly obvious fact that uncertainty increases the further you project from now.
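To give a feel for why such fan charts widen (under a deliberately simplified assumption of my own, not The Bank of England’s actual methodology), if the quantity being forecast behaves like a random walk then the standard deviation of the h-step-ahead forecast error grows with the square root of h, so a 95% interval fans out the further ahead you look:

    import math

    sigma = 0.3    # assumed one-quarter-ahead standard deviation, in percentage points
    central = 2.0  # assumed central inflation forecast, held flat for simplicity

    for h in range(1, 9):                         # quarters ahead
        half_width = 1.96 * sigma * math.sqrt(h)  # 95% interval half-width grows with sqrt(h)
        print(f"{h} quarter(s) ahead: {central - half_width:.1f}% to {central + half_width:.1f}%")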
 
[8]
 
This is an area I cover in An Inconvenient Truth.
 
[9]
 
I cover this assertion more fully in A bad workman blames his [Business Intelligence] tools.

 

 

20 Risks that Beset Data Programmes

Data Programme Risks

This article draws extensively on elements of the framework I use to both highlight and manage risks on data programmes. It has its genesis in work that I did early in 2012 (but draws on experience from the years before this). I have tried to refresh the content since then to reflect new thinking and new developments in the data arena.
 
 
Introduction

What are my motivations in publishing this article? Well, I have both designed and implemented data and information programmes for over 17 years. In the majority of cases my programme work has been a case of executing a data strategy that I had developed myself [1]. While I have generally been able to steer these programmes to a successful outcome [2], there have been both bumps in the road and the occasional blind alley, requiring a U-turn and another direction to be selected. I have also been able to observe data programmes that ran in parallel to mine in different parts of various organisations. Finally, I have often been asked to come in and address issues with an existing data programme; something that appears to happen all too often. In short, I have seen a lot of what works and what does not. Having run other types of programmes as well [3], I can also attest to data programmes being different. Failure to recognise this difference, and thus approaching a data programme just like any other piece of work, is one major cause of issues [4].

Before I get into my list proper, I wanted to pause to highlight a further couple of mistakes that I have seen made more than once; ones that are more generic in nature and thus don’t appear on my list of 20 risks. The first is to assume that the way that an organisation’s data is controlled and leveraged can be improved in a sustainable way by just kicking off a programme. What is more important in my experience is to establish a data function, which will then help with both the governance and exploitation of data. This data function, ideally sitting under a CDO, will of course want to initiate a range of projects, from improving data quality, to sprucing up reporting, to establishing better analytical capabilities. Best practice is to gather these activities into a programme, but things work best if the data function is established first, owns such a programme and actively partakes in its execution.

Data is for life...

As well as the issue of ongoing versus transitory accountability for data and the undoubted damage that poorly coordinated change programmes can inflict on data assets, another driver for first establishing a data function is that data needs will always be there. On the governance side, new systems will be built, bought and integrated, bringing new data challenges. On the analytical side, there will always be new questions to be answered, or old ones to be reevaluated. While data-centric efforts will generate many projects with start and end dates, the broad stream of data work continues on in a way that, for example, the implementation of a new B2C capability does not.

The second is to believe that you will add lasting value by outsourcing anything but targeted elements of your data programme. This is not to say that there is no place for such arrangements, which I have used myself many times; it is just that one of the lasting benefits of a gimlet-like focus on data is the IP that is built up in the data team, IP that in my experience can be leveraged in many different and beneficial ways, becoming a major asset to the organisation [5].

Having made these introductory comments, let’s get on to the main list, which is divided into broadly chronological sections, relating to stages of the programme. The 10 risks which I believe are either most likely to materialise, or which will probably have the greatest impact are highlighted in pale yellow.
 
 
Up-front Risks

In the beginning

Risk | Potential Impact
1. Not appreciating the size of work for both business and technology resources. | Team is set up to fail – it is neither responsive enough to business needs (resulting in yet more “unofficial” repositories and additional fragmentation), nor is appropriate progress made on its central objective.
2. Not establishing a dedicated team. | The team never escapes from “the day job” or legacy / BAU issues; the past prevents the future from being built.
3. Not establishing a unified and collaborative team. | Team is plagued by people pursuing their own agendas and trashing other people’s approaches; this consumes management time on non-value-added activities, leads to infighting and dissipates energy.
4. Staff lack skills and prior experience of data programmes. | Time is spent educating people rather than getting on with work. Sub-optimal functionality, slippages, later performance problems, higher ongoing support costs.
5. Not establishing an appropriate management / governance structure. | Programme is not aligned with business needs, is not able to get necessary time with business users and cannot negotiate the inevitable obstacles that block its way. As a result, the programme gets “stuck in the mud”.
6. Failing to recognise ongoing local needs when centralising. | Local business units do not have their pressing needs attended to and so lose confidence in the programme and instead go their own way. This leads to duplication of effort, increased costs and likely programme failure.

With risk 2, an analogy is trying to build a house in your spare time. If work can only be done in the evenings or at weekends, then it is going to take a long time. Nevertheless, organisations too frequently expect data programmes to be absorbed within existing headcount and fitted in between people’s day jobs.

We can extend the building metaphor to cover risk 4. If you are going to build your own house, it would help to understand carpentry, plumbing, electrics and brick-laying, and also to have a grasp of the design fundamentals needed to create a structure that will withstand wind, rain and snow. Too often companies embark on data programmes with staff who have only a bit of a background in reporting or some related area, and with managers who have never been involved in a data programme before. This is clearly a recipe for disaster.

Risk 5 reminds us that governance is also important – both to ensure that the programme stays focussed on business needs and to help the team negotiate the inevitable obstacles. This comes back to a successful data programme needing to be more than just a technology project.
 
 
Programme Execution Risks

Programme execution

Risk | Potential Impact
7. Poor programme management. | The programme loses direction. Time is expended on non-core issues. Milestones are missed. Expenditure escalates beyond budget.
8. Poor programme communication. | Stakeholders have no idea what is happening [6]. The programme is viewed as out of touch / not pertinent to business issues. Steering does not understand what is being done or why. Prospective users have no interest in the programme.
9. Big Bang approach. | Too much time goes by without any value being created. The eventual Big Bang is instead a damp squib. Large sums of money are spent without any benefits.
10. Endless search for the perfect solution / adherence to overly theoretical approaches. | Programme constantly polishes rocks rather than delivering. Data models reflect academic purity rather than real-world performance and maintenance needs.
11. Lack of focus on interim deliverables. | Business units become frustrated and seek alternative ways to meet their pressing needs. This leads to greater fragmentation and reputational damage to the programme.
12. Insufficient time spent understanding source system data and how data is transformed as it flows between systems. | Data capabilities do not reflect business transactions with fidelity. There is inconsistency with reports drawn directly from source systems. Reconciliation issues arise (see next point).
13. Poor reconciliation. | If analytical capabilities do not tell a consistent story, they will not be credible and will not be used (a minimal reconciliation check is sketched after this table).
14. Poorly judged approach to data quality. | Data facilities are seen as inaccurate because of poor data going into them, or fail to match actual business events because of over-zealous massaging of data or the exclusion of transactions with invalid attributes.
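Relating to risks 12 and 13, the following is a minimal sketch, using invented dates, figures and column names, of the sort of automated reconciliation that helps keep an analytical repository credible; in practice the two data sets would be queries against the source system and the warehouse rather than hard-coded values.

    import pandas as pd

    # Invented example data; in reality these would come from the source
    # system and the analytical repository respectively
    source = pd.DataFrame({
        "trade_date": ["2017-01-03", "2017-01-04", "2017-01-05"],
        "amount": [1250000.00, 980500.50, 1102300.25],
    })
    warehouse = pd.DataFrame({
        "trade_date": ["2017-01-03", "2017-01-04", "2017-01-05"],
        "amount": [1250000.00, 975500.50, 1102300.25],
    })

    # Compare day-level totals and flag any date where they diverge by more
    # than a small tolerance; unexplained breaks undermine credibility (risk 13)
    comparison = source.merge(warehouse, on="trade_date", suffixes=("_src", "_dwh"))
    comparison["difference"] = comparison["amount_dwh"] - comparison["amount_src"]
    breaks = comparison[comparison["difference"].abs() > 0.01]

    print(breaks if not breaks.empty else "All dates reconcile")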

Probably the single most common cause of failure with data programmes – and indeed of ERP projects, acquisitions and any other type of complex endeavour – is risk 7, poor programme management. Not only do programme managers have to be competent, they should also be steeped in data matters and have a good grasp of the factors that differentiate data programmes from more general work.

Relating to the other highlighted risks in this section, a programme could spend two years doing work without surfacing anything much and then, when it does make its first delivery, find that this is a dismal failure. In the same vein, exclusive focus on strategic capabilities could prevent attention being paid to pressing business needs. At the other end of the spectrum, interim deliveries could spiral out of control, consuming all of the data team’s time and meaning that the strategic objective is never reached. A better approach is for targeted and prioritised interim deliverables to address pressing business needs while also informing the more strategic work. From the other perspective, progress on strategic work-streams should be leveraged whenever it can be, perhaps in a less functional manner than the eventual solution, but good enough, and also helping to make sure that the final deliveries are spot on [7].
 
 
User Requirement Risks

Dear Santa

Risk | Potential Impact
15. Not enough up-front focus on understanding key business decisions and the information necessary to take them. | Analytic capabilities do not focus on what people want or need, leading to poor adoption and benefits not being achieved.
16. In the absence of the above, the programme becoming a technology-driven one. | The business gets what IT or Change think that they need, not what is actually needed. There is more focus on shiny toys than on actionable information. The programme forgets the needs of its customers.
17. A focus on replicating what the organisation already has, but in better tools, rather than creating what it wants. | Beautiful data visualisations that tell you close to nothing. Long lists of existing reports with their fields cross-referenced to each other and a new solution that is essentially the lowest common denominator of what is already in place; a step backwards.

The other most common reason for data programme failure is a lack of focus on user needs and insufficient time spent with business people to ensure that systems reflect their requirements [8].
 
 
Integration Risk

Lego

Risk | Potential Impact
18. Lack of leverage of new data capabilities in front-end / digital systems. | These systems are less effective. The data team jealously guards its capabilities as the only way that users should get information, rather than adopting a more pragmatic and value-added approach.

It is important for the data team to realise that their work, however important, is just one part of driving a business forward. Opportunities to improve other system facilities by the leverage of new data structures should be taken wherever possible.
 
 
Deployment Risks

Education

Risk | Potential Impact
19. Education is an afterthought; training is technology- rather than business-focused. | People neither understand the capabilities of the new analytical tools, nor how to use them to derive business value. Again this leads to poor adoption and little return on investment.
20. Declaring success after initial implementation and training. | Without continuing to water the immature roots, the plant withers. Early adoption rates fall and people return to how they were getting information pre-launch. This means that the benefits of the programme are not realised.

Finally, excellent technical work needs to be complemented by equal attention to business-focussed education, training using real-life scenarios and assiduous follow-up. These things will make or break the programme [9].
 
 
Summary

Of course I don’t claim that the above list is exhaustive. You could successfully mitigate all of the above risks on your data programme and still be sunk by some other, unforeseen, problem. There is a need to be flexible and to adapt to both events and how your organisation operates; there are no guarantees and no foolproof recipes for success [10].

My recommendation to data professionals is to develop your own approach to risk management, based on your own experience, your own style and the culture within which you are operating. If just a few of the items on my list of risks can be usefully amalgamated into this, then I will feel that this article has served its purpose. If you are embarking on a data programme, maybe your first one, then be warned that these programmes are hard work and your reserves of perseverance will be tested. I’d suggest leveraging whatever tools you can find in trying to forge ahead.
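Purely as an illustration of what one lightweight, personal approach might look like (this sketch is my own invention rather than a method advocated above), a simple risk register can be kept as code or a spreadsheet, scored for likelihood and impact, and reviewed worst-exposure first; the entries below are drawn from the list in this article.

    from dataclasses import dataclass

    @dataclass
    class Risk:
        description: str
        likelihood: int   # 1 (rare) to 5 (almost certain)
        impact: int       # 1 (minor) to 5 (severe)
        mitigation: str

        @property
        def exposure(self) -> int:
            return self.likelihood * self.impact

    register = [
        Risk("Poor programme management", 3, 5, "Appoint a PM steeped in data matters"),
        Risk("Big Bang approach", 4, 4, "Plan targeted, prioritised interim deliverables"),
        Risk("Education an afterthought", 4, 3, "Budget for business-focused training and follow-up"),
    ]

    # Review the register regularly, worst exposure first
    for risk in sorted(register, key=lambda r: r.exposure, reverse=True):
        print(f"{risk.exposure:>2}  {risk.description} -> {risk.mitigation}")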

It is also perhaps worth noting that, somewhat contrary to my point that data programmes are different, a few of the risks that I highlight above could be tweaked to apply to more general programmes as well. Hopefully the things that I have learnt over the last couple of decades of running data programmes will be of assistance to you in your own work.
 


 
Notes

 
[1]
 
For my thoughts on developing data (or, interchangeably, information) strategies see:

  1. Forming an Information Strategy: Part I – General Strategy
  2. Forming an Information Strategy: Part II – Situational Analysis and
  3. Forming an Information Strategy: Part III – Completing the Strategy

or the CliffsNotes versions of these on LinkedIn:

  1. Information Strategy: 1) General Strategy
  2. Information Strategy: 2) Situational Analysis and
  3. Information Strategy: 3) Completing the Strategy
 
[2]
 
Indeed sometimes an award-winning one.
 
[3]
 
An abridged list would include:

  • ERP design, development and implementation
  • ERP selection and implementation
  • CRM design, development and implementation
  • CRM selection and implementation
  • Integration of acquired companies
  • Outsourcing of systems maintenance and support
 
[4]
 
For an examination of this area you can start with A more appropriate metaphor for Business Intelligence projects. While written back in 2008–9, the content of this article is as pertinent today as it was back then.
 
[5]
 
I cover this area in greater detail in Is outsourcing business intelligence a good idea?
 
[6]
 
Stakeholder

Probably a bad idea to make this stakeholder unhappy (see also Themes from a Chief Data Officer Forum – the 180 day perspective, note [3]).

 
[7]
 
See Vision vs Pragmatism, Holistic vs Incremental approaches to BI and Tactical Meandering for further background on this area.
 
[8]
 
This area is treated in the strategy articles appearing in note [1] above. In addition, some potential approaches to elements of effective requirements gathering are presented in Scaling-up Performance Management and Developing an international BI strategy.
 
[9]
 
Of pertinence here is my trilogy on the cultural transformation aspects of information programmes:

  1. Marketing Change
  2. Education and cultural transformation
  3. Sustaining Cultural Change
 
[10]
 
Something I stress forcibly in Recipes for Success?