Draining the Swamp


The title phrase of this article has entered the collective consciousness from political circles in recent months and years. Readers will be glad to hear that the political commentary content of this piece is precisely zero. Instead I am going to talk about Data Lakes, also referred to pejoratively by those who are not fans as Data Swamps.

Having started my relationship with Data matters back in the early days of Relational Databases and having driven corporate success through Data Warehouses and Business Intelligence, I have also done work in the Big Data arena since around 2013. A central concept in the Big Data paradigm is that of a Data Lake: a large Hadoop repository into which all data that an organisation might want to use is poured, often essentially as is. The thinking is that – in a Big Data implementation – storage is cheap [1] and you never fully know what data you might need in advance, so why not save it all?

It is probably fair to say that – much like many other major programmes of work over the years [2] – the creation of Data Lakes, or perhaps more accurately the leveraging of their contents, has yielded at best mixed results for the organisations that undertake such an endeavour. The thing with mixed results is that it is not all doom and gloom: some organisations are successful, others are not. The important thing is to determine which factors lead to good outcomes and which to bad ones.

Well first of all, I would suggest that – like any other data programme – the formation of a Data Lake is subject to the types of potential issues that I review in my 2017 article, 20 Risks that Beset Data Programmes. Of these, Data Lakes are particularly susceptible to risk 16:

In the absence of [understanding key business decisions], the programme becoming a technology-driven one.

The business gets what IT or Change think that they need, not what is actually needed. There is more focus on shiny toys than on actionable information. The programme forgets the needs of its customers.

The issue here is that some people buy into the misconception that all you have to do is fill the Data Lake and sit back and wait for precious Data gems to flow from it. Understanding a business and its key decisions is tough and perhaps it is not surprising that people would like to skip this step and instead focus on easier activities. Sadly, this approach is not going to work for Data Lakes or anything else.
 


 
[Image: Dan Woods]

However, Data Lakes also face some specific risks and, in search of a better understanding of these, I turned to a recent Forbes article, Can Failed Data Lakes Succeed As Data Marketplaces?, penned by Dan Woods (@danwoodsearly) [3]. Dan does not mince words in his introduction:

All over the world, data lake projects are foundering, not because they are not a step in the right direction, but because they are essentially uncompleted experiments.

he adds:

The main roadblock has been that once companies store their data in the data lake, they struggle to find a way to operationalize it. The data lake has never become a product like a data warehouse. Proof of concepts are tweaked to keep a desultory flow of signals going.

and finally states:

[…] for certain use cases, Hadoop and purpose-built data lake-like infrastructure are solving complex and high-value problems. But in most other businesses, the data lake got stuck at the proof of concept stage.

This chimes with my experience – the ability to synthesise and analyse vast troves of data is indispensable in addressing some business problems, but a sledge-hammer to crack a walnut for others. Data Lakes are no more universal panaceas than anything else we have invented to date. As always, the main issues are not technology, but good processes, consistent definitions, improved data quality and matching available data to real business questions.
 


 
[Image: Paul Barth]

In seeking salvation (Dan’s word) for Data Lakes, he sought the opinion of one of my LinkedIn contacts, Paul Barth (@BarthPS), CEO of Podium Data. Paul analyses the root causes of Data Lake issues, splitting these into three main ones [4]:

  1. Polluted data lakes

    Too many projects targeted at filling or exploiting the Data Lake kick off in parallel. This leads to an incoherent landscape and to inaccessible or difficult-to-understand data.

  2. Bottlenecked data lakes

    Essentially treating the Data Lake as if it were a Data Warehouse, when the technology is designed for different and less structured purposes. This leads to a quasi-warehouse that is less performant than actual warehouses.

  3. Risky data lakes

    Where there is a desire to populate the Data Lake quickly, not least to provide grist to the Data Science mill, appropriate controls on access to data can be neglected – particularly an issue where personally identifiable data is involved. This can lead to regulatory, legal and reputational peril.

Barth’s solution to these problems is the establishment of a Data Marketplace. This is a concept previously referenced on these pages in Predictions about Prediction, a review of consultancy Eckerson Group‘s views on Data and Analytics in 2017 [5]. Back then, Eckerson Group had the following to say about the area:

[An Enterprise Data Marketplace (EDM) is] an Amazon-like data marketplace where analysts can seek datasets, see reviews of others, and select the best-fit datasets for their needs helps to encourage dataset reuse, minimize redundancy, and prevent flawed analysis that results from working with less than ideal data. Data cataloging tools, data curation practices, data preparation technologies, and data services will be combined to create a marketplace for data seekers. Enterprise Data Marketplaces return us to the single-source vision that was once touted as the real benefit of Enterprise Data Warehouses.

[Image: Enterprise Data Marketplace]

So, as illustrated above, a Data Marketplace is essentially a collection of tagged data sets, which have in some cases been treated to increase consistency and utility, combined with information about their contents and usages. These are overlaid by what is essentially a “social media” layer where “shoppers” can search for data and provide feedback on its utility (e.g. a rating mechanism) and also add their own documentation. This means that useful data sets get highly rated and have more explanatory material attached to them.
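By way of illustration, here is a minimal Python sketch of what such a catalogue entry and its "social media" layer might look like. The class, function and field names below are my own inventions for the purposes of this article, not anything drawn from Podium Data's or Eckerson Group's actual products.

    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class DataSetEntry:
        """One curated data set in a hypothetical Enterprise Data Marketplace catalogue."""
        name: str
        description: str                                # curation: what the data set contains
        tags: set = field(default_factory=set)          # categorisation / tagging
        ratings: list = field(default_factory=list)     # crowdsourced 1-5 "shopper" ratings
        user_notes: list = field(default_factory=list)  # crowdsourced documentation

        @property
        def average_rating(self) -> float:
            return mean(self.ratings) if self.ratings else 0.0

    def search(catalogue, tag):
        """Return the data sets carrying a given tag, best-rated first."""
        hits = [d for d in catalogue if tag in d.tags]
        return sorted(hits, key=lambda d: d.average_rating, reverse=True)

    # Illustrative usage: a shopper finds, rates and annotates a budget data set.
    budgets = DataSetEntry("budget_2017", "Approved departmental budgets", {"finance", "budget"})
    budgets.ratings.extend([5, 4])
    budgets.user_notes.append("Figures reconcile to the General Ledger at year end.")
    print([d.name for d in search([budgets], "budget")])    # -> ['budget_2017']

In a real implementation the cataloguing, metadata management and search would of course be provided by dedicated tooling rather than a few lines of code, but the essential ingredients – tags, descriptions, ratings and user-contributed documentation – are the same.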
 


 
[Image: Dave Wells]

Eckerson Group build on this concept in their white paper The Rise of the Data Marketplace (opens a PDF document), work commissioned in part by Podium Data. In this, Eckerson's Dave Wells (@_DaveWells_) characterises an Enterprise Data Marketplace as having the following attributes [6]:

  • Categorization organises the marketplace to simplify browsing. For example a shopper seeking budget data doesn’t need to browse through unrelated data sets about customers, employees or other data subjects. Categories complement tagging and smart search algorithms, offering a variety of ways to find data sets.

  • Curation is active management of the data sets that are available in the EDM. Curation selects and qualifies data sets, describes each data set, and collects and manages metadata about the collection and each individual data set.

  • Cataloging exposes data sets for data shoppers, including descriptions and metadata. The catalog is a view into the inventory of curated data sets. Rich metadata and powerful search are important catalog features.

  • Crowdsourcing is the equivalent of a social network for data. Data shoppers actively participate in cataloging, curating and categorizing data. This virtuous cycle (a chain of events that reinforces outcomes through a feedback loop) continuously improves the quality and value of data in the marketplace.

Back in the Forbes article, Barth focuses on using the Data Marketplace’s interactive elements to identify the most valuable data (that which is searched for most frequently and has the best shopper rating). This data can then be the subject of focussed investment. Such investment is of the sort familiar in Data Warehouse activities, but it is directed by shoppers’ “social media” preferences rather than more formal requirements gathering exercises.
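A back-of-the-envelope way of expressing this prioritisation might look as follows; the weighting and the logarithmic damping are my own assumptions for illustration, not Barth's actual method.

    from math import log1p

    def investment_priority(search_count, average_rating):
        """Illustrative score: data sets that are sought often and rated highly float to the top.
        The formula is an assumption for illustration only, not Podium Data's approach."""
        return log1p(search_count) * average_rating

    # Hypothetical marketplace statistics: (times searched for, average shopper rating).
    candidates = {"budget_2017": (340, 4.5), "legacy_crm_extract": (12, 2.0)}
    ranked = sorted(candidates, key=lambda name: investment_priority(*candidates[name]), reverse=True)
    print(ranked)    # -> ['budget_2017', 'legacy_crm_extract']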
 


 
Dan Woods makes the pertinent observation that:

So, as the challenge now is not one of technology, but of setting a vision, companies have to decide how to incorporate a new set of requirements to get the most out of their data. […] Even within one company, there may be the need for multiple requirements to be met. Marketing may not need the precision that the accounting department requires. Groups with regulatory mandates may have strong compliance requirements that drive the need for data that is 100% accurate, while those doing exploration for product development purposes may prefer to have larger datasets to work with, and 90% accuracy is all that they require. The data lake must be able to employ multiple approaches as needed by different applications and groups of users.

His article finishes with the following clarion call to implement the Data Marketplace vision:

Companies achieve data transparency with data warehouses because of the use of canonical data models. Yet data in data warehouses was trapped in slow processes that lacked agility. The data warehouse data was well understood but couldn’t evolve at the speed of business. The data lake wasn’t able to correct this problem because companies didn’t implement lakes with a sufficiently comprehensive vision. That’s what they need to do now.


 
"Grimpen Mire"

While poor design and a lack of automation come to mind whenever I hear about Data Warehouses that take months to change, it is unarguable that some Data Warehouses can be plagued by long turn-around times [7]. Equally I have seen enough Data Lakes turn into Grimpen Mire to perceive that there are some major issues inherent in an unmodified approach to this area [8]. The Data Marketplace idea is an intriguing one, a mash-up [9] of different approaches that may just yield some tangible results.

I also think that the inherent focus on users’ needs as opposed to technological considerations is the right way to go. I have been making this point for many years now [10] and have full confidence that I will still be doing so in ten years’ time. As with most aspects of life, it is with people, and how a programme interacts with them, that success and failure factors are most readily found. It seems to me that the Data Marketplace approach seeks to embrace this verity, which can only be a point in its favour.
 


 
Acknowledgements

I would like to thank each of Forbes / Dan Woods, Podium Data / Paul Barth and Eckerson Group / Dave Wells for both reviewing this article and allowing me to quote their work. Such generous behaviour is not as typical as one might like to think and always merits recognition.
 


 
Notes

 
[1] Though the total cost of saving such data extends beyond just disk costs and can become significant.

[2] See my earlier article Ever tried? Ever failed? for a treatment of what is clearly a fundamental physical constant – that 60-70% of all types of major programmes don’t fully achieve their objectives (aka fail). Data Lakes appear also to be governed by this Law of Nature.

[3] You may need to navigate past a Forbes banner screen before you can access the actual article.

[4] The following is my take on Paul’s analysis; for his actual words, see the Forbes article.

[5] Watch this space for a review of Eckerson Group’s predictions for 2018.

[6] Which I reproduce with permission.

[7] By way of contrast, warehouses that my teams have built have been able to digest acquisitions and meet new and onerous regulatory requirements in a matter of weeks, not months.

[8] I should stress here a difference between Data Lakes, which seek to be all-embracing, and more focussed Big Data activities, e.g. the building of complex seismological or meteorological models to assess catastrophic insurance risk (see Hurricanes and Data Visualisation: Part II – Map Reading). I have helped the latter to be very successful myself and seen good results in other organisations.

[9] Do people still say “mash-up”?

[10] For example in my 2008 trilogy:

  1. Marketing Change
  2. Education and cultural transformation
  3. Sustaining Cultural Change

 


 

Predictions about Prediction

[Image: 2017 – The Road Ahead (borrowed from Eckerson Group)]

   
“Prediction and explanation are exactly symmetrical. Explanations are, in effect, predictions about what has happened; predictions are explanations about what’s going to happen.”

– John Rogers Searle

 

The above image is from Eckerson Group‘s article Predictions for 2017. Eckerson Group’s Founder and Principal Consultant, Wayne Eckerson (@weckerson), is someone whose ideas I have followed on-line for several years; indeed I’m rather surprised I have not posted about his work here before today.

As was possibly said by a variety of people, “prediction is very difficult, especially about the future” [1]. I did turn my hand to crystal ball gazing back in 2009 [2], but the Eckerson Group’s attempt at futurology is obviously much more up-to-date. As per my review of Bruno Aziza’s thoughts on the AtScale blog, I’m not going to cut and paste the text that Wayne and his associates have penned wholesale, instead I’d recommend reading the original article.

Here though are a number of points that caught my eye, together with some commentary of my own (the latter appears in italics below). I’ll split these into the same groups that Wayne & Co. use and also stick to their indexing, hence the occasional gaps in numbering. Where I have elided text, I trust that I have not changed the intended meaning:
 
 
Data Management

1. The enterprise data marketplace becomes a priority. As companies begin to recognize the undesirable side effects of self-service they are looking for ways to reap self-service benefits without suffering the downside. […] The enterprise data marketplace returns us to the single-source vision that was once touted as the real benefit of Enterprise Data Warehouses.
  I’ve always thought of self-service as something of a cop-out. It tends to let data teams avoid anything as arduous (and in some cases as far out of their comfort zone) as understanding what makes a business tick and getting to grips with the key questions that an organisation needs to answer in order to be successful [3]. With this messy and human-centric stuff out of the way, the data team can retreat into the comfort of nice orderly technological matters or friendly statistical models.

However, what Eckerson Group describe here is “an Amazon-like data marketplace”, which it seems to me has a better chance of being successful. Such a marketplace will only function, though, if it embodies the same focus on key business questions and how they are answered. The paradigm within which such questions are framed may be different – more community-based and more federated, for example – but the questions will still be of paramount importance.

 
3. New kinds of data governance organizations and practices emerge. Long-standing, command-and-control data governance practices fail to meet the challenges of big data and of data democratization. […]
  I think that this is overdue. To date Data Governance, where it is implemented at all, tends to be too police-like. I entirely agree that there are circumstances in which a Data Governance team or body needs to be able to put its foot down [4], but if all that Data Governance does is police-work, then it will ultimately fail. Instead good Data Governance needs to recognise that it is part of a much more fluid set of processes [5], whose aim is to add business value; to facilitate things being done as well as sometimes to stop the wrong path being taken.

 
Data Science

1. Self-service and automated predictive analytics tools will cause some embarrassing mistakes. Business users now have the opportunity to use predictive models but they may not recognize the limits of the models themselves. […]
  I think this is a very valid point. As well as not understanding the limitations of some models [6], there is not widespread understanding of statistics in many areas of business. The concept of a central prediction surrounded by different outcomes with different probabilities is seldom seen in commercial circles [7]. In addition there seems to be a lack of appreciation of how big an impact the statistical methodology employed can have on what a model tells you [8].
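For readers less familiar with the idea, the following toy simulation shows what a central prediction surrounded by outcomes of differing probability looks like: the bands containing 50% and 90% of simulated outcomes widen the further out you look, in the same spirit as the Bank of England fan chart referenced in note [7]. The drift and volatility figures are entirely made up for illustration.

    import numpy as np

    rng = np.random.default_rng(42)
    horizon, simulations = 8, 10_000          # quarters ahead, number of Monte Carlo paths
    drift, volatility = 0.5, 1.0              # purely illustrative parameters

    # Simulate many possible futures as a random walk with drift.
    shocks = rng.normal(drift, volatility, size=(simulations, horizon))
    paths = shocks.cumsum(axis=1)

    # A central prediction plus bands containing 50% and 90% of simulated outcomes.
    for q in range(horizon):
        p5, p25, p50, p75, p95 = np.percentile(paths[:, q], [5, 25, 50, 75, 95])
        print(f"t+{q+1}: central {p50:5.2f}, 50% band [{p25:5.2f}, {p75:5.2f}], "
              f"90% band [{p5:5.2f}, {p95:5.2f}]")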

 
Business Analytics

1. Modern analytic platforms dominate BI. Business intelligence (BI) has evolved from purpose-built tools in the 1990s to BI suites in the 2000s to self-service visualization tools in the 2010s. Going forward, organizations will replace tools and suites with modern analytics platforms that support all modes of BI and all types of users […]
  Again, if it comes to fruition, such consolidation is overdue. Ideally the tools and technologies will blend into the background, good data-centric work is never about the technology and always about the content and the efforts involved in ensuring that it is relevant, accurate, consistent and timely [9]. Also information is often of most use when it is made available to people taking decisions at the precise point that they need it. This observation highlights the need for data to be integrated into systems and digital estates instead of simply being bound to an analytical hub.

 
So some food for thought from Wayne and his associates. The points they make (including those which I haven’t featured in this article) are serious and well-thought-out ones. It will be interesting to see how things have moved on by the beginning of 2018.
 


 
Notes

 
[1] According to WikiQuotes, this has most famously been attributed to Danish theoretical physicist and father of Quantum Mechanics, Niels Bohr (in Teaching and Learning Elementary Social Studies (1970) by Arthur K. Ellis, p. 431). However it has also been ascribed to various humourists, to the Danish poet Piet Hein (“det er svært at spå – især om fremtiden” – “it is difficult to make predictions, especially about the future”) and to the Danish cartoonist Storm P (Robert Storm Petersen). Perhaps it is best to say that a Dane made the comment and leave it at that.

Of course similar words have also been said to have originated with Yogi Berra, but then that goes for most malapropisms you could care to mention. As Mr Berra himself says, “I really didn’t say everything I said”.

[2] See Trends in Business Intelligence. I have to say that several of these have come to pass, albeit sometimes in different ways to the ones I envisaged back then.

[3] For a brief review of what is necessary, see What should companies consider before investing in a Business Intelligence solution?

[4] I wrote about the unpleasant side effects of Change Programmes unfettered by appropriate Data Governance in Bumps in the Road, for example.

[5] I describe such a set of processes in Data Management as part of the Data to Action Journey.

[6] I explore some similar territory to that presented by Eckerson Group in Data Visualisation – A Scientific Treatment.

[7] My favourite counterexample is provided by The Bank of England.

[Image: An inflation prediction from The Bank of England (“The Old Lady of Threadneedle Street is clearly not a witch”), illustrating the fairly obvious fact that uncertainty increases the further into the future one looks.]

[8] This is an area I cover in An Inconvenient Truth.

[9] I cover this assertion more fully in A bad workman blames his [Business Intelligence] tools.